NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
Fairly reasonable results were obtained for non-linear engineering problems in the past using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems and obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic surveying. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.
2011-01-01
Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps to overcome some of the numerical difficulties that arise during the global optimization task. PMID:21867520
Gain optimization with non-linear controls
NASA Technical Reports Server (NTRS)
Slater, G. L.; Kandadai, R. D.
1984-01-01
An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application in this paper centers on the design of controls for nominally linear systems in which the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.
NASA Astrophysics Data System (ADS)
Chavarette, Fábio Roberto; Balthazar, José Manoel; Felix, Jorge L. P.; Rafikov, Marat
2009-05-01
This paper analyzes the non-linear dynamics, with chaotic behavior, of a particular micro-electro-mechanical system (MEMS). We use an optimal linear control technique to reduce the irregular (chaotic) oscillatory motion of the non-linear system to a periodic orbit. We use the mathematical model of a MEMS proposed by Luo and Wang.
NASA Astrophysics Data System (ADS)
Rosenberg, D. E.; Alafifi, A.
2016-12-01
Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized the near-optimal region as comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or select portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until the desired number of alternatives is generated. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms, because search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iteration generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and that may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
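A minimal Python sketch of the hit-and-run step on a toy near-optimal region follows; the objective, optimal value, tolerance, box bounds, and the shrinking-bracket line search are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical near-optimal region: f(x) <= (1 + tol)-style bound plus box
# bounds. All functions and values here are illustrative assumptions.

def f(x):  # stand-in non-convex objective
    return (x[0] - 1)**2 + np.sin(3 * x[1])**2

f_opt = 0.0          # assumed known optimal objective value
tol = 0.25           # near-optimal tolerance
lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])  # box bounds

def feasible(x):
    return np.all(x >= lo) and np.all(x <= hi) and f(x) <= f_opt + tol

def hit_and_run(x0, n_samples, rng=np.random.default_rng(0)):
    x, out = x0.copy(), []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)               # random direction
        t_lo, t_hi = -4.0, 4.0               # initial bracket along the line
        while True:
            t = rng.uniform(t_lo, t_hi)      # random run distance
            cand = x + t * d
            if feasible(cand):               # new hit point found
                x = cand
                out.append(x.copy())
                break
            # slice-sampling-style shrink of the bracket toward the hit point
            if t < 0:
                t_lo = t
            else:
                t_hi = t
    return np.array(out)

samples = hit_and_run(np.array([1.0, 0.0]), 500)
```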
Analysis of the faster-than-Nyquist optimal linear multicarrier system
NASA Astrophysics Data System (ADS)
Marquet, Alexandre; Siclet, Cyrille; Roque, Damien
2017-02-01
Faster-than-Nyquist signaling enables a better spectral efficiency at the expense of an increased computational complexity. Regarding multicarrier communications, previous work mainly relied on the study of non-linear systems exploiting coding and/or equalization techniques, with no particular optimization of the linear part of the system. In this article, we analyze the performance of the optimal linear multicarrier system when used together with non-linear receiving structures (iterative decoding and decision feedback equalization), or in a standalone fashion. We also investigate the limits of the normality assumption on the interference, used for implementing such non-linear systems. The use of this optimal linear system leads to a closed-form expression of the bit-error probability that can be used to predict performance and help the design of coded systems. Our work also highlights the great performance/complexity trade-off offered by decision feedback equalization in a faster-than-Nyquist context.
Polynomial elimination theory and non-linear stability analysis for the Euler equations
NASA Technical Reports Server (NTRS)
Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.
1986-01-01
Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.
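The linear stability estimate referred to above can be illustrated with a short sketch: for the inviscid Burgers equation under the MacCormack scheme, a frozen-coefficient (linearized) analysis bounds the stable time step by dx/max|u|. The grid, initial condition, and safety factor below are assumptions for illustration.

```python
import numpy as np

# Linear-stability-style time-step estimate for the inviscid Burgers equation
# u_t + u u_x = 0 under the MacCormack scheme: dt <= dx / max|u|.
nx, L = 201, 2.0 * np.pi
dx = L / (nx - 1)
x = np.linspace(0.0, L, nx)
u = 1.0 + 0.5 * np.sin(x)              # assumed initial condition

cfl = 0.9                              # safety factor below the linear limit
dt = cfl * dx / np.max(np.abs(u))      # estimated stable time step

def maccormack_step(u, dt, dx):
    f = 0.5 * u**2                     # Burgers flux
    up = u.copy()
    up[:-1] = u[:-1] - dt / dx * (f[1:] - f[:-1])   # predictor (forward)
    fp = 0.5 * up**2
    un = u.copy()
    un[1:] = 0.5 * (u[1:] + up[1:] - dt / dx * (fp[1:] - fp[:-1]))  # corrector
    return un

for _ in range(50):
    u = maccormack_step(u, dt, dx)
```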
Optimal non-linear health insurance.
Blomqvist, A
1997-06-01
Most theoretical and empirical work on efficient health insurance has been based on models with linear insurance schedules (a constant co-insurance parameter). In this paper, dynamic optimization techniques are used to analyse the properties of optimal non-linear insurance schedules in a model similar to one originally considered by Spence and Zeckhauser (American Economic Review, 1971, 61, 380-387) and reminiscent of those that have been used in the literature on optimal income taxation. The results of a preliminary numerical example suggest that the welfare losses from the implicit subsidy to employer-financed health insurance under US tax law may be a good deal smaller than previously estimated using linear models.
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance, relative to the feasible directions algorithm, when handling design optimization problems with a large number of design variables and constraints. The second is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of the total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving in using the linear method or in using the KS function to replace constraints.
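For reference, the Kreisselmeier-Steinhauser function aggregates many constraints g_i(x) <= 0 into one smooth conservative envelope, KS(g) = (1/rho) ln(sum_i exp(rho g_i)). A minimal sketch follows, with the usual max-shift for numerical stability; the rho value is an arbitrary choice.

```python
import numpy as np

# Kreisselmeier-Steinhauser aggregation of constraints g_i(x) <= 0 into a
# single smooth constraint; the max-shift keeps the exponentials stable.
def ks(g, rho=50.0):
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho

# KS(g) >= max(g), approaching max(g) as rho grows:
print(ks([-0.3, 0.1, -1.2], rho=50.0))   # close to 0.1
```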
Solving Fuzzy Optimization Problem Using Hybrid Ls-Sa Method
NASA Astrophysics Data System (ADS)
Vasant, Pandian
2011-06-01
The fuzzy optimization problem has been one of the most prominent topics in the broad area of computational intelligence. It is especially relevant in the field of fuzzy non-linear programming, and its applications and practical realizations can be seen in many real-world problems. In this paper, a large-scale non-linear fuzzy programming problem is solved by a hybrid of the optimization techniques Line Search (LS), Simulated Annealing (SA), and Pattern Search (PS). An industrial production planning problem with a cubic objective function, 8 decision variables, and 29 constraints has been solved successfully using the LS-SA-PS hybrid optimization technique. The computational results for the objective function with respect to vagueness factor and level of satisfaction are provided in the form of 2D and 3D plots. The outcome is very promising and strongly suggests that the hybrid LS-SA-PS algorithm is very efficient and productive in solving large-scale non-linear fuzzy programming problems.
A linear model fails to predict orientation selectivity of cells in the cat visual cortex.
Volgushev, M; Vidyasagar, T R; Pei, X
1996-01-01
1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells of the cat visual cortex were recorded using the in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field, and responses to elongated bars of different orientations centred on the receptive field, were recorded. 2. To test whether a linear model can account for the orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus, non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828
Sparse 4D TomoSAR imaging in the presence of non-linear deformation
NASA Astrophysics Data System (ADS)
Khwaja, Ahmed Shaharyar; Çetin, Müjdat
2018-04-01
In this paper, we present a sparse four-dimensional tomographic synthetic aperture radar (4D TomoSAR) imaging scheme that can estimate elevation and linear as well as non-linear seasonal deformation rates of scatterers using the interferometric phase. Unlike existing sparse processing techniques that use fixed dictionaries based on a linear deformation model, we use a variable dictionary for the non-linear deformation in the form of seasonal sinusoidal deformation, in addition to the fixed dictionary for the linear deformation. We estimate the amplitude of the sinusoidal deformation using an optimization method and create the variable dictionary using the estimated amplitude. We show preliminary results using simulated data that demonstrate the soundness of our proposed technique for sparse 4D TomoSAR imaging in the presence of non-linear deformation.
NASA Astrophysics Data System (ADS)
Vasant, Pandian; Barsoum, Nader
2008-10-01
Many engineering, science, information technology and management optimization problems can be considered as non-linear programming real-world problems where all or some of the parameters and variables involved are uncertain in nature. These can only be quantified using intelligent computational techniques such as evolutionary computation and fuzzy logic. The main objective of this research paper is to solve a non-linear fuzzy optimization problem in which the technological coefficients in the constraints are fuzzy numbers represented by logistic membership functions, using a hybrid evolutionary optimization approach. To explore the applicability of the present study, a numerical example is considered to determine the production planning for the decision variables and the profit of the company.
Optimizing Requirements Decisions with KEYS
NASA Technical Reports Server (NTRS)
Jalali, Omid; Menzies, Tim; Feather, Martin
2008-01-01
Recent work with NASA's Jet Propulsion Laboratory has allowed for external access to five of JPL's real-world requirements models, anonymized to conceal proprietary information but retaining their computational nature. Experimentation with these models, reported herein, demonstrates a dramatic speedup in the computations performed on them. These models have a well-defined goal: select mitigations that retire risks and thereby increase the number of attainable requirements. Such a non-linear optimization is a well-studied problem; however, identification of not only (a) the optimal solution(s) but also (b) the key factors leading to them is less well studied. Our technique, called KEYS, shows a rapid way of simultaneously identifying the solutions and their key factors. KEYS improves on prior work by several orders of magnitude. Prior experiments with simulated annealing or treatment learning took tens of minutes to hours to terminate; KEYS runs much faster than that, e.g., for one model, KEYS ran 13,000 times faster than treatment learning (40 minutes versus 0.18 seconds). Processing these JPL models is a non-linear optimization problem: the fewest mitigations must be selected while achieving the most requirements. With this paper, we challenge other members of the PROMISE community to improve on our results with other techniques.
Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques
NASA Astrophysics Data System (ADS)
Elliott, Louie C.
This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
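The "numerical derivatives with complex variables" mentioned above is the complex-step method: f'(x) ≈ Im f(x + ih)/h, which avoids the subtractive cancellation of finite differences. A minimal sketch using a standard test function rather than the dissertation's fuel-cell cost function:

```python
import numpy as np

# Complex-step derivative: f'(x) ~= Im(f(x + ih)) / h, accurate to machine
# precision because no subtraction of nearly equal quantities occurs.
def complex_step(f, x, h=1e-30):
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)  # classic test
print(complex_step(f, 1.5))   # matches the analytic derivative to ~1e-16
```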
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L1 estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP, and equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP, and equivalent standard linear programs using a simple upper-bounded linear programming code SUBLP.
NASA Astrophysics Data System (ADS)
Chernavskaia, Olga; Heuke, Sandro; Vieth, Michael; Friedrich, Oliver; Schürmann, Sebastian; Atreya, Raja; Stallmach, Andreas; Neurath, Markus F.; Waldner, Maximilian; Petersen, Iver; Schmitt, Michael; Bocklitz, Thomas; Popp, Jürgen
2016-07-01
Assessing disease activity is a prerequisite for adequate treatment of inflammatory bowel diseases (IBD) such as Crohn's disease and ulcerative colitis. In addition to endoscopic mucosal healing, histologic remission poses a promising end-point of IBD therapy. However, evaluating histological remission harbors the risk of complications due to the acquisition of biopsies and results in a delay of diagnosis because of tissue processing procedures. In this regard, non-linear multimodal imaging techniques might serve as an unparalleled technique that allows the real-time evaluation of microscopic IBD activity in the endoscopy unit. In this study, tissue sections were investigated using the non-linear multimodal microscopy combination of coherent anti-Stokes Raman scattering (CARS), two-photon excited autofluorescence (TPEF) and second-harmonic generation (SHG). After the measurement, a gold-standard assessment of histological indexes was carried out based on a conventional H&E stain. Subsequently, various geometry- and intensity-related features were extracted from the multimodal images. An optimized feature set was utilized to predict histological index levels based on a linear classifier. Based on the automated prediction, the diagnosis time interval is decreased. Therefore, non-linear multimodal imaging may provide a real-time diagnosis of IBD activity suited to assist clinical decision making within the endoscopy unit.
NASA Astrophysics Data System (ADS)
Hijas, K. M.; Madan Kumar, S.; Byrappa, K.; Geethakrishnan, T.; Jeyaram, S.; Nagalakshmi, R.
2018-03-01
Single crystals of 2-methoxy-4(phenyliminomethyl)phenol were grown from ethanol by the slow evaporation solution growth technique. A single-crystal X-ray diffraction experiment reveals crystallization in the orthorhombic system with the non-centrosymmetric space group C2221. Geometry optimization by the density functional theory method was carried out using the Gaussian program and compared with experimental results. Detailed experimental and theoretical vibrational analyses were carried out and the results were correlated, showing close agreement. Thermal analyses show that the material is thermally stable, with a melting point of 159 °C. Natural bond orbital analysis was carried out to explain charge transfer interactions through hydrogen bonding. A relatively small HOMO-LUMO band gap favors the non-linear optical activity of the molecule. Natural population analysis and molecular electrostatic potential calculations visualize the charge distribution in an isolated molecule. The calculated first-order molecular hyperpolarizability and a preliminary second harmonic generation test carried out using the Kurtz-Perry technique establish the 2-methoxy-4(phenyliminomethyl)phenol crystal as a good non-linear optical material. Z-scan measurements indicate that the material exhibits reverse saturable absorption.
NASA Astrophysics Data System (ADS)
Wu, Dongjun
Network industries have technologies characterized by a spatial hierarchy, the "network," with capital-intensive interconnections and time-dependent, capacity-limited flows of products and services through the network to customers. This dissertation studies service pricing, investment, and business operating strategies for the electric power network. First-best solutions for a variety of pricing and investment problems have been studied. The evaluation of genetic algorithms (GA, methods based on the idea of natural evolution) as a primary means of solving complicated network problems, with respect to pricing as well as investment and other operating decisions, has been conducted. New constraint-handling techniques in GAs have been studied and tested. The actual application of such constraint-handling techniques in solving practical non-linear optimization problems has been tested on several complex network design problems with encouraging initial results. Genetic algorithms provide solutions that are feasible and close to optimal when the optimal solution is known; in some instances, the near-optimal solutions for small problems found by the proposed GA approach can only be tested by pushing the limits of currently available non-linear optimization software. The performance is far better than that of several commercially available GA programs, which are generally inadequate for solving any of the problems studied in this dissertation, primarily because of their poor handling of constraints. Genetic algorithms, if carefully designed, seem very promising in solving difficult problems which are intractable by traditional analytic methods.
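A minimal sketch of one common GA constraint-handling technique, a static quadratic penalty, on a toy constrained problem; the problem, penalty weight, and GA operators are illustrative assumptions and not the constraint-handling schemes developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    return (x[0] - 2)**2 + (x[1] - 1)**2          # minimize

def constraint_violation(x):
    g = x[0]**2 - x[1]                            # require g <= 0
    return max(0.0, g)

def fitness(x, w=100.0):                          # static quadratic penalty
    return objective(x) + w * constraint_violation(x)**2

pop = rng.uniform(-5, 5, size=(40, 2))
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]         # truncation selection
    a = parents[rng.integers(0, 20, 40)]
    b = parents[rng.integers(0, 20, 40)]
    alpha = rng.uniform(size=(40, 1))
    pop = alpha * a + (1 - alpha) * b              # arithmetic crossover
    pop += rng.normal(scale=0.05, size=pop.shape)  # Gaussian mutation

best = pop[np.argmin([fitness(ind) for ind in pop])]
```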
Building Energy Modeling and Control Methods for Optimization and Renewables Integration
NASA Astrophysics Data System (ADS)
Burger, Eric M.
This dissertation presents techniques for the numerical modeling and control of building systems, with an emphasis on thermostatically controlled loads. The primary objective of this work is to address technical challenges related to the management of energy use in commercial and residential buildings. This work is motivated by the need to enhance the performance of building systems and by the potential for aggregated loads to perform load following and regulation ancillary services, thereby enabling the further adoption of intermittent renewable energy generation technologies. To increase the generalizability of the techniques, an emphasis is placed on recursive and adaptive methods which minimize the need for customization to specific buildings and applications. The techniques presented in this dissertation can be divided into two general categories: modeling and control. Modeling techniques encompass the processing of data streams from sensors and the training of numerical models. These models enable us to predict the energy use of a building and of sub-systems, such as a heating, ventilation, and air conditioning (HVAC) unit. Specifically, we first present an ensemble learning method for the short-term forecasting of total electricity demand in buildings. As the deployment of intermittent renewable energy resources continues to rise, the generation of accurate building-level electricity demand forecasts will be valuable to both grid operators and building energy management systems. Second, we present a recursive parameter estimation technique for identifying a thermostatically controlled load (TCL) model that is non-linear in the parameters. For TCLs to perform demand response services in real-time markets, online methods for parameter estimation are needed. Third, we develop a piecewise linear thermal model of a residential building and train the model using data collected from a custom-built thermostat. This model is capable of approximating unmodeled dynamics within a building by learning from sensor data. Control techniques encompass the application of optimal control theory, model predictive control, and convex distributed optimization to TCLs. First, we present the alternative control trajectory (ACT) representation, a novel method for the approximate optimization of non-convex discrete systems. This approach enables the optimal control of a population of non-convex agents using distributed convex optimization techniques. Second, we present a distributed convex optimization algorithm for the control of a TCL population. Experimental results demonstrate the application of this algorithm to the problem of renewable energy generation following. This dissertation contributes to the development of intelligent energy management systems for buildings by presenting a suite of novel and adaptable modeling and control techniques. Applications focus on optimizing the performance of building operations and on facilitating the integration of renewable energy resources.
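As one concrete building block, a thermostatically controlled load is often modeled as a first-order thermal circuit with hysteretic on/off switching; a minimal discrete-time sketch follows, with all parameter values assumed for illustration rather than identified from data as in the dissertation.

```python
import numpy as np

# First-order thermal model of a cooling TCL with deadband control; all
# parameters are illustrative assumptions.
R, C, P, eta = 2.0, 10.0, 14.0, 2.5      # degC/kW, kWh/degC, kW, COP
h = 1.0 / 60.0                           # time step, hours
a = np.exp(-h / (R * C))                 # discrete-time decay factor
T_out, T_set, db = 32.0, 22.5, 0.5       # ambient, setpoint, deadband (degC)

T, m = 24.0, 1                           # indoor temperature, on/off state
trace = []
for k in range(240):
    # temperature relaxes toward ambient minus the cooling offset when on
    T = a * T + (1 - a) * (T_out - m * R * eta * P)
    if T < T_set - db:
        m = 0                            # hysteresis: switch off at low edge
    elif T > T_set + db:
        m = 1                            # switch on at high edge
    trace.append((T, m))
```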
NASA Astrophysics Data System (ADS)
Kibria, Mirza Golam; Villardi, Gabriel Porto; Ishizu, Kentaro; Kojima, Fumihide; Yano, Hiroyuki
2016-12-01
In this paper, we study inter-operator spectrum sharing and intra-operator resource allocation in shared spectrum access communication systems and propose efficient dynamic solutions to address both inter-operator and intra-operator resource allocation optimization problems. For inter-operator spectrum sharing, we present two competent approaches, namely subcarrier gain-based sharing and fragmentation-based sharing, which carry out fair and flexible allocation of the available shareable spectrum among the operators subject to certain well-defined sharing rules, traffic demands, and channel propagation characteristics. The subcarrier gain-based spectrum sharing scheme has been found to be more efficient in terms of achieved throughput. However, the fragmentation-based sharing is more attractive in terms of computational complexity. For intra-operator resource allocation, we consider a resource allocation problem with users' dissimilar service requirements, where the operator supports users with delay-constrained and non-delay-constrained service requirements simultaneously. This optimization problem is a mixed-integer non-linear programming problem and non-convex, which is computationally very expensive, and the complexity grows exponentially with the number of integer variables. We propose a less complex and efficient suboptimal solution based on exact linearization, linear approximation, and convexification techniques for the non-linear and/or non-convex objective functions and constraints. Extensive simulation performance analysis has been carried out, validating the efficiency of the proposed solution.
Global dynamic optimization approach to predict activation in metabolic pathways.
de Hijas-Liste, Gundián M; Klipp, Edda; Balsa-Canto, Eva; Banga, Julio R
2014-01-06
During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles for simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques on two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated non-linear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework results in more biologically meaningful results. The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions which were comparable or better than those reported in the previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to metabolic networks with arbitrary topologies, non-linear dynamics and constraints.
Treatment of systematic errors in land data assimilation systems
NASA Astrophysics Data System (ADS)
Crow, W. T.; Yilmaz, M.
2012-12-01
Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., the so-called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented, with an emphasis on the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
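A minimal sketch of the two commonly-applied (sub-optimal) linear rescaling approaches named above, applied to synthetic model/observation series; the data-generating assumptions are purely illustrative.

```python
import numpy as np

# Two common linear rescaling strategies for observations y against model
# predictions m; both match the mean, they differ in the applied slope.
def variance_matching(y, m):
    return m.mean() + (m.std() / y.std()) * (y - y.mean())

def lsq_regression(y, m):
    beta = np.cov(m, y)[0, 1] / np.var(y, ddof=1)   # regress model on obs
    return m.mean() + beta * (y - y.mean())

rng = np.random.default_rng(0)
truth = rng.normal(size=1000)                        # unknown true state
m = 0.8 * truth + rng.normal(scale=0.3, size=1000)   # model with error
y = 1.5 * truth + rng.normal(scale=0.5, size=1000)   # biased obs with error
y_vm, y_ls = variance_matching(y, m), lsq_regression(y, m)
```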
Non-linear flight control of a fixed-wing drone by the backstepping method
NASA Astrophysics Data System (ADS)
Finoki, Edouard
This thesis describes the design of a non-linear controller for a UAV using the backstepping method. The aircraft is a fixed-wing UAV, the NexSTAR ARF from Hobbico. The aim is to find the expressions for the aileron, elevator, and rudder deflections in order to command the flight path angle, the heading angle, and the sideslip angle. Controlling the flight path angle allows steady, climbing, or descending flight; controlling the heading angle allows the heading to be chosen; and annulling the sideslip angle allows efficient flight. A good control technique has to ensure the stability of the system and provide optimal performance. Backstepping interlaces the choice of a Lyapunov function with the design of feedback control. This control technique works with the true non-linear model without any approximation. The procedure is to transform intermediate state variables into virtual inputs which will control other state variables. Advantages of this technique are its recursiveness, its minimal control effort, and its cascaded structure, which allows a high-order system to be divided into several simpler lower-order systems. To design this non-linear controller, a non-linear model of the UAV was used. The equations of motion are very accurate, and the aerodynamic coefficients result from interpolations between several variables essential in flight. The controller has been implemented in Matlab/Simulink and FlightGear.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Southworth, Frank; Garrow, Dr. Laurie
This chapter describes the principal types of both passenger and freight demand models in use today, providing a brief history of model development supported by references to a number of popular texts on the subject, and directing the reader to papers covering some of the more recent technical developments in the area. Over the past half century a variety of methods have been used to estimate and forecast travel demands, drawing concepts from economic/utility maximization theory, transportation system optimization and spatial interaction theory, using and often combining solution techniques as varied as Box-Jenkins methods, non-linear multivariate regression, non-linear mathematical programming, and agent-based microsimulation.
Digital controllers for VTOL aircraft
NASA Technical Reports Server (NTRS)
Stengel, R. F.; Broussard, J. R.; Berry, P. W.
1976-01-01
Using linear-optimal estimation and control techniques, digital-adaptive control laws have been designed for a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. Two distinct discrete-time control laws are designed to interface with velocity-command and attitude-command guidance logic, and each incorporates proportional-integral compensation for non-zero-set-point regulation, as well as reduced-order Kalman filters for sensor blending and noise rejection. Adaptation to flight condition is achieved with a novel gain-scheduling method based on correlation and regression analysis. The linear-optimal design approach is found to be a valuable tool in the development of practical multivariable control laws for vehicles which evidence significant coupling and insufficient natural stability.
NASA Astrophysics Data System (ADS)
Fukahata, Y.; Wright, T. J.
2006-12-01
We developed a method of geodetic data inversion for slip distribution on a fault with an unknown dip angle. When fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining slip distribution is to first determine the fault geometry by minimizing the square misfit under the assumption of uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform slip fault model, we have to simultaneously determine the values of nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity of the problem is actually weak when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault. In this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to compromise the reciprocal requirements of model resolution and estimation error in a natural way. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. This non-linearity has been resolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal relative weight of observed data to smoothness constraints is objectively determined. In this study, using ABIC in determining the optimal dip angle, we resolved the non-linearity of the inverse problem. We applied the method to the InSAR data of the 1995 Dinar, Turkey earthquake and obtained a much shallower dip angle than previously reported.
Comparisons of linear and nonlinear pyramid schemes for signal and image processing
NASA Astrophysics Data System (ADS)
Morales, Aldo W.; Ko, Sung-Jea
1997-04-01
Linear filter banks are being used extensively in image and video applications. New research results in wavelet applications for compression and de-noising are constantly appearing in the technical literature. On the other hand, non-linear filter banks are also being used regularly in image pyramid algorithms. There are some inherent advantages in using non-linear filters instead of linear filters when non-Gaussian processes are present in images. However, a consistent way of comparing performance criteria between these two schemes has not been fully developed yet. In this paper a recently discovered tool, sample selection probabilities, is used to compare the behavior of linear and non-linear filters. The conversion from the weights of order statistics (OS) filters to the coefficients of the impulse response is obtained through these probabilities. However, the reverse problem, the conversion from the coefficients of the impulse response to the weights of OS filters, is not yet fully understood. One of the reasons for this difficulty is the highly non-linear nature of the partitions and generating function used. In the present paper the problem is posed as an integer linear programming optimization subject to constraints directly obtained from the coefficients of the impulse response. Although the technique to be presented is not completely refined, it certainly appears to be promising. Some results will be shown.
ERIC Educational Resources Information Center
Foley, Greg
2011-01-01
Continuous feed and bleed ultrafiltration, modeled with the gel polarization model for the limiting flux, is shown to provide a rich source of non-linear algebraic equations that can be readily solved using numerical and graphical techniques familiar to undergraduate students. We present a variety of numerical problems in the design, analysis, and…
Optimization model of vaccination strategy for dengue transmission
NASA Astrophysics Data System (ADS)
Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.
2014-02-01
Dengue fever is an emerging tropical and subtropical disease caused by dengue virus infection. Vaccination should be carried out to prevent an epidemic in a population. The host-vector model is modified to consider a vaccination factor in preventing the occurrence of epidemic dengue in a population. An optimal vaccination strategy using a non-linear objective function is proposed. Genetic algorithm programming techniques are combined with the fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, an appropriate vaccination strategy is analyzed using the optimal minimum-cost function, which can reduce the size of the epidemic. Numerical simulations for some specific cases of the vaccination strategy are shown.
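A minimal sketch of the simulation layer such a scheme needs: a fourth-order Runge-Kutta integrator applied to a simplified vaccination-augmented SIR-type model, whose epidemic peak could serve as the objective evaluated inside a genetic algorithm. The model form and parameters are illustrative assumptions, not the paper's host-vector model.

```python
import numpy as np

# Simplified SIR-type host model with a vaccination rate v; parameters are
# illustrative, not from the paper.
def deriv(t, y, beta=0.4, gamma=0.1, v=0.05):
    S, I, R = y
    return np.array([-beta * S * I - v * S,
                     beta * S * I - gamma * I,
                     gamma * I + v * S])

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

y, h = np.array([0.99, 0.01, 0.0]), 0.1
peak = 0.0
for step in range(1000):
    y = rk4_step(deriv, step * h, y, h)
    peak = max(peak, y[1])   # epidemic-size proxy a GA could minimize over v
```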
Multivariable optimization of liquid rocket engines using particle swarm algorithms
NASA Astrophysics Data System (ADS)
Jones, Daniel Ray
Liquid rocket engines are highly reliable, controllable, and efficient compared to other conventional forms of rocket propulsion. As such, they have seen wide use in the space industry and have become the standard propulsion system for launch vehicles, orbit insertion, and orbital maneuvering. Though these systems are well understood, historical optimization techniques are often inadequate due to the highly non-linear nature of the engine performance problem. In this thesis, a Particle Swarm Optimization (PSO) variant was applied to maximize the specific impulse of a finite-area combustion chamber (FAC) equilibrium flow rocket performance model by controlling the engine's oxidizer-to-fuel ratio and de Laval nozzle expansion and contraction ratios. In addition to the PSO-controlled parameters, engine performance was calculated based on propellant chemistry, combustion chamber pressure, and ambient pressure, which are provided as inputs to the program. The performance code was validated by comparison with NASA's Chemical Equilibrium with Applications (CEA) and the commercially available Rocket Propulsion Analysis (RPA) tool. Similarly, the PSO algorithm was validated by comparison with brute-force optimization, which calculates all possible solutions and subsequently determines which is the optimum. Particle Swarm Optimization was shown to be an effective optimizer capable of quick and reliable convergence for complex functions of multiple non-linear variables.
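A minimal global-best PSO sketch on a stand-in objective; the inertia and acceleration coefficients, bounds, and the two controlled variables mirror the description above, but the "performance model" is an assumed toy function, not the FAC equilibrium code.

```python
import numpy as np

rng = np.random.default_rng(2)

def neg_isp(x):                       # toy stand-in for -Isp(O/F, eps)
    of_ratio, eps = x
    return (of_ratio - 2.3)**2 + 0.1 * (eps - 40.0)**2 / 100.0

lo, hi = np.array([1.0, 5.0]), np.array([4.0, 80.0])   # assumed bounds
n, dim = 30, 2
x = rng.uniform(lo, hi, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()
pbest_val = np.array([neg_isp(p) for p in x])
gbest = pbest[pbest_val.argmin()]

for it in range(200):
    r1, r2 = rng.uniform(size=(n, dim)), rng.uniform(size=(n, dim))
    # inertia + cognitive + social terms (coefficients are typical choices)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    val = np.array([neg_isp(p) for p in x])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[pbest_val.argmin()]
```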
NASA Astrophysics Data System (ADS)
Sharqawy, Mostafa H.
2016-12-01
Pore network models (PNM) of Berea and Fontainebleau sandstones were constructed using non-linear programming (NLP) and optimization methods. The constructed PNMs are considered a digital representation of the rock samples; they were based on matching the macroscopic properties of the porous media and were used to conduct fluid transport simulations including single and two-phase flow. The PNMs consisted of cubic networks of randomly distributed pore and throat sizes with various connectivity levels. The networks were optimized such that the upper and lower bounds of the pore sizes are determined using the capillary tube bundle model and the Nelder-Mead method instead of guessing them, which reduces the optimization computational time significantly. An open-source PNM framework was employed to conduct transport and percolation simulations such as invasion percolation and Darcian flow. The PNM was subsequently used to compute the macroscopic properties: porosity, absolute permeability, specific surface area, breakthrough capillary pressure, and primary drainage curve. The pore networks were optimized so that the simulation results for the macroscopic properties are in excellent agreement with the experimental measurements. This study demonstrates that non-linear programming and optimization methods provide a promising approach for pore network modeling when computed tomography imaging may not be readily available.
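A minimal sketch of the bound-estimation idea, assuming SciPy is available: Nelder-Mead adjusts the pore-size bounds of a capillary-tube-bundle surrogate until toy porosity/permeability targets are matched. The surrogate formulas and targets are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

target = np.array([0.20, 500.0])          # assumed porosity [-], perm [mD]

def surrogate(bounds):
    r_min, r_max = bounds
    r = np.linspace(r_min, r_max, 50)     # tube radii in the bundle
    phi = 1e-4 * np.sum(r**2)             # toy porosity from tube areas
    k = 1e-2 * np.sum(r**4) / np.sum(r**2)  # Hagen-Poiseuille-style perm
    return np.array([phi, k])

def misfit(bounds):
    if bounds[0] <= 0 or bounds[0] >= bounds[1]:
        return 1e9                        # keep the simplex in a valid region
    return np.sum(((surrogate(bounds) - target) / target)**2)

res = minimize(misfit, x0=[1.0, 30.0], method='Nelder-Mead')
r_min_opt, r_max_opt = res.x              # estimated pore-size bounds
```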
Machine learning techniques for energy optimization in mobile embedded systems
NASA Astrophysics Data System (ADS)
Donohoo, Brad Kyoshi
Mobile smartphones and other portable battery operated embedded systems (PDAs, tablets) are pervasive computing devices that have emerged in recent years as essential instruments for communication, business, and social interactions. While performance, capabilities, and design are all important considerations when purchasing a mobile device, a long battery lifetime is one of the most desirable attributes. Battery technology and capacity has improved over the years, but it still cannot keep pace with the power consumption demands of today's mobile devices. This key limiter has led to a strong research emphasis on extending battery lifetime by minimizing energy consumption, primarily using software optimizations. This thesis presents two strategies that attempt to optimize mobile device energy consumption with negligible impact on user perception and quality of service (QoS). The first strategy proposes an application and user interaction aware middleware framework that takes advantage of user idle time between interaction events of the foreground application to optimize CPU and screen backlight energy consumption. The framework dynamically classifies mobile device applications based on their received interaction patterns, then invokes a number of different power management algorithms to adjust processor frequency and screen backlight levels accordingly. The second strategy proposes the usage of machine learning techniques to learn a user's mobile device usage pattern pertaining to spatiotemporal and device contexts, and then predict energy-optimal data and location interface configurations. By learning where and when a mobile device user uses certain power-hungry interfaces (3G, WiFi, and GPS), the techniques, which include variants of linear discriminant analysis, linear logistic regression, non-linear logistic regression, and k-nearest neighbor, are able to dynamically turn off unnecessary interfaces at runtime in order to save energy.
NASA Astrophysics Data System (ADS)
Kang, Yan-Mei; Chen, Xi; Lin, Xu-Dong; Tan, Ning
The mean first passage time (MFPT) in a phenomenological gene transcriptional regulatory model with non-Gaussian noise is analytically investigated based on the singular perturbation technique. The effect of the non-Gaussian noise on the phenomenon of stochastic resonance (SR) is then disclosed based on a new combination of adiabatic elimination and linear response approximation. Compared with the results in the Gaussian noise case, it is found that bounded non-Gaussian noise inhibits the transition between different concentrations of protein, while heavy-tailed non-Gaussian noise accelerates the transition. It is also found that the optimal noise intensity for SR in the heavy-tailed noise case is smaller, while the optimal noise intensity in the bounded noise case is larger. These observations can be explained by the heavy-tailed noise easing random transitions.
Is 3D true non linear traveltime tomography reasonable ?
NASA Astrophysics Data System (ADS)
Herrero, A.; Virieux, J.
2003-04-01
The data sets requiring 3D analysis tools in the context of seismic exploration (both onshore and offshore experiments) or natural seismicity (micro-seismicity surveys or post-event measurements) are more and more numerous. Classical linearized tomographies and also earthquake localisation codes need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis is not able to provide an adequate background velocity image. Moreover, the design of the acquisition layouts is often intrinsically 3D, which renders even 2D approaches difficult, especially in natural seismicity cases. Thus, the solution relies on the use of a 3D true non-linear approach, which allows one to explore the model space and to identify an optimal velocity image. The problem then becomes practical and its feasibility depends on the available computing resources (memory and time). In this presentation, we show that facing a 3D traveltime tomography problem with an extensive non-linear approach combining fast traveltime estimators based on level set methods and optimisation techniques such as a multiscale strategy is feasible. Moreover, because the management of inhomogeneous inversion parameters is easier in a non-linear approach, we describe how to perform a joint non-linear inversion for the seismic velocities and the source locations.
Optimum Damping in a Non-Linear Base Isolation System
NASA Astrophysics Data System (ADS)
Jangid, R. S.
1996-02-01
Optimum isolation damping for minimum acceleration of a base-isolated structure subjected to earthquake ground excitation is investigated. The stochastic model of the El-Centro 1940 earthquake, which preserves the non-stationary evolution of amplitude and frequency content of the ground motion, is used as the earthquake excitation. The base-isolated structure consists of a linear flexible shear-type multi-storey building supported on a base isolation system. The resilient-friction base isolator (R-FBI) is considered as the isolation system. The non-stationary stochastic response of the system is obtained by the time-dependent equivalent linearization technique, as the force-deformation behaviour of the R-FBI system is non-linear. The optimum damping of the R-FBI system is obtained under important parametric variations: the coefficient of friction of the R-FBI system, the period and damping of the superstructure, and the effective period of base isolation. The criterion selected for optimality is the minimization of the top-floor root mean square (r.m.s.) acceleration. It is shown that the above parameters have significant effects on the optimum isolation damping.
Multi-objective experimental design for (13)C-based metabolic flux analysis.
Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel
2015-10-01
(13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism, but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks for Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as the labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and a non-linear approach (S-criterion). Both approaches generate almost the same input mixture; however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further increased when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position-one-labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi-objective design should stimulate its application within the field of (13)C-based metabolic flux analysis.
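A minimal sketch of D-criterion screening, assuming sensitivity (Jacobian) matrices of the measured labeling patterns with respect to the fluxes are available for each candidate mixture; here they are random stand-ins for real sensitivities.

```python
import numpy as np

rng = np.random.default_rng(3)

# D-optimality: maximize log det of the Fisher information J^T J built from
# the sensitivity matrix J of measurements with respect to fluxes.
def d_criterion(J):
    sign, logdet = np.linalg.slogdet(J.T @ J)
    return logdet if sign > 0 else -np.inf    # guard against singular FIM

# Hypothetical candidate tracer mixtures, each with a (synthetic) J.
designs = {name: rng.normal(size=(40, 6)) for name in ["mixA", "mixB", "mixC"]}
best = max(designs, key=lambda k: d_criterion(designs[k]))
```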
Accelerated Microstructure Imaging via Convex Optimization (AMICO) from diffusion MRI data.
Daducci, Alessandro; Canales-Rodríguez, Erick J; Zhang, Hui; Dyrby, Tim B; Alexander, Daniel C; Thiran, Jean-Philippe
2015-01-15
Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide a biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data which, in practice, demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which, then, can be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; however, the AMICO framework is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to accelerate the fit of existing techniques drastically (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders.
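A minimal sketch of the linearization idea: once the model responses are precomputed as dictionary atoms, each voxel fit reduces to a fast non-negative least-squares solve. The dictionary and signal below are synthetic, and SciPy's nnls is one convenient solver.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)

# Synthetic dictionary of precomputed atom responses (columns) and a noisy
# measured signal built from a sparse non-negative combination of them.
A = np.abs(rng.normal(size=(90, 12)))        # dictionary of model responses
w_true = np.zeros(12)
w_true[[2, 7]] = [0.7, 0.3]                  # two active compartments
y = A @ w_true + 0.01 * rng.normal(size=90)  # noisy measured signal

w_hat, residual = nnls(A, y)                 # convex, very fast per voxel
```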
NASA Astrophysics Data System (ADS)
Biyanto, T. R.; Matradji; Syamsi, M. N.; Fibrianto, H. Y.; Afdanny, N.; Rahman, A. H.; Gunawan, K. S.; Pratama, J. A. D.; Malwindasari, A.; Abdillah, A. I.; Bethiana, T. N.; Putra, Y. A.
2017-11-01
The development of green building has been growing in both design and quality, but it has been limited by the issue of expensive investment. In fact, green building can reduce the energy usage inside a building, especially through the cooling system. External load plays a major role in reducing cooling system usage and is affected by the types of wall sheathing, glass, and roof; the proper selection of wall, glass, and roof materials is therefore very important to reduce external load. Hence, the optimization of energy efficiency and conservation in green building design is required. Since this optimization consists of integer and non-linear equations, the problem falls into Mixed-Integer Non-Linear Programming (MINLP), which requires global optimization techniques such as stochastic optimization algorithms. In this paper the optimized variables, i.e. the types of glass and roof, were chosen using the Duelist, Killer-Whale, and Rain-Water Algorithms to obtain the optimum energy while considering minimal investment. The optimization results exhibited that single glass Planibel-G with 3.2 mm thickness and glass wool insulation provided a maximum ROI of 36.8486%, an EUI reduction of 54 kWh/m2·year, a CO2 emission reduction of 486.8971 tons/year, and a reduced investment of 4,078,905,465 IDR.
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
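The convexity-preserving idea in the first part of the thesis can be sketched in its simplest (denoising) setting: with the minimax-concave (MC) penalty, the proximal operator is the firm threshold, and the scalar objective ½(y−x)² + λφ(x; a) remains strictly convex as long as 0 < a < 1/λ. A minimal numpy sketch (not the thesis code) comparing it against ℓ1 soft thresholding:

```python
import numpy as np

def prox_mc(y, lam, a):
    """Proximal operator of the MC penalty: the firm threshold (assumes a > 0).
    The scalar objective 0.5*(y-x)**2 + lam*phi(x; a) is strictly convex
    provided 0 < a < 1/lam."""
    x = np.zeros_like(y)
    mid = (np.abs(y) > lam) & (np.abs(y) <= 1.0 / a)
    big = np.abs(y) > 1.0 / a
    x[mid] = np.sign(y[mid]) * (np.abs(y[mid]) - lam) / (1.0 - lam * a)
    x[big] = y[big]                    # large values pass through unbiased
    return x

rng = np.random.default_rng(1)
x_true = np.zeros(100); x_true[::20] = 5.0
y = x_true + 0.5 * rng.standard_normal(100)

lam = 1.0
a = 0.9 / lam                          # keeps the objective strictly convex (a < 1/lam)
x_l1 = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)   # soft threshold (l1): biased by lam
x_mc = prox_mc(y, lam, a)                               # firm threshold (MC): reduced bias
print(np.abs(x_l1[::20] - 5).mean(), np.abs(x_mc[::20] - 5).mean())
```

The printed errors illustrate the thesis's motivating claim: the ℓ1 estimate systematically under-estimates the non-zero values by λ, while the convexity-preserving non-convex penalty largely removes that bias.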
Comparison between DCA - SSO - VDR and VMAT dose delivery techniques for 15 SRS/SRT patients
NASA Astrophysics Data System (ADS)
Tas, B.; Durmus, I. F.
2018-02-01
The aim was to evaluate dose delivery between the Dynamic Conformal Arc (DCA) - Segment Shape Optimization (SSO) - Variation Dose Rate (VDR) and Volumetric Modulated Arc Therapy (VMAT) techniques for fifteen SRS patients using a Versa HD® linear accelerator. Optimal treatment plans for the fifteen SRS/SRT patients were generated using the Monaco5.11® treatment planning system (TPS) with 1 coplanar and 3 non-coplanar fields for the VMAT technique; the plans were then re-optimized with the same optimization parameters for the DCA - SSO - VDR technique. The advantages of the DCA - SSO - VDR technique were fewer MUs and shorter beam-on time; in addition, its larger segments decrease the dosimetric uncertainties of small-field quality assurance. The advantages of the VMAT technique were slightly better GI, CI, PCI, brain V12Gy and mean brain dose. The results show that the plans for both techniques satisfied all clinical objectives and organs-at-risk (OARs) dose constraints. Depending on the shape and localization of the target, either technique can be chosen for linear-accelerator-based SRS/SRT treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raimondi, Pantaleo
The design of the Stanford Linear Collider (SLC) called for a beam intensity far beyond what was practically achievable. This was due to intrinsic limitations in many subsystems and to a lack of understanding of the new physics of linear colliders. Real progress in improving the SLC performance came from precision, non-invasive diagnostics to measure and monitor the beams and from new techniques to control the emittance dilution and optimize the beams. A major contribution to the success of the last 1997-98 SLC run came from several innovative ideas for improving the performance of the Final Focus (FF). This paper describes some of the problems encountered and techniques used to overcome them. Building on the SLC experience, we will also present a new approach to the FF design for future high energy linear colliders.
A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes
NASA Astrophysics Data System (ADS)
Chakraborty, Shankar; Mitra, Ankan
2018-05-01
Recent advancements in textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning etc., which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be utilized at their optimal parametric settings. However, in presence of multiple yarn characteristics which are often conflicting in nature, it becomes a challenging task for the spinning industry personnel to identify the best parametric mix which would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against the other multi-objective optimization techniques, such as desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and constraints of the non-linear optimization problem, it can be successfully applied to other processes in textile industry to determine their optimal parametric settings.
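As a hedged illustration of the multivariate quality loss idea (the response model, targets and weights below are hypothetical stand-ins, not the paper's data): each yarn characteristic contributes a weighted quadratic deviation from its target, and the parametric setting minimizing the aggregate loss is selected.

```python
# Hedged sketch of a multivariate quadratic quality loss: deviations of all
# responses from their targets are aggregated with a weight matrix, and the
# parameter setting with minimum loss is chosen. All numbers are hypothetical.
import numpy as np

targets = np.array([15.0, 4.0])          # e.g. yarn strength (cN/tex), hairiness index
weights = np.diag([1.0, 2.5])            # relative quality-loss coefficients

def responses(speed, twist):
    # Stand-in for an empirical regression model of the spinning process.
    return np.array([10 + 0.02 * speed + 0.5 * twist, 6 - 0.15 * twist])

def quality_loss(y):
    d = y - targets
    return float(d @ weights @ d)        # multivariate quadratic loss

grid = [(s, t) for s in range(100, 201, 10) for t in range(2, 9)]
best = min(grid, key=lambda p: quality_loss(responses(*p)))
print("best (speed, twist):", best, "loss:", round(quality_loss(responses(*best)), 3))
```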
An optimal control strategy for two-dimensional motion camouflage with non-holonomic constraints.
Rañó, Iñaki
2012-07-01
Motion camouflage is a stealth behaviour observed both in hover-flies and in dragonflies. Existing controllers for mimicking motion camouflage generate this behaviour on an empirical basis or without considering the kinematic motion restrictions present in animal trajectories. This study summarises our formal contributions to solving the generation of motion camouflage as a non-linear optimal control problem. The dynamics of the system capture the kinematic restrictions to motion of the agents, while the performance index ensures camouflage trajectories. An extensive set of simulations supports the technique, and a novel analysis of the obtained trajectories contributes to our understanding of possible mechanisms to obtain sensor-based motion camouflage, for instance, in mobile robots.
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2015-09-01
An optimal trade-off design for a fractional order (FO)-PID controller is proposed with a Linear Quadratic Regulator (LQR) based technique using two conflicting time domain objectives. A class of delayed FO systems with a single non-integer order element, exhibiting both sluggish and oscillatory open loop responses, has been controlled here. The FO time delay processes are handled within a multi-objective optimization (MOO) formalism of LQR based FOPID design. A comparison is made between two contemporary approaches of stabilizing time-delay systems within LQR. The MOO control design methodology yields the Pareto optimal trade-off solutions between the tracking performance and the total variation (TV) of the control signal. Tuning rules are formed for the optimal LQR-FOPID controller parameters, using the median of the non-dominated Pareto solutions, to handle delayed FO processes. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
NASA Astrophysics Data System (ADS)
Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza
2017-06-01
Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agriculture crop mapping from multi-temporal PolSAR data. Firstly, several polarimetric features are extracted from preprocessed data. These features are the linear polarization intensities and several statistical and physical decompositions, such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. The kernel function, unlike in conventional partitioning clustering algorithms, maps the non-spherical and non-linearly separable patterns of the data structure so that they can be clustered easily. In addition, in order to enhance the results, the Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (e.g. a 12% improvement in general). In addition, when the optimization technique is used, greater improvement is observed in crop classification, e.g. 5% overall. Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and the phenological growth stages.
Primal-dual techniques for online algorithms and mechanisms
NASA Astrophysics Data System (ADS)
Liaghat, Vahid
An offline algorithm is one that knows the entire input in advance. An online algorithm, however, processes its input in a serial fashion. In contrast to offline algorithms, an online algorithm works in a local fashion and has to make irrevocable decisions without having the entire input. Online algorithms are often not optimal since their irrevocable decisions may turn out to be inefficient after receiving the rest of the input. For a given online problem, the goal is to design algorithms which are competitive against the offline optimal solutions. In a classical offline scenario, it is common to see a dual analysis of problems that can be formulated as a linear or convex program. Primal-dual and dual-fitting techniques have been successfully applied to many such problems. Unfortunately, the usual tricks come up short in an online setting since an online algorithm should make decisions without knowing even the whole program. In this thesis, we study the competitive analysis of fundamental problems in the literature, such as different variants of online matching and online Steiner connectivity, via online dual techniques. Although there are many generic tools for solving an optimization problem in the offline paradigm, in comparison, much less is known for tackling online problems. The main focus of this work is to design generic techniques for solving integral linear optimization problems where the solution space is restricted via a set of linear constraints. A general family of these problems is online packing/covering problems. Our work shows that for several seemingly unrelated problems, primal-dual techniques can be successfully applied as a unifying approach for analyzing these problems. We believe this leads to generic algorithmic frameworks for solving online problems. In the first part of the thesis, we show the effectiveness of our techniques in stochastic settings and their applications in Bayesian mechanism design. In particular, we introduce new techniques for solving a fundamental linear optimization problem, namely, the stochastic generalized assignment problem (GAP). This packing problem generalizes various problems such as online matching, ad allocation, bin packing, etc. We furthermore show applications of such results in mechanism design by introducing Prophet Secretary, a novel Bayesian model for online auctions. In the second part of the thesis, we focus on covering problems. We develop the framework of "Disk Painting" for a general class of network design problems that can be characterized by proper functions. This class generalizes the node-weighted and edge-weighted variants of several well-known Steiner connectivity problems. We furthermore design a generic technique for solving the prize-collecting variants of these problems when there exists a dual analysis for the non-prize-collecting counterparts. Hence, we solve the online prize-collecting variants of several network design problems for the first time. Finally, we focus on designing techniques for online problems with mixed packing/covering constraints. We initiate the study of degree-bounded graph optimization problems in the online setting by designing an online algorithm with a tight competitive ratio for the degree-bounded Steiner forest problem. We hope these techniques establish a starting point for the analysis of the important class of online degree-bounded optimization on graphs.
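As a flavor of the primal-dual machinery for online covering (a simplified textbook-style scheme, not the thesis's algorithms): covering constraints arrive one at a time, a violated constraint triggers monotone multiplicative increases of the primal variables (decisions are never revoked), and a dual counter is raised alongside to certify competitiveness.

```python
# Simplified online fractional covering: each arriving constraint a @ x >= 1
# is satisfied by monotone (irrevocable) multiplicative increases of x, while
# a dual value y_t is raised for each update round.
import numpy as np

def online_fractional_cover(constraints, n_vars):
    x = np.zeros(n_vars)
    duals = []
    for a in constraints:                     # a: nonnegative row, arrives online
        y_t = 0.0
        while a @ x < 1.0:                    # constraint violated
            active = a > 0
            # multiplicative increase plus a small additive kick to leave zero
            x[active] = x[active] * (1 + a[active]) + a[active] / a[active].sum()
            y_t += 1.0
        duals.append(y_t)
    return x, duals

cons = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0]), np.array([1.0, 1.0, 0.0])]
x, duals = online_fractional_cover(cons, 3)
print(np.round(x, 3), duals)
```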
Enriched Imperialist Competitive Algorithm for system identification of magneto-rheological dampers
NASA Astrophysics Data System (ADS)
Talatahari, Siamak; Rahbari, Nima Mohajer
2015-10-01
In the current research, the imperialist competitive algorithm is substantially enhanced and a new optimization method, dubbed the Enriched Imperialist Competitive Algorithm (EICA), is introduced to deal with highly non-linear optimization problems. To examine its functionality and efficacy closely, the proposed metaheuristic optimization approach is employed for the parameter identification of two different types of hysteretic Bouc-Wen models simulating the non-linear behavior of MR dampers. Two types of experimental data are used for the optimization problems in order to examine the robustness of the proposed EICA. The obtained results demonstrate the high adaptability of EICA to such non-linear and hysteretic problems.
Three dimensional radiative flow of magnetite-nanofluid with homogeneous-heterogeneous reactions
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Rashid, Madiha; Alsaedi, Ahmed
2018-03-01
The present communication deals with the effects of homogeneous-heterogeneous reactions in the flow of a nanofluid by a non-linear stretching sheet. A water based nanofluid containing magnetite nanoparticles is considered. Non-linear radiation and non-uniform heat sink/source effects are examined. The non-linear differential systems are computed by the optimal homotopy analysis method (OHAM). Convergent solutions of the nonlinear systems are established, and optimal values of the auxiliary variables are obtained. The impact of several non-dimensional parameters on the velocity components, temperature and concentration fields is examined. Graphs are plotted for analysis of the surface drag force and heat transfer rate.
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose-volume constraints, which is one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linearly constrained quadratic optimization model without considering any dose-volume constraints, and then the dose constraints for the voxels violating the dose-volume constraints are gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. For choosing the proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The new geometric distance sorting technique can mostly reduce the unexpected increase of the objective function value caused inevitably by the constraint adding, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation for the proposed method is also given, and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases, including a head-neck, a prostate, a lung and an oropharyngeal case, and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is to some extent a more efficient technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.
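A simplified sketch of the constraint-adding loop described above (toy dimensions, a generic SLSQP solver instead of the paper's interior point method, and no geometric distance sorting):

```python
# Iterative constraint adding: start from an unconstrained non-negative
# quadratic fit, then add hard per-voxel dose constraints only where limits
# are violated, and re-solve. All matrices and limits are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_vox, n_beam = 30, 10
A = np.abs(rng.standard_normal((n_vox, n_beam)))   # synthetic dose-influence matrix
d = np.ones(n_vox)                                 # prescribed dose per voxel
limit = 1.15                                       # per-voxel upper dose limit

obj = lambda w: 0.5 * np.sum((A @ w - d) ** 2)     # quadratic fidelity to prescription
cons, added = [], set()
w = np.zeros(n_beam)
for _ in range(5):
    res = minimize(obj, w, bounds=[(0, None)] * n_beam,
                   constraints=cons, method="SLSQP")
    w = res.x
    new = [int(v) for v in np.nonzero(A @ w > limit + 1e-6)[0] if v not in added]
    if not new:
        break                                      # all added limits satisfied
    for v in new:                                  # constrain only violating voxels
        added.add(v)
        cons.append({"type": "ineq", "fun": lambda w, v=v: limit - A[v] @ w})
print("max voxel dose:", float((A @ w).max()))
```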
Hybrid General Pattern Search and Simulated Annealing for Industrial Production Planning Problems
NASA Astrophysics Data System (ADS)
Vasant, P.; Barsoum, N.
2010-06-01
In this paper, the hybridization of the GPS (General Pattern Search) method and SA (Simulated Annealing) is incorporated into the optimization process in order to seek the global optimal solution for the fitness function and decision variables, as well as minimal computational CPU time. The real strength of the SA approach is tested on this case-study problem of industrial production planning. This is due to the great advantage of SA in easily escaping from local minima by accepting uphill moves through a probabilistic procedure in the final stages of the optimization process. Vasant [1] in his Ph.D. thesis provided 16 different heuristic and meta-heuristic techniques for solving industrial production problems with non-linear cubic objective functions, eight decision variables and 29 constraints. In this paper, fuzzy technological problems have been solved using the hybrid technique of general pattern search and simulated annealing. The simulated and computational results are compared to various other evolutionary techniques.
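The probabilistic uphill acceptance that lets SA escape local minima is the core of the hybrid; a generic sketch follows (the objective and parameters are placeholders, not the paper's production-planning model):

```python
# Generic simulated annealing: always accept downhill moves, accept uphill
# moves with probability exp(-delta/T), and cool the temperature geometrically.
import math, random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    x, fx = x0[:], f(x0)
    best, fbest = x[:], fx
    t = t0
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        t *= cooling                      # gradually reduce uphill acceptance
    return best, fbest

f = lambda x: (x[0] - 1) ** 2 + 5 * math.sin(3 * x[0]) ** 2  # multimodal test function
print(simulated_annealing(f, [4.0]))
```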
Optimization techniques using MODFLOW-GWM
Grava, Anna; Feinstein, Daniel T.; Barlow, Paul M.; Bonomi, Tullia; Buarne, Fabiola; Dunning, Charles; Hunt, Randall J.
2015-01-01
An important application of optimization codes such as MODFLOW-GWM is to maximize water supply from unconfined aquifers subject to constraints involving surface-water depletion and drawdown. In optimizing pumping for a fish hatchery in a bedrock aquifer system overlain by glacial deposits in eastern Wisconsin, various features of the GWM-2000 code were used to overcome difficulties associated with: 1) Non-linear response matrices caused by unconfined conditions and head-dependent boundaries; 2) Efficient selection of candidate well and drawdown constraint locations; and 3) Optimizing against water-level constraints inside pumping wells. Features of GWM-2000 were harnessed to test the effects of systematically varying the decision variables and constraints on the optimized solution for managing withdrawals. An important lesson of the procedure, similar to lessons learned in model calibration, is that the optimized outcome is non-unique, and depends on a range of choices open to the user. The modeler must balance the complexity of the numerical flow model used to represent the groundwater-flow system against the range of options (decision variables, objective functions, constraints) available for optimizing the model.
Generation of multifocal irradiance patterns by using complex Fresnel holograms.
Mendoza-Yero, Omel; Carbonell-Leal, Miguel; Mínguez-Vega, Gladys; Lancis, Jesús
2018-03-01
We experimentally demonstrate Fresnel holograms able to produce multifocal irradiance patterns with micrometric spatial resolution. These holograms are obtained from the coherent sum of multiple Fresnel lenses. The encoding technique used guarantees full control over the reconstructed irradiance patterns due to an optimal codification of the amplitude and phase information of the resulting complex field. From a practical point of view, a phase-only spatial light modulator is used in a couple of experiments aimed at obtaining two- and three-dimensional distributions of focal points to excite both linear and non-linear optical phenomena.
Hemanth, M; Deoli, Shilpi; Raghuveer, H P; Rani, M S; Hegde, Chatura; Vedavathi, B
2015-09-01
Simulation of the periodontal ligament (PDL) using non-linear finite element method (FEM) analysis gives better insight into the biology of tooth movement. The stresses in the PDL were evaluated for intrusion and lingual root torque using non-linear properties. A three-dimensional (3D) FEM model of the maxillary incisors was generated using Solidworks modeling software. Stresses in the PDL were evaluated for intrusive and lingual root torque movements by 3D FEM using ANSYS software, and were compared between linear and non-linear analyses. For intrusive and lingual root torque movements with linear properties, the distribution of stress over the PDL was within the range of optimal stress values proposed by Lee, but exceeded the force system given by Proffit as the optimum forces for orthodontic tooth movement. When the same force load was applied in the non-linear analysis, stresses were higher than in the linear analysis and were beyond the optimal stress range proposed by Lee for both intrusive and lingual root torque. To obtain the same stress as in the linear analysis, iterations were done using non-linear properties and the force level was reduced. This shows that the force level required for non-linear analysis is lower than that for linear analysis.
Mission Operations Planning with Preferences: An Empirical Study
NASA Technical Reports Server (NTRS)
Bresina, John L.; Khatib, Lina; McGann, Conor
2006-01-01
This paper presents an empirical study of some non-exhaustive approaches to optimizing preferences within the context of constraint-based, mixed-initiative planning for mission operations. This work is motivated by the experience of deploying and operating the MAPGEN (Mixed-initiative Activity Plan GENerator) system for the Mars Exploration Rover Mission. Responsiveness to the user is one of the important requirements for MAPGEN; hence, the additional computation time needed to optimize preferences must be kept within reasonable bounds. This was the primary motivation for studying non-exhaustive optimization approaches. The specific goals of the empirical study are to assess the impact on solution quality of two greedy heuristics used in MAPGEN and to assess the improvement gained by applying a linear programming optimization technique to the final solution.
NASA Astrophysics Data System (ADS)
Unger, Johannes; Hametner, Christoph; Jakubek, Stefan; Quasthoff, Marcus
2014-12-01
An accurate state of charge (SoC) estimation of a traction battery in hybrid electric non-road vehicles, which possess higher dynamics and power densities than on-road vehicles, requires a precise battery cell terminal voltage model. This paper presents a novel methodology for non-linear system identification of battery cells to obtain precise battery models. The methodology comprises the architecture of local model networks (LMN) and optimal model-based design of experiments (DoE). Three main novelties are proposed: 1) optimal model-based DoE, which aims to excite the battery cells highly dynamically at load ranges frequently used in operation; 2) the integration of corresponding inputs in the LMN to account for the non-linearities of SoC, relaxation and hysteresis as well as temperature effects; 3) enhancements to the local linear model tree (LOLIMOT) construction algorithm to achieve a physically appropriate interpretation of the LMN. The framework is applicable to different battery cell chemistries and different temperatures, and is real-time capable, which is shown on an industrial PC. The accuracy of the obtained non-linear battery model is demonstrated on cells with different chemistries and temperatures. The results show significant improvement due to optimal experiment design and the integration of the battery non-linearities within the LMN structure.
Solution Methods for 3D Tomographic Inversion Using A Highly Non-Linear Ray Tracer
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Ballard, S.; Young, C. J.; Chang, M.
2008-12-01
To develop 3D velocity models to improve nuclear explosion monitoring capability, we have developed a 3D tomographic modeling system that traces rays using an implementation of the Um and Thurber ray pseudo-bending approach, with full enforcement of Snell's Law in 3D at the major discontinuities. Due to the highly non-linear nature of the ray tracer, however, we are forced to substantially damp the inversion in order to converge on a reasonable model. Unfortunately the amount of damping is not known a priori and can significantly extend the number of calls of the computationally expensive ray-tracer and the least squares matrix solver. If the damping term is too small the solution step-size produces either an unrealistic model velocity change or places the solution in or near a local minimum from which extrication is nearly impossible. If the damping term is too large, convergence can be very slow or premature convergence can occur. Standard approaches involve running inversions with a suite of damping parameters to find the best model. A better solution methodology is to take advantage of existing non-linear solution techniques such as Levenberg-Marquardt (LM) or quasi-Newton iterative solvers. In particular, the LM algorithm was specifically designed to find the minimum of a multi-variate function that is expressed as the sum of squares of non-linear real-valued functions. It has become a standard technique for solving non-linear least squares problems, and is widely adopted in a broad spectrum of disciplines, including the geosciences. At each iteration, the LM approach dynamically varies the level of damping to optimize convergence. When the current estimate of the solution is far from the ultimate solution LM behaves as a steepest descent method, but transitions to Gauss-Newton behavior, with near quadratic convergence, as the estimate approaches the final solution. We show typical linear solution techniques and how they can lead to local minima if the damping is set too low. We also describe the LM technique and show how it automatically determines the appropriate damping factor as it iteratively converges on the best solution. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
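A compact sketch of the LM damping adaptation described above (a generic numpy implementation on a toy exponential fit, not the tomography code): the damping factor is raised when a trial step fails and lowered when it succeeds, interpolating between steepest descent and Gauss-Newton steps.

```python
# Levenberg-Marquardt with adaptive damping: solve (J'J + lam*diag(J'J)) s = -J'r,
# accept the step only if it reduces the sum of squared residuals.
import numpy as np

def levenberg_marquardt(residual, jac, x0, lam=1e-2, iters=50):
    x = np.asarray(x0, float)
    for _ in range(iters):
        r, J = residual(x), jac(x)
        g = J.T @ r
        H = J.T @ J
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -g)
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.3        # success: move toward Gauss-Newton
        else:
            lam *= 2.0                          # failure: move toward steepest descent
    return x

# Toy non-linear fit: y = exp(a*t) + b with true (a, b) = (0.7, 0.3)
t = np.linspace(0, 1, 20)
y = np.exp(0.7 * t) + 0.3
res = lambda p: np.exp(p[0] * t) + p[1] - y
jac = lambda p: np.column_stack([t * np.exp(p[0] * t), np.ones_like(t)])
print(levenberg_marquardt(res, jac, [0.0, 0.0]))
```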
Conceptual design optimization study
NASA Technical Reports Server (NTRS)
Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.
1990-01-01
The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.
NASA Astrophysics Data System (ADS)
Luo, Ya-Zhong; Zhang, Jin; Li, Hai-yang; Tang, Guo-Jin
2010-08-01
In this paper, a new optimization approach combining primer vector theory and evolutionary algorithms for fuel-optimal non-linear impulsive rendezvous is proposed. The approach is designed to seek the optimal number of impulses as well as the optimal impulse vectors. In this approach, the addition of a midcourse impulse is determined by an interactive method, i.e. observing the primer-magnitude time history. An improved version of simulated annealing is employed to optimize the rendezvous trajectory with a fixed number of impulses. This interactive approach is evaluated by three test cases: coplanar circle-to-circle rendezvous, same-circle rendezvous and non-coplanar rendezvous. The results show that the interactive approach is effective and efficient in fuel-optimal non-linear rendezvous design. It can guarantee solutions which satisfy Lawden's necessary optimality conditions.
Optimization of Dynamic Aperture of PEP-X Baseline Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Min-Huey; /SLAC; Cai, Yunhai
2010-08-23
SLAC is developing a long-range plan to transfer the evolving scientific programs at SSRL from the SPEAR3 light source to a much higher performing photon source. A storage ring housed in the 2.2-km PEP-II tunnel is one of the possibilities. The design goal of the PEP-X storage ring is to approach an optimal light source design with a horizontal emittance of less than 100 pm and a vertical emittance of 8 pm to reach the diffraction limit of 1-{angstrom} x-rays. The low emittance design requires a lattice with strong focusing, leading to high natural chromaticity and therefore to strong sextupoles. The latter cause a reduction of dynamic aperture. The dynamic aperture requirement for horizontal injection at the injection point is about 10 mm. In order to achieve the desired dynamic aperture, the transverse non-linearity of PEP-X is studied. The program LEGO is used to simulate the particle motion, and the technique of frequency map analysis is used to analyze the nonlinear behavior. The effect of the non-linearity is minimized as far as possible within the given constraints of limited space. The details and results of the dynamic aperture optimization are discussed in this paper.
A non-linear data mining parameter selection algorithm for continuous variables
Razavi, Marianne; Brady, Sean
2017-01-01
In this article, we propose a new data mining algorithm by which one can both capture the non-linearity in data and find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. The algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all possible subset selection methodologies with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829
Integer Linear Programming in Computational Biology
NASA Astrophysics Data System (ADS)
Althaus, Ernst; Klau, Gunnar W.; Kohlbacher, Oliver; Lenhof, Hans-Peter; Reinert, Knut
Computational molecular biology (bioinformatics) is a young research field that is rich in NP-hard optimization problems. The problem instances encountered are often huge and comprise thousands of variables. Since their introduction into the field of bioinformatics in 1997, integer linear programming (ILP) techniques have been successfully applied to many optimization problems. These approaches have added much momentum to development and progress in related areas. In particular, ILP-based approaches have become a standard optimization technique in bioinformatics. In this review, we present applications of ILP-based techniques developed by members and former members of Kurt Mehlhorn’s group. These techniques were introduced to bioinformatics in a series of papers and popularized by demonstration of their effectiveness and potential.
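A toy example of the ILP modeling style (assuming the third-party PuLP package is available; the selection problem is illustrative, not one of the reviewed applications): binary decision variables, a linear objective, and linear side constraints such as pairwise conflicts.

```python
# Toy integer linear program: pick at most two items maximizing total score,
# with a pairwise conflict between items "a" and "b". Requires PuLP (pip install pulp).
import pulp

scores = {"a": 4, "b": 5, "c": 3}
prob = pulp.LpProblem("toy_selection", pulp.LpMaximize)
x = {k: pulp.LpVariable(f"x_{k}", cat="Binary") for k in scores}
prob += pulp.lpSum(scores[k] * x[k] for k in scores)   # linear objective
prob += pulp.lpSum(x.values()) <= 2                    # cardinality constraint
prob += x["a"] + x["b"] <= 1                           # conflict constraint
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({k: int(v.value()) for k, v in x.items()})       # expect b and c selected
```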
NASA Astrophysics Data System (ADS)
Buyuk, Ersin; Karaman, Abdullah
2017-04-01
We estimated transmissivity and storage coefficient values from the single well water-level measurements positioned ahead of the mining face by using particle swarm optimization (PSO) technique. The water-level response to the advancing mining face contains an semi-analytical function that is not suitable for conventional inversion shemes because the partial derivative is difficult to calculate . Morever, the logaritmic behaviour of the model create difficulty for obtaining an initial model that may lead to a stable convergence. The PSO appears to obtain a reliable solution that produce a reasonable fit between water-level data and model function response. Optimization methods have been used to find optimum conditions consisting either minimum or maximum of a given objective function with regard to some criteria. Unlike PSO, traditional non-linear optimization methods have been used for many hydrogeologic and geophysical engineering problems. These methods indicate some difficulties such as dependencies to initial model, evolution of the partial derivatives that is required while linearizing the model and trapping at local optimum. Recently, Particle swarm optimization (PSO) became the focus of modern global optimization method that is inspired from the social behaviour of birds of swarms, and appears to be a reliable and powerful algorithms for complex engineering applications. PSO that is not dependent on an initial model, and non-derivative stochastic process appears to be capable of searching all possible solutions in the model space either around local or global optimum points.
Insights from Classifying Visual Concepts with Multiple Kernel Learning
Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki
2012-01-01
Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
Haque, Shafiul; Khan, Saif; Wahid, Mohd; Dar, Sajad A; Soni, Nipunjot; Mandal, Raju K; Singh, Vineeta; Tiwari, Dileep; Lohani, Mohtashim; Areeshi, Mohammed Y; Govender, Thavendran; Kruger, Hendrik G; Jawed, Arshad
2016-01-01
For a commercially viable recombinant intracellular protein production process, efficient cell lysis and protein release is a major bottleneck. The recovery of the recombinant protein cholesterol oxidase (COD) was studied in a continuous bead milling process. A full factorial response surface methodology (RSM) design was employed and compared to artificial neural networks coupled with a genetic algorithm (ANN-GA). Significant process variables, cell slurry feed rate (A), bead load (B), cell load (C) and run time (D), were investigated and optimized for maximizing COD recovery. RSM predicted an optimum feed rate of 310.73 mL/h, bead loading of 79.9% (v/v), cell loading OD600 nm of 74, and run time of 29.9 min with a recovery of ~3.2 g/L. ANN-GA predicted a maximum COD recovery of ~3.5 g/L at an optimum feed rate (mL/h) of 258.08, bead loading (%, v/v) of 80, cell loading (OD600 nm) of 73.99, and run time of 32 min. An overall 3.7-fold increase in productivity is obtained when compared to a batch process. Optimization and comparison of statistical vs. artificial intelligence techniques in a continuous bead milling process has been attempted for the very first time in our study. We were able to successfully represent the complex non-linear multivariable dependence of enzyme recovery on bead milling parameters. The quadratic second-order response functions are not flexible enough to represent such complex non-linear dependence. ANNs, being summation functions of multiple layers, are capable of representing the complex non-linear dependence of the variables, in this case enzyme recovery as a function of bead milling parameters. Since GA can even optimize discontinuous functions, the present study provides a perfect example of using machine learning (ANN) in combination with evolutionary optimization (GA) for representing undefined biological functions, which is the case for common industrial processes involving biological moieties. PMID:27920762
NASA Astrophysics Data System (ADS)
Zhu, Z. W.; Zhang, W. D.; Xu, J.
2014-03-01
The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential items were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The condition of stochastic Hopf bifurcation and the noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases when the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were verified by experiments. The results are helpful for the engineering applications of GMF.
Lin, Weilu; Wang, Zejian; Huang, Mingzhi; Zhuang, Yingping; Zhang, Siliang
2018-06-01
Isotopically non-stationary 13C labelling experiments, as an emerging experimental technique, can estimate the intracellular fluxes of a cell culture during an isotopic transient period. However, to the best of our knowledge, the issue of the structural identifiability analysis of non-stationary isotope experiments is not well addressed in the literature. In this work, the local structural identifiability analysis for non-stationary cumomer balance equations is conducted based on the Taylor series approach. The numerical rank of the Jacobian matrices of the finite extended time derivatives of the measured fractions with respect to the free parameters is taken as the criterion. It turns out that only a single time point is necessary to achieve the structural identifiability analysis of the cascaded linear dynamic system of non-stationary isotope experiments. The equivalence between the local structural identifiability of the cascaded linear dynamic systems and the local optimum condition of the nonlinear least squares problem is elucidated in this work. Optimal measurement sets can then be determined for the metabolic network. Two simulated metabolic networks are adopted to demonstrate the utility of the proposed method. Copyright © 2018 Elsevier Inc. All rights reserved.
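The rank criterion can be sketched generically (the two-parameter output model below is a hypothetical stand-in, not a cumomer system): build the Jacobian of the measured outputs, here sampled at a time point and a close neighbor as a proxy for extended time derivatives, with respect to the free parameters, and check its numerical rank.

```python
# Local structural identifiability probe: full numerical rank of the output
# Jacobian w.r.t. the free parameters indicates local identifiability.
import numpy as np

def outputs(p, ts):
    # Hypothetical measured fractions for a two-parameter cascaded linear system.
    return np.concatenate([np.exp(-p[0] * ts), p[1] * (1 - np.exp(-p[0] * ts))])

def numerical_jacobian(f, p, eps=1e-6):
    f0 = f(p)
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        dp = p.copy(); dp[j] += eps
        J[:, j] = (f(dp) - f0) / eps     # forward-difference column
    return J

p = np.array([0.8, 2.0])
ts = np.array([0.5, 0.6])                # a single time point plus a near neighbor
J = numerical_jacobian(lambda q: outputs(q, ts), p)
print("rank:", np.linalg.matrix_rank(J), "of", p.size)  # full rank -> locally identifiable
```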
Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.
Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C
2014-12-01
D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. The optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
Chai, Hwa Kian; Liu, Kit Fook; Behnia, Arash; Yoshikazu, Kobayashi; Shiotani, Tomoki
2016-04-16
Concrete is the most ubiquitous construction material. Apart from the fresh and early-age properties of the material, its condition during the structure's life span affects the overall structural performance. Therefore, the development of techniques such as non-destructive testing, which enable the investigation of the material condition, is in great demand. The tomography technique has become an increasingly popular non-destructive evaluation technique for civil engineers to assess the condition of concrete structures. In the present study, this technique is investigated by developing reconstruction procedures utilizing different parameters of elastic waves, namely the travel time, wave amplitude, wave frequency and Q-value. In the development of the algorithms, a ray-tracing feature was adopted to take into account the actual non-linear propagation of elastic waves in concrete containing defects. Numerical simulation accompanied by experimental verification of wave motion was conducted to obtain wave propagation profiles in concrete containing honeycomb as a defect and in assessing the tendon duct filling of pre-stressed concrete (PC) elements. The detection of defects by the developed tomography reconstruction procedures was evaluated and discussed.
Linear regression techniques for use in the EC tracer method of secondary organic aerosol estimation
NASA Astrophysics Data System (ADS)
Saylor, Rick D.; Edgerton, Eric S.; Hartsell, Benjamin E.
A variety of linear regression techniques and simple slope estimators are evaluated for use in the elemental carbon (EC) tracer method of secondary organic carbon (OC) estimation. Linear regression techniques based on ordinary least squares are not suitable for situations where measurement uncertainties exist in both regressed variables. In the past, regression based on the method of Deming [1943. Statistical Adjustment of Data. Wiley, London] has been the preferred choice for EC tracer method parameter estimation. In agreement with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], we find that in the limited case where primary non-combustion OC (OC non-comb) is assumed to be zero, the ratio of averages (ROA) approach provides a stable and reliable estimate of the primary OC-EC ratio, (OC/EC) pri. In contrast with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], however, we find that the optimal use of Deming regression (and the more general York et al. [2004. Unified equations for the slope, intercept, and standard errors of the best straight line. American Journal of Physics 72, 367-375] regression) provides excellent results as well. For the more typical case where OC non-comb is allowed to obtain a non-zero value, we find that regression based on the method of York is the preferred choice for EC tracer method parameter estimation. In the York regression technique, detailed information on uncertainties in the measurement of OC and EC is used to improve the linear best fit to the given data. If only limited information is available on the relative uncertainties of OC and EC, then Deming regression should be used. On the other hand, use of ROA in the estimation of secondary OC, and thus the assumption of a zero OC non-comb value, generally leads to an overestimation of the contribution of secondary OC to total measured OC.
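For reference, a minimal Deming-regression sketch on synthetic OC/EC-like data follows (standard closed form, not the authors' code); delta is the ratio of the OC measurement-error variance to the EC measurement-error variance, with delta = 1 giving orthogonal regression.

```python
# Deming regression: errors in both variables. Returns slope and intercept,
# interpretable here as estimates of (OC/EC)_pri and OC_non-comb.
import numpy as np

def deming(x, y, delta=1.0):
    xm, ym = x.mean(), y.mean()
    sxx = np.mean((x - xm) ** 2)
    syy = np.mean((y - ym) ** 2)
    sxy = np.mean((x - xm) * (y - ym))
    slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
             + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, ym - slope * xm

rng = np.random.default_rng(3)
ec_true = rng.uniform(0.5, 3.0, 200)
oc = 2.2 * ec_true + 0.4 + 0.15 * rng.standard_normal(200)   # true slope 2.2, intercept 0.4
ec = ec_true + 0.15 * rng.standard_normal(200)               # equal error variances: delta = 1
print(deming(ec, oc))
```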
Stiffness optimization of non-linear elastic structures
Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel
2017-11-13
Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance, i.e. secant stiffness, designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes, and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.
Soft tissue strain measurement using an optical method
NASA Astrophysics Data System (ADS)
Toh, Siew Lok; Tay, Cho Jui; Goh, Cho Hong James
2008-11-01
Digital image correlation (DIC) is a non-contact optical technique that allows the full-field estimation of strains on a surface under an applied deformation. In this project, an optimized DIC technique is applied which achieves efficiency and accuracy in the measurement of two-dimensional deformation fields in soft tissue. This technique relies on matching the random patterns recorded in images to directly obtain surface displacements and to get displacement gradients from which the strain field can be determined. Digital image correlation is a well developed technique that has numerous and varied engineering applications, including applications in soft and hard tissue biomechanics. Chicken drumstick ligaments were harvested and used during the experiments. The surface of the ligament was speckled with black paint to allow correlation to be done. Results show that the stress-strain curve exhibits a bi-linear behavior, i.e. a "toe region" and a "linear elastic region". The Young's modulus obtained for the toe region is about 92 MPa, and the modulus for the linear elastic region is about 230 MPa. The results are within the values for mammalian anterior cruciate ligaments of 150-300 MPa.
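The core matching step of DIC can be sketched as follows (integer-pixel zero-normalized cross-correlation on a synthetic speckle pattern; real DIC adds sub-pixel interpolation and subset shape functions):

```python
# Locate a reference speckle subset in the deformed image by maximizing
# zero-normalized cross-correlation over integer-pixel shifts.
import numpy as np

def zncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(4)
ref = rng.random((64, 64))                          # synthetic speckle pattern
deformed = np.roll(ref, shift=(3, 5), axis=(0, 1))  # rigid shift as a stand-in deformation

sub = ref[20:36, 20:36]                             # 16x16 subset around a point of interest
best = max(((dy, dx) for dy in range(-8, 9) for dx in range(-8, 9)),
           key=lambda s: zncc(sub, deformed[20 + s[0]:36 + s[0], 20 + s[1]:36 + s[1]]))
print("estimated displacement (dy, dx):", best)     # expect (3, 5)
```

Repeating this match over a grid of subsets yields the displacement field, whose gradients give the strain field referred to in the abstract.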
NASA Astrophysics Data System (ADS)
Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François
2018-04-01
Seismic traveltime tomography is an optimization problem that requires large computational efforts. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on MCMC are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, classical implementations of PSO can get trapped in local minima at later iterations as the particles' inertia fades. We propose a Competitive PSO (CPSO) to help particles escape from local minima with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and by keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied to a real 3D data set in the context of induced seismicity.
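For orientation, a standard global-best PSO sketch is given below; the competitive modification proposed in the paper alters the velocity update to preserve swarm diversity and is omitted here. The objective is a toy multimodal function, not a traveltime misfit.

```python
# Standard global-best PSO: velocity = inertia + pull toward personal best
# + pull toward global best; positions are clipped to the search box.
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5, hi=5):
    rng = np.random.default_rng(5)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
print(pso(rastrigin, dim=3))
```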
Li, Zukui; Ding, Ran; Floudas, Christodoulos A.
2011-01-01
Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
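As a concrete instance of the sets discussed, the interval (box) uncertainty case admits the classical worst-case reformulation, sketched here for a single uncertain constraint (standard result):

```latex
% Interval (box) uncertainty: a_j = \bar{a}_j + \hat{a}_j \xi_j, \quad |\xi_j| \le 1.
% The robust counterpart of the uncertain constraint \sum_j a_j x_j \le b is
\sum_j \bar{a}_j x_j + \sum_j \hat{a}_j |x_j| \le b,
% which is linearized with auxiliary variables u_j \ge |x_j|:
\sum_j \bar{a}_j x_j + \sum_j \hat{a}_j u_j \le b, \qquad -u_j \le x_j \le u_j .
```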
Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem
NASA Astrophysics Data System (ADS)
Tangpatiphan, Kritsana; Yokoyama, Akihiko
This paper presents an Improved Evolutionary Programming (IEP) for solving the Optimal Power Flow (OPF) problem, which is considered a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is regarded as the objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm that makes use of various crossover techniques normally applied in Real Coded Genetic Algorithms (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost functions, namely the quadratic cost curve, the piecewise quadratic cost curve, and the quadratic cost curve superimposed by a sine component. These three cost curves represent the generator fuel cost functions of a simplified model and of more accurate models of a combined-cycle generating unit and of a thermal unit with the valve-point loading effect, respectively. The OPF solutions by the proposed method and Pure Evolutionary Programming (PEP) are observed and compared. The simulation results indicate that IEP requires less computing time than PEP, with better solutions in some cases. Moreover, the influences of important IEP parameters on the OPF solution are described in detail.
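The abstract does not spell out which crossover operators are used; as one illustration, blend crossover (BLX-alpha), an operator commonly used in RCGA, can be sketched as follows. The operator choice and the parameter alpha = 0.5 are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def blx_alpha(p1, p2, alpha=0.5, rng=None):
    """Blend crossover (BLX-alpha): sample an offspring uniformly from an
    interval extended by `alpha` beyond the parents in each dimension."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)
    span = hi - lo
    return rng.uniform(lo - alpha * span, hi + alpha * span)

# e.g. recombining two candidate generator dispatch vectors (MW)
child = blx_alpha(np.array([120.0, 45.0, 80.0]), np.array([100.0, 60.0, 90.0]))
```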
Astrand, Elaine; Enel, Pierre; Ibos, Guilhem; Dominey, Peter Ford; Baraduc, Pierre; Ben Hamed, Suliann
2014-01-01
Decoding neuronal information is important in neuroscience, both as a basic means to understand how neuronal activity is related to cerebral function and as a processing stage in driving neuroprosthetic effectors. Here, we compare the readout performance of six commonly used classifiers at decoding two different variables encoded by the spiking activity of the non-human primate frontal eye fields (FEF): the spatial position of a visual cue, and the instructed orientation of the animal's attention. While the first variable is exogenously driven by the environment, the second corresponds to the interpretation of the instruction conveyed by the cue; it is endogenously driven and corresponds to the output of internal cognitive operations performed on the visual attributes of the cue. These two variables were decoded using either a regularized optimal linear estimator in its explicit formulation, an optimal linear artificial neural network estimator, a non-linear artificial neural network estimator, a non-linear naïve Bayesian estimator, a non-linear Reservoir recurrent network classifier, or a non-linear Support Vector Machine classifier. Our results suggest that endogenous information such as the orientation of attention can be decoded from the FEF with the same accuracy as exogenous visual information. The classifiers did not all behave equally in the face of population size and heterogeneity, the available training and testing trials, the subject's behavior, and the temporal structure of the variable of interest. In most situations, the regularized optimal linear estimator and the non-linear Support Vector Machine classifier outperformed the other tested decoders. PMID:24466019
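A minimal sketch of this kind of decoder comparison on simulated spike counts, using scikit-learn stand-ins (ridge regression for a regularized linear estimator, an RBF SVM for the non-linear SVM). The simulated data, tuning model, and decoder settings are illustrative assumptions, not the paper's recordings or classifiers.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_cells = 200, 60
labels = rng.integers(0, 2, n_trials)              # e.g. cue left vs right
tuning = rng.normal(0, 1, n_cells)                 # per-cell class preference
rates = 5 + tuning * (2 * labels[:, None] - 1)     # class-dependent mean rates
counts = rng.poisson(np.clip(rates, 0.1, None))    # spike counts per trial

for name, clf in [("ridge (linear)", RidgeClassifier(alpha=1.0)),
                  ("SVM (non-linear)", SVC(kernel="rbf", C=1.0))]:
    acc = cross_val_score(clf, counts, labels, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```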
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Z. W., E-mail: zhuzhiwen@tju.edu.cn; Tianjin Key Laboratory of Non-linear Dynamics and Chaos Control, 300072, Tianjin; Zhang, W. D., E-mail: zhangwenditju@126.com
2014-03-15
The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential terms were introduced to model the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The conditions for stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases as the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were verified by experiments. The results are helpful in the engineering applications of GMF.
NASA Astrophysics Data System (ADS)
Li, Mo; Fu, Qiang; Singh, Vijay P.; Ma, Mingwei; Liu, Xiao
2017-12-01
Water scarcity causes conflicts among natural resources, society, and the economy and reinforces the need for optimal allocation of irrigation water resources in a sustainable way. Uncertainties caused by natural conditions and human activities make optimal allocation more complex. An intuitionistic fuzzy multi-objective non-linear programming (IFMONLP) model for irrigation water allocation under the combination of dry and wet conditions is developed to help decision makers mitigate water scarcity. The model is capable of quantitatively solving multiple problems, including crop yield increase, blue water saving, and water supply cost reduction, to obtain a balanced water allocation scheme using a multi-objective non-linear programming technique. Moreover, it can deal with uncertainty as well as hesitation through the introduction of intuitionistic fuzzy numbers. Consideration of the combination of dry and wet conditions for water availability and precipitation makes it possible to gain insight into various irrigation water allocations, and joint probabilities based on copula functions provide decision makers with an average standard for irrigation. A case study on optimally allocating both surface water and groundwater to different growth periods of rice in different subareas of the Heping irrigation area, Qing'an County, northeast China, shows the potential and applicability of the developed model. Results show that the crop yield increase target, especially in the tillering and elongation stages, is a prevailing concern when more water is available, and trading schemes can reduce water supply cost and save water with an increased grain output. Results also reveal that the water allocation schemes are sensitive to the variation of water availability and precipitation with uncertain characteristics. The IFMONLP model is applicable to most irrigation areas with limited water supplies to determine irrigation water strategies under a fuzzy environment.
Beevi, K Sabeena; Nair, Madhu S; Bindu, G R
2016-08-01
The exact measurement of mitotic nuclei is a crucial parameter in breast cancer grading and prognosis. This can be achieved by improving mitosis detection accuracy through careful design of the segmentation and classification techniques. In this paper, segmentation of nuclei from breast histopathology images is carried out by a Localized Active Contour Model (LACM) utilizing bio-inspired optimization techniques in the detection stage, in order to handle the diffused intensities present along object boundaries. Further, the application of a new optimal machine learning algorithm capable of classifying strongly non-linear data, Random Kitchen Sink (RKS), shows improved classification performance. The proposed method has been tested on the Mitosis detection in breast cancer histological images (MITOS) dataset provided for the MITOS-ATYPIA CONTEST 2014. The proposed framework achieved 95% recall, 98% precision and a 96% F-score.
NASA Astrophysics Data System (ADS)
Haris, A.; Nafian, M.; Riyanto, A.
2017-07-01
The Danish North Sea fields consist of several formations (Ekofisk, Tor, and Cromer Knoll) ranging in age from the Paleocene to the Miocene. In this study, the integration of seismic and well log data sets is carried out to determine the chalk sand distribution in the Danish North Sea field. The integration is performed using seismic inversion analysis and seismic multi-attribute analysis. The seismic inversion algorithm used to derive acoustic impedance (AI) is a model-based technique. The derived AI is then used as an external attribute for the input of the multi-attribute analysis. Moreover, the multi-attribute analysis is used to generate linear and non-linear transformations among well log properties. In the linear case, the selected transformation is obtained by weighted step-wise linear regression (SWR), while the non-linear model is obtained using probabilistic neural networks (PNN). The porosity estimated by the PNN is better suited to the well log data than the SWR results. This can be understood since the PNN performs non-linear regression, so that the relationship between the attribute data and the predicted log data can be optimized. The distribution of chalk sand has been successfully identified and characterized by porosity values ranging from 23% up to 30%.
Generalized massive optimal data compression
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin
2018-05-01
In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
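In compact form, the compression maps the data d to the score evaluated at a fiducial parameter point. The second expression below is the standard Gaussian log-likelihood gradient with parameter-dependent mean and covariance (commas denote parameter derivatives); it is consistent with the worked example described above, though the paper's exact conventions may differ:

```latex
\mathbf{t} \;=\; \nabla_{\boldsymbol\theta}\,\ln\mathcal{L}(\mathbf{d}\,|\,\boldsymbol\theta)\,\Big|_{\boldsymbol\theta_*},
\qquad
t_\alpha \;=\; \boldsymbol{\mu}_{,\alpha}^{\mathsf T}\mathbf{C}^{-1}(\mathbf{d}-\boldsymbol{\mu})
\;+\;\tfrac{1}{2}\,(\mathbf{d}-\boldsymbol{\mu})^{\mathsf T}\mathbf{C}^{-1}\mathbf{C}_{,\alpha}\mathbf{C}^{-1}(\mathbf{d}-\boldsymbol{\mu})
\;-\;\tfrac{1}{2}\,\mathrm{tr}\!\left(\mathbf{C}^{-1}\mathbf{C}_{,\alpha}\right).
```

The trace term is data-independent and shifts only the mean of the statistic, not its information content.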
Overcoming learning barriers through knowledge management.
Dror, Itiel E; Makany, Tamas; Kemp, Jonathan
2011-02-01
The ability to learn depends highly on how knowledge is managed. Specifically, different techniques for note-taking utilize different cognitive processes and strategies. In this paper, we compared dyslexic and control participants using linear and non-linear note-taking. All our participants were professionals working in the banking and financial sector. We examined comprehension, accuracy, mental imagery & complexity, metacognition, and memory. We found that participants with dyslexia, when using a non-linear note-taking technique, outperformed the control group using linear note-taking and matched the performance of the control group using non-linear note-taking. These findings emphasize how different knowledge management techniques can remove some of the barriers that learners face. Copyright © 2010 John Wiley & Sons, Ltd.
GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING
Liu, Hongcheng; Yao, Tao; Li, Runze
2015-01-01
This paper is concerned with solving nonconvex learning problems with a folded concave penalty. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence facilitates the development of mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126
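For reference, one common parameterization of the two folded concave penalties named above, stated for t >= 0 (this follows the standard definitions in the cited papers, though notational conventions vary; SCAD is usually given through its derivative, with a = 3.7 a common default):

```latex
\text{SCAD:}\quad
p_\lambda'(t) \;=\; \lambda\left[\,\mathbb{1}\{t \le \lambda\}
+ \frac{(a\lambda - t)_+}{(a-1)\lambda}\,\mathbb{1}\{t > \lambda\}\right],
\quad a > 2;
\qquad
\text{MCP:}\quad
p_\lambda(t) \;=\;
\begin{cases}
\lambda t - \dfrac{t^2}{2\gamma}, & t \le \gamma\lambda,\\[4pt]
\dfrac{\gamma\lambda^2}{2}, & t > \gamma\lambda,
\end{cases}
\quad \gamma > 0.
```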
MIDACO on MINLP space applications
NASA Astrophysics Data System (ADS)
Schlueter, Martin; Erb, Sven O.; Gerdts, Matthias; Kemble, Stephen; Rückmann, Jan-J.
2013-04-01
A numerical study of two challenging mixed-integer non-linear programming (MINLP) space applications and their optimization with MIDACO, a recently developed general-purpose optimization software, is presented. These applications are the optimal control of the ascent of a multiple-stage space launch vehicle and the design of a space mission trajectory from Earth to Jupiter using multiple gravity assists. Additionally, an NLP aerospace application, the optimal control of an F8 aircraft manoeuvre, is discussed and solved. In order to enhance the optimization performance of MIDACO, a hybridization technique coupling MIDACO with an SQP algorithm is presented for two of these three applications. The numerical results show that the applications can be solved to their best known solutions (or even new best solutions) in a reasonable time by the considered approach. Since the concept of MINLP is still a novelty in the field of (aero)space engineering, the demonstrated capabilities are seen as very promising.
Optimal GENCO bidding strategy
NASA Astrophysics Data System (ADS)
Gao, Feng
Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, and Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple-period situation. The equilibrium condition using discrete-time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. A traditional optimization model may not be enough to represent the distributed, large-scale, and complex energy market. This research compares the performance and search paths of different artificial life techniques such as Genetic Algorithms (GA), Evolutionary Programming (EP), and Particle Swarm (PS), and looks for a proper method to emulate Generation Companies' (GENCOs) bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment. A profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research does not pay special attention to the piecewise staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming. The proposed algorithm is able to handle actual piecewise staircase energy offer curves. The proposed method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.
Active distribution network planning considering linearized system loss
NASA Astrophysics Data System (ADS)
Li, Xiao; Wang, Mingqiang; Xu, Hao
2018-02-01
In this paper, various distribution network planning techniques with DGs are reviewed, and a new distribution network planning method is proposed. It assumes that the locations of the DGs and the topology of the network are fixed. The proposed model optimizes the DG capacities and the distribution line capacities simultaneously through a cost/benefit analysis, where the benefit is quantified by the reduction of the expected interruption cost. In addition, the network loss is explicitly analyzed. For tractability, the network loss is approximated as a quadratic function of the voltage phase-angle difference and then piecewise linearized. A piecewise linearization technique with different segment lengths is proposed. To validate its effectiveness and superiority, the proposed distribution network planning model with the elaborated linearization technique is tested on the IEEE 33-bus distribution network system.
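A minimal sketch of piecewise linearization with unequal segment lengths, applied to a quadratic loss proxy. The geometric growth rule for the segment lengths and the loss coefficient are illustrative assumptions, not the paper's scheme; the point is only that shorter segments can be placed where the curvature matters most.

```python
import numpy as np

def piecewise_linearize(f, x_max, n_seg, growth=1.5):
    """Return breakpoints and per-segment slopes approximating f on [0, x_max],
    with segment lengths growing geometrically by `growth` (finer near zero)."""
    w = growth ** np.arange(n_seg)
    widths = w / w.sum() * x_max           # unequal segment lengths
    xs = np.concatenate(([0.0], np.cumsum(widths)))
    slopes = np.diff(f(xs)) / np.diff(xs)  # slope of each linear segment
    return xs, slopes

# quadratic network-loss proxy: loss ~ c * (phase-angle difference)^2
xs, slopes = piecewise_linearize(lambda d: 0.8 * d**2, x_max=0.3, n_seg=4)
```

In a planning MILP, each segment then contributes a bounded non-negative variable weighted by its slope.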
Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J
2018-05-01
To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast enhanced (DCE) MRI data, and to apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply Variable Projection (VP) to convert the fitting problem from a multi-dimensional search to a one-dimensional line search, improving computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, robustness of convergence, and computation time. The simulation demonstrated that VP and LM were both accurate, in that the medians closely matched the assumed values across typical signal-to-noise ratio (SNR) levels for both Tofts models. VP and LM showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in the ideal 100%. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent in accuracy and robustness to noise, while being more reliably (100%) convergent and computationally about 3× (TM) and 2× (ETM) faster than the LM-based method. Copyright © 2017 Elsevier Inc. All rights reserved.
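A minimal sketch of the variable projection idea for the standard (two-parameter) Tofts model: for fixed kep, the model Ct(t) = Ktrans (Cp * exp(-kep t)) is linear in Ktrans, so the fit reduces to a one-dimensional search over kep with a closed-form linear solve inside. The discretized convolution, the bounded scalar search, and the synthetic arterial input function below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def tofts_vp_fit(t, cp, ct):
    """Fit the standard Tofts model via variable projection:
    inner linear least squares for Ktrans, outer 1-D search over kep."""
    dt = t[1] - t[0]

    def fit_at(kep):
        conv = np.convolve(cp, np.exp(-kep * t))[: len(t)] * dt   # Cp (*) exp(-kep t)
        ktrans = conv @ ct / (conv @ conv + 1e-12)                # closed-form solve
        return np.sum((ct - ktrans * conv) ** 2), ktrans

    res = minimize_scalar(lambda k: fit_at(k)[0], bounds=(1e-3, 5.0), method="bounded")
    return fit_at(res.x)[1], res.x   # (Ktrans, kep)

# synthetic self-test with known ground truth (Ktrans=0.25, kep=0.8)
t = np.linspace(0, 5, 150)                       # minutes
cp = 5.0 * t * np.exp(-t / 0.7)                  # toy arterial input function
conv = np.convolve(cp, np.exp(-0.8 * t))[: len(t)] * (t[1] - t[0])
ktrans, kep = tofts_vp_fit(t, cp, 0.25 * conv)
```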
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Stayman, J; Ouadah, S
2015-06-15
Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction, and in non-circular orbit design in model-based reconstruction.
Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. Detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm in which the tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit is first parameterized as a linear combination of basis functions, and the coefficients are then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit.
Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil.
Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forcheim, Germany)
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved with a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected by a method that makes the measurement signals sensitive to wavelength and decreases the degree of ill-conditioning of the coefficient matrix of the linear system, thereby enhancing the anti-interference ability of the retrieval. Two common kinds of monomodal and bimodal ASDs, the log-normal (L-N) and Gamma distributions, are estimated. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distribution. Finally, the experimentally measured ASD over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
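A minimal sketch of the non-parametric retrieval step as a damped LSQR solve of a discretized linear system K f = g (extinction kernel times binned size distribution equals spectral measurements). The random kernel, the Gaussian-shaped true distribution, and the 2% noise level are placeholders for illustration, not the ADA kernel or the paper's data.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
n_wav, n_bins = 12, 40                                 # wavelengths x size bins
K = np.abs(rng.normal(1.0, 0.3, (n_wav, n_bins)))      # placeholder extinction kernel
f_true = np.exp(-0.5 * ((np.arange(n_bins) - 15) / 4.0) ** 2)  # monomodal ASD
g = K @ f_true
g_noisy = g * (1 + 0.02 * rng.standard_normal(n_wav))  # 2% random noise

# damp > 0 adds Tikhonov-style regularization, taming the ill-conditioning
f_est = lsqr(K, g_noisy, damp=0.05)[0]
```

The wavelength selection described above would amount to choosing the rows of K that keep its condition number small.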
Detection of epileptiform activity in EEG signals based on time-frequency and non-linear analysis
Gajic, Dragoljub; Djurovic, Zeljko; Gligorijevic, Jovan; Di Gennaro, Stefano; Savic-Gajic, Ivana
2015-01-01
We present a new technique for detection of epileptiform activity in EEG signals. After preprocessing of EEG signals we extract representative features in time, frequency and time-frequency domain as well as using non-linear analysis. The features are extracted in a few frequency sub-bands of clinical interest since these sub-bands showed much better discriminatory characteristics compared with the whole frequency band. Then we optimally reduce the dimension of feature space to two using scatter matrices. A decision about the presence of epileptiform activity in EEG signals is made by quadratic classifiers designed in the reduced two-dimensional feature space. The accuracy of the technique was tested on three sets of electroencephalographic (EEG) signals recorded at the University Hospital Bonn: surface EEG signals from healthy volunteers, intracranial EEG signals from the epilepsy patients during the seizure free interval from within the seizure focus and intracranial EEG signals of epileptic seizures also from within the seizure focus. An overall detection accuracy of 98.7% was achieved. PMID:25852534
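A minimal sketch of the final classification stage, a quadratic classifier operating in a two-dimensional reduced feature space. The Gaussian feature clouds below are random stand-ins for the scatter-matrix-reduced EEG features, and the use of scikit-learn's QDA is an illustrative choice consistent with, but not identical to, the quadratic classifiers designed in the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, (100, 2))   # non-epileptiform segments (2-D features)
X1 = rng.normal(2.0, 1.5, (100, 2))   # epileptiform segments
X = np.vstack([X0, X1])
y = np.r_[np.zeros(100), np.ones(100)]

qda = QuadraticDiscriminantAnalysis().fit(X, y)  # quadratic decision boundary
print(qda.score(X, y))                           # training accuracy
```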
Optimal moving grids for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Wathen, A. J.
1989-01-01
Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Specifically, results are reported from numerical calculations of optimal moving grids for piecewise linear finite element approximation of partial differential equation solutions in the least-squares norm.
Optimal moving grids for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Wathen, A. J.
1992-01-01
Various adaptive moving grid techniques for the numerical solution of time-dependent partial differential equations were proposed. The precise criterion for grid motion varies, but most techniques will attempt to give grids on which the solution of the partial differential equation can be well represented. Moving grids are investigated on which the solutions of the linear heat conduction and viscous Burgers' equation in one space dimension are optimally approximated. Precisely, the results of numerical calculations of optimal moving grids for piecewise linear finite element approximation of PDE solutions in the least-squares norm are reported.
NASA Astrophysics Data System (ADS)
Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan
2017-11-01
Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model, which can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm's potential for enabling non-standard scan configurations with no or minimal hardware modification to existing CT systems, which has practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out to verify the algorithm and its implementation, and to provide a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.
Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen
2018-05-01
The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small example and demonstrate their relevance for metabolic engineering with realistic models of E. coli. We develop a comprehensive mathematical framework for yield optimization in metabolic models. Our theory is particularly useful for the study and rational modification of cell factories designed under given yield and/or rate requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
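The transformation from the linear-fractional yield program to a higher-dimensional linear one can be sketched in the Charnes-Cooper style, assuming the denominator rate d^T x is positive on the feasible set (the paper's exact construction may differ in details):

```latex
\max_{x}\ \frac{c^{\mathsf T} x}{d^{\mathsf T} x}
\quad \text{s.t.}\quad A x \le b
\qquad\longrightarrow\qquad
\max_{y,\,t}\ c^{\mathsf T} y
\quad \text{s.t.}\quad A y - b\,t \le 0,\ \ d^{\mathsf T} y = 1,\ \ t \ge 0,
```

with the original flux distribution recovered as x = y / t. This is what makes yield-optimal solutions computable with standard linear programming machinery.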
Linear, non-linear and thermal properties of single crystal of LHMHCl
NASA Astrophysics Data System (ADS)
Kulshrestha, Shobha; Shrivastava, A. K.
2018-05-01
A single crystal of the amino acid L-histidine monohydrochloride was grown by the slow evaporation technique at room temperature. Crystals of high optical quality and appropriate size were obtained under optimized growth conditions, and the grown crystals were transparent. The crystals were characterized by solubility testing and UV-Visible spectroscopy, from which the optical band gap (Eg) was determined. From the optical data, the absorption coefficient (α), extinction coefficient (k), refractive index (n), and dielectric constant (ɛ) were calculated; these optical constants indicate favorable conditions for photonic devices. A second harmonic generation (NLO) test showed green light emission, confirming that the crystal has properties suitable for laser applications. The thermal stability of the grown crystal was confirmed by TG/DTA.
NASA Technical Reports Server (NTRS)
Craun, Robert W.; Acosta, Diana M.; Beard, Steven D.; Leonard, Michael W.; Hardy, Gordon H.; Weinstein, Michael; Yildiz, Yildiray
2013-01-01
This paper describes the maturation of a control allocation technique designed to assist pilots in the recovery from pilot induced oscillations (PIOs). The Control Allocation technique to recover from Pilot Induced Oscillations (CAPIO) is designed to enable next generation high efficiency aircraft designs. Energy efficient next generation aircraft require feedback control strategies that will enable lowering the actuator rate limit requirements for optimal airframe design. One of the common issues flying with actuator rate limits is PIOs caused by the phase lag between the pilot inputs and control surface response. CAPIO utilizes real-time optimization for control allocation to eliminate phase lag in the system caused by control surface rate limiting. System impacts of the control allocator were assessed through a piloted simulation evaluation of a non-linear aircraft simulation in the NASA Ames Vertical Motion Simulator. Results indicate that CAPIO helps reduce oscillatory behavior, including the severity and duration of PIOs, introduced by control surface rate limiting.
H(2)- and H(infinity)-design tools for linear time-invariant systems
NASA Technical Reports Server (NTRS)
Ly, Uy-Loi
1989-01-01
Recent advances in optimal control have brought design techniques based on optimization of H(2) and H(infinity) norm criteria closer to being attractive alternatives to single-loop design methods for linear time-invariant systems. Significant steps forward in this technology are the deeper understanding of performance and robustness issues of these design procedures and means to perform design trade-offs. However, acceptance of the technology is hindered by the lack of convenient design tools for exercising these powerful multivariable techniques while still allowing single-loop design formulations. Presented is a unique computer tool for designing arbitrary low-order linear time-invariant controllers that encompasses both performance and robustness issues via the familiar H(2) and H(infinity) norm optimization. Application to disturbance rejection design for a commercial transport is demonstrated.
Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium, and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from their performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the Sequential Unconstrained Minimization Technique, SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and its alleviation can improve the efficiency of the optimizers.
Nonlinear Transient Growth and Boundary Layer Transition
NASA Technical Reports Server (NTRS)
Paredes, Pedro; Choudhari, Meelan M.; Li, Fei
2016-01-01
Parabolized stability equations (PSE) are used in a variational approach to study the optimal, non-modal disturbance growth in a Mach 3 flat plate boundary layer and a Mach 6 circular cone boundary layer. As noted in previous works, the optimal initial disturbances correspond to steady counter-rotating streamwise vortices, which subsequently lead to the formation of streamwise-elongated structures, i.e., streaks, via a lift-up effect. The nonlinear evolution of the linearly optimal stationary perturbations is computed using the nonlinear plane-marching PSE for stationary perturbations. A fully implicit marching technique is used to facilitate the computation of nonlinear streaks with large amplitudes. To assess the effect of the finite-amplitude streaks on transition, the linear form of the plane-marching PSE is used to investigate the instability of the boundary layer flow modified by spanwise periodic streaks. The onset of bypass transition is estimated by using an N-factor criterion based on the amplification of the streak instabilities. Results show that, for both flow configurations of interest, streaks of sufficiently large amplitude can lead to significantly earlier onset of transition than in an unperturbed boundary layer without any streaks.
NASA Astrophysics Data System (ADS)
Villanueva Perez, Carlos Hernan
Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and accurate modeling of the physical response at the boundary conditions. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches and is able to accommodate a broad range of engineering design problems. The framework incorporates state-of-the-art immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and it is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and characteristics of the method. The framework is compared against density-based topology optimization approaches with regard to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converge to intuitive designs that resemble well the results from previous 2D and density-based studies.
Non-linear optical flow cytometry using a scanned, Bessel beam light-sheet.
Collier, Bradley B; Awasthi, Samir; Lieu, Deborah K; Chan, James W
2015-05-29
Modern flow cytometry instruments have become vital tools for high-throughput analysis of single cells. However, as issues with the cellular labeling techniques often used in flow cytometry have become more of a concern, the development of label-free modalities for cellular analysis is increasingly desired. Non-linear optical phenomena (NLO) are of growing interest for label-free analysis because of the ability to measure the intrinsic optical response of biomolecules found in cells. We demonstrate that a light-sheet consisting of a scanned Bessel beam is an optimal excitation geometry for efficiently generating NLO signals in a microfluidic environment. The balance of photon density and cross-sectional area provided by the light-sheet allowed significantly larger two-photon fluorescence intensities to be measured in a model polystyrene microparticle system compared to measurements made using other excitation focal geometries, including a relaxed Gaussian excitation beam often used in conventional flow cytometers.
Simultaneous analysis and design
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1984-01-01
Optimization techniques are increasingly being used for performing nonlinear structural analysis. The development of element by element (EBE) preconditioned conjugate gradient (CG) techniques is expected to extend this trend to linear analysis. Under these circumstances the structural design problem can be viewed as a nested optimization problem. There are computational benefits to treating this nested problem as a large single optimization problem. The response variables (such as displacements) and the structural parameters are all treated as design variables in a unified formulation which performs simultaneously the design and analysis. Two examples are used for demonstration. A seventy-two bar truss is optimized subject to linear stress constraints and a wing box structure is optimized subject to nonlinear collapse constraints. Both examples show substantial computational savings with the unified approach as compared to the traditional nested approach.
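A compact statement of this unified (simultaneous analysis and design) formulation, written here for the linear-analysis case with notation assumed for illustration (K the stiffness matrix, p the load vector):

```latex
\min_{x,\,u}\ f(x, u)
\qquad \text{s.t.} \qquad K(x)\,u - p = 0, \qquad g(x, u) \le 0,
```

so the response variables u are treated as optimization variables alongside the structural parameters x, and equilibrium enters as an equality constraint rather than a nested analysis solve.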
Temporal Gain Correction for X-Ray Calorimeter Spectrometers
NASA Technical Reports Server (NTRS)
Porter, F. S.; Chiao, M. P.; Eckart, M. E.; Fujimoto, R.; Ishisaki, Y.; Kelley, R. L.; Kilbourne, C. A.; Leutenegger, M. A.; McCammon, D.; Mitsuda, K.
2016-01-01
Calorimetric X-ray detectors are very sensitive to their environment. The boundary conditions can have a profound effect on the gain, including the heat sink temperature, the local radiation temperature, the bias, and the temperature of the readout electronics. Any variation in the boundary conditions can cause temporal variations in the gain of the detector and compromise both the energy scale and the resolving power of the spectrometer. Most production X-ray calorimeter spectrometers, both on the ground and in space, have some means of tracking the gain as a function of time, often using a calibration spectral line. For small gain changes, a linear stretch correction is often sufficient. However, the detectors are intrinsically non-linear, and the event analysis, i.e., shaping, optimal filters, etc., often adds additional non-linearity. Thus, for large gain variations or when the best possible precision is required, a linear stretch correction is not sufficient. Here, we discuss a new correction technique based on non-linear interpolation of the energy-scale functions. Using Astro-H SXS calibration data, we demonstrate that the correction can recover the X-ray energy to better than 1 part in 10^4 over the entire spectral band to above 12 keV, even for large-scale gain variations. This method will be used to correct any temporal drift of the on-orbit per-pixel gain using on-board calibration sources for the SXS instrument on the Astro-H observatory.
Engine Yaw Augmentation for Hybrid-Wing-Body Aircraft via Optimal Control Allocation Techniques
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Yoo, Seung Yeun
2011-01-01
Asymmetric engine thrust was implemented in a hybrid-wing-body non-linear simulation to reduce the amount of aerodynamic surface deflection required for yaw stability and control. Hybrid-wing-body aircraft are especially susceptible to yaw surface deflection due to their decreased bare airframe yaw stability resulting from the lack of a large vertical tail aft of the center of gravity. Reduced surface deflection, especially for trim during cruise flight, could reduce the fuel consumption of future aircraft. Designed as an add-on, optimal control allocation techniques were used to create a control law that tracks total thrust and yaw moment commands with an emphasis on not degrading the baseline system. Implementation of engine yaw augmentation is shown and feasibility is demonstrated in simulation with a potential drag reduction of 2 to 4 percent. Future flight tests are planned to demonstrate feasibility in a flight environment.
Shafie, Suhaidi; Kawahito, Shoji; Halin, Izhal Abdul; Hasan, Wan Zuha Wan
2009-01-01
The partial charge transfer technique can expand the dynamic range of a CMOS image sensor by synthesizing two types of signal, namely the long and short accumulation time signals. However, the short accumulation time signal obtained from the partial transfer operation suffers from non-linearity with respect to the incident light. In this paper, an analysis of the non-linearity in the partial charge transfer technique is carried out, and the relationship between dynamic range and non-linearity is studied. The results show that the non-linearity is caused by two factors: the current diffusion, which has an exponential relation with the potential barrier, and the initial condition of the photodiodes, whereby the error in the high illumination region increases as the ratio of the long to the short accumulation time rises. Moreover, an increase in the saturation level of the photodiodes also increases the error in the high illumination region.
A review on prognostic techniques for non-stationary and non-linear rotating systems
NASA Astrophysics Data System (ADS)
Kan, Man Shan; Tan, Andy C. C.; Mathew, Joseph
2015-10-01
The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines resulting in benefits to plant operators such as shorter downtimes, higher operation reliability, reduced operations and maintenance cost, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex. As such, there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying these methods, as well as their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.
Linear triangular optimization technique and pricing scheme in residential energy management systems
NASA Astrophysics Data System (ADS)
Anees, Amir; Hussain, Iqtadar; AlKhaldi, Ali Hussain; Aslam, Muhammad
2018-06-01
This paper presents a new linear optimization algorithm for power scheduling of electric appliances. The proposed system is applied in a smart home community, in which a community controller acts as a virtual distribution company for the end consumers. We also present a pricing scheme between the community controller and its residential users based on real-time pricing and inclining block rates. The results of the proposed optimization algorithm demonstrate that with the anticipated technique, not only can end users minimise consumption cost, but the peak-to-average power ratio can also be reduced, which is beneficial for the utilities as well.
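A minimal sketch of the linear-programming core of such appliance scheduling: minimize cost against a time-varying tariff subject to a total-energy requirement and a per-slot power cap. The prices, horizon, and appliance parameters are illustrative assumptions, not the paper's model or pricing scheme.

```python
import numpy as np
from scipy.optimize import linprog

price = np.array([0.08, 0.07, 0.12, 0.20, 0.18, 0.10])  # $/kWh per time slot
n = len(price)
demand_kwh = 3.0          # total energy the appliance must draw over the horizon
p_max = 1.5               # per-slot power cap (kW, assuming 1-hour slots)

res = linprog(c=price,                       # minimize total energy cost
              A_eq=np.ones((1, n)), b_eq=[demand_kwh],
              bounds=[(0, p_max)] * n, method="highs")
schedule = res.x          # cheapest feasible allocation across slots
```

An inclining-block tariff would make the per-slot cost piecewise linear and convex, which still fits a linear program after segmenting each slot's consumption.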
Data mining-based coefficient of influence factors optimization of test paper reliability
NASA Astrophysics Data System (ADS)
Xu, Peiyao; Jiang, Huiping; Wei, Jieyao
2018-05-01
Tests are a significant part of the teaching process; they reflect the outcome of school teaching through teachers' teaching level and students' scores. Test paper analysis is a complex task characterized by non-linear relations among paper length, time duration, and degree of difficulty. It is therefore difficult, with general methods, to optimize the coefficients of the influence factors under different conditions in order to obtain test papers with clearly higher reliability [1]. With data mining techniques such as Support Vector Regression (SVR) and Genetic Algorithms (GA), we can model the test paper analysis and optimize the coefficients of the influence factors for higher reliability. The test results show that the combination of SVR and GA yields an effective improvement in reliability. The optimized coefficients of the influence factors are practical in actual application, and the whole optimization procedure can offer a model basis for test paper analysis.
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
Insight into efficient image registration techniques and the demons algorithm.
Vercauteren, Tom; Pennec, Xavier; Malis, Ezio; Perchant, Aymeric; Ayache, Nicholas
2007-01-01
As image registration becomes more and more central to many biomedical imaging applications, the efficiency of the algorithms becomes a key issue. Image registration is classically performed by optimizing a similarity criterion over a given spatial transformation space. Even if this problem is considered as almost solved for linear registration, we show in this paper that some tools that have recently been developed in the field of vision-based robot control can outperform classical solutions. The adequacy of these tools for linear image registration leads us to revisit non-linear registration and allows us to provide interesting theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage to the symmetric forces variant of the demons algorithm. We show that, on controlled experiments, this advantage is confirmed, and yields a faster convergence.
Application of optimal control theory to the design of the NASA/JPL 70-meter antenna servos
NASA Technical Reports Server (NTRS)
Alvarez, L. S.; Nickerson, J.
1989-01-01
The application of Linear Quadratic Gaussian (LQG) techniques to the design of the 70-m axis servos is described. Linear quadratic optimal control and Kalman filter theory are reviewed, and model development and verification are discussed. Families of optimal controller and Kalman filter gain vectors were generated by varying weight parameters. Performance specifications were used to select final gain vectors.
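A minimal sketch of the LQ regulator gain computation at the heart of such a servo design, for a toy double-integrator axis model; the matrices and weights below are illustrative assumptions, not the 70-m antenna model. The Kalman filter gain of the LQG pair is obtained dually from the filter Riccati equation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# toy rigid-body axis model: state = [angle, rate], input = motor torque
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([100.0, 1.0])   # state weights: penalize pointing error most
R = np.array([[0.1]])       # control weight

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain: u = -K x
```

Sweeping the entries of Q and R generates the families of gain vectors described above, from which a design meeting the performance specifications is selected.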
Non-rigid Motion Correction in 3D Using Autofocusing with Localized Linear Translations
Cheng, Joseph Y.; Alley, Marcus T.; Cunningham, Charles H.; Vasanawala, Shreyas S.; Pauly, John M.; Lustig, Michael
2012-01-01
MR scans are sensitive to motion effects due to the scan duration. To properly suppress artifacts from non-rigid body motion, complex models with elements such as translation, rotation, shear, and scaling have been incorporated into the reconstruction pipeline. However, these techniques are computationally intensive and difficult to implement for online reconstruction. On a sufficiently small spatial scale, the different types of motion can be well-approximated as simple linear translations. This formulation allows for a practical autofocusing algorithm that locally minimizes a given motion metric – more specifically, the proposed localized gradient-entropy metric. To reduce the vast search space for an optimal solution, possible motion paths are limited to the motion measured from multi-channel navigator data. The novel navigation strategy is based on the so-called “Butterfly” navigators which are modifications to the spin-warp sequence that provide intrinsic translational motion information with negligible overhead. With a 32-channel abdominal coil, sufficient number of motion measurements were found to approximate possible linear motion paths for every image voxel. The correction scheme was applied to free-breathing abdominal patient studies. In these scans, a reduction in artifacts from complex, non-rigid motion was observed. PMID:22307933
Progress in multidisciplinary design optimization at NASA Langley
NASA Technical Reports Server (NTRS)
Padula, Sharon L.
1993-01-01
Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.
Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered
2011-01-01
Background: Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios.
Methods: Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components.
Results: Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set.
Conclusions: The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.
Mathiassen, Svend Erik; Bolin, Kristian
2011-05-21
Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.
CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.
Zahery, Mahsa; Maes, Hermine H; Neale, Michael C
2017-08-01
We introduce the optimizer CSOLNP, a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method, R package version 1) with some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a popular FORTRAN implementation of the SQP method; Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming, No. SOL-86-2, Stanford, CA: Stanford University Systems Optimization Laboratory) and SLSQP (another SQP implementation, available as part of the NLOPT collection; Johnson, 2014, The NLopt nonlinear-optimization package, retrieved from http://ab-initio.mit.edu/nlopt) are the three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analyses. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
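For readers who want to experiment with an SQP solver without installing OpenMx, the sketch below runs SciPy's SLSQP (one of the three optimizers compared above, accessed here through SciPy rather than the NLOPT build) on a small non-linearly constrained problem; the objective and constraints are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# A minimal SQP example: minimize a smooth objective subject to one
# non-linear equality constraint and one inequality constraint.

def objective(x):
    return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

constraints = [
    {"type": "eq",   "fun": lambda x: x[0]**2 + x[1]**2 - 4.0},  # on a circle
    {"type": "ineq", "fun": lambda x: x[1] - x[0]},              # x1 >= x0
]

res = minimize(objective, x0=np.array([0.5, 1.0]),
               method="SLSQP", constraints=constraints)
print(res.x, res.fun)
```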
Exact and heuristic algorithms for Space Information Flow.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng
2018-01-01
Space Information Flow (SIF) is a promising new research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example, besides the Pentagram network, where SIF is strictly better than the Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. The Delaunay triangulation technique helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidates and the flow rates on the connection links. The heuristic algorithm design is also based on the Delaunay triangulation and linear programming techniques. The exact algorithm achieves the optimal SIF solution with exponential computational complexity, while the heuristic algorithm achieves a sub-optimal SIF solution with polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
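The candidate-relay step can be prototyped in a few lines. The sketch below uses SciPy's Delaunay triangulation and takes triangle centroids as candidate relay nodes; the terminal coordinates are made up, and the min-cost multicast linear program that the exact algorithm then solves over terminals plus candidates is omitted.

```python
import numpy as np
from scipy.spatial import Delaunay

# Candidate-relay generation: triangulate the terminal nodes with Delaunay,
# then take triangle centroids as candidate relay locations.

terminals = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [5.0, 4.0]])
tri = Delaunay(terminals)

# One candidate relay per triangle (its centroid); a refinement could use
# the Fermat/Steiner point of each triangle instead.
candidates = terminals[tri.simplices].mean(axis=1)
print(candidates)
```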
Linear Quantum Systems: Non-Classical States and Robust Stability
2016-06-29
quantum linear systems subject to non-classical quantum fields. The major outcomes of this project are (i) derivation of quantum filtering equations for systems with non-classical input states, including single photon states, (ii) determination of how linear... history going back some 50 years, to the birth of modern control theory with Kalman's foundational work on filtering and LQG optimal control
NASA Technical Reports Server (NTRS)
Weston, R. P.; Green, L. L.; Salas, A. O.; Samareh, J. A.; Townsend, J. C.; Walsh, J. L.
1999-01-01
An objective of the HPCC Program at NASA Langley has been to promote the use of advanced computing techniques to more rapidly solve the problem of multidisciplinary optimization of a supersonic transport configuration. As a result, a software system has been designed and is being implemented to integrate a set of existing discipline analysis codes, some of them CPU-intensive, into a distributed computational framework for the design of a High Speed Civil Transport (HSCT) configuration. The proposed paper will describe the engineering aspects of integrating these analysis codes and additional interface codes into an automated design system. The objective of the design problem is to optimize the aircraft weight for given mission conditions, range, and payload requirements, subject to aerodynamic, structural, and performance constraints. The design variables include both thicknesses of structural elements and geometric parameters that define the external aircraft shape. An optimization model has been adopted that uses the multidisciplinary analysis results and the derivatives of the solution with respect to the design variables to formulate a linearized model that provides input to the CONMIN optimization code, which outputs new values for the design variables. The analysis process begins by deriving the updated geometries and grids from the baseline geometries and grids using the new values for the design variables. This free-form deformation approach provides internal FEM (finite element method) grids that are consistent with aerodynamic surface grids. The next step involves using the derived FEM and section properties in a weights process to calculate detailed weights and the center of gravity location for specified flight conditions. The weights process computes the as-built weight, weight distribution, and weight sensitivities for given aircraft configurations at various mass cases. Currently, two mass cases are considered: cruise and gross take-off weight (GTOW). Weights information is obtained from correlations of data from three sources: 1) as-built initial structural and non-structural weights from an existing database, 2) theoretical FEM structural weights and sensitivities from Genesis, and 3) empirical as-built weight increments, non-structural weights, and weight sensitivities from FLOPS. For the aeroelastic analysis, a variable-fidelity aerodynamic analysis has been adopted. This approach uses infrequent CPU-intensive non-linear CFD to calculate a non-linear correction relative to a linear aero calculation for the same aerodynamic surface at an angle of attack that results in the same configuration lift. For efficiency, this nonlinear correction is applied after each subsequent linear aero solution during the iterations between the aerodynamic and structural analyses. Convergence is achieved when the vehicle shape being used for the aerodynamic calculations is consistent with the structural deformations caused by the aerodynamic loads. To make the structural analyses more efficient, a linearized structural deformation model has been adopted, in which a single stiffness matrix can be used to solve for the deformations under all the load conditions. Using the converged aerodynamic loads, a final set of structural analyses is performed to determine the stress distributions and the buckling conditions for constraint calculation.
Performance constraints are obtained by running FLOPS using drag polars that are computed using results from non-linear corrections to the linear aero code plus several codes to provide drag increments due to skin friction, wave drag, and other miscellaneous drag contributions. The status of the integration effort will be presented in the proposed paper, and results will be provided that illustrate the degree of accuracy in the linearizations that have been employed.
Analysis techniques for multivariate root loci. [a tool in linear control systems]
NASA Technical Reports Server (NTRS)
Thompson, P. M.; Stein, G.; Laub, A. J.
1980-01-01
Analysis and techniques are developed for the multivariable root locus and the multivariable optimal root locus. The generalized eigenvalue problem is used to compute angles and sensitivities for both types of loci, and an algorithm is presented that determines the asymptotic properties of the optimal root locus.
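A minimal version of the computation: for an LQ cost with output weight rho, the optimal closed-loop poles are the stable eigenvalues of the associated Hamiltonian matrix. The paper works with a generalized eigenvalue formulation, which additionally exposes the asymptotic structure; the ordinary eigenvalue form below, with an invented (A, B, C), is the simplest sketch.

```python
import numpy as np

# Optimal root locus sketch: for the cost rho*integral(y^2) + integral(u^2),
# the LQ closed-loop poles are the stable eigenvalues of the Hamiltonian
# matrix H built from (A, B, C). Sweeping rho traces the optimal root locus.

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

for rho in [0.1, 1.0, 10.0, 100.0]:
    H = np.block([[A, -B @ B.T],
                  [-rho * C.T @ C, -A.T]])
    lam = np.linalg.eigvals(H)
    stable = np.sort_complex(lam[lam.real < 0])   # left-half-plane eigenvalues
    print(f"rho={rho:6.1f}  closed-loop poles: {stable}")
```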
Kim, Jongrae; Bates, Declan G; Postlethwaite, Ian; Heslop-Harrison, Pat; Cho, Kwang-Hyun
2008-05-15
Inherent non-linearities in biomolecular interactions make the identification of network interactions difficult. One of the principal problems is that all methods based on the use of linear time-invariant models will have fundamental limitations in their capability to infer certain non-linear network interactions. Another difficulty is the multiplicity of possible solutions, since, for a given dataset, there may be many different possible networks which generate the same time-series expression profiles. A novel algorithm for the inference of biomolecular interaction networks from temporal expression data is presented. Linear time-varying models, which can represent a much wider class of time-series data than linear time-invariant models, are employed in the algorithm. From time-series expression profiles, the model parameters are identified by solving a non-linear optimization problem. In order to systematically reduce the set of possible solutions for the optimization problem, a filtering process is performed using a phase-portrait analysis with random numerical perturbations. The proposed approach has the advantages of not requiring the system to be in a stable steady state, of using time-series profiles which have been generated by a single experiment, and of allowing non-linear network interactions to be identified. The ability of the proposed algorithm to correctly infer network interactions is illustrated by its application to three examples: a non-linear model for cAMP oscillations in Dictyostelium discoideum, the cell-cycle data for Saccharomyces cerevisiae and a large-scale non-linear model of a group of synchronized Dictyostelium cells. The software used in this article is available from http://sbie.kaist.ac.kr/software
A computational algorithm for spacecraft control and momentum management
NASA Technical Reports Server (NTRS)
Dzielski, John; Bergmann, Edward; Paradiso, Joseph
1990-01-01
Developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem using feedback linearization to transform to an equivalent problem involving a linear dynamical constraint and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is transformed into an unconstrained nonlinear quadratic program that yields an approximate solution to the original problem. Two examples are presented to illustrate the results.
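The core coordinate-change idea can be shown on a scalar example. The sketch below feedback-linearizes a pendulum (a stand-in for the paper's spacecraft model with momentum exchange devices): the non-linearity is cancelled exactly, and a linear law is then designed for the resulting double integrator. Parameters and gains are illustrative, not optimized.

```python
import numpy as np

# Feedback linearization for theta'' = -a*sin(theta) + b*u: choosing
# u = (v + a*sin(theta))/b cancels the non-linearity, leaving the linear
# double integrator theta'' = v, on which a linear (e.g. LQ) law acts.

a, b = 9.81, 1.0
k1, k2 = 4.0, 2.0                     # hand-picked stabilizing gains for v

theta, omega, dt = 1.0, 0.0, 1e-3
for _ in range(10000):                # explicit Euler integration, 10 s
    v = -k1 * theta - k2 * omega      # linear law on the transformed system
    u = (v + a * np.sin(theta)) / b   # feedback-linearizing input
    dtheta, domega = omega, -a * np.sin(theta) + b * u
    theta += dt * dtheta
    omega += dt * domega

print(theta, omega)                   # both decay toward zero
```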
NASA Technical Reports Server (NTRS)
Korte, J. J.; Auslender, A. H.
1993-01-01
A new optimization procedure, in which a parabolized Navier-Stokes solver is coupled with a non-linear least-squares optimization algorithm, is applied to the design of a Mach 14, laminar two-dimensional hypersonic subscale flight inlet with an internal contraction ratio of 15:1 and a length-to-throat half-height ratio of 150:1. An automated numerical search of multiple geometric wall contours, which are defined by polynomial splines, results in an optimal geometry that yields the maximum total-pressure recovery for the compression process. Optimal inlet geometry is obtained for both inviscid and viscous flows, with the assumption that the gas is either calorically or thermally perfect. The analysis with a calorically perfect gas results in an optimized inviscid inlet design that is defined by two cubic splines and yields a mass-weighted total-pressure recovery of 0.787, which is a 23% improvement compared with the optimized shock-canceled two-ramp inlet design. Similarly, the design procedure obtains the optimized contour for a viscous calorically perfect gas to yield a mass-weighted total-pressure recovery value of 0.749. Additionally, an optimized contour for a viscous thermally perfect gas is obtained to yield a mass-weighted total-pressure recovery value of 0.768. The design methodology incorporates both complex fluid dynamic physics and optimal search techniques without an excessive compromise of computational speed; hence, this methodology is a practical technique that is applicable to optimal inlet design procedures.
A holistic approach to movement education in sport and fitness: a systems based model.
Polsgrove, Myles Jay
2012-01-01
The typical model used by movement professionals to enhance performance relies on the notion that a linear increase in load results in steady and progressive gains, whereby the greater the effort, the greater the gains in performance. Traditional approaches to movement progression typically rely on the proper sequencing of extrinsically based activities to facilitate the individual in reaching performance objectives. However, physical rehabilitation or physical performance rarely progresses in such a linear fashion; instead, it tends to evolve non-linearly and rather unpredictably. A dynamic system can be described as an entity that self-organizes into increasingly complex forms. Applying this view to the human body, practitioners could facilitate non-linear performance gains through a systems-based programming approach. Utilizing a dynamic systems view, the Holistic Approach to Movement Education (HADME) is a model designed to optimize performance by accounting for the non-linear and self-organizing traits associated with human movement. In this model, gains in performance occur through advancing individual perspectives and through optimizing sub-system performance. This inward shift of the focus of performance creates a sharper self-awareness and may lead to more optimal movements. Copyright © 2011 Elsevier Ltd. All rights reserved.
LROC assessment of non-linear filtering methods in Ga-67 SPECT imaging
NASA Astrophysics Data System (ADS)
De Clercq, Stijn; Staelens, Steven; De Beenhouwer, Jan; D'Asseler, Yves; Lemahieu, Ignace
2006-03-01
In emission tomography, iterative reconstruction is usually followed by a linear smoothing filter to make such images more appropriate for visual inspection and diagnosis by a physician. This results in a global blurring of the images, smoothing across edges and possibly discarding valuable image information for detection tasks. The purpose of this study is to investigate what possible advantages a non-linear, edge-preserving postfilter could have for lesion detection in Ga-67 SPECT imaging. Image quality can be defined based on the task that has to be performed on the image. This study used LROC observer studies based on a dataset created by CPU-intensive Gate Monte Carlo simulations of a voxelized digital phantom. The filters considered in this study were a linear Gaussian filter, a bilateral filter, the Perona-Malik anisotropic diffusion filter and the Catte filtering scheme. The 3D MCAT software phantom was used to simulate the distribution of Ga-67 citrate in the abdomen. Tumor-present cases had a 1-cm diameter tumor randomly placed near the edges of the anatomical boundaries of the kidneys, bone, liver and spleen. Our data set was generated out of a single noisy background simulation using the bootstrap method, to significantly reduce the simulation time and to allow for a larger observer data set. Lesions were simulated separately and added to the background afterwards. These were then reconstructed with an iterative approach, using a sufficiently large number of MLEM iterations to establish convergence. The output of a numerical observer was used in a simplex optimization method to estimate an optimal set of parameters for each postfilter. No significant improvement was found when using edge-preserving filtering techniques over standard linear Gaussian filtering.
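Of the filters compared, Perona-Malik anisotropic diffusion is easy to sketch. Below is a standard explicit iteration with the exponential conduction function; kappa, gamma and the iteration count are generic choices, not the parameters the study tuned with its simplex optimization.

```python
import numpy as np

# One explicit iteration scheme for Perona-Malik anisotropic diffusion on a
# 2-D image. Periodic boundaries (np.roll) are used for brevity; gamma should
# stay <= 0.25 for stability of the explicit update.

def perona_malik(img, n_iter=20, kappa=30.0, gamma=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences toward the four compass neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # exponential conduction coefficient: small across strong edges,
        # so smoothing is suppressed there and edges are preserved
        cn, cs = np.exp(-(dn / kappa)**2), np.exp(-(ds / kappa)**2)
        ce, cw = np.exp(-(de / kappa)**2), np.exp(-(dw / kappa)**2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```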
Fleet Assignment Using Collective Intelligence
NASA Technical Reports Server (NTRS)
Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.
2004-01-01
Product distribution theory is a new collective intelligence-based framework for analyzing and controlling distributed systems. Its usefulness in distributed stochastic optimization is illustrated here through an airline fleet assignment problem. This problem involves the allocation of aircraft to a set of flight legs in order to meet passenger demand, while satisfying a variety of linear and non-linear constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of this new stochastic optimization algorithm to a non-linear objective cold start fleet assignment problem. Results show that the optimizer can successfully solve such highly-constrained problems (130 variables, 184 constraints).
SNDR enhancement in noisy sinusoidal signals by non-linear processing elements
NASA Astrophysics Data System (ADS)
Martorell, Ferran; McDonnell, Mark D.; Abbott, Derek; Rubio, Antonio
2007-06-01
We investigate the possibility of building linear amplifiers capable of enhancing the Signal-to-Noise and Distortion Ratio (SNDR) of sinusoidal input signals using simple non-linear elements. Other works have proven that it is possible to enhance the Signal-to-Noise Ratio (SNR) by using limiters. In this work we study a soft limiter non-linear element with and without hysteresis. We show that the SNDR of sinusoidal signals can be enhanced by 0.94 dB using a wideband soft limiter and up to 9.68 dB using a wideband soft limiter with hysteresis. These results indicate that linear amplifiers could be constructed using non-linear circuits with hysteresis. This paper presents mathematical descriptions for the non-linear elements using statistical parameters. Using these models, the input-output SNDR enhancement is obtained by optimizing the non-linear transfer function parameters to maximize the output SNDR.
NASA Astrophysics Data System (ADS)
Ebrahimzadeh, Faezeh; Tsai, Jason Sheng-Hong; Chung, Min-Ching; Liao, Ying Ting; Guo, Shu-Mei; Shieh, Leang-San; Wang, Li
2017-01-01
In contrast to Part 1, Part 2 presents a generalised optimal linear quadratic digital tracker (LQDT) with universal applications for discrete-time (DT) systems. This includes (1) a generalised optimal LQDT design for the system with the pre-specified trajectories of the output and the control input and additionally with both the input-to-output direct-feedthrough term and known/estimated system disturbances or extra input/output signals; (2) a new optimal filter-shaped proportional plus integral state-feedback LQDT design for non-square non-minimum phase DT systems to achieve a minimum-phase-like tracking performance; (3) a new approach for computing the control zeros of the given non-square DT systems; and (4) a one-learning-epoch input-constrained iterative learning LQDT design for the repetitive DT systems.
NASA Technical Reports Server (NTRS)
Tiffany, S. H.; Adams, W. M., Jr.
1984-01-01
A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented, which separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer which is used is given; and results which illustrate application of the method are presented.
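The linear/nonlinear split is the key trick and can be mimicked in a few lines: for fixed lag roots, the rational-function coefficients follow from linear least squares on the tabulated oscillatory data, and only the lag roots are left to a nonlinear search. The scalar, single-lag-term data below are synthesized purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Two-level fit: inner *linear* least squares for the coefficients at fixed
# lag root b, outer *non-linear* search over b. Data are synthetic.

k = np.linspace(0.05, 2.0, 40)                 # reduced frequencies
s = 1j * k
q_tab = 1.0 + 0.5 * s + s / (s + 0.3)          # "tabular" oscillatory data

def fit_linear(b):
    """Least-squares coefficients [a0, a1, a2] for a fixed lag root b > 0."""
    basis = np.column_stack([np.ones_like(s), s, s / (s + b)])
    # stack real and imaginary parts so lstsq sees a real-valued problem
    A = np.vstack([basis.real, basis.imag])
    y = np.concatenate([q_tab.real, q_tab.imag])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    err = np.sum((A @ coef - y) ** 2)
    return coef, err

def outer_objective(logb):                     # optimize log(b) to keep b > 0
    return fit_linear(np.exp(logb[0]))[1]

opt = minimize(outer_objective, x0=[0.0], method="Nelder-Mead")
b = np.exp(opt.x[0])
print("lag root:", b, "coefficients:", fit_linear(b)[0])
```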
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
Optimized Hyper Beamforming of Linear Antenna Arrays Using Collective Animal Behaviour
Ram, Gopi; Mandal, Durbadal; Kar, Rajib; Ghoshal, Sakti Prasad
2013-01-01
A novel optimization technique which is developed on mimicking the collective animal behaviour (CAB) is applied for the optimal design of hyper beamforming of linear antenna arrays. Hyper beamforming is based on sum and difference beam patterns of the array, each raised to the power of a hyperbeam exponent parameter. The optimized hyperbeam is achieved by optimization of current excitation weights and uniform interelement spacing. As compared to conventional hyper beamforming of linear antenna array, real coded genetic algorithm (RGA), particle swarm optimization (PSO), and differential evolution (DE) applied to the hyper beam of the same array can achieve reduction in sidelobe level (SLL) and same or less first null beam width (FNBW), keeping the same value of hyperbeam exponent. Again, further reductions of sidelobe level (SLL) and first null beam width (FNBW) have been achieved by the proposed collective animal behaviour (CAB) algorithm. CAB finds near global optimal solution unlike RGA, PSO, and DE in the present problem. The above comparative optimization is illustrated through 10-, 14-, and 20-element linear antenna arrays to establish the optimization efficacy of CAB. PMID:23970843
NASA Technical Reports Server (NTRS)
Hein, C.; Meystel, A.
1994-01-01
There are many multi-stage optimization problems that are not easily solved through any known direct method when the stages are coupled. For instance, we have investigated the problem of planning a vehicle's control sequence to negotiate obstacles and reach a goal in minimum time. The vehicle has a known mass, and the controlling forces have finite limits. We have developed a technique that finds admissible control trajectories which tend to minimize the vehicle's transit time through the obstacle field. The immediate application is that of a space robot which must rapidly traverse around 2- or 3-dimensional structures via application of a rotating thruster or non-rotating on-off thrusters; a testbed for such vehicles is located at the Marshall Space Flight Center in Huntsville, Alabama. However, it appears that the method is applicable to a general set of optimization problems in which the cost function and the multi-dimensional multi-state system can be any nonlinear functions which are continuous in the operating regions. Other applications include the planning of optimal navigation pathways through a traversability graph; the planning of control inputs for underwater maneuvering vehicles which have complex control state-space relationships; the planning of control sequences for milling and manufacturing robots; the planning of control and trajectories for automated delivery vehicles; and the optimization of athletic training in slalom sports.
Yang, C; Jiang, W; Chen, D-H; Adiga, U; Ng, E G; Chiu, W
2009-03-01
The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1973-01-01
The conventional six-engine reaction control jet relay attitude control law with deadband is shown to be a good linear approximation to a weighted time-fuel optimal control law. Techniques for evaluating the value of the relative weighting between time and fuel for a particular relay control law is studied along with techniques to interrelate other parameters for the two control laws. Vehicle attitude control laws employing control moment gyros are then investigated. Steering laws obtained from the expression for the reaction torque of the gyro configuration are compared to a total optimal attitude control law that is derived from optimal linear regulator theory. This total optimal attitude control law has computational disadvantages in the solving of the matrix Riccati equation. Several computational algorithms for solving the matrix Riccati equation are investigated with respect to accuracy, computational storage requirements, and computational speed.
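For the Riccati equation itself, a modern counterpart of the algorithms investigated in the report is readily available. The sketch below solves the steady-state (algebraic) matrix Riccati equation for a double integrator with SciPy and forms the optimal regulator gain; the weights are arbitrary choices.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# The matrix Riccati equation at the heart of the optimal linear regulator,
# in its steady-state (algebraic) form: A'P + PA - P B R^-1 B' P + Q = 0.

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)           # optimal feedback gain, u = -K x
print(K)
print(np.linalg.eigvals(A - B @ K))       # closed-loop poles (stable)
```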
A Higher Harmonic Optimal Controller to Optimise Rotorcraft Aeromechanical Behaviour
NASA Technical Reports Server (NTRS)
Leyland, Jane Anne
1996-01-01
Three methods to optimize rotorcraft aeromechanical behavior for those cases where the rotorcraft plant can be adequately represented by a linear model system matrix were identified and implemented in a stand-alone code. These methods determine the optimal control vector which minimizes the vibration metric subject to constraints at discrete time points, and differ from the commonly used non-optimal constraint penalty methods such as those employed by conventional controllers in that the constraints are handled as actual constraints to an optimization problem rather than as just additional terms in the performance index. The first method is to use a Non-linear Programming algorithm to solve the problem directly. The second method is to solve the full set of non-linear equations which define the necessary conditions for optimality. The third method is to solve each of the possible reduced sets of equations defining the necessary conditions for optimality when the constraints are pre-selected to be either active or inactive, and then to simply select the best solution. The effects of maneuvers and aeroelasticity on the systems matrix are modelled by using a pseudo-random pseudo-row-dependency scheme to define the systems matrix. Cases run to date indicate that the first method of solution is reliable, robust, and easiest to use, and that it was superior to the conventional controllers which were considered.
Synthesis Methods for Robust Passification and Control
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.; Joshi, Suresh M. (Technical Monitor)
2000-01-01
The research effort under this cooperative agreement has been essentially a continuation of the work from previous grants. The ongoing work has primarily focused on developing passivity-based control techniques for Linear Time-Invariant (LTI) systems. During this period, significant progress was made in the area of passivity-based control of LTI systems, and some preliminary results were also obtained for nonlinear systems. The prior work addressed optimal control design for inherently passive as well as non-passive linear systems. For exploiting the robustness characteristics of passivity-based controllers, a passification methodology was developed for LTI systems that are not inherently passive. Various methods of passification were first proposed and then further developed. The robustness of passification was addressed for multi-input multi-output (MIMO) systems for certain classes of uncertainties using frequency-domain methods. For MIMO systems, a state-space approach using a Linear Matrix Inequality (LMI)-based formulation was presented for passification of non-passive LTI systems. An LMI-based robust passification technique was presented for systems with redundant actuators and sensors. The redundancy in actuators and sensors was used effectively for robust passification using the LMI formulation. The passification was designed to be robust to interval-type uncertainties in system parameters. The passification techniques were used to design a robust controller for the Benchmark Active Control Technology wing under parametric uncertainties. The results on passive nonlinear systems, however, are very limited to date. Our recent work in this area was presented, wherein some stability results were obtained for passive nonlinear systems that are affine in control.
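As a small companion to the frequency-domain methods mentioned above, the sketch below numerically checks the positive-real (passivity) condition Re{G(jw)} >= 0 on a frequency grid for a hypothetical SISO plant; an LMI-based certificate, as used in the passification work, would instead be computed with a semidefinite-programming solver.

```python
import numpy as np
from scipy import signal

# Numerical positive-real check of a SISO LTI system by sampling the real
# part of its frequency response. The plant below is hypothetical.

G = signal.TransferFunction([1.0, 2.0], [1.0, 3.0, 2.0])
w = np.logspace(-3, 3, 2000)
_, resp = signal.freqresp(G, w)

if np.all(resp.real >= 0):
    print("Re{G(jw)} >= 0 on the grid: consistent with passivity")
else:
    print("real part goes negative: not positive real; passification needed")
```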
Cooperative inversion of magnetotelluric and seismic data sets
NASA Astrophysics Data System (ADS)
Markovic, M.; Santos, F.
2012-04-01
Inversion of a single geophysical data set has well-known limitations due to the non-linearity of the fields and the non-uniqueness of the model. There is a growing need, both in academia and industry, to use two or more different data sets and thus obtain the subsurface property distribution. In our case, we are dealing with magnetotelluric and seismic data sets. In our approach, we are developing an algorithm based on the fuzzy c-means clustering technique for pattern recognition of geophysical data. Separate inversion is performed at every step, with information exchanged for model integration. Interrelationships between parameters from different models are not required in analytical form. We are investigating how different numbers of clusters affect zonation and the spatial distribution of parameters. In our study, optimization in fuzzy c-means clustering (for magnetotelluric and seismic data) is compared for two cases: first alternating optimization, and then a hybrid method (alternating optimization + quasi-Newton method). Acknowledgment: This work is supported by FCT Portugal.
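A bare-bones alternating-optimization fuzzy c-means loop, the clustering engine of the proposed scheme, is sketched below on invented two-dimensional feature vectors (e.g. a resistivity/velocity pair per cell); the quasi-Newton hybrid variant is not shown.

```python
import numpy as np

# Fuzzy c-means by alternating optimization: update cluster centers from
# weighted memberships, then memberships from distances to the centers.

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100):
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                        # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
    return centers, U

centers, U = fuzzy_c_means(X)
print(centers)
```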
A robust optimization methodology for preliminary aircraft design
NASA Astrophysics Data System (ADS)
Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.
2016-05-01
This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.
A robust, efficient equidistribution 2D grid generation method
NASA Astrophysics Data System (ADS)
Chacon, Luis; Delzanno, Gian Luca; Finn, John; Chung, Jeojin; Lapenta, Giovanni
2007-11-01
We present a new cell-area equidistribution method for two- dimensional grid adaptation [1]. The method is able to satisfy the equidistribution constraint to arbitrary precision while optimizing desired grid properties (such as isotropy and smoothness). The method is based on the minimization of the grid smoothness integral, constrained to producing a given positive-definite cell volume distribution. The procedure gives rise to a single, non-linear scalar equation with no free-parameters. We solve this equation numerically with the Newton-Krylov technique. The ellipticity property of the linearized scalar equation allows multigrid preconditioning techniques to be effectively used. We demonstrate a solution exists and is unique. Therefore, once the solution is found, the adapted grid cannot be folded due to the positivity of the constraint on the cell volumes. We present several challenging tests to show that our new method produces optimal grids in which the constraint is satisfied numerically to arbitrary precision. We also compare the new method to the deformation method [2] and show that our new method produces better quality grids. [1] G.L. Delzanno, L. Chac'on, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, in preparation. [2] G. Liao and D. Anderson, A new approach to grid generation, Appl. Anal. 44, 285--297 (1992).
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid
2017-02-01
Model-order reduction and minimization of the CPU run-time while maintaining the model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function of the electrode's open circuit potential dependence on the state of charge to continuous piecewise regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear, region. Applying the proposed technique cuts down the CPU run-time by around 20%, compared to the reduced-order, electrode-average model. Finally, the model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately with less than 2% error.
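The knot-placement idea can be prototyped directly: fix the number of interior knots, evaluate the piecewise-linear interpolant, and let a derivative-free optimizer move the knots to minimize the fit error. The OCV curve below is a smooth made-up surrogate, not battery data.

```python
import numpy as np
from scipy.optimize import minimize

# Optimal knot placement for a continuous piecewise-linear fit of an
# open-circuit-potential curve; np.interp evaluates the PWL model.

soc = np.linspace(0.0, 1.0, 400)
ocv = 3.0 + 0.8 * soc + 0.2 * np.exp(-20 * soc)       # hypothetical OCV(SOC)

def pwl_error(interior_knots):
    interior = np.clip(np.sort(interior_knots), 1e-3, 1 - 1e-3)
    x = np.concatenate(([0.0], interior, [1.0]))
    y = np.interp(x, soc, ocv)            # knot ordinates taken on the curve
    return np.mean((np.interp(soc, x, y) - ocv) ** 2)

x0 = np.linspace(0.1, 0.9, 5)             # five interior knots to place
res = minimize(pwl_error, x0, method="Nelder-Mead")
print("optimized knots:", np.sort(res.x), "mse:", res.fun)
```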
NASA Astrophysics Data System (ADS)
Aksikas, I.; Moghadam, A. Alizadeh; Forbes, J. F.
2018-04-01
This paper deals with the design of an optimal state-feedback linear-quadratic (LQ) controller for a system of coupled parabolic-hyperbolic non-autonomous partial differential equations (PDEs). The infinite-dimensional state space representation and the corresponding operator Riccati differential equation are used to solve the control problem. Dynamical properties of the coupled system of interest are analysed to guarantee the existence and uniqueness of the solution of the LQ-optimal control problem and also to guarantee the exponential stability of the closed-loop system. Thanks to the eigenvalues and eigenfunctions of the parabolic operator and also the fact that the hyperbolic-associated operator Riccati differential equation can be converted to a scalar Riccati PDE, an algorithm to solve the LQ control problem has been presented. The results are applied to a non-isothermal packed-bed catalytic reactor. The LQ optimal controller designed in the early portion of the paper is implemented for the original non-linear model. Numerical simulations are performed to show the controller performances.
Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F
2012-01-01
Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe restriction on computing time (solutions must be delivered at a rate of 500 Hz), which precludes the use of Newton-like schemes for solving non-linear models such as those usually employed for modelling biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on the use of model reduction techniques together with an efficient non-linear solver. Examples of the performance of the technique are given. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Non-linear dynamic compensation system
NASA Technical Reports Server (NTRS)
Lin, Yu-Hwan (Inventor); Lurie, Boris J. (Inventor)
1992-01-01
A non-linear dynamic compensation subsystem is added in the feedback loop of a high precision optical mirror positioning control system to smoothly alter the control system response bandwidth from a relatively wide response bandwidth optimized for speed of control system response to a bandwidth sufficiently narrow to reduce position errors resulting from the quantization noise inherent in the inductosyn used to measure mirror position. The non-linear dynamic compensation system includes a limiter for limiting the error signal within preselected limits, a compensator for modifying the limiter output to achieve the reduced bandwidth response, and an adder for combining the modified error signal with the difference between the limited and unlimited error signals. The adder output is applied to control system motor so that the system response is optimized for accuracy when the error signal is within the preselected limits, optimized for speed of response when the error signal is substantially beyond the preselected limits and smoothly varied therebetween as the error signal approaches the preselected limits.
Approximate optimal guidance for the advanced launch system
NASA Technical Reports Server (NTRS)
Feeley, T. S.; Speyer, J. L.
1993-01-01
A real-time guidance scheme for the problem of maximizing the payload into orbit subject to the equations of motion for a rocket over a spherical, non-rotating earth is presented. An approximate optimal launch guidance law is developed based upon an asymptotic expansion of the Hamilton-Jacobi-Bellman or dynamic programming equation. The expansion is performed in terms of a small parameter, which is used to separate the dynamics of the problem into primary and perturbation dynamics. For the zeroth-order problem the small parameter is set to zero and a closed-form solution to the zeroth-order expansion term of the Hamilton-Jacobi-Bellman equation is obtained. Higher-order terms of the expansion include the effects of the neglected perturbation dynamics. These higher-order terms are determined from the solution of first-order linear partial differential equations requiring only the evaluation of quadratures. This technique is preferred as a real-time, on-line guidance scheme to alternative numerical iterative optimization schemes because of the unreliable convergence properties of these iterative guidance schemes and because the quadratures needed for the approximate optimal guidance law can be performed rapidly and by parallel processing. Even if the approximate solution is not nearly optimal, when using this technique the zeroth-order solution always provides a path which satisfies the terminal constraints. Results for two-degree-of-freedom simulations are presented for the simplified problem of flight in the equatorial plane and compared to the guidance scheme generated by the shooting method, which is an iterative second-order technique.
NASA Technical Reports Server (NTRS)
Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.; Reuther, James J.
1999-01-01
Two supersonic transport configurations designed by use of non-linear aerodynamic optimization methods are compared with a linearly designed baseline configuration. One optimized configuration, designated Ames 7-04, was designed at NASA Ames Research Center using an Euler flow solver, and the other, designated Boeing W27, was designed at Boeing using a full-potential method. The two optimized configurations and the baseline were tested in the NASA Langley Unitary Plan Supersonic Wind Tunnel to evaluate the non-linear design optimization methodologies. In addition, the experimental results are compared with computational predictions for each of the three configurations from the Euler flow solver, AIRPLANE. The computational and experimental results both indicate moderate to substantial performance gains for the optimized configurations over the baseline configuration. The computed performance changes with and without diverters and nacelles were in excellent agreement with experiment for all three models. Comparisons of the computational and experimental cruise drag increments for the optimized configurations relative to the baseline show excellent agreement for the model designed by the Euler method, but poorer comparisons were found for the configuration designed by the full-potential code.
Passivity-based Robust Control of Aerospace Systems
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.; Joshi, Suresh M. (Technical Monitor)
2000-01-01
This report provides a brief summary of the research work performed over the duration of the cooperative research agreement between NASA Langley Research Center and Kansas State University. The cooperative agreement, which was originally for a duration of three years, was extended by another year through a no-cost extension in order to accomplish the goals of the project. The main objective of the research was to develop passivity-based robust control methodology for passive and non-passive aerospace systems. The focus of the first year's research was limited to the investigation of passivity-based methods for the robust control of Linear Time-Invariant (LTI) single-input single-output (SISO), open-loop stable, minimum-phase non-passive systems. The second year's focus was mainly on extending the passivity-based methodology to a larger class of non-passive LTI systems which includes unstable and nonminimum-phase SISO systems. For LTI non-passive systems, five different passification methods were developed. The primary effort during years three and four was on the development of passification methodology for MIMO systems, development of methods for checking robustness of passification, and developing synthesis techniques for passifying compensators. For passive LTI systems, an optimal synthesis procedure was also developed for the design of constant-gain positive-real controllers. For nonlinear passive systems, a numerical optimization-based technique was developed for the synthesis of constant as well as time-varying gain positive-real controllers. The passivity-based control design methodology developed during the duration of this project was demonstrated by its application to various benchmark examples. These example systems included a longitudinal model of an F-18 High Alpha Research Vehicle (HARV) for pitch axis control, NASA's supersonic transport wind tunnel model, the ACC benchmark model, a 1-D acoustic duct model, a piezo-actuated flexible link model, and NASA's Benchmark Active Controls Technology (BACT) Wing model. Some of the stability results for linear passive systems were also extended to nonlinear passive systems. Several publications and conference presentations resulted from this research.
Neighboring extremal optimal control design including model mismatch errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T.J.; Hull, D.G.
1994-11-01
The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.
Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions
2007-09-01
...variables under consideration. Frequency sensitivities are the basis for a linear approximation to compute the change in the natural frequencies of a... THEORY: The general problem statement for a non-linear constrained optimization problem is to minimize the objective function f(x) subject to constraints.
Computation of non-monotonic Lyapunov functions for continuous-time systems
NASA Astrophysics Data System (ADS)
Li, Huijuan; Liu, AnPing
2017-09-01
In this paper, we propose two methods to compute non-monotonic Lyapunov functions for continuous-time systems which are asymptotically stable. The first method is to solve a linear optimization problem on a compact and bounded set. The proposed linear programming based algorithm delivers a CPA (continuous and piecewise affine) function...
NASA Astrophysics Data System (ADS)
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
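A forward block GS sweep on a block tridiagonal system is only a few lines; the sketch below uses random diagonally dominant blocks so the sweep converges on its own, whereas in the DTOC setting each diagonal block would be a subinterval optimality system (and GS would serve as a preconditioner rather than a stand-alone solver).

```python
import numpy as np

# Forward block Gauss-Seidel sweeps for a block tridiagonal system with
# diagonal blocks D, sub-diagonal blocks L and super-diagonal blocks U.

rng = np.random.default_rng(1)
N, m = 8, 4                                   # number of blocks, block size
D = [rng.random((m, m)) + m * np.eye(m) for _ in range(N)]   # diagonal blocks
L = [0.1 * rng.random((m, m)) for _ in range(N - 1)]         # sub-diagonal
U = [0.1 * rng.random((m, m)) for _ in range(N - 1)]         # super-diagonal
b = [rng.random(m) for _ in range(N)]

x = [np.zeros(m) for _ in range(N)]
for sweep in range(50):
    for i in range(N):
        r = b[i].copy()
        if i > 0:
            r -= L[i - 1] @ x[i - 1]          # uses the already-updated block
        if i < N - 1:
            r -= U[i] @ x[i + 1]              # uses the previous-sweep block
        x[i] = np.linalg.solve(D[i], r)

# residual check of the block tridiagonal equations
res = max(np.linalg.norm(
        (L[i - 1] @ x[i - 1] if i > 0 else 0) + D[i] @ x[i]
        + (U[i] @ x[i + 1] if i < N - 1 else 0) - b[i]) for i in range(N))
print("residual:", res)
```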
Survey on the Performance of Source Localization Algorithms.
Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G
2017-11-18
The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
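The HLS step is compact to prototype. The sketch below builds noisy TDoA measurements for an invented sensor layout and recovers the source with SciPy's non-linear least-squares solver (a trust-region method standing in for the Newton-Raphson iteration described above).

```python
import numpy as np
from scipy.optimize import least_squares

# Hyperbolic least squares for TDoA localization: find the source position
# whose predicted range differences best match the measured ones.

c = 3e8                                        # propagation speed (m/s)
sensors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
source = np.array([3.0, 7.0])                  # ground truth, for simulation

ranges = np.linalg.norm(sensors - source, axis=1)
tdoa = (ranges[1:] - ranges[0]) / c            # TDoA w.r.t. sensor 0
tdoa += 1e-10 * np.random.default_rng(2).standard_normal(3)   # timing noise

def residuals(p):
    r = np.linalg.norm(sensors - p, axis=1)
    return (r[1:] - r[0]) / c - tdoa

est = least_squares(residuals, x0=np.array([5.0, 5.0]))
print("estimated source:", est.x)
```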
Quantifying and visualizing variations in sets of images using continuous linear optimal transport
NASA Astrophysics Data System (ADS)
Kolouri, Soheil; Rohde, Gustavo K.
2014-03-01
Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes of variations, in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, that otherwise cannot be done by existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to the normal cells.
Analysis of periodically excited non-linear systems by a parametric continuation technique
NASA Astrophysics Data System (ADS)
Padmanabhan, C.; Singh, R.
1995-07-01
The dynamic behavior and frequency response of harmonically excited piecewise linear and/or non-linear systems has been the subject of several recent investigations. Most of the prior studies employed harmonic balance or Galerkin schemes, piecewise linear techniques, analog simulation and/or direct numerical integration (digital simulation). Such techniques are somewhat limited in their ability to predict all of the dynamic characteristics, including bifurcations leading to the occurrence of unstable, subharmonic, quasi-periodic and/or chaotic solutions. To overcome this problem, a parametric continuation scheme, based on the shooting method, is applied specifically to a periodically excited piecewise linear/non-linear system, in order to improve understanding as well as to obtain the complete dynamic response. Parameter regions exhibiting bifurcations to harmonic, subharmonic or quasi-periodic solutions are obtained quite efficiently and systematically. Unlike other techniques, the proposed scheme can follow period-doubling bifurcations, and with some modifications obtain stable quasi-periodic solutions and their bifurcations. This knowledge is essential in establishing conditions for the occurrence of chaotic oscillations in any non-linear system. The method is first validated through the Duffing oscillator example, the solutions to which are also obtained by conventional one-term harmonic balance and perturbation methods. The second example deals with a clearance non-linearity problem for both harmonic and periodic excitations. Predictions from the proposed scheme match well with available analog simulation data as well as with multi-term harmonic balance results. Potential savings in computational time over direct numerical integration are demonstrated for some of the example cases. Also, this work has filled in some of the solution regimes for an impact pair, which were missed previously in the literature. Finally, one main limitation associated with the proposed procedure is discussed.
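The shooting-plus-continuation idea can be sketched on a forced Duffing oscillator (the paper's first validation example): a periodic orbit is a fixed point of the period map, and each converged state seeds the next frequency step. Tolerances and parameter values below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Shooting-based continuation for a forced Duffing oscillator: at each
# forcing frequency w, a periodic orbit is a fixed point of the period map,
# and the converged state is carried over as the guess at the next w.

beta, damp, F = 1.0, 0.2, 0.5

def rhs(t, z, w):
    x, v = z
    return [v, -damp * v - x - beta * x**3 + F * np.cos(w * t)]

def period_map(z0, w):
    T = 2 * np.pi / w
    sol = solve_ivp(rhs, (0.0, T), z0, args=(w,), rtol=1e-9, atol=1e-9)
    return sol.y[:, -1]

z = np.array([0.0, 0.0])
for w in np.linspace(0.5, 2.0, 31):            # continuation in frequency
    z = fsolve(lambda z0: period_map(z0, w) - z0, z)
    print(f"w = {w:.2f}  periodic-orbit initial state {z}")
```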
Efficient QoS-aware Service Composition
NASA Astrophysics Data System (ADS)
Alrifai, Mohammad; Risse, Thomas
Web service composition requests are usually combined with end-to-end QoS requirements, which are specified in terms of non-functional properties (e.g. response time, throughput and price). The goal of QoS-aware service composition is to find the best combination of services such that their aggregated QoS values meet these end-to-end requirements. Local selection techniques are very efficient but fall short in handling global QoS constraints. Global optimization techniques, on the other hand, can handle global constraints, but their poor performance renders them inappropriate for applications with dynamic and real-time requirements. In this paper we address this problem and propose a solution that combines global optimization with local selection techniques to achieve better performance. The proposed solution consists of two steps: first we use mixed integer linear programming (MILP) to find the optimal decomposition of global QoS constraints into local constraints. Second, we use local search to find the best web services that satisfy these local constraints. Unlike existing MILP-based global planning solutions, the size of the MILP model in our case is much smaller and independent of the number of available services, which yields faster computation and more scalability. Preliminary experiments have been conducted to evaluate the performance of the proposed solution.
Xie, Zicong; Pang, Daxin; Wang, Kankan; Li, Mengjing; Guo, Nannan; Yuan, Hongming; Li, Jianing; Zou, Xiaodong; Jiao, Huping; Ouyang, Hongsheng; Li, Zhanjun; Tang, Xiaochun
2017-06-08
Genetically modified pigs have important roles in agriculture and biomedicine. However, genome-specific knock-in techniques in pigs are still in their infancy and optimal strategies have not been extensively investigated. In this study, we performed electroporation to introduce a targeting donor vector (a non-linearized vector that did not contain a promoter or selectable marker) into porcine foetal fibroblasts (PFFs) along with a CRISPR/Cas9 vector. After optimization, the efficiency of EGFP site-specific knock-in reached up to 29.6% at the pRosa26 locus in PFFs. Next, we used the EGFP reporter PFFs to address two key conditions in the process of generating transgenic pigs: the limiting dilution method and the strategy for evaluating the safety and feasibility of the knock-in locus. This study establishes an efficient procedure for the exogenous gene knock-in technique and creates a platform to efficiently generate promoter-less and selectable-marker-free transgenic PFFs through the CRISPR/Cas9 system. It should contribute to the generation of promoter-less and selectable-marker-free transgenic pigs, and it may provide insights into sophisticated site-specific genome engineering techniques for additional species.
NASA Astrophysics Data System (ADS)
Pando, V.; García-Laguna, J.; San-José, L. A.
2012-11-01
In this article, we integrate a non-linear holding cost with a stock-dependent demand rate in a model that maximises profit per unit time, extending several inventory models studied by other authors. After giving the mathematical formulation of the inventory system, we prove the existence and uniqueness of the optimal policy. Relying on this result, we can obtain the optimal solution using different numerical algorithms. Moreover, we provide a necessary and sufficient condition to determine whether a system is profitable, and we establish a rule to check when a given order quantity is the optimal lot size of the inventory model. The results are illustrated through numerical examples, and the sensitivity of the optimal solution with respect to changes in some parameter values is assessed.
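The numerical side of such a model can be illustrated with a small sketch: maximize a profit-per-unit-time function over the lot size. The functional forms (power-law demand and holding cost) and all parameter values below are assumptions for demonstration, not the paper's exact formulation.

```python
# Illustrative profit-per-unit-time maximization over the lot size Q for an
# inventory cycle with stock-dependent demand d(x) = a*x**beta and a
# non-linear holding cost rate h*x**gamma.  All forms/values are assumed.
import numpy as np
from scipy.optimize import minimize_scalar

price, cost, K = 10.0, 6.0, 50.0      # unit price, unit cost, fixed order cost
a, beta = 5.0, 0.3                    # stock-dependent demand parameters
h, gamma = 0.1, 1.5                   # non-linear holding cost parameters

def profit_per_unit_time(Q, n=10_000):
    # Deplete stock from Q to ~0; integrate cycle time and holding cost.
    x = np.linspace(Q, 1e-6, n)
    demand = a * x**beta
    dt = -np.gradient(x) / demand          # time spent at each stock level
    cycle_time = dt.sum()
    holding = (h * x**gamma * dt).sum()
    return ((price - cost) * Q - K - holding) / cycle_time

res = minimize_scalar(lambda Q: -profit_per_unit_time(Q),
                      bounds=(1.0, 500.0), method="bounded")
print("optimal lot size ~", res.x)
```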
ERIC Educational Resources Information Center
Schmitt, M. A.; And Others
1994-01-01
Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
Solving Fractional Programming Problems based on Swarm Intelligence
NASA Astrophysics Data System (ADS)
Raouf, Osama Abdel; Hezam, Ibrahim M.
2014-04-01
This paper presents a new approach to solving Fractional Programming Problems (FPPs) based on two different Swarm Intelligence (SI) algorithms: Particle Swarm Optimization and the Firefly Algorithm. The two algorithms are tested using several FPP benchmark examples and two selected industrial applications. The test aims to demonstrate the capability of the SI algorithms to solve any type of FPP. The solution results employing the SI algorithms are compared with a number of exact and metaheuristic solution methods used for handling FPPs. Swarm Intelligence can be regarded as an effective technique for solving linear or nonlinear, non-differentiable fractional objective functions. Problems with an optimal solution at a finite point and an unbounded constraint set can be solved using the proposed approach. Numerical examples are given to show the feasibility, effectiveness, and robustness of the proposed algorithms. The results obtained using the two SI algorithms show that the proposed technique is superior to the others in computational time, with notably better accuracy observed on the industrial application problems.
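To make the approach concrete, here is a compact particle swarm optimizer applied to a toy fractional objective (a ratio of two smooth functions, positive on the search box). The problem, swarm size, and coefficients are illustrative choices, not the paper's benchmarks.

```python
# Minimal PSO on a fractional objective f(x) = numerator(x) / denominator(x).
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    num = (x[..., 0] - 1.0) ** 2 + x[..., 1] ** 2 + 4.0
    den = x[..., 0] + x[..., 1] + 2.0   # strictly positive on the box [0, 5]^2
    return num / den

n, dim, iters = 30, 2, 200
lo, hi = 0.0, 5.0
x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
pbest, pval = x.copy(), f(x)
gbest = pbest[pval.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    val = f(x)
    improved = val < pval
    pbest[improved], pval[improved] = x[improved], val[improved]
    gbest = pbest[pval.argmin()].copy()

print("best point:", gbest, "value:", f(gbest))
```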
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho
2007-03-01
The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies and semi-quantitative assessment of lung perfusion. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis from CT data sets obtained in full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained in full In. and Ex. were used in this study. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as reference points. To evaluate the optimal expansion point, non-linear optimization without constraints was employed. The objective function is the sum of distances from the optimal point x to the lines connecting the corresponding points between In. and Ex. Using the non-linear optimization, the optimal point was evaluated and compared between reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining the lung expansion. This lung motion analysis, based on vector analysis and non-linear optimization, shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model to explain lung expansion during breathing.
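The objective described above can be sketched directly: find the point minimizing the sum of its distances to a set of 3-D lines, each through an inspiration landmark toward its expiration counterpart. The landmark coordinates below are synthetic stand-ins built so the lines radiate from a known center, which the optimizer should recover.

```python
# Find the point x minimizing the summed distance to lines through
# corresponding In./Ex. landmarks.  Synthetic "balloon" expansion data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
center = np.array([10.0, 20.0, 30.0])
p_in = center + rng.normal(0, 40, (20, 3))     # inspiration landmarks
p_ex = center + 1.3 * (p_in - center)          # radial expansion about center

def total_distance(x):
    d = p_ex - p_in
    d = d / np.linalg.norm(d, axis=1, keepdims=True)   # unit line directions
    r = x - p_in
    # Distance from x to each line = norm of r's component orthogonal to d.
    proj = (r * d).sum(axis=1, keepdims=True) * d
    return np.linalg.norm(r - proj, axis=1).sum()

res = minimize(total_distance, x0=np.zeros(3), method="Nelder-Mead")
print("estimated expansion center:", res.x)  # should approach `center`
```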
Optimal Transient Growth of Submesoscale Baroclinic Instabilities
NASA Astrophysics Data System (ADS)
White, Brian; Zemskova, Varvara; Passaggia, Pierre-Yves
2016-11-01
Submesoscale instabilities are analyzed using a transient growth approach to determine the optimal perturbation for a rotating Boussinesq fluid subject to baroclinic instabilities. We consider a base flow with uniform shear and stratification and consider the non-normal evolution over finite-time horizons of linear perturbations in an ageostrophic, non-hydrostatic regime. Stone (1966, 1971) showed that the stability of the base flow to normal modes depends on the Rossby and Richardson numbers, with instabilities ranging from geostrophic (Ro -> 0) and ageostrophic (finite Ro) baroclinic modes to symmetric (Ri < 1, Ro > 1) and Kelvin-Helmholtz (Ri < 1/4) modes. Non-normal transient growth, initiated by localized optimal wave packets, represents a faster mechanism for the growth of perturbations and may provide an energetic link between large-scale flows in geostrophic balance and dissipation scales via submesoscale instabilities. Here we consider two- and three-dimensional optimal perturbations by means of direct-adjoint iterations of the linearized Boussinesq Navier-Stokes equations to determine the form of the optimal perturbation, the optimal energy gain, and the characteristics of the most unstable perturbation.
Automatic threshold optimization in nonlinear energy operator based spike detection.
Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M
2016-08-01
In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The non-linear energy operator (NEO) is a popular spike detector due to its detection accuracy and its hardware-friendly architecture. However, it involves a thresholding stage, whose value is usually approximated and is thus not optimal. This approximation deteriorates the performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy for both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% and 93.5% for the optimized NEO threshold and the traditional NEO threshold, respectively.
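For reference, the standard NEO and its conventional (non-optimized) threshold look like the sketch below; the signal, spike shapes, and scaling constant are illustrative, and the paper's empirical-gradient threshold optimization is not reproduced here.

```python
# Baseline NEO spike detection: psi[n] = x[n]^2 - x[n-1]*x[n+1], with the
# common threshold of a constant C times the mean NEO output.
import numpy as np

rng = np.random.default_rng(2)
fs = 24000
x = rng.normal(0, 1, fs)                      # 1 s of noise at 24 kHz
spike_idx = rng.choice(len(x) - 10, 40, replace=False)
for i in spike_idx:                           # inject crude biphasic spikes
    x[i:i + 6] += np.array([0, 4, 7, -3, -6, -1])

psi = np.zeros_like(x)
psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]     # non-linear energy operator

C = 8.0                                       # conventional scaling constant
threshold = C * psi.mean()
detections = np.flatnonzero(psi > threshold)
print(f"threshold={threshold:.2f}, {len(detections)} supra-threshold samples")
```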
NASA Astrophysics Data System (ADS)
Umbarkar, A. J.; Balande, U. T.; Seth, P. D.
2017-06-01
The fields of nature-inspired computing and optimization techniques have evolved to solve difficult optimization problems in diverse areas of engineering, science and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), ranking of the fireflies is done by a sorting algorithm; the original FA was proposed with bubble sort for ranking the fireflies. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used consists of unconstrained benchmark functions from CEC 2005 [22]. FA using bubble sort and FA using quick sort are compared with respect to best, worst and mean values, standard deviation, number of comparisons and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and when the dimension is varied the algorithm performs better at lower dimensions than at higher ones.
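The ranking step in question is a single line in a typical FA implementation; the sketch below uses numpy's argsort (a quicksort-class O(n log n) routine) in place of a bubble sort, with an illustrative benchmark function and assumed FA parameters.

```python
# Firefly algorithm sketch highlighting the per-generation ranking step.
import numpy as np

rng = np.random.default_rng(3)
sphere = lambda x: (x ** 2).sum(axis=-1)   # stand-in benchmark function

n, dim, iters = 25, 10, 100
beta0, gamma, alpha = 1.0, 1.0, 0.05       # assumed FA parameters
X = rng.uniform(-5, 5, (n, dim))

for _ in range(iters):
    order = np.argsort(sphere(X))          # O(n log n) ranking (vs bubble sort)
    X = X[order]                           # brightest (lowest value) first
    for i in range(n):
        for j in range(i):                 # move each firefly toward brighter ones
            r2 = ((X[i] - X[j]) ** 2).sum()
            beta = beta0 * np.exp(-gamma * r2)
            X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)

print("best value:", sphere(X).min())
```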
NASA Technical Reports Server (NTRS)
Ardalan, Sasan H.
1992-01-01
Two narrow-band radar systems are developed for high resolution target range estimation in inhomogeneous media. They are reformulations of two presently existing systems such that high resolution target range estimates may be achieved despite the use of narrow bandwidth radar pulses. A double sideband suppressed carrier radar technique, originally derived in 1962 and later abandoned due to its inability to accurately measure target range in the presence of an interfering reflection, is rederived to incorporate the presence of an interfering reflection. The new derivation shows that the interfering reflection causes a periodic perturbation in the measured phase response. A high resolution spectral estimation technique is used to extract the period of this perturbation, leading to accurate target range estimates independent of the signal-to-interference ratio. A non-linear optimal signal processing algorithm is derived for a frequency-stepped continuous wave radar system. The resolution enhancement offered by optimal signal processing of the data over the conventional Fourier Transform technique is clearly demonstrated using measured radar data. A method for modeling plane wave propagation in inhomogeneous media based on transmission line theory is derived and studied. Several simulation results, including measurement of non-uniform electron plasma densities that develop near the heat tiles of a space re-entry vehicle, are presented which verify the validity of the model.
Fisz, Jacek J
2006-12-07
An optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach considerably simplifies and accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to χ², obtained from the Taylor series expansion of χ², is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions which are multi-linear combinations of nonlinear functions.
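The separable structure GA-MLR exploits can be sketched on a biexponential decay: a global search proposes the nonlinear lifetimes, and the linear amplitudes follow in closed form from linear least squares. A crude random search stands in for the GA here, and the data are synthetic.

```python
# Separable least squares in the GA-MLR spirit: search (tau1, tau2) globally,
# solve amplitudes by linear least squares (the "MLR" step) at each candidate.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 200)
y = 2.0 * np.exp(-t / 0.8) + 0.5 * np.exp(-t / 3.5) + rng.normal(0, 0.01, t.size)

def score(taus):
    A = np.exp(-t[:, None] / np.asarray(taus)[None, :])   # design matrix
    amps, *_ = np.linalg.lstsq(A, y, rcond=None)          # linear parameters
    return ((A @ amps - y) ** 2).sum(), amps

best = (np.inf, None, None)
for _ in range(5000):            # random-search stand-in for the GA
    taus = rng.uniform(0.1, 6.0, 2)
    sse, amps = score(taus)
    if sse < best[0]:
        best = (sse, taus, amps)

print("lifetimes:", np.sort(best[1]), "amplitudes:", best[2])
```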
An Extended Microcomputer-Based Network Optimization Package.
1982-10-01
Keywords: network, generalized network, microcomputer, optimization, network with gains, linear programming. The generalized network problem, in turn, can be viewed as a specialization of a linear programming problem having at most two non-zero entries in each column.
SIMD Optimization of Linear Expressions for Programmable Graphics Hardware
Bajaj, Chandrajit; Ihm, Insung; Min, Jungki; Oh, Jinsang
2009-01-01
The increased programmability of graphics hardware allows efficient graphical processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. PMID:19946569
NASA Astrophysics Data System (ADS)
Cecinati, F.; Wani, O.; Rico-Ramirez, M. A.
2017-11-01
Merging radar and rain gauge rainfall data is a technique used to improve the quality of spatial rainfall estimates; in particular, Kriging with External Drift (KED) is a very effective radar-rain gauge merging technique. However, kriging interpolations assume Gaussianity of the process. Rainfall has a strongly skewed, positive probability distribution, characterized by a discontinuity due to intermittency. In KED, rainfall residuals are used, implicitly calculated as the difference between rain gauge data and a linear function of the radar estimates. Rainfall residuals are non-Gaussian as well. The aim of this work is to evaluate the impact of applying KED to non-Gaussian rainfall residuals, and to assess the best techniques to improve Gaussianity. We compare Box-Cox transformations with λ parameters equal to 0.5, 0.25, and 0.1, Box-Cox with time-variant optimization of λ, normal score transformation, and a singularity analysis technique. The results suggest that Box-Cox with λ = 0.1 and the singularity analysis are not suitable for KED. Normal score transformation and Box-Cox with optimized λ, or λ = 0.25, produce satisfactory results in terms of Gaussianity of the residuals, probability distribution of the merged rainfall products, and rainfall estimate quality, when validated through cross-validation. However, it is observed that Box-Cox transformations are strongly dependent on the temporal and spatial variability of rainfall and on the units used for the rainfall intensity. Overall, applying transformations results in a quantitative improvement of the rainfall estimates only if the correct transformations for the specific data set are used.
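The transformations being compared are one-liners; the sketch below applies fixed-λ Box-Cox and a maximum-likelihood-optimized λ (the per-timestep analogue of the paper's time-variant optimization) to synthetic skewed, positive data.

```python
# Box-Cox with fixed lambdas vs. optimized lambda on skewed positive data.
# scipy.stats.boxcox returns the ML-optimal lambda when lmbda is not given.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
rain = rng.lognormal(mean=0.5, sigma=1.0, size=1000)   # skewed, positive

def boxcox_fixed(x, lam):
    return (x**lam - 1.0) / lam if lam != 0 else np.log(x)

for lam in (0.5, 0.25, 0.1):
    z = boxcox_fixed(rain, lam)
    print(f"lambda={lam}: skewness={stats.skew(z):+.3f}")

z_opt, lam_opt = stats.boxcox(rain)
print(f"optimized lambda={lam_opt:.3f}: skewness={stats.skew(z_opt):+.3f}")
```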
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phinney, N.
The SLAC Linear Collider (SLC) is the first example of an entirely new type of lepton collider. Many years of effort were required to develop the understanding and techniques needed to approach design luminosity. This paper discusses some of the key issues and problems encountered in producing a working linear collider. These include the polarized source, techniques for emittance preservation, extensive feedback systems, and refinements in beam optimization in the final focus. The SLC experience has been invaluable for testing concepts and developing designs for a future linear collider.
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
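The hybrid structure can be sketched generically: a GA-style global stage proposes starting points, and a truncated-Newton local search refines the best one. A synthetic least-squares surface stands in for the groundwater simulator, and a random population stands in for the GA; scipy's "TNC" method is a truncated-Newton implementation.

```python
# GA + truncated-Newton hybrid sketch on a synthetic misfit surface.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
true_params = np.array([2.0, -1.0, 0.5])

def misfit(p):
    # Stand-in performance function: squared error plus mild multimodality.
    return ((p - true_params) ** 2).sum() + 0.1 * np.sin(5 * p).sum()

# Stage 1: GA-like global exploration (random population here for brevity).
population = rng.uniform(-5, 5, (50, 3))
best_start = min(population, key=misfit)

# Stage 2: truncated-Newton refinement from the best individual.
res = minimize(misfit, best_start, method="TNC", bounds=[(-5, 5)] * 3)
print("estimated parameters:", res.x)
```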
Mathematical Optimization Techniques
NASA Technical Reports Server (NTRS)
Bellman, R. (Editor)
1963-01-01
The papers collected in this volume were presented at the Symposium on Mathematical Optimization Techniques held in the Santa Monica Civic Auditorium, Santa Monica, California, on October 18-20, 1960. The objective of the symposium was to bring together, for the purpose of mutual education, mathematicians, scientists, and engineers interested in modern optimization techniques. Some 250 persons attended. The techniques discussed included recent developments in linear, integer, convex, and dynamic programming as well as the variational processes surrounding optimal guidance, flight trajectories, statistical decisions, structural configurations, and adaptive control systems. The symposium was sponsored jointly by the University of California, with assistance from the National Science Foundation, the Office of Naval Research, the National Aeronautics and Space Administration, and The RAND Corporation, through Air Force Project RAND.
Evaluating forest management policies by parametric linear programing
Daniel I. Navon; Richard J. McConnen
1967-01-01
An analytical and simulation technique, parametric linear programming explores alternative conditions and devises an optimal management plan for each condition. Its application in solving policy-decision problems in the management of forest lands is illustrated in an example.
A trajectory generation framework for modeling spacecraft entry in MDAO
NASA Astrophysics Data System (ADS)
D'Souza, Sarah N.; Sarigul-Klijn, Nesrin
2016-04-01
In this paper a novel trajectory generation framework was developed that optimizes trajectory event conditions for use in a Generalized Entry Guidance algorithm. The framework was developed to be adaptable via the use of high fidelity equations of motion and drag based analytical bank profiles. Within this framework, a novel technique was implemented that resolved the sensitivity of the bank profile to atmospheric non-linearities. The framework's adaptability was established by running two different entry bank conditions. Each case yielded a reference trajectory and set of transition event conditions that are flight feasible and implementable in a Generalized Entry Guidance algorithm.
The design of digital-adaptive controllers for VTOL aircraft
NASA Technical Reports Server (NTRS)
Stengel, R. F.; Broussard, J. R.; Berry, P. W.
1976-01-01
Design procedures for VTOL automatic control systems have been developed and are presented. Using linear-optimal estimation and control techniques as a starting point, digital-adaptive control laws have been designed for the VALT Research Aircraft, a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. These control laws are designed to interface with velocity-command and attitude-command guidance logic, which could be used in short-haul VTOL operations. Developments reported here include new algorithms for designing non-zero-set-point digital regulators, design procedures for rate-limited systems, and algorithms for dynamic control trim setting.
Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.
Skariah, Deepak G; Arigovindan, Muthuvel
2017-06-19
We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is constructed as a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.
Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen
2018-01-05
With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Moreover, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. In addition, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
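The SVD compression idea can be sketched in a few lines: exploit the near-rank-deficiency of the influence matrix to optimize in a reduced space and then reconstruct the beam weights. The matrices below are random stand-ins, and plain non-negative least squares replaces the paper's LP model with hard/soft dose constraints.

```python
# SVD-compressed beam weight optimization sketch with non-negative weights.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
n_vox, n_beams, rank = 400, 120, 25
D = rng.random((n_vox, rank)) @ rng.random((rank, n_beams))  # influence matrix
d_target = rng.uniform(0.8, 1.0, n_vox)                      # prescribed dose

U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = int((s > 1e-8 * s[0]).sum())        # numerical rank of the influence matrix

# Reduced problem: ||D w - d||^2 = ||S_k V_k^T w - U_k^T d||^2 + const.
A_red = s[:k, None] * Vt[:k]            # k x n_beams compressed operator
b_red = U[:, :k].T @ d_target           # target projected into reduced space

w, _ = nnls(A_red, b_red)               # non-negative beam weights
keep = w > 0.01 * w.max()               # crude low-weight beam reduction
print(f"rank kept: {k}, beams kept: {keep.sum()} / {n_beams}")
```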
NASA Astrophysics Data System (ADS)
Kumar Singh, Vinay; Dalal, U. D.
2017-06-01
To inhibit the effect of the non-linearity of LEDs, which leads to a significant increase in the peak-to-average power ratio (PAPR) of OFDM signals in visible light communication (VLC), we propose a frequency-modulated constant-envelope OFDM (FM CE-OFDM) technique. The abrupt amplitude variations in the OFDM signal are frequency modulated before being applied to the LED for electro-optical conversion, resulting in a constant-envelope signal. The LED is maintained in the linear region of operation by this constant-envelope signal at sufficient DC bias. The proposed technique reduces the PAPR to the least possible value, ≈0 dB. We theoretically analyze and perform numerical simulations to assess the enhancement of the proposed system. The optimal modulation index is found to be 0.3. The metric pertaining to the evaluation of phase discontinuity is derived and is found to be smaller for FM CE-OFDM than for phase-modulated (PM) CE-OFDM. The receiver sensitivity is improved by 1.6 dB for a transmission distance of 2 m for FM CE-OFDM compared to PM CE-OFDM at the FEC threshold. We compare the BER performance of ideal OFDM (without the non-linearity of the LED), power back-off OFDM, PM CE-OFDM and FM CE-OFDM in an optical wireless channel (OWC) scenario. FM CE-OFDM shows an improvement of 2.1 dB in SNR at the FEC threshold compared to PM CE-OFDM. It also shows an improvement of 11 dB when compared with the power back-off technique used in VLC systems for 10 dB power back-off.
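The PAPR contrast at the heart of the scheme is easy to demonstrate numerically: a baseband OFDM waveform has a high PAPR, while frequency-modulating a carrier with the same message yields a unit envelope and therefore ~0 dB PAPR. The FFT sizes and the modulation index below are illustrative (h = 0.3 echoes the paper's reported optimum, but the rest of the signal chain is simplified).

```python
# PAPR of a plain OFDM envelope vs. a constant-envelope FM version of it.
import numpy as np

rng = np.random.default_rng(8)
N, oversample = 64, 8
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), N)  # QPSK
spectrum = np.zeros(N * oversample, complex)
spectrum[:N] = symbols
ofdm = np.fft.ifft(spectrum)                 # oversampled OFDM waveform

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# FM CE-OFDM style: phase-continuously frequency-modulate with the message.
m = ofdm.real / np.abs(ofdm.real).max()      # normalized real message
h = 0.3                                      # assumed modulation index
fm = np.exp(1j * 2 * np.pi * h * np.cumsum(m))   # |fm| == 1 by construction

print(f"OFDM PAPR: {papr_db(ofdm):.1f} dB, FM CE-OFDM PAPR: {papr_db(fm):.1f} dB")
```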
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and to steer or produce a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are often unable to form the radiation beam precisely towards the target user and are not good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in Matlab. PMID:25147859
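As a baseline for what the swarm/AI methods try to improve on, the closed-form LCMV weights for a uniform linear array follow the textbook formula w = R⁻¹C(Cᴴ R⁻¹ C)⁻¹f. Array geometry, arrival angles, and interference power below are illustrative.

```python
# Closed-form LCMV weights for a uniform linear array (single distortionless
# constraint toward the desired user).
import numpy as np

def steering(n_ant, theta_deg, spacing=0.5):
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_ant))

n = 8
a_sig = steering(n, 0.0)                      # desired user at broadside
a_int = steering(n, 40.0)                     # interferer at 40 degrees
R = 10.0 * np.outer(a_int, a_int.conj()) + np.eye(n)   # interference + noise

C, f = a_sig[:, None], np.array([1.0])
Ri_C = np.linalg.solve(R, C)
w = Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)        # LCMV weights

print("gain toward user:", abs(w.conj() @ a_sig))       # ~1 (distortionless)
print("gain toward interferer:", abs(w.conj() @ a_int)) # strongly suppressed
```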
NASA Astrophysics Data System (ADS)
Abdeh-Kolahchi, A.; Satish, M.; Datta, B.
2004-05-01
A state-of-the-art groundwater monitoring network design method is introduced. The method combines groundwater flow and transport results with a genetic algorithm (GA) optimization to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested groundwater optimal monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop three-dimensional groundwater flow and contamination transport simulations. The groundwater flow and contamination simulation results are introduced as input to the optimization model, which uses a genetic algorithm to identify the optimal monitoring network design from several candidate monitoring locations. The monitoring network design model uses a genetic algorithm with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the non-linearity of the objective function also increases, which makes it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique capable of finding the optimal solution for many complex problems. In this study, the GA approach, capable of finding the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions, is discussed. However, to ensure the efficiency of the solution process and the global optimality of the solution obtained using GA, appropriate GA parameter values must be specified. The sensitivity of genetic algorithm parameters such as the random number seed, crossover probability, mutation probability, and elitism is discussed for the solution of the monitoring network design problem.
NASA Technical Reports Server (NTRS)
Zaychik, Kirill B.; Cardullo, Frank M.
2012-01-01
Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. The presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such modifications to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to the offset of the centroid of the cockpit relative to the center of rotation alone. Results provided in this report suggest improved performance of the motion cueing algorithm.
NASA Astrophysics Data System (ADS)
Bertolesi, Elisa; Milani, Gabriele; Poggi, Carlo
2016-12-01
Two FE modeling techniques are presented and critically discussed for the non-linear analysis of tuff masonry panels reinforced with FRCM and subjected to standard diagonal compression tests. The specimens, tested at the University of Naples (Italy), are unreinforced and FRCM retrofitted walls. The extensive characterization of the constituent materials allowed adopting here very sophisticated numerical modeling techniques. In particular, here the results obtained by means of a micro-modeling strategy and homogenization approach are compared. The first modeling technique is a tridimensional heterogeneous micro-modeling where constituent materials (bricks, joints, reinforcing mortar and reinforcing grid) are modeled separately. The second approach is based on a two-step homogenization procedure, previously developed by the authors, where the elementary cell is discretized by means of three-noded plane stress elements and non-linear interfaces. The non-linear structural analyses are performed replacing the homogenized orthotropic continuum with a rigid element and non-linear spring assemblage (RBSM). All the simulations here presented are performed using the commercial software Abaqus. Pros and cons of the two approaches are herein discussed with reference to their reliability in reproducing global force-displacement curves and crack patterns, as well as to the rather different computational effort required by the two strategies.
Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kutz, J Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.
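The DMD computation the paper builds on is brief enough to sketch: fit a best-fit linear operator relating snapshot pairs via the SVD of the data matrix. The toy data come from a known linear system so the recovered eigenvalues are checkable; in the Koopman setting, the snapshot rows would instead be nonlinear observables of the state.

```python
# Exact DMD sketch: recover the spectrum of a linear operator from snapshots.
import numpy as np

rng = np.random.default_rng(9)
A_true = np.array([[0.95, 0.1], [0.0, 0.9]])
x = rng.normal(size=2)
snaps = []
for _ in range(100):
    x = A_true @ x
    snaps.append(x)

X = np.array(snaps[:-1]).T                 # states at time k
Xp = np.array(snaps[1:]).T                 # states at time k+1

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 2                                      # truncation rank
A_tilde = (U[:, :r].T @ Xp @ Vt[:r].T) / s[:r]   # reduced best-fit operator
print("DMD eigenvalues:", np.sort(np.linalg.eigvals(A_tilde)))  # ~ [0.9, 0.95]
```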
NASA Astrophysics Data System (ADS)
Hosseini, K.; Ayati, Z.; Ansari, R.
2018-04-01
One specific class of non-linear evolution equations, known as the Tzitzéica-type equations, has received great attention from researchers in non-linear science. In this article, new exact solutions of the Tzitzéica-type equations arising in non-linear optics, including the Tzitzéica, Dodd-Bullough-Mikhailov and Tzitzéica-Dodd-Bullough equations, are obtained using the exp_a function method. The integration technique suggests a useful and reliable method for extracting new exact solutions of a wide range of non-linear evolution equations.
NASA Technical Reports Server (NTRS)
Oakley, Celia M.; Barratt, Craig H.
1990-01-01
Recent results in linear controller design are used to design an end-point controller for an experimental two-link flexible manipulator. A nominal 14-state linear-quadratic-Gaussian (LQG) controller was augmented with a 528-tap finite-impulse-response (FIR) filter designed using convex optimization techniques. The resulting 278-state controller produced improved end-point trajectory tracking and disturbance rejection in simulation and experimentally in real time.
EMG prediction from Motor Cortical Recordings via a Non-Negative Point Process Filter
Nazarpour, Kianoush; Ethier, Christian; Paninski, Liam; Rebesco, James M.; Miall, R. Chris; Miller, Lee E.
2012-01-01
A constrained point process filtering mechanism for prediction of electromyogram (EMG) signals from multi-channel neural spike recordings is proposed here. Filters from the Kalman family are inherently sub-optimal in dealing with non-Gaussian observations, or a state evolution that deviates from the Gaussianity assumption. To address these limitations, we modeled the non-Gaussian neural spike train observations by using a generalized linear model (GLM) that encapsulates covariates of neural activity, including the neurons’ own spiking history, concurrent ensemble activity, and extrinsic covariates (EMG signals). In order to predict the envelopes of EMGs, we reformulated the Kalman filter (KF) in an optimization framework and utilized a non-negativity constraint. This structure characterizes the non-linear correspondence between neural activity and EMG signals reasonably. The EMGs were recorded from twelve forearm and hand muscles of a behaving monkey during a grip-force task. For the case of limited training data, the constrained point process filter improved the prediction accuracy when compared to a conventional Wiener cascade filter (a linear causal filter followed by a static non-linearity) for different bin sizes and delays between input spikes and EMG output. For longer training data sets, results of the proposed filter and that of the Wiener cascade filter were comparable. PMID:21659018
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and the sequential quadratic programming techniques (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolving operations such as recombination, mutation and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, gradually approaching optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method of solving a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach. This new penalty function approach is then compared with the other existing penalty functions.
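A minimal static-penalty formulation, of the kind such studies compare, looks like the sketch below: the GA's fitness is the objective plus a weighted quadratic penalty on constraint violations. The problem, penalty coefficient, and the crude random-search stand-in for the GA are all illustrative.

```python
# Static quadratic penalty for GA-style constrained optimization.
import numpy as np

rng = np.random.default_rng(10)

def objective(x):
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def constraints(x):
    # g_i(x) <= 0 means feasible; here: x0 + x1 <= 2 and x0 >= 0.
    return np.array([x[0] + x[1] - 2.0, -x[0]])

def penalized_fitness(x, r=100.0):
    g = constraints(x)
    return objective(x) + r * (np.maximum(0.0, g) ** 2).sum()

pop = rng.uniform(-3, 3, (200, 2))          # random population stand-in for a GA
best = min(pop, key=penalized_fitness)
print("best penalized individual:", best,
      "violations:", np.maximum(0, constraints(best)))
```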
Design principles and operating principles: the yin and yang of optimal functioning.
Voit, Eberhard O
2003-03-01
Metabolic engineering has as a goal the improvement of yield of desired products from microorganisms and cell lines. This goal has traditionally been approached with experimental biotechnological methods, but it is becoming increasingly popular to precede the experimental phase by a mathematical modeling step that allows objective pre-screening of possible improvement strategies. The models are either linear and represent the stoichiometry and flux distribution in pathways or they are non-linear and account for the full kinetic behavior of the pathway, which is often significantly affected by regulatory signals. Linear flux analysis is simpler and requires less input information than a full kinetic analysis, and the question arises whether the consideration of non-linearities is really necessary for devising optimal strategies for yield improvements. The article analyzes this question with a generic, representative pathway. It shows that flux split ratios, which are the key criterion for linear flux analysis, are essentially sufficient for unregulated, but not for regulated branch points. The interrelationships between regulatory design on one hand and optimal patterns of operation on the other suggest the investigation of operating principles that complement design principles, like a user's manual complements the hardwiring of electronic equipment.
X-ray computed tomography to study rice (Oryza sativa L.) panicle development
Jhala, Vibhuti M.; Thaker, Vrinda S.
2015-01-01
Computed tomography is an important technique for developing digital agricultural models that may help farmers and breeders increase crop quality and yield. In the present study, an attempt has been made to understand rice seed development within the panicle at different developmental stages using this technique. During the first phase of cell division, the Hounsfield Unit (HU) value remained low, increased in the dry matter accumulation phase, and finally reached a maximum at the maturation stage. HU value and seed dry weight showed a linear relationship in the varieties studied. This relationship was confirmed subsequently using seven other varieties. This is therefore an easy, simple, and non-invasive technique which may help breeders to select the best varieties. In addition, it may also help farmers to optimize post-anthesis agronomic practices as well as to decide the crop harvest time for higher grain yield. PMID:26265763
End State: The Fallacy of Modern Military Planning
2017-04-06
operational planning for non-linear, complex scenarios requires application of non-linear, advanced planning techniques such as design methodology ... cannot be approached in a linear, mechanistic manner by a universal planning methodology. Theater/global campaign plans and theater strategies offer no ... strategic environments, and instead prescribes a universal linear methodology that pays no mind to strategic complexity. This universal application
Sequential design of discrete linear quadratic regulators via optimal root-locus techniques
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Yates, Robert E.; Ganesan, Sekar
1989-01-01
A sequential method employing classical root-locus techniques has been developed in order to determine the quadratic weighting matrices and discrete linear quadratic regulators of multivariable control systems. At each recursive step, an intermediate unity rank state-weighting matrix that contains some invariant eigenvectors of that open-loop matrix is assigned, and an intermediate characteristic equation of the closed-loop system containing the invariant eigenvalues is created.
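For orientation, the standard (non-sequential) discrete LQR solution that such a design can be compared against solves the discrete algebraic Riccati equation for given weighting matrices; the system matrices and weights below are illustrative.

```python
# Discrete LQR baseline: solve the discrete Riccati equation and form the gain.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double integrator, dt=0.1
B = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.1])                  # state weighting
R = np.array([[0.01]])                   # control weighting

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback gain
print("gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```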
Asnoune, M; Abdelmalek, F; Djelloul, A; Mesghouni, K; Addou, A
2016-11-01
In household waste management, the objective is to design an optimal integrated system, where the terms 'optimal' and 'integrated' refer generally to matching the waste with the techniques of treatment, valorization and elimination, often at the lowest possible cost. The optimization of household waste management using operational methodologies has not yet been applied in any Algerian district. We proposed an optimization of the valorization of household waste in Tiaret city in order to lower the total management cost. The methodology is modelled by non-linear mathematical equations using 28 decision variables and aims to optimally assign the seven components of household waste (i.e. plastic, cardboard paper, glass, metals, textiles, organic matter and others) among four centres of treatment [i.e. waste to energy (WTE) or incineration, composting (CM), anaerobic digestion (ANB) or methanization, and landfilling (LF)]. The analysis of the obtained results shows that the variation of total cost is mainly due to the assignment of waste among the treatment centres and that certain treatments cannot be applied to household waste in Tiaret city. On the other hand, certain techniques of valorization have been favoured by the optimization. In this work, four scenarios have been proposed to optimize the system cost, and the modelling shows that the mixed scenario (the three treatment centres CM, ANB, LF) offers the best combination of waste treatment technologies, with an optimal solution for the system (cost and profit).
Optimal control of information epidemics modeled as Maki Thompson rumors
NASA Astrophysics Data System (ADS)
Kandhway, Kundan; Kuri, Joy
2014-12-01
We model the spread of information in a homogeneously mixed population using the Maki Thompson rumor model. We formulate an optimal control problem, from the perspective of single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of forward backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations when the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
Techniques for Single System Integration of Elastic Simulation Features
NASA Astrophysics Data System (ADS)
Mitchell, Nathan M.
Techniques for simulating the behavior of elastic objects have matured considerably over the last several decades, tackling diverse problems from non-linear models for incompressibility to accurate self-collisions. Alongside these contributions, advances in parallel hardware design and algorithms have made simulation more efficient and affordable than ever before. However, prior research often has had to commit to design choices that compromise certain simulation features to better optimize others, resulting in a fragmented landscape of solutions. For complex, real-world tasks, such as virtual surgery, a holistic approach is desirable, where complex behavior, performance, and ease of modeling are supported equally. This dissertation caters to this goal in the form of several interconnected threads of investigation, each of which contributes a piece of an unified solution. First, it will be demonstrated how various non-linear materials can be combined with lattice deformers to yield simulations with behavioral richness and a high potential for parallelism. This potential will be exploited to show how a hybrid solver approach based on large macroblocks can accelerate the convergence of these deformers. Further extensions of the lattice concept with non-manifold topology will allow for efficient processing of self-collisions and topology change. Finally, these concepts will be explored in the context of a case study on virtual plastic surgery, demonstrating a real-world problem space where these ideas can be combined to build an expressive authoring tool, allowing surgeons to record procedures digitally for future reference or education.
NASA Astrophysics Data System (ADS)
Manzanares, Carlos; Diaz, Marlon; Barton, Ann; Nyaupane, Parashu R.
2017-06-01
The thermal lens technique is applied to vibrational overtone spectroscopy of solutions of naphthalene in n-hexane. The pump and probe thermal lens technique is found to be very sensitive for detecting samples of low composition (ppm) in transparent solvents. In this experiment, two different probe lasers were used: one at 488 nm and another at 568 nm. The C-H fifth vibrational overtone spectrum of naphthalene is detected at room temperature for different concentrations. A plot of normalized integrated intensity as a function of concentration of naphthalene in solution reveals a non-linear behavior at low concentrations when using the 488 nm probe and a linear behavior over the entire range of concentrations when using the 568 nm probe. The non-linearity cannot be explained by assuming solvent enhancement at low concentrations. A two-color absorption model that includes the simultaneous absorption of the pump and probe lasers could explain the enhanced magnitude and the non-linear behavior of the thermal lens signal. Other possible mechanisms are also discussed.
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1984-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
Linear Approximation to Optimal Control Allocation for Rocket Nozzles with Elliptical Constraints
NASA Technical Reports Server (NTRS)
Orr, Jeb S.; Wall, John W.
2011-01-01
In this paper we present a straightforward technique for assessing and realizing the maximum control moment effectiveness for a launch vehicle with multiple constrained rocket nozzles, where elliptical deflection limits in gimbal axes are expressed as an ensemble of independent quadratic constraints. A direct method of determining an approximating ellipsoid that inscribes the set of attainable angular accelerations is derived. In the case of a parameterized linear generalized inverse, the geometry of the attainable set is computationally expensive to obtain but can be approximated to a high degree of accuracy with the proposed method. A linear inverse can then be optimized to maximize the volume of the true attainable set by maximizing the volume of the approximating ellipsoid. The use of a linear inverse does not preclude the use of linear methods for stability analysis and control design, preferred in practice for assessing the stability characteristics of the inertial and servoelastic coupling appearing in large boosters. The present techniques are demonstrated via application to the control allocation scheme for a concept heavy-lift launch vehicle.
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these, mass and solid propellant burn depth serve as the "system" state elements. The "parameter" state elements can include aerodynamic coefficient, inertia, center-of-gravity, atmospheric wind, and other deviations from referenced values. Propulsion parameter state elements have been included not as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are non-linear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
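As a rough illustration of the extended Kalman filter machinery described above (not the Shuttle implementation: the one-dimensional tracking model, noise levels, and range-style measurement below are hypothetical):

```python
import numpy as np

# Generic EKF sketch: linear dynamics, non-linear (range-style) measurement,
# so the measurement Jacobian H must be re-linearized at each step.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])      # state: [position, velocity]
Q = 1e-3 * np.eye(2)                       # process noise covariance
R = np.array([[0.05]])                     # measurement noise covariance
d = 10.0                                   # hypothetical sensor offset

def h(x):                                  # non-linear range measurement
    return np.array([np.hypot(x[0], d)])

def H_jac(x):                              # Jacobian of h, evaluated at x
    r = np.hypot(x[0], d)
    return np.array([[x[0] / r, 0.0]])

def ekf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update (linearize h about the predicted state)
    H = H_jac(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 1.0]), np.eye(2)
for k in range(50):
    true_pos = 1.0 * (k + 1) * dt
    z = np.array([np.hypot(true_pos, d)]) + np.sqrt(R[0, 0]) * np.random.randn(1)
    x, P = ekf_step(x, P, z)
print("estimated state:", x)
```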
NASA Technical Reports Server (NTRS)
Tag, I. A.; Lumsdaine, E.
1978-01-01
The general non-linear three-dimensional equation for acoustic potential is derived by using a perturbation technique. The linearized axisymmetric equation is then solved by using a finite element algorithm based on the Galerkin formulation for a harmonic time dependence. The solution is carried out in complex number notation for the acoustic velocity potential. Linear, isoparametric, quadrilateral elements with non-uniform distribution across the duct section are implemented. The resultant global matrix is stored in banded form and solved by using a modified Gauss elimination technique. Sound pressure levels and acoustic velocities are calculated from post element solutions. Different duct geometries are analyzed and compared with experimental results.
Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Linderoth
2011-11-06
The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.
SPX: The Tenth International Conference on Stochastic Programming
2004-10-01
Conference program fragments (partially recoverable): a talk on structuring energy contract portfolios in competitive markets (Antonio Alonso-Ayuso, Universidad Rey Juan Carlos) followed by one on mean-risk optimization; a portfolio optimization session chaired by Gerd Infanger (Stanford University), including a talk on the impact of serial correlation of returns; and a contribution in which the non-linear penalty term in the objective of the L-shaped method is approximated by a linear one.
Advanced analysis technique for the evaluation of linear alternators and linear motors
NASA Technical Reports Server (NTRS)
Holliday, Jeffrey C.
1995-01-01
A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated, such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. It involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.
Batch-mode Reinforcement Learning for improved hydro-environmental systems management
NASA Astrophysics Data System (ADS)
Castelletti, A.; Galelli, S.; Restelli, M.; Soncini-Sessa, R.
2010-12-01
Despite the great progress made in the last decades, the optimal management of hydro-environmental systems remains a very active and challenging research area. The combination of multiple, often conflicting interests, strong non-linearities in the physical processes and the management objectives, strong uncertainties in the inputs, and a high-dimensional state space makes the problem challenging and intriguing. Stochastic Dynamic Programming (SDP) is one of the most suitable methods for designing (Pareto) optimal management policies while preserving the original problem complexity. However, it suffers from a dual curse, which, de facto, prevents its practical application to even reasonably complex water systems. (i) The computational requirement grows exponentially with the state and control dimension (Bellman's curse of dimensionality), so that SDP cannot be used with water systems whose state vector includes more than a few (2-3) components. (ii) An explicit model of each system component is required (curse of modelling) to anticipate the effects of the system transitions; i.e., any information included in the SDP framework can only be either a state variable described by a dynamic model or a stochastic disturbance, independent in time, with an associated pdf. Any exogenous information that could effectively improve the system operation cannot be explicitly considered in taking the management decision, unless a dynamic model is identified for each additional piece of information, thus adding to the problem complexity through the curse of dimensionality (additional state variables). To mitigate this dual curse, the combined use of batch-mode Reinforcement Learning (bRL) and Dynamic Model Reduction (DMR) techniques is explored in this study. bRL overcomes the curse of modelling by replacing explicit modelling with an external simulator and/or historical observations. The curse of dimensionality is averted using a functional approximation of the SDP value function based on proper non-linear regressors. DMR reduces the complexity and the associated computational requirements of non-linear distributed process-based models, making them suitable for inclusion in optimization schemes. Results from real-world applications of the approach are also presented, including reservoir operation with both quality and quantity targets.
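A minimal sketch of batch-mode RL in the fitted Q-iteration style (not the authors' implementation: the synthetic one-dimensional "storage" transitions, the discrete release actions, and the tree-based regressor below are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
gamma, n_iters, actions = 0.95, 20, np.array([0.0, 0.5, 1.0])

# Synthetic batch of transitions (s, a, r, s'): a toy storage balance
# s' = s + inflow - a, with a reward penalizing deviation from a target level.
n = 2000
s = rng.uniform(0.0, 2.0, n)
a = rng.choice(actions, n)
s_next = np.clip(s + rng.normal(0.5, 0.2, n) - a, 0.0, 2.0)
r = -(s_next - 1.0) ** 2

X = np.column_stack([s, a])
q = None
for it in range(n_iters):
    if q is None:
        target = r                           # first iteration: Q_0 = 0
    else:
        # max over actions of the current Q estimate at the next states
        q_next = np.column_stack([
            q.predict(np.column_stack([s_next, np.full(n, act)]))
            for act in actions
        ])
        target = r + gamma * q_next.max(axis=1)
    q = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X, target)

# Greedy release decision at an example storage level
state = 1.4
print("greedy release:", actions[np.argmax(
    [q.predict([[state, act]])[0] for act in actions])])
```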
A study of data analysis techniques for the multi-needle Langmuir probe
NASA Astrophysics Data System (ADS)
Hoang, H.; Røed, K.; Bekkeng, T. A.; Moen, J. I.; Spicher, A.; Clausen, L. B. N.; Miloch, W. J.; Trondsen, E.; Pedersen, A.
2018-06-01
In this paper we evaluate two data analysis techniques for the multi-needle Langmuir probe (m-NLP). The instrument uses several cylindrical Langmuir probes, which are positively biased with respect to the plasma potential in order to operate in the electron saturation region. Since the currents collected by these probes can be sampled at kilohertz rates, the instrument is capable of resolving the ionospheric plasma structure down to the meter scale. The two data analysis techniques, a linear fit and a non-linear least squares fit, are discussed in detail using data from the Investigation of Cusp Irregularities 2 sounding rocket. It is shown that each technique has pros and cons with respect to the m-NLP implementation. Even though the linear fitting technique compares well against measurements from incoherent scatter radar and other in situ instruments, the implementation could be improved with longer probes that can be cleaned during operation. The non-linear least squares fitting technique would be more reliable provided that a higher number of probes is deployed.
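The two fitting approaches can be sketched as follows. This is a schematic, not the authors' pipeline: the OML-type current model and all constants below are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic probe currents at four bias voltages (hypothetical values).
V = np.array([2.0, 3.0, 4.0, 5.0])                    # bias voltages, V

def oml_current(V, I0, Vf, Te):
    # OML-type electron saturation current for a cylindrical probe
    return I0 * np.sqrt(1.0 + (V - Vf) / Te)

I = oml_current(V, 1e-6, 0.0, 0.3) + 1e-9 * np.random.randn(V.size)

# (1) Linear technique: for the OML model, I^2 is linear in V, and the slope
# is proportional to n_e^2, so density follows from sqrt(slope).
slope, intercept = np.polyfit(V, I**2, 1)
print("I^2-vs-V slope (prop. to n_e^2):", slope)

# (2) Non-linear least squares: fit the full model, recovering I0, Vf and
# the electron temperature Te as well.
popt, pcov = curve_fit(oml_current, V, I, p0=[1e-6, 0.0, 0.5],
                       bounds=([0.0, -1.0, 0.01], [1e-3, 1.5, 5.0]))
print("non-linear fit [I0, Vf, Te]:", popt)
```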
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benini, Marco, E-mail: mbenini87@gmail.com, E-mail: mbenini@uni-potsdam.de
Being motivated by open questions in gauge field theories, we consider non-standard de Rham cohomology groups for timelike compact and spacelike compact support systems. These cohomology groups are shown to be isomorphic respectively to the usual de Rham cohomology of a spacelike Cauchy surface and its counterpart with compact support. Furthermore, an analog of the usual Poincaré duality for de Rham cohomology is shown to hold for the case with non-standard supports as well. We apply these results to find optimal spaces of linear observables for analogs of arbitrary degree k of both the vector potential and the Faraday tensor. The term optimal has to be intended in the following sense: The spaces of linear observables we consider distinguish between different configurations; in addition to that, there are no redundant observables. This last point in particular heavily relies on the analog of Poincaré duality for the new cohomology groups.
Non-invasive absolute measurement of leaf water content using terahertz quantum cascade lasers.
Baldacci, Lorenzo; Pagano, Mario; Masini, Luca; Toncelli, Alessandra; Carelli, Giorgio; Storchi, Paolo; Tredicucci, Alessandro
2017-01-01
Plant water resource management is one of the main future challenges posed by recent climatic changes. Knowledge of the plant water content can be indispensable for water saving strategies. Terahertz spectroscopic techniques are particularly promising as a non-invasive tool for measuring leaf water content, thanks to the high predominance of the water contribution to the total leaf absorption. Terahertz quantum cascade lasers (THz QCL) are one of the most successful sources of THz radiation. Here we present a new method which improves the precision of THz techniques by combining a transmission measurement performed using a THz QCL source with simple pictures of leaves taken by an optical camera. As a proof of principle, we performed transmission measurements on six plants of Vitis vinifera L. (cv "Colorino"). We found a linear law which relates the leaf water mass to the product of the leaf optical depth in the THz and the projected area. Results are in good agreement with the proposed law, which reproduces the experimental data with 95% accuracy. This method may overcome the issues related to intra-variety heterogeneities and retrieve the leaf water mass in a fast, simple, and non-invasive way. In the future this technique could highlight different behaviours in preserving the water status during drought stress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiaohu; Shi, Di; Wang, Zhiwei
Shunt FACTS devices, such as a Static Var Compensator (SVC), are capable of providing local reactive power compensation. They are widely used in the network to reduce the real power loss and improve the voltage profile. This paper proposes a planning model based on mixed integer conic programming (MICP) to optimally allocate SVCs in the transmission network considering load uncertainty. The load uncertainties are represented by a number of scenarios. Reformulation and linearization techniques are utilized to transform the original non-convex model into a convex second order cone programming (SOCP) model. Numerical case studies based on the IEEE 30-bus system demonstrate the effectiveness of the proposed planning model.
Neural networks: What non-linearity to choose
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.; Quintana, Chris
1991-01-01
Neural networks are now one of the most successful learning formalisms. Neurons transform inputs (x_1, ..., x_n) into an output f(w_1 x_1 + ... + w_n x_n), where f is a non-linear function and the w_i are adjustable weights. Which f should be chosen? Usually the logistic function is chosen, but sometimes the use of different functions improves the practical efficiency of the network. The problem of choosing f is formulated as a mathematical optimization problem and solved under different optimality criteria. As a result, a list of functions f that are optimal under these criteria is determined. This list includes both functions that were empirically proven to be the best for some problems, and some new functions that may be worth trying.
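For concreteness, the neuron model in question can be written in a few lines; the candidate non-linearities shown (logistic and tanh) are just standard examples, not the list derived in the paper.

```python
import numpy as np

def neuron(x, w, f):
    """Transform inputs x into f(w_1*x_1 + ... + w_n*x_n)."""
    return f(np.dot(w, x))

logistic = lambda z: 1.0 / (1.0 + np.exp(-z))   # the usual choice of f

x = np.array([0.2, -0.5, 1.0])                  # example inputs
w = np.array([0.8, 0.1, -0.3])                  # adjustable weights
print(neuron(x, w, logistic), neuron(x, w, np.tanh))
```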
Fuzzy Logic Controlled Solar Module for Driving Three- Phase Induction Motor
NASA Astrophysics Data System (ADS)
Afiqah Zainal, Nurul; Sooi Tat, Chan; Ajisman
2016-02-01
Renewable energy produced by a solar module offers advantages for driving a three-phase induction motor in remote areas. However, a solar module's output is uncertain and complex. A fuzzy logic controller is one of the controllers that can handle a non-linear system and extract the maximum power of a solar module. The fuzzy logic controller is used in a Maximum Power Point Tracking (MPPT) technique to control the Pulse-Width Modulation (PWM) switching of the power electronics circuit. A DC-DC boost converter is used to boost the photovoltaic voltage to the desired output and to supply a voltage source inverter controlled by three-phase PWM generated by a microcontroller. The IGBT-switched voltage source inverter (VSI) produces alternating current (AC) voltage from the direct current (DC) output of the boost converter to control the speed of the three-phase induction motor. Results showed that the output power of the solar module is optimized and controlled by the fuzzy logic controller, and that the three-phase induction motor can be driven and controlled through the VSI switching using the PWM signal generated by the fuzzy logic controller. This confirms that the non-linear system can be controlled and used for driving a three-phase induction motor.
Vesapogu, Joshi Manohar; Peddakotla, Sujatha; Kuppa, Seetha Rama Anjaneyulu
2013-01-01
With the advancements in semiconductor technology, high power medium voltage (MV) drives are extensively used in numerous industrial applications. A challenging technical requirement of MV drives is to control a multilevel inverter (MLI) with low total harmonic distortion (%THD), satisfying the IEEE Standard 519-1992 harmonic guidelines, and with low switching losses. Among all modulation control strategies for MLIs, the selective harmonic elimination (SHE) technique is one of the traditionally preferred control techniques at the fundamental switching frequency, offering a better harmonic profile. On the other hand, the equations formed by the SHE technique are highly non-linear in nature and may have multiple solutions, a single solution, or even no solution at a particular modulation index (MI). However, in some MV drive applications, it is required to operate over a range of MI. Providing analytical solutions for the SHE equations over the whole range of MI from 0 to 1 has been a challenging task for researchers. In this paper, an attempt is made to solve the SHE equations by using deterministic and stochastic optimization methods, and a comparative harmonic analysis has been carried out. An effective algorithm which minimizes %THD with the least computational effort among all the optimization algorithms is presented. To validate the effectiveness of the proposed MPSO technique, an experiment was carried out on a low-power prototype of a three-phase CHB 11-level inverter using an FPGA-based Xilinx Spartan-3A DSP controller. The experimental results proved that the MPSO technique successfully solves the SHE equations over the whole range of MI from 0 to 1, and the %THD obtained over the major range of MI also satisfies the IEEE 519-1992 harmonic guidelines.
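A minimal sketch of the SHE equations for an 11-level cascaded H-bridge (five switching angles, eliminating the 5th, 7th, 11th and 13th harmonics), solved here with a standard deterministic least-squares routine rather than the authors' MPSO; the modulation index value is an arbitrary example.

```python
import numpy as np
from scipy.optimize import least_squares

M = 0.8                      # modulation index (example value)
harmonics = [5, 7, 11, 13]   # low-order harmonics to eliminate

def she_residuals(theta):
    # Fundamental amplitude condition plus one equation per eliminated harmonic.
    res = [np.cos(theta).sum() - 5.0 * M]
    res += [np.cos(k * theta).sum() for k in harmonics]
    return res

theta0 = np.linspace(0.1, 1.3, 5)            # initial guess in (0, pi/2)
sol = least_squares(she_residuals, theta0, bounds=(0.0, np.pi / 2))
print("angles (deg):", np.degrees(np.sort(sol.x)))
print("residuals:", sol.fun)   # near zero when a solution exists at this M
```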
1991-05-01
Report fragments (de Havilland Division, Downsview, Ontario; most of the text is extraction residue): the recoverable material distinguishes a static non-linearity, y = f(dx/dt) = -f(-dx/dt), from a hysteresis-type non-linearity, y = f(x, sign(dx/dt)) = -f(-x, sign(-dx/dt)), and lists contents beginning with 1. Introduction; 2. The SDG Gust Model; 3. Establishing Critical ... (truncated).
LQR-Based Optimal Distributed Cooperative Design for Linear Discrete-Time Multiagent Systems.
Zhang, Huaguang; Feng, Tao; Liang, Hongjing; Luo, Yanhong
2017-03-01
In this paper, a novel linear quadratic regulator (LQR)-based optimal distributed cooperative design method is developed for synchronization control of general linear discrete-time multiagent systems on a fixed, directed graph. Sufficient conditions are derived for synchronization, which restrict the graph eigenvalues to a bounded circular region in the complex plane. The synchronizing speed issue is also considered, and it turns out that the synchronizing region shrinks as the synchronizing speed becomes faster. To obtain more desirable synchronizing capacity, the weighting matrices are selected by fully utilizing the guaranteed gain margin of the optimal regulators. Based on the developed LQR-based cooperative design framework, an approximate dynamic programming technique is successfully introduced to achieve (partially or completely) model-free cooperative design for linear multiagent systems. Finally, two numerical examples are given to illustrate the effectiveness of the proposed design methods.
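The discrete-time LQR gain at the heart of this design framework can be computed in a few lines; the double-integrator agent model and weights below are arbitrary illustrations, not the paper's multiagent setup.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Single-agent discrete-time LQR: minimize sum(x'Qx + u'Ru).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])     # double-integrator dynamics
B = np.array([[0.5 * dt**2], [dt]])
Q, R = np.eye(2), np.array([[1.0]])

P = solve_discrete_are(A, B, Q, R)                    # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # optimal gain
print("LQR gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```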
Optimizing spectral CT parameters for material classification tasks
NASA Astrophysics Data System (ADS)
Rigie, D. S.; La Rivière, P. J.
2016-06-01
In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POCs) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POCs predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
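The Hotelling observer figure of merit reduces to a few linear-algebra steps; the sketch below assumes Gaussian data with a shared class covariance and uses synthetic class statistics (all sizes and values hypothetical).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical class means (e.g., two candidate materials) and a shared
# data covariance for an n-dimensional spectral measurement.
n = 8
mu0, mu1 = rng.normal(size=n), rng.normal(size=n) + 0.5
S = np.cov(rng.normal(size=(n, 500)))        # estimated covariance, n x n

dmu = mu1 - mu0
w = np.linalg.solve(S, dmu)                  # Hotelling template S^{-1} (mu1 - mu0)
snr = np.sqrt(dmu @ w)                       # Hotelling SNR (detectability)
auc = norm.cdf(snr / np.sqrt(2.0))           # AUC for Gaussian, equal-covariance data
print(f"Hotelling SNR = {snr:.3f}, AUC = {auc:.3f}")
```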
Linear theory for filtering nonlinear multiscale systems with model error
Berry, Tyrus; Harlim, John
2014-01-01
In this paper, we study the filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends to this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure, simultaneously produce accurate filtering and equilibrium statistical prediction. In contrast, an offline estimation technique based on a linear regression, which fits the parameters to a training dataset without using the filter, yields filter estimates which are worse than the observations or even divergent when the slow variables are not fully observed. This finding does not imply that all offline methods are inherently inferior to the online method for nonlinear estimation problems; it only suggests that an ideal estimation technique should estimate all parameters simultaneously, whether it is online or offline. PMID:25002829
Prado, Igor Afonso Acampora; Pereira, Mateus de Freitas Virgílio; de Castro, Davi Ferreira; Dos Santos, Davi Antônio; Balthazar, Jose Manoel
2018-06-01
The present paper is concerned with the design and experimental evaluation of optimal control laws for the nonlinear attitude dynamics of a multirotor aerial vehicle. Three design methods based on the Hamilton-Jacobi-Bellman equation are taken into account. The first is a linear control with a guarantee of stability for nonlinear systems. The second and third are nonlinear suboptimal control techniques. These techniques are based on an optimal control design approach that takes into account the nonlinearities present in the vehicle dynamics. The stability proof of the closed-loop system is presented. The performance of the designed control system is evaluated via simulations and also via an experimental scheme using the Quanser 3-DOF Hover. The experiments show the effectiveness of the linear control method over the nonlinear strategy. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kaplan, Melike; Hosseini, Kamyar; Samadani, Farzan; Raza, Nauman
2018-07-01
A wide range of problems in different fields of the applied sciences, especially non-linear optics, is described by non-linear Schrödinger equations (NLSEs). In the present paper, a specific type of NLSE known as the cubic-quintic non-linear Schrödinger equation including an anti-cubic term has been studied. The generalized Kudryashov method, along with a symbolic computation package, has been employed to carry out this objective. As a consequence, a series of optical soliton solutions have been formally retrieved. It is corroborated that the generalized form of the Kudryashov method is a direct, effective, and reliable technique for dealing with various types of non-linear Schrödinger equations.
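For reference, a commonly studied form of this equation, written with placeholder coefficients (the paper's exact normalization may differ), is:

```latex
% Cubic-quintic NLSE with an anti-cubic term; a, b_1, b_2, b_3 are
% placeholder coefficients, and the |q|^{-4} term is the anti-cubic part.
\[
  i\,q_t + a\,q_{xx} + \left( b_1 |q|^{-4} + b_2 |q|^{2} + b_3 |q|^{4} \right) q = 0 .
\]
```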
NASA Astrophysics Data System (ADS)
Cazzulani, Gabriele; Resta, Ferruccio; Ripamonti, Francesco
2012-04-01
In recent years, more and more mechanical applications have seen the introduction of active control strategies. In particular, the need to improve performance and/or system health is very often associated with vibration suppression. This goal can be achieved with both passive and active solutions. In this sense, many active control strategies have been developed, such as Independent Modal Space Control (IMSC) or resonant controllers (PPF, IRC, ...). In all these cases, in order to tune and optimize the control strategy, knowledge of the system dynamic behaviour is very important, and it can be obtained either from a numerical model of the system or through an experimental identification process. However, when dealing with non-linear or time-varying systems, a tool able to identify the system parameters online becomes a key point for the synthesis of the control logic. The aim of the present work is the definition of a real-time technique, based on ARMAX models, that estimates the system parameters starting from the measurements of piezoelectric sensors. These parameters are returned to the control logic, which automatically adapts itself to the system dynamics. The problem is numerically investigated considering a carbon-fiber plate model forced through a piezoelectric patch.
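The online estimation step can be sketched with a recursive least squares update for an ARX model (a simplification of the full ARMAX structure, since the noise model is omitted); the system coefficients, forgetting factor, and signals below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical second-order system: y[k] = 1.5 y[k-1] - 0.7 y[k-2] + 0.5 u[k-1]
N = 500
u = rng.normal(size=N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1] + 0.01 * rng.normal()

# Recursive least squares with forgetting factor lam (tracks slow variation).
theta = np.zeros(3)            # estimates of [a1, a2, b1]
P = 1e3 * np.eye(3)
lam = 0.99
for k in range(2, N):
    phi = np.array([y[k - 1], y[k - 2], u[k - 1]])   # regressor
    K = P @ phi / (lam + phi @ P @ phi)              # gain vector
    theta += K * (y[k] - phi @ theta)                # parameter update
    P = (P - np.outer(K, phi) @ P) / lam             # covariance update

print("estimated [a1, a2, b1]:", theta)              # ~ [1.5, -0.7, 0.5]
```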
Non-linear eigensolver-based alternative to traditional SCF methods
NASA Astrophysics Data System (ADS)
Gavin, Brendan; Polizzi, Eric
2013-03-01
The self-consistent iterative procedure in Density Functional Theory calculations is revisited using a new, highly efficient and robust algorithm for solving the non-linear eigenvector problem (i.e., H(X)X = EX) of the Kohn-Sham equations. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm, and provides a fundamental and practical numerical solution for addressing the non-linearity of the Hamiltonian with the occupied eigenvectors. In contrast to SCF techniques, the traditional outer iterations are replaced by subspace iterations that are intrinsic to the FEAST algorithm, while the non-linearity is handled at the level of a projected reduced system which is orders of magnitude smaller than the original one. Using a series of numerical examples, it is shown that our approach can outperform traditional SCF mixing techniques such as Pulay-DIIS by providing a high convergence rate and by converging to the correct solution regardless of the choice of the initial guess. We also discuss a practical implementation of the technique that can be achieved effectively using the FEAST solver package. This research is supported by NSF under Grant #ECCS-0846457 and Intel Corporation.
Optimization of a bundle divertor for FED
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hively, L.M.; Rothe, K.E.; Minkoff, M.
1982-01-01
Optimal double-T bundle divertor configurations have been obtained for the Fusion Engineering Device (FED). On-axis ripple is minimized, while satisfying a series of engineering constraints. The ensuing non-linear optimization problem is solved via a sequence of quadratic programming subproblems, using the VMCON algorithm. The resulting divertor designs are substantially improved over previous configurations.
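The structure of such a problem, a non-linear objective minimized under engineering inequality constraints via a sequence of quadratic programming subproblems, can be sketched with scipy's SLSQP, a solver in the same SQP family as VMCON; the toy "ripple" objective, constraints, and bounds below are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in: minimize a non-linear "ripple" surrogate over two design
# variables (e.g., a coil current I and position z), subject to
# engineering-style inequality constraints. All functions are hypothetical.
def ripple(x):
    I, z = x
    return 1.0 / (1.0 + I * np.exp(-z)) + 0.01 * I**2

cons = [
    {"type": "ineq", "fun": lambda x: 10.0 - x[0] * x[1]},   # e.g., force limit
    {"type": "ineq", "fun": lambda x: x[0] - 1.0},           # minimum current
]
res = minimize(ripple, x0=[2.0, 0.5], method="SLSQP",
               bounds=[(0.0, 10.0), (0.0, 2.0)], constraints=cons)
print("optimal design:", res.x, " objective:", res.fun)
```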
Kolb, Brian; Lentz, Levi C.; Kolpak, Alexie M.
2017-04-26
Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. Here, this work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet's ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.
NASA Astrophysics Data System (ADS)
Uma Maheswari, R.; Umamaheswari, R.
2017-02-01
Condition Monitoring Systems (CMS) provide substantial economic benefits and enable prognostic maintenance for wind turbine-generator failure prevention. Vibration monitoring and analysis is a powerful tool in drive train CMS, enabling the early detection of impending failure or damage. In variable speed drives such as wind turbine-generator drive trains, the acquired vibration signal is non-stationary and non-linear. Traditional stationary signal processing techniques are inefficient for diagnosing machine faults under time-varying conditions. The current research trend in CMS for drive trains focuses on developing and improving non-linear, non-stationary feature extraction and fault classification algorithms to improve fault detection/prediction sensitivity and selectivity, thereby reducing misdetection and false alarm rates. In the literature, stationary signal processing algorithms employed in vibration analysis have been reviewed at great length. In this paper, an attempt is made to review the recent research advances in non-linear, non-stationary signal processing algorithms particularly suited to variable speed wind turbines.
The analytical representation of viscoelastic material properties using optimization techniques
NASA Technical Reports Server (NTRS)
Hill, S. A.
1993-01-01
This report presents a technique to model viscoelastic material properties with a function in the form of a Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is implemented in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was used to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
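A sketch of the approach, fitting all Prony constants (including the exponential time constants) by non-linear optimization; scipy's least_squares stands in for the commercial VMA tool, and the synthetic relaxation data are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def prony(t, p):
    """Two-term Prony series: E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    E_inf, E1, tau1, E2, tau2 = p
    return E_inf + E1 * np.exp(-t / tau1) + E2 * np.exp(-t / tau2)

# Synthetic relaxation-modulus data (hypothetical material constants).
t = np.logspace(-2, 3, 40)
E_data = prony(t, [1.0, 3.0, 0.1, 2.0, 50.0]) * (1 + 0.01 * np.random.randn(t.size))

# Fit every constant, including the tau_i, instead of assuming the exponents
# and solving only a linear least-squares problem for the moduli.
p0 = [0.5, 1.0, 0.05, 1.0, 10.0]
fit = least_squares(lambda p: prony(t, p) - E_data, p0, bounds=(1e-6, np.inf))
print("fitted [E_inf, E1, tau1, E2, tau2]:", fit.x)
```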
Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo
2011-03-04
Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, such as soils, sediments, sludge, and airborne particulate matter. In the first part of this review we presented an overview of these procedures and described the application of chemometric uni- and bivariate techniques, and of multivariate pattern recognition techniques based on variable reduction, to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of the spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; and other multivariate techniques less commonly applied. Copyright © 2010 Elsevier B.V. All rights reserved.
Assessing non-uniqueness: An algebraic approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasco, Don W.
Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venghaus, Florian; Eisfeld, Wolfgang, E-mail: wolfgang.eisfeld@uni-bielefeld.de
2016-03-21
Robust diabatization techniques are key for the development of high-dimensional coupled potential energy surfaces (PESs) to be used in multi-state quantum dynamics simulations. In the present study we demonstrate that, besides the actual diabatization technique, common problems with the underlying electronic structure calculations can be the reason why a diabatization fails. After giving a short review of the theoretical background of diabatization, we propose a method based on block-diagonalization to analyse the electronic structure data. This analysis tool can be used in three different ways: First, it allows the detection of issues with the ab initio reference data and is used to optimize the setup of the electronic structure calculations. Second, the data from the block-diagonalization are utilized for the development of optimal parametrized diabatic model matrices by identifying the most significant couplings. Third, the block-diagonalization data are used to fit the parameters of the diabatic model, which yields an optimal initial guess for the non-linear fitting required by standard or more advanced energy-based diabatization methods. The new approach is demonstrated by the diabatization of 9 electronic states of the propargyl radical, yielding fully coupled full-dimensional (12D) PESs in closed form.
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna R.; Wickert, Mark A.
2017-05-01
A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch, as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
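A per-pixel third-order polynomial NUC can be sketched as below (a generic flat-field calibration, not the paper's pipeline; the array sizes, radiance levels, and synthetic detector response are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, K, order = 8, 8, 6, 3                # small image, K calibration levels

# Uniform target radiances and a synthetic non-linear, non-uniform response.
target = np.linspace(0.1, 1.0, K)
gain = 1.0 + 0.1 * rng.normal(size=(H, W))
offset = 0.05 * rng.normal(size=(H, W))
raw = gain[None] * target[:, None, None] + offset[None] \
      + 0.08 * (gain[None] * target[:, None, None]) ** 2   # mild non-linearity

# Fit a 3rd-order polynomial per pixel mapping raw counts -> true radiance.
coeffs = np.empty((H, W, order + 1))
for i in range(H):
    for j in range(W):
        coeffs[i, j] = np.polyfit(raw[:, i, j], target, order)

# Correct a new frame at an intermediate radiance level.
frame = gain * 0.55 + offset + 0.08 * (gain * 0.55) ** 2
corrected = np.empty_like(frame)
for i in range(H):
    for j in range(W):
        corrected[i, j] = np.polyval(coeffs[i, j], frame[i, j])
print("residual non-uniformity:", corrected.std())   # far below frame.std()
```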
NASA Technical Reports Server (NTRS)
Becus, G. A.; Lui, C. Y.; Venkayya, V. B.; Tischler, V. A.
1987-01-01
A method for simultaneous structural and control design of large flexible space structures (LFSS) to reduce vibration generated by disturbances is presented. Desired natural frequencies and damping ratios for the closed-loop system are achieved by using a combination of linear quadratic regulator (LQR) synthesis and numerical optimization techniques. The state and control weighting matrices (Q and R) are expressed in terms of structural parameters such as mass and stiffness. The design parameters are selected by numerical optimization so as to minimize the weight of the structure and to achieve the desired closed-loop eigenvalues. An illustrative example of the design of a two-bar truss is presented.
Stability-Constrained Aerodynamic Shape Optimization with Applications to Flying Wings
NASA Astrophysics Data System (ADS)
Mader, Charles Alexander
A set of techniques is developed that allows the incorporation of flight dynamics metrics as an additional discipline in a high-fidelity aerodynamic optimization. Specifically, techniques for including static stability constraints and handling qualities constraints in a high-fidelity aerodynamic optimization are demonstrated. These constraints are developed from stability derivative information calculated using high-fidelity computational fluid dynamics (CFD). Two techniques are explored for computing the stability derivatives from CFD. One technique uses an automatic differentiation adjoint technique (ADjoint) to efficiently and accurately compute a full set of static and dynamic stability derivatives from a single steady solution. The other technique uses a linear regression method to compute the stability derivatives from a quasi-unsteady time-spectral CFD solution, allowing for the computation of static, dynamic and transient stability derivatives. Based on the characteristics of the two methods, the time-spectral technique is selected for further development, incorporated into an optimization framework, and used to conduct stability-constrained aerodynamic optimization. This stability-constrained optimization framework is then used to conduct an optimization study of a flying wing configuration. This study shows that stability constraints have a significant impact on the optimal design of flying wings and that, while static stability constraints can often be satisfied by modifying the airfoil profiles of the wing, dynamic stability constraints can require a significant change in the planform of the aircraft in order for the constraints to be satisfied.
Analysis of an inventory model for both linearly decreasing demand and holding cost
NASA Astrophysics Data System (ADS)
Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.
2016-03-01
This study proposes the analysis of an inventory model with linearly decreasing demand and holding cost for non-instantaneous deteriorating items. The inventory model focuses on commodities having linearly decreasing demand without shortages. The holding cost does not remain uniform over time due to variation in the time value of money; here we consider a holding cost that decreases with respect to time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is illustrated through a numerical example, and a sensitivity analysis is also included.
Singh, Satyakam; Prasad, Nagarajan Rajendra; Kapoor, Khyati; Chufan, Eduardo E.; Patel, Bhargav A.; Ambudkar, Suresh V.; Talele, Tanaji T.
2014-01-01
Multidrug resistance (MDR) caused by the ATP-binding cassette (ABC) transporter P-glycoprotein (P-gp), through extrusion of anticancer drugs from cells, is a major cause of the failure of cancer chemotherapy. Previously, selenazole-containing cyclic peptides were reported as P-gp inhibitors, and these were also used for co-crystallization with mouse P-gp, which has 87% homology to human P-gp. It has been reported that human P-gp can simultaneously accommodate 2-3 moderate-size molecules at the drug-binding pocket. Our in-silico analysis based on the homology model of human P-gp spurred our efforts to investigate the optimal size of (S)-valine-derived thiazole units that can be accommodated at the drug-binding pocket. Towards this goal, we synthesized varying lengths of linear and cyclic derivatives of (S)-valine-derived thiazole units to investigate the optimal size, lipophilicity and structural form (linear or cyclic) of valine-derived thiazole peptides that fit well in the P-gp binding pocket and affect its activity, a previously unexplored concept. Among these oligomers, the lipophilic linear (13) and cyclic trimer (17) derivatives of QZ59S-SSS were found to be the most potent, and equally potent, inhibitors of human P-gp (IC50 = 1.5 μM). With the cyclic and linear trimers being equipotent, future studies can focus on non-cyclic counterparts of cyclic peptides maintaining the linear trimer length. A binding model of the linear trimer (13) within the drug-binding site on the homology model of human P-gp represents an opportunity for future optimization, specifically replacing the valine and thiazole groups in the non-cyclic form. PMID:24288265
Nonreciprocal Signal Routing in an Active Quantum Network
NASA Astrophysics Data System (ADS)
Tureci, Hakan E.; Metelmann, Anja
As superconducting quantum technologies move towards large-scale integrated circuits, a robust and flexible approach to routing photons at the quantum level becomes a critical problem. Active circuits, which contain driven linear or non-linear elements judiciously embedded in the circuit, offer a viable solution. We present a general strategy for non-reciprocally routing quantum signals between two sites of a given lattice of resonators, implementable with existing superconducting circuit components. Our approach makes use of a dual lattice of superconducting non-linear elements on the links connecting the nodes of the main lattice. Solutions for spatially selective driving of the link elements can be found which optimally balance coherent and dissipative hopping of microwave photons to non-reciprocally route signals between two given nodes. In certain lattices these optimal solutions are obtained at the exceptional point of the scattering matrix of the network. The presented strategy provides a design space governed by a dynamically tunable non-Hermitian generator that can also be used to minimize the added quantum noise. This work was supported by the U.S. Army Research Office (ARO) under Grant No. W911NF-15-1-0299.
The DCU: the detector control unit of the SAFARI instrument onboard SPICA
NASA Astrophysics Data System (ADS)
Clénet, A.; Ravera, L.; Bertrand, B.; Cros, A.; Hou, R.; Jackson, B. D.; van Leeuwen, B. J.; Van Loon, D.; Parot, Y.; Pointecouteau, E.; Sournac, A.; Ta, N.
2012-09-01
The SpicA FAR infrared Instrument (SAFARI) is a European instrument for the infrared telescope SPICA, a JAXA space mission. The SAFARI detectors are Transition Edge Sensors (TES) arranged in three arrays. The TES front-end electronics are based on Superconducting Quantum Interference Devices (SQUIDs) and read out the 3500 detectors with a Frequency Division Multiplexing (FDM) architecture. The Detector Control Unit (DCU), contributed by IRAP, manages the readout of the TES by computing and providing the AC-bias signals (1-3 MHz) to the TES and by computing the demodulation of the returning signals. Since the SQUID is highly non-linear, the DCU also has to provide a feedback signal to increase the SQUID dynamic range. Because of the propagation delay in the cables and the processing time, a classic feedback loop would not be stable for AC-bias frequencies up to 3 MHz. The DCU uses a specific technique to compensate for those delays: the BaseBand FeedBack (BBFB). This digital data processing is done for the 3500 pixels in parallel. Thus, to keep the DCU power budget within its allocation, we had to specifically optimize the architecture of the digital circuit with respect to power consumption. In this paper we mainly present the DCU architecture. We particularly focus on the BBFB technique used to linearize the SQUID and on the optimization done to reduce the power consumption of the digital processing circuit.
Comparison of lossless compression techniques for prepress color images
NASA Astrophysics Data System (ADS)
Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.
1998-12-01
In the pre-press industry, color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry, only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images; in this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
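The inter-color decorrelation idea can be illustrated in a few lines: predict one channel linearly from another and code only the residual. This is a toy sketch of the principle, not the IEP or CALIC algorithms; the synthetic image and the zeroth-order entropy estimate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy_bpp(a):
    """Empirical zeroth-order entropy in bits per pixel."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

# Synthetic correlated R and G channels (stand-in for a prepress image).
R = rng.integers(0, 256, (128, 128)).astype(float)
G = np.clip(0.9 * R + 10 + rng.normal(0, 3, R.shape), 0, 255)

# Linear inter-color prediction: fit G ~ a*R + b, code the rounded residual.
a, b = np.polyfit(R.ravel(), G.ravel(), 1)
residual = np.round(G - (a * R + b)).astype(int)

print("entropy of G:        %.2f bpp" % entropy_bpp(G.astype(int)))
print("entropy of residual: %.2f bpp" % entropy_bpp(residual))
```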
Noise and linearity optimization methods for a 1.9GHz low noise amplifier.
Guo, Wei; Huang, Da-Quan
2003-01-01
Noise and linearity performance are critical characteristics for radio frequency integrated circuits (RFICs), especially for low noise amplifiers (LNAs). In this paper, a detailed analysis of noise and linearity for the cascode architecture, a widely used circuit structure in LNA designs, is presented. Noise and linearity improvement techniques for cascode structures are also developed and have been verified by computer simulation experiments. Theoretical analysis and simulation results showed that, for cascode-structure LNAs, the first metal-oxide-semiconductor field-effect transistor (MOSFET) dominates the noise performance of the LNA, while the second MOSFET contributes more to the linearity. The conclusion is thus obtained that the first and second MOSFETs of the LNA can be designed to optimize the noise performance and the linearity performance separately, without trade-offs. Simulation results for a 1.9 GHz Complementary Metal-Oxide-Semiconductor (CMOS) LNA are also given as an application of the developed theory.
SU-E-T-270: Optimized Shielding Calculations for Medical Linear Accelerators (LINACs).
Muhammad, W; Lee, S; Hussain, A
2012-06-01
The purpose of radiation shielding is to reduce the effective equivalent dose from a medical linear accelerator (LINAC) at a point outside the room to a level determined by individual state/international regulations. This study was performed to design LINAC rooms for newly planned radiotherapy centers. Optimized shielding calculations were performed for LINACs having a maximum photon energy of 20 MV, based on NCRP 151. The maximum permissible dose limits were kept at 0.04 mSv/week and 0.002 mSv/week for controlled and uncontrolled areas, respectively, following the ALARA principle. The planned LINAC room was compared to an already constructed (non-optimized) LINAC room to evaluate the shielding costs and the other facilities directly related to the room design. In the evaluation process it was noted that the non-optimized room size (i.e., 610 × 610 cm2, or 20 feet × 20 feet) is not suitable for total body irradiation (TBI), even though the machine installed inside offered the TBI facility and the license had been acquired. Keeping this point in view, the optimized LINAC room size was set at 762 × 762 cm2. Although the area of the optimized room was greater than that of the non-optimized room (i.e., 762 × 762 cm2 instead of 610 × 610 cm2), the shielding cost for the optimized LINAC room was reduced by 15%. When the optimized shielding calculations were re-performed for the non-optimized shielding room (i.e., keeping room size, occupancy factors, workload, etc. the same), it was found that the shielding cost could be reduced by up to 41%. In conclusion, a non-optimized LINAC room can not only put an extra financial burden on the hospital but can also cause serious issues related to providing health care facilities for patients. © 2012 American Association of Physicists in Medicine.
Evolutionary Optimization of Centrifugal Nozzles for Organic Vapours
NASA Astrophysics Data System (ADS)
Persico, Giacomo
2017-03-01
This paper discusses the shape optimization of non-conventional centrifugal turbine nozzles for Organic Rankine Cycle applications. The optimal aerodynamic design is supported by the use of a non-intrusive, gradient-free technique specifically developed for shape optimization of turbomachinery profiles. The method is constructed as a combination of a geometrical parametrization technique based on B-splines, a high-fidelity and experimentally validated Computational Fluid Dynamics solver, and a surrogate-based evolutionary algorithm. The non-ideal gas behaviour featured by the flow of organic fluids in the cascades of interest is introduced via a look-up-table approach, which is rigorously applied throughout the whole optimization process. Two transonic centrifugal nozzles are considered, featuring very different loading and radial extension. The application of a systematic and automatic design method to such a non-conventional configuration highlights the character of centrifugal cascades; the blades require a specific and non-trivial definition of the shape, especially in the rear part, to avoid the onset of shock waves. It is shown that the optimization acts in a similar way for the two cascades, identifying an optimal curvature of the blade that both provides a relevant increase in cascade performance and reduces downstream gradients.
An optimized resistor pattern for temperature gradient control in microfluidics
NASA Astrophysics Data System (ADS)
Selva, Bertrand; Marchalot, Julien; Jullien, Marie-Caroline
2009-06-01
In this paper, we demonstrate the possibility of generating high temperature gradients with a linear temperature profile when heating is provided in situ. Thanks to improved optimization algorithms, the shape of the resistors which constitute the heating source is optimized by applying the genetic algorithm NSGA-II (Non-dominated Sorting Genetic Algorithm II; Deb et al 2002 IEEE Trans. Evol. Comput. 6 2). Experimental validation of the linear temperature profile within the cavity is carried out using a thermally sensitive fluorophore, Rhodamine B (Ross et al 2001 Anal. Chem. 73 4117-23, Erickson et al 2003 Lab Chip 3 141-9). The high level of agreement obtained between experimental and numerical results validates the accuracy of this method for generating highly controlled temperature profiles. In the field of actuation, such a device is of potential interest since it allows bubbles or droplets to be controlled while moving by means of thermocapillary effects (Baroud et al 2007 Phys. Rev. E 75 046302). Digital microfluidics is a critical area in the field of microfluidics (Dreyfus et al 2003 Phys. Rev. Lett. 90 14) as well as in so-called lab-on-a-chip technology. Through an example, the large application potential of such a technique is demonstrated, which entails handling a single bubble driven along a cavity using simple and tunable embedded resistors.
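The core of NSGA-II is the fast non-dominated sorting that ranks candidate resistor shapes into Pareto fronts before selection. The following self-contained Python sketch implements only that ranking step (a simplified O(n²) version, without NSGA-II's crowding distance or genetic operators), on invented two-objective data.

```python
import numpy as np

def non_dominated_fronts(F):
    """Rank points of a minimization problem into Pareto fronts.

    F : (n_points, n_objectives) array of objective values.
    Returns a list of index arrays, front 0 being the Pareto-optimal set.
    """
    n = len(F)
    dominated_by = [set() for _ in range(n)]
    domination_count = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # i dominates j: no worse in every objective, better in one
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].add(j)
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                domination_count[i] += 1
    fronts, current = [], np.flatnonzero(domination_count == 0)
    while current.size:
        fronts.append(current)
        for i in current:
            for j in dominated_by[i]:
                domination_count[j] -= 1
        domination_count[current] = -1        # retire the current front
        current = np.flatnonzero(domination_count == 0)
    return fronts

# Two toy objectives, e.g. temperature-profile error vs. power consumption.
F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0], [5.0, 5.0]])
print(non_dominated_fronts(F))
```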
NASA Astrophysics Data System (ADS)
Cano-Lara, Miroslava; Severiano-Carrillo, Israel; Trejo-Durán, Mónica; Alvarado-Méndez, Edgar
2017-09-01
In this work, we present a study of the non-linear optical response of thin films elaborated with Gelite Bloom and extract of Hibiscus sabdariffa. Non-linear refraction and absorption effects were studied experimentally (Z-scan technique) and numerically, by treating the transmittance as the combined contribution of non-linear absorption and refraction. We observe large far-field phase shifts and diffraction due to self-phase modulation of the sample. Diffraction and self-diffraction effects were observed as a function of time. The aim of studying non-linear optical properties in thin films is to eliminate the thermal vortex effects that occur in liquids. This is desirable in applications such as non-linear phase contrast, optical limiting, optical switches, etc. Finally, we find good agreement between experimental and theoretical results.
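For readers unfamiliar with the Z-scan technique, the sketch below fits the standard Sheik-Bahae closed-aperture transmittance model to a synthetic scan with scipy; the sample data, noise level and parameter values are invented, and a real analysis would also treat open-aperture data for the non-linear absorption.

```python
import numpy as np
from scipy.optimize import curve_fit

def closed_aperture_T(z, dphi0, z0):
    """Sheik-Bahae closed-aperture Z-scan transmittance for a thin sample.
    dphi0: on-axis nonlinear phase shift; z0: Rayleigh range."""
    x = z / z0
    return 1.0 + 4.0 * x * dphi0 / ((x**2 + 9.0) * (x**2 + 1.0))

# Synthetic scan standing in for measured film data.
rng = np.random.default_rng(1)
z = np.linspace(-30e-3, 30e-3, 121)                  # sample position (m)
T = closed_aperture_T(z, dphi0=0.35, z0=5e-3)
T += rng.normal(0.0, 0.003, z.size)                  # detector noise

popt, _ = curve_fit(closed_aperture_T, z, T, p0=(0.1, 3e-3))
print(f"fitted dphi0 = {popt[0]:.3f}, z0 = {popt[1] * 1e3:.2f} mm")
# n2 then follows from dphi0 = k * n2 * I0 * L_eff for known beam parameters.
```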
Error analysis and system optimization of non-null aspheric testing system
NASA Astrophysics Data System (ADS)
Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo
2010-10-01
A non-null aspheric testing system, which employs a partial null lens (PNL) and the reverse iterative optimization reconstruction (ROR) technique, is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system model, and the remainder from the non-null interferometer, separated by an error-storage subtraction approach. Experimental results show that, after the systematic error is removed from the test result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and the consideration of systematic error greatly increases the test accuracy of the non-null aspheric testing system.
All-in-one model for designing optimal water distribution pipe networks
NASA Astrophysics Data System (ADS)
Aklog, Dagnachew; Hosoi, Yoshihiko
2017-05-01
This paper discusses the development of an easy-to-use, all-in-one model for designing optimal water distribution networks. The model combines different optimization techniques into a single package in which a user can easily choose what optimizer to use and compare the results of different optimizers to gain confidence in the performances of the models. At present, three optimization techniques are included in the model: linear programming (LP), genetic algorithm (GA) and a heuristic one-by-one reduction method (OBORM) that was previously developed by the authors. The optimizers were tested on a number of benchmark problems and performed very well in terms of finding optimal or near-optimal solutions with a reasonable computation effort. The results indicate that the model effectively addresses the issues of complexity and limited performance trust associated with previous models and can thus be used for practical purposes.
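To illustrate the LP component in such a package, here is a minimal split-pipe sizing sketch with scipy: a single link is laid as segments of candidate diameters so as to minimize cost while respecting a head-loss budget. Costs and hydraulic gradients are invented placeholders, not data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Split-pipe LP for one link: choose how much of the 1000 m length to lay
# in each candidate diameter. Costs and gradients are illustrative only.
L, head_budget = 1000.0, 8.0                # link length (m), head loss (m)
cost = np.array([45.0, 70.0, 110.0])        # $/m for 200/250/300 mm pipe
grad = np.array([0.0120, 0.0040, 0.0016])   # head-loss slope (m/m) at design flow

res = linprog(
    c=cost,
    A_ub=[grad], b_ub=[head_budget],        # total head loss within budget
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[L],       # segment lengths cover the link
    bounds=[(0.0, L)] * 3,
)
print("segment lengths (m):", res.x.round(1), " cost: $", round(res.fun))
```

For the example data the optimizer mixes the two cheaper diameters rather than laying the whole link in one size, which is exactly the behaviour that makes LP attractive for this sub-problem.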
Information analysis of posterior canal afferents in the turtle, Trachemys scripta elegans.
Rowe, Michael H; Neiman, Alexander B
2012-01-24
We have used sinusoidal and band-limited Gaussian noise stimuli along with information measures to characterize the linear and non-linear responses of morpho-physiologically identified posterior canal (PC) afferents and to examine the relationship between mutual information rate and other physiological parameters. Our major findings are: 1) spike generation in most PC afferents is effectively a stochastic renewal process, and spontaneous discharges are fully characterized by their first order statistics; 2) a regular discharge, as measured by normalized coefficient of variation (cv*), reduces intrinsic noise in afferent discharges at frequencies below the mean firing rate; 3) coherence and mutual information rates, calculated from responses to band-limited Gaussian noise, are jointly determined by gain and intrinsic noise (discharge regularity), the two major determinants of signal to noise ratio in the afferent response; 4) measures of optimal non-linear encoding were only moderately greater than optimal linear encoding, indicating that linear stimulus encoding is limited primarily by internal noise rather than by non-linearities; and 5) a leaky integrate and fire model reproduces these results and supports the suggestion that the combination of high discharge regularity and high discharge rates serves to extend the linear encoding range of afferents to higher frequencies. These results provide a framework for future assessments of afferent encoding of signals generated during natural head movements and for comparison with coding strategies used by other sensory systems. This article is part of a Special Issue entitled: Neural Coding. Copyright © 2011 Elsevier B.V. All rights reserved.
Brunton, Steven L.; Brunton, Bingni W.; Proctor, Joshua L.; Kutz, J. Nathan
2016-01-01
In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control. PMID:26919740
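A minimal sketch of the ℓ1-regularized regression step described above (in the spirit of sparse dynamics identification, not the authors' code): simulate a toy system used in the Koopman literature, build a library of candidate monomials, and let the Lasso select the active terms. The recovered coefficients depend on the sparsity weight alpha and on derivative-estimation noise.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy system from the Koopman literature: x1' = mu*x1, x2' = lam*(x2 - x1^2).
mu, lam, dt = -0.05, -1.0, 0.01
x = np.zeros((5000, 2)); x[0] = (3.0, -2.0)
for k in range(len(x) - 1):                      # forward-Euler trajectory
    x1, x2 = x[k]
    x[k + 1] = x[k] + dt * np.array([mu * x1, lam * (x2 - x1**2)])

dxdt = np.gradient(x, dt, axis=0)                # numerical derivatives
x1, x2 = x[:, 0], x[:, 1]
library = np.column_stack([x1, x2, x1**2, x1 * x2, x2**2])
names = ["x1", "x2", "x1^2", "x1*x2", "x2^2"]

# l1-regularized regression keeps only the active terms of each equation.
for i, lhs in enumerate(["x1'", "x2'"]):
    coef = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50000).fit(
        library, dxdt[:, i]).coef_
    terms = [f"{c:+.3f} {n}" for c, n in zip(coef, names) if abs(c) > 1e-3]
    print(lhs, "=", " ".join(terms))
```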
LINEAR AND NONLINEAR CORRECTIONS IN THE RHIC INTERACTION REGIONS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
PILAT,F.; CAMERON,P.; PTITSYN,V.
2002-06-02
A method has been developed to measure operationally the linear and non-linear effects of the interaction region triplets, which gives access to the multipole content through the action kick, by applying closed orbit bumps and analysing tune and orbit shifts. This technique has been extensively tested and used during RHIC operations in 2001. Measurements were taken at 3 different interaction regions and for different focusing at the interaction point. Non-linear effects up to the dodecapole have been measured, as well as the effects of linear, sextupolar and octupolar corrections. An analysis package for the data processing has been developed that, through a precise fit of the experimental tune shift data (measured by a phase-lock-loop technique to better than 10⁻⁵ resolution), determines the multipole content of an IR triplet.
Interior point techniques for LP and NLP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evtushenko, Y.
By using a surjective mapping, the initial constrained optimization problem is transformed into a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm and a generalized primal-dual interior point linear programming algorithm.
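Of the algorithms recovered in the linear programming case, Dikin's affine scaling method is the simplest to sketch. The toy implementation below is an illustration under stated assumptions, not the paper's derivation: it iterates the standard primal affine-scaling step on a small LP in standard form, starting from a strictly feasible point.

```python
import numpy as np

def affine_scaling(A, b, c, x, alpha=0.5, iters=60):
    """Primal affine-scaling iterations for min c'x s.t. Ax = b, x > 0.
    x must be strictly feasible; alpha is the damping toward the boundary."""
    for _ in range(iters):
        D2 = np.diag(x**2)                            # scale by current iterate
        y = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c) # dual estimate
        r = c - A.T @ y                               # reduced costs
        dx = -D2 @ r                                  # descent in scaled space
        if np.linalg.norm(dx) < 1e-10:
            break
        neg = dx < 0
        # step a fraction alpha of the way to the nearest boundary
        step = alpha * np.min(-x[neg] / dx[neg]) if neg.any() else 1.0
        x = x + step * dx
    return x

# min -x1 - 2*x2  s.t.  x1 + x2 + s = 4,  x1 + 3*x2 + t = 6,  all vars >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0], [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0]); c = np.array([-1.0, -2.0, 0.0, 0.0])
print(affine_scaling(A, b, c, x=np.array([1.0, 1.0, 2.0, 2.0])))
```

The iterates should approach the optimal vertex (x1, x2) = (3, 1), with the slacks driven toward zero.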
Recruiting with the New Planned Resource Optimization Model with Experimental Design (PROM-WED)
Hogarth, Allison R.
2017-03-01
This thesis notes that a non-linear optimization model, the Planned Resource Optimization (PRO) model, has historically been used to help inform decisions on the allocation of ...
Nonlinear Model Predictive Control for Cooperative Control and Estimation
NASA Astrophysics Data System (ADS)
Ru, Pengkai
Recent advances in computational power have made it possible to perform expensive online computations for control systems. It is becoming more realistic to run computationally intensive optimization schemes online on systems that are not intrinsically stable and/or have very small time constants. As one of the most important optimization-based control approaches, model predictive control (MPC) has attracted a lot of interest from the research community due to its natural ability to incorporate constraints into its control formulation. Linear MPC has been well researched and its stability can be guaranteed in the majority of its application scenarios. However, one issue that remains with linear MPC is that it completely ignores the system's inherent nonlinearities, thus giving a sub-optimal solution. On the other hand, if achievable, nonlinear MPC would naturally yield a globally optimal solution and take into account all the innate nonlinear characteristics. While an exact solution to a nonlinear MPC problem remains extremely computationally intensive, if not impossible, one might wonder if there is a middle ground between the two. This dissertation strikes such a balance by employing a state representation technique, namely the state-dependent coefficient (SDC) representation. This technique renders improved performance in terms of optimality compared to linear MPC while still keeping the problem tractable; in fact, the computational power required is bounded by only a constant factor of that of completely linearized MPC. The purpose of this research is to provide a theoretical framework for the design of a specific kind of nonlinear MPC controller and its extension into a general cooperative scheme. The controller is designed and implemented on quadcopter systems.
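As background for the linear-MPC baseline the dissertation builds on, here is a minimal receding-horizon sketch using cvxpy on a double integrator standing in for one translational axis of a quadcopter. The model, horizon, weights and input limit are all invented for illustration; an SDC-based nonlinear MPC would re-evaluate A and B at each state instead of keeping them fixed.

```python
import numpy as np
import cvxpy as cp

dt, N = 0.1, 20                                   # sample time, horizon
A = np.array([[1.0, dt], [0.0, 1.0]])             # double-integrator model
B = np.array([[0.5 * dt**2], [dt]])
Q, R, u_max = np.diag([10.0, 1.0]), np.array([[0.1]]), 2.0

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
x0 = cp.Parameter(2)

cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= u_max]
prob = cp.Problem(cp.Minimize(cost), constraints)

state = np.array([1.0, 0.0])                      # drive position error to zero
for t in range(5):                                # closed-loop receding horizon
    x0.value = state
    prob.solve()
    state = A @ state + B @ u[:, 0].value         # apply only the first input
    print(f"t={t}: u={u[0, 0].value:+.3f}, state={state.round(3)}")
```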
[Developments in preparation and experimental method of solid phase microextraction fibers].
Yi, Xu; Fu, Yujie
2004-09-01
Solid phase microextraction (SPME) is a simple and effective adsorption and desorption technique, which concentrates volatile or nonvolatile compounds from liquid samples or the headspace of samples. SPME is compatible with analyte separation and detection by gas chromatography, high performance liquid chromatography, and other instrumental methods. It provides many advantages, such as a wide linear range, low solvent and sample consumption, short analytical times, low detection limits, and simple apparatus. The theory of SPME is introduced, including equilibrium and non-equilibrium theory. Novel developments in fiber preparation methods and related experimental techniques are discussed. In addition to commercial fiber preparation, newly developed fabrication techniques, such as sol-gel, electrodeposition, carbon-based adsorption, and high-temperature epoxy immobilization, are presented. Effects of extraction modes, selection of fiber coating, optimization of operating conditions, method sensitivity and precision, and systematic automation are taken into consideration in the analytical process of SPME. Finally, a brief perspective on SPME is offered.
Stochastic optimal control of non-stationary response of a single-degree-of-freedom vehicle model
NASA Astrophysics Data System (ADS)
Narayanan, S.; Raju, G. V.
1990-09-01
An active suspension system to control the non-stationary response of a single-degree-of-freedom (sdf) vehicle model with variable-velocity traverse over a rough road is investigated. The suspension is optimized with respect to ride comfort and road holding, using stochastic optimal control theory. The ground excitation is modelled as a spatially homogeneous random process, being the output of a linear shaping filter driven by white noise. The effect of the rolling contact of the tyre is considered by an additional filter in cascade. The non-stationary response with the active suspension is compared with that of a passive system.
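A stripped-down stationary analogue of this design problem is the LQR trade-off between suspension deflection, body motion and actuator effort. The sketch below uses invented parameters on a 1-DOF model without the shaping and tyre filters of the paper, and solves the continuous-time Riccati equation with scipy.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# 1-DOF active suspension: states are suspension deflection and body velocity.
m, k, c = 250.0, 16000.0, 1000.0                 # kg, N/m, N s/m (invented)
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])

# Penalize deflection (road holding) and velocity, plus control effort; a
# full ride-comfort metric would weight body acceleration instead.
Q = np.diag([1e4, 1e2])
R = np.array([[1e-4]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                  # optimal feedback u = -K x
print("feedback gains:", K.round(2))
print("closed-loop poles:", np.linalg.eigvals(A - B @ K).round(2))
```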
Automated design and optimization of flexible booster autopilots via linear programming, volume 1
NASA Technical Reports Server (NTRS)
Hauser, F. D.
1972-01-01
A nonlinear programming technique was developed for the automated design and optimization of autopilots for large flexible launch vehicles. This technique, which resulted in the COEBRA program, uses the iterative application of linear programming. The method deals directly with the three main requirements of booster autopilot design: to provide (1) good response to guidance commands; (2) response to external disturbances (e.g. wind) that minimizes structural bending moment loads and trajectory dispersions; and (3) stability with specified tolerances on the vehicle and flight control system parameters. The method is applicable to very high order systems (30th order and greater per flight condition). Examples are provided that demonstrate the successful application of the employed algorithm to the design of autopilots for both single and multiple flight conditions.
Status of the ATF Damping Ring BPM Upgrade Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briegel, C.; /Fermilab; Eddy, N.
2011-12-01
A substantial upgrade of the beam position monitors (BPMs) at the ATF (Accelerator Test Facility) damping ring is currently in progress. Implementing digital read-out signal processing techniques in line with an optimized, low-noise analog downconverter, a resolution well below 1 μm could be demonstrated at 20 (of 96) upgraded BPM stations. The narrowband, high resolution BPM mode permits investigation of all types of non-linearities, imperfections and other obstacles in the machine which may limit the very low target vertical beam emittance of < 2 pm. The technical status of the project, first beam measurements and an outlook to its finalization are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mensah, P.F.; Stubblefield, M.A.; Pang, S.S.
Thermal characterization of a prepreg fabric used as the bonding material to join composite pipes has been modeled and solved using a finite difference modeling (FDM) numerical analysis technique for one-dimensional heat transfer through the material. Temperature distributions within the composite pipe joint are predicted. The prepreg material has temperature-dependent thermal properties; thus the resulting boundary value equations are non-linear and analytical solutions cannot be obtained. This characterization is pertinent in determining the temperature profile in the prepreg layer during the manufacturing process for optimization purposes. In addition, the temperature profile is needed in order to assess the effects of induced thermal stress in the joint. The methodology employed in this analysis compares favorably with data from experimentation.
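A minimal sketch of the finite-difference approach described here: explicit 1-D conduction with a temperature-dependent conductivity evaluated at the cell faces, which is what makes the update non-linear. All material properties, dimensions and boundary temperatures are invented placeholders, not the paper's prepreg data.

```python
import numpy as np

# Explicit 1-D finite-difference conduction through a bonding layer with
# temperature-dependent conductivity. All values are illustrative.
L, nx, dt, t_end = 2e-3, 41, 1e-3, 2.0        # thickness (m), nodes, s, s
rho, cp = 1500.0, 1200.0                      # kg/m^3, J/(kg K)
def k_of_T(T):                                # W/(m K), linear in T
    return 0.20 + 4e-4 * (T - 300.0)

dx = L / (nx - 1)
T = np.full(nx, 300.0)                        # initial temperature (K)
T[0] = T[-1] = 450.0                          # heated tool surfaces

for _ in range(int(t_end / dt)):
    k_face = 0.5 * (k_of_T(T[:-1]) + k_of_T(T[1:]))  # interface conductivity
    flux = k_face * (T[1:] - T[:-1]) / dx            # Fourier's law at faces
    T[1:-1] += dt / (rho * cp * dx) * (flux[1:] - flux[:-1])

print(f"mid-plane temperature after {t_end:.0f} s: {T[nx // 2]:.1f} K")
```

The explicit step is stable here because dt is well below dx²/(2·k/(rho·cp)); an implicit scheme would lift that restriction at the cost of a non-linear solve per step.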
Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K
2009-01-01
We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts: in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.
Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri
2016-01-01
This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitor MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.
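The fuzzy-LP machinery used above can be illustrated with Zimmermann's classic max-min formulation: each fuzzy objective gets a linear membership between a worst and best target, and an auxiliary variable lambda (the overall satisfaction) is maximized subject to the crisp constraints. The toy objectives and targets below are invented and far simpler than the OPF problem in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Two linear objectives f1 = x1 + 2*x2 and f2 = 3*x1 + x2 to be maximized,
# with fuzzy (best, worst) targets chosen for illustration only.
best = np.array([14.0, 12.0])
worst = np.array([4.0, 3.0])
C = np.array([[1.0, 2.0], [3.0, 1.0]])         # objective coefficients

# Variables: [x1, x2, lam].  mu_i = (f_i - worst_i)/(best_i - worst_i),
# and mu_i >= lam  <=>  -C_i x + (best_i - worst_i)*lam <= -worst_i.
A_ub = np.hstack([-C, (best - worst)[:, None]])
b_ub = -worst
# Hard (crisp) constraints: x1 + x2 <= 6, x1 <= 4, x2 <= 4.
A_ub = np.vstack([A_ub, [[1, 1, 0], [1, 0, 0], [0, 1, 0]]])
b_ub = np.concatenate([b_ub, [6.0, 4.0, 4.0]])

res = linprog(c=[0.0, 0.0, -1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)])
x1, x2, lam = res.x
print(f"x = ({x1:.2f}, {x2:.2f}), satisfaction lambda = {lam:.3f}")
```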
Designing with non-linear viscoelastic fluids
NASA Astrophysics Data System (ADS)
Schuh, Jonathon; Lee, Yong Hoon; Allison, James; Ewoldt, Randy
2017-11-01
Material design is typically limited to hard materials or simple fluids; however, design with more complex materials can provide ways to enhance performance. Using the Criminale-Ericksen-Filbey (CEF) constitutive model in the thin film lubrication limit, we derive a modified Reynolds Equation (based on asymptotic analysis) that includes shear thinning, first normal stress, and terminal regime viscoelastic effects. This allows for designing non-linear viscoelastic fluids in thin-film creeping flow scenarios, i.e. optimizing the shape of rheological material properties to achieve different design objectives. We solve the modified Reynolds equation using the pseudo-spectral method, and describe a case study in full-film lubricated sliding where optimal fluid properties are identified. These material-agnostic property targets can then guide formulation of complex fluids which may use polymeric, colloidal, or other creative approaches to achieve the desired non-Newtonian properties.
Tuning of PID controller using optimization techniques for a MIMO process
NASA Astrophysics Data System (ADS)
Thulasi dharan, S.; Kavyarasan, K.; Bagyaveereswaran, V.
2017-11-01
In this paper, two processes are considered: the quadruple-tank process and the continuous stirred tank reactor (CSTR) process. These are widely used in industrial applications across various domains; the CSTR, especially, in chemical plants. First, a mathematical model of each process is derived, followed by linearization of the resulting MIMO system. The controller is the major part of driving the whole process to the desired operating point, so tuning of the controller plays a major role in the overall scheme. For tuning of the controller parameters we use two optimization techniques, Particle Swarm Optimization and the Genetic Algorithm. These techniques are widely used in different applications; here we use them to obtain the best tuned parameter values. Finally, we compare the performance of each process under both techniques.
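A minimal sketch of PSO-based PID tuning in the spirit of this paper (not its actual models): a standard global-best PSO searches (Kp, Ki, Kd) to minimize the integral-squared error of a step response, here on a toy first-order plant rather than the quadruple-tank or CSTR dynamics.

```python
import numpy as np

def ise_of_gains(gains, dt=0.01, t_end=10.0):
    """Integral-squared error of a unit step response with a PID controller
    on a toy first-order plant G(s) = 1/(5s + 1), forward-Euler simulated."""
    kp, ki, kd = gains
    y, integ, e_prev, ise = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        y += dt * (-y + u) / 5.0                 # plant state update
        ise += e * e * dt
    return ise

rng = np.random.default_rng(2)
n, dim, w, c1, c2 = 20, 3, 0.7, 1.5, 1.5         # swarm hyper-parameters
pos = rng.uniform(0.0, 10.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([ise_of_gains(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(40):                              # standard PSO iterations
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)
    f = np.array([ise_of_gains(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("tuned (Kp, Ki, Kd):", gbest.round(2), " ISE:", pbest_f.min().round(4))
```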
Modelling and control of a microgrid including photovoltaic and wind generation
NASA Astrophysics Data System (ADS)
Hussain, Mohammed Touseef
The extensive increase in distributed generation (DG) penetration and the existence of multiple DG units at the distribution level have introduced the notion of the microgrid. This thesis develops a detailed non-linear and small-signal dynamic model of a microgrid that includes PV, wind and conventional small-scale generation along with their power electronics interfaces and filters. The models developed evaluate the generation mix from the various DGs required for satisfactory steady-state operation of the microgrid. In order to understand the interaction of the DGs in the microgrid system, two simpler configurations were considered initially. The first consists of a microalternator, PV and their electronics, and the second consists of a microalternator and a wind system, each connected to the power system grid. Nonlinear and linear state-space models of each microgrid are developed. Small-signal analysis showed that large participation of PV/wind can drive the microgrid to the brink of instability without adequate control. Non-linear simulations are carried out to verify the results obtained through small-signal analysis. The role of the extent of the generation mix of a composite microgrid consisting of wind, PV and conventional generation was investigated next. The findings from the smaller systems were verified through nonlinear and small-signal modeling. A central supervisory capacitor energy storage controller interfaced through a STATCOM was proposed to monitor and enhance microgrid operation. The potential of various control inputs to provide additional damping to the system has been evaluated through decomposition techniques. The signals identified as having damping content were employed to design the supervisory control system. The controller gains were tuned through an optimal pole placement technique. Simulation studies demonstrate that the STATCOM voltage phase angle and PV inverter phase angle were the best inputs for enhanced stability boundaries.
NASA Astrophysics Data System (ADS)
Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong
2018-01-01
Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions between decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibility, which deteriorates the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm is proposed to cope with the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed algorithm provided improved results compared to existing methodologies in finding the optimal operating condition of the complex mixed-refrigerant natural gas liquefaction process. By applying it, the SMR process can be designed with a specific compression power of 0.2555 kW, equivalent to a 44.3% energy saving compared to the base case. Furthermore, the coefficient of performance (COP) can be enhanced by up to 34.7% compared to the base case. The proposed optimization algorithm provides a deep understanding of the optimization of the liquefaction process from both technical and numerical perspectives. In addition, the HMCD algorithm can be applied to any mixed-refrigerant-based liquefaction process in the natural gas industry.
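The coordinate-descent idea at the heart of HMCD can be sketched independently of the flowsheet software: probe one decision variable at a time against a black-box objective, keep improving moves, and shrink the probe step when a full cycle stalls. The objective below is an invented smooth surrogate, not the Hysys SMR model, and the specific HMCD modifications are not reproduced.

```python
import numpy as np

def coordinate_descent(f, x, bounds, step=0.1, shrink=0.5, iters=50):
    """Derivative-free cyclic coordinate descent on a black-box objective:
    probe each decision variable in turn and keep moves that lower f."""
    fx = f(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = x.copy()
                trial[i] = np.clip(trial[i] + d, *bounds[i])
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink           # refine the probe once no move helps
    return x, fx

# Toy surrogate for specific compression power with coupled variables.
def power(v):
    return (v[0] - 1.2)**2 + 3.0 * (v[0] * v[1] - 0.8)**2 + 0.1 * v[1]

x_opt, p = coordinate_descent(power, np.array([0.5, 0.5]),
                              bounds=[(0.1, 3.0), (0.1, 3.0)])
print("optimum:", x_opt.round(3), " objective:", round(p, 4))
```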
Genetics-based control of a mimo boiler-turbine plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimeo, R.M.; Lee, K.Y.
1994-12-31
A genetic algorithm is used to develop an optimal controller for a non-linear, multi-input/multi-output boiler-turbine plant. The algorithm is used to train a control system for the plant over a wide operating range in an effort to obtain better performance. The results of the genetic algorithm's controller are compared with those of a controller designed from the linearized plant model at a nominal operating point. Because the genetic algorithm is well suited to solving traditionally difficult optimization problems, it is found that the algorithm is capable of developing the controller based on input/output information only. This controller achieves a performance comparable to the standard linear quadratic regulator.
Remarks on Hierarchic Control for a Linearized Micropolar Fluids System in Moving Domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jesus, Isaías Pereira de, E-mail: isaias@ufpi.edu.br
We study a Stackelberg strategy subject to the evolutionary linearized micropolar fluids equations in domains with moving boundaries, considering a Nash multi-objective equilibrium (not necessarily cooperative) for the "follower players" (as they are called in economics) and an optimal problem for the leader player with an approximate controllability objective. We obtain the following main results: the existence and uniqueness of the Nash equilibrium and its characterization, the approximate controllability of the linearized micropolar system with respect to the leader control, and the existence and uniqueness of a solution to the Stackelberg-Nash problem, where the optimality system for the leader is given.
Bayerstadler, Andreas; Benstetter, Franz; Heumann, Christian; Winter, Fabian
2014-09-01
Predictive Modeling (PM) techniques are gaining importance in the worldwide health insurance business. Modern PM methods are used for customer relationship management, risk evaluation or medical management. This article illustrates a PM approach that enables the economic potential of (cost-) effective disease management programs (DMPs) to be fully exploited by optimized candidate selection as an example of successful data-driven business management. The approach is based on a Generalized Linear Model (GLM) that is easy to apply for health insurance companies. By means of a small portfolio from an emerging country, we show that our GLM approach is stable compared to more sophisticated regression techniques in spite of the difficult data environment. Additionally, we demonstrate for this example of a setting that our model can compete with the expensive solutions offered by professional PM vendors and outperforms non-predictive standard approaches for DMP selection commonly used in the market.
Wang, Wansheng; Chen, Long; Zhou, Jie
2015-01-01
A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or a higher-order space), for which many fast Poisson solvers can be applied. The nonlinear iteration is applied only to a much smaller problem, and the computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique preserves the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative-norm error estimates, are non-trivial and differ from existing results in the literature. PMID:27110063
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo-model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE; the present paper introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure for obtaining a linear pseudo-model for a nonlinear regression model. In this article a new technique is developed to obtain the linear pseudo-model for the nonlinear regression model using multivariate calculus; the linear pseudo-model of Edmond Malinvaud [4] is explained here in a very different way. David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for the fitting of nonlinear regression functions in 2006. In Jae Myung [13] provided a conceptual guide to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
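For concreteness, the sketch below computes an NLSE for a toy exponential regression with scipy and reads off the Gaussian MLE of the error variance, which coincides with the mean squared residual at the NLSE; the model and data are invented, not from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# NLSE for an exponential regression y = b1 * exp(b2 * x) + noise.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 4.0, 50)
y = 2.5 * np.exp(0.7 * x) + rng.normal(0.0, 0.5, x.size)

def residuals(b):
    return b[0] * np.exp(b[1] * x) - y

fit = least_squares(residuals, x0=[1.0, 0.1])
b1, b2 = fit.x

# Under Gaussian errors the MLE coincides with the NLSE, and the MLE of
# sigma^2 is the mean squared residual at the optimum.
sigma2 = np.mean(fit.fun**2)
print(f"b1={b1:.3f}, b2={b2:.3f}, sigma^2={sigma2:.3f}")
```

The linear pseudo-model corresponds to a first-order Taylor expansion of the regression function at the current estimate, which is exactly what least_squares linearizes internally at each iteration.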
A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints
NASA Astrophysics Data System (ADS)
Estiningsih, Y.; Farikhin; Tjahjana, R. H.
2018-03-01
An important technique in linear programming is the modelling and solving of practical optimization problems. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids all the calculations associated with them when solving an associated linear programming problem. Many methods have been proposed for the identification of redundant constraints. This paper presents a comparison of the Heuristic method and Llewellyn's rules for the identification of redundant constraints.
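A standard LP-based redundancy test (one of the exact baselines such comparisons are judged against, not Llewellyn's rules themselves) can be sketched in a few lines: constraint i of Ax <= b is redundant if maximizing a_i·x over the remaining constraints still cannot exceed b_i.

```python
import numpy as np
from scipy.optimize import linprog

def redundant_constraints(A, b, bounds):
    """Flag constraint i of Ax <= b as redundant if maximizing a_i.x subject
    to all the *other* constraints still cannot exceed b_i."""
    flags = []
    for i in range(len(b)):
        mask = np.arange(len(b)) != i
        res = linprog(-A[i], A_ub=A[mask], b_ub=b[mask], bounds=bounds)
        # status 0 = optimal; an unbounded sub-problem means not redundant
        flags.append(res.status == 0 and -res.fun <= b[i] + 1e-9)
    return flags

# x1 + x2 <= 4 and x1 <= 5: the second is implied by the first with x >= 0.
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 5.0])
print(redundant_constraints(A, b, bounds=[(0, None)] * 2))  # [False, True]
```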
Synthesis of concentric circular antenna arrays using dragonfly algorithm
NASA Astrophysics Data System (ADS)
Babayigit, B.
2018-05-01
Due to the strong non-linear relationship between the array factor and the array elements, the concentric circular antenna array (CCAA) synthesis problem is challenging. Nature-inspired optimisation techniques have been playing an important role in solving array synthesis problems. The dragonfly algorithm (DA) is a novel nature-inspired optimisation technique based on the static and dynamic swarming behaviours of dragonflies in nature. This paper presents the design of CCAAs with low sidelobes using DA. The effectiveness of the proposed DA is investigated in two different cases (with and without a centre element) of two three-ring CCAA designs (having 4-, 6-, 8-element or 8-, 10-, 12-element rings). The radiation pattern of each design case is obtained by finding the optimal excitation weights of the array elements using DA. Simulation results show that the proposed algorithm outperforms the other state-of-the-art techniques (symbiotic organisms search, biogeography-based optimisation, sequential quadratic programming, opposition-based gravitational search algorithm, cat swarm optimisation, firefly algorithm, evolutionary programming) for all design cases. DA can be a promising technique for electromagnetic problems.
Optimal full motion video registration with rigorous error propagation
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn
2014-06-01
Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
NASA Astrophysics Data System (ADS)
Brown, James; Seo, Dong-Jun
2010-05-01
Operational forecasts of hydrometeorological and hydrologic variables often contain large uncertainties, for which ensemble techniques are increasingly used. However, the utility of ensemble forecasts depends on the unbiasedness of the forecast probabilities. We describe a technique for quantifying and removing biases from ensemble forecasts of hydrometeorological and hydrologic variables, intended for use in operational forecasting. The technique makes no a priori assumptions about the distributional form of the variables, which is often unknown or difficult to model parametrically. The aim is to estimate the conditional cumulative distribution function (ccdf) of the observed variable given a (possibly biased) real-time ensemble forecast from one or several forecasting systems (multi-model ensembles). The technique is based on Bayesian optimal linear estimation of indicator variables, and is analogous to indicator cokriging (ICK) in geostatistics. By developing linear estimators for the conditional expectation of the observed variable at many thresholds, ICK provides a discrete approximation of the full ccdf. Since ICK minimizes the conditional error variance of the indicator expectation at each threshold, it effectively minimizes the Continuous Ranked Probability Score (CRPS) when infinitely many thresholds are employed. However, the ensemble members used as predictors in ICK, and other bias-correction techniques, are often highly cross-correlated, both within and between models. Thus, we propose an orthogonal transform of the predictors used in ICK, which is analogous to using their principal components in the linear system of equations. This leads to a well-posed problem in which a minimum number of predictors are used to provide maximum information content in terms of the total variance explained. The technique is used to bias-correct precipitation ensemble forecasts from the NCEP Global Ensemble Forecast System (GEFS), for which independent validation results are presented. Extension to multimodel ensembles from the NCEP GFS and Short Range Ensemble Forecast (SREF) systems is also proposed.
Particle Streak Anemometry: A New Method for Proximal Flow Sensing from Aircraft
NASA Astrophysics Data System (ADS)
Nichols, T. W.
Accurate sensing of relative air flow direction from fixed-wing small unmanned aircraft (sUAS) is challenging with existing multi-hole pitot-static and vane systems. Sub-degree direction accuracy is generally not available on such systems, and disturbances to the local flow field, induced by the airframe, introduce an additional error source. An optical imaging approach to make a relative air velocity measurement with high directional accuracy is presented. Optical methods offer the capability to make a proximal measurement in undisturbed air outside of the local flow field without the need to place sensors on vulnerable probes extended ahead of the aircraft. Current imaging flow analysis techniques for laboratory use rely on relatively thin imaged volumes, sophisticated hardware, and intensity thresholding in low-background conditions. A new method is derived and assessed using a particle streak imaging technique that can be implemented with low-cost commercial cameras and illumination systems, and can function in imaged volumes of arbitrary depth with complex background signal. The new technique, referred to as particle streak anemometry (PSA) to differentiate it from particle streak velocimetry (which makes a field measurement rather than a single bulk-flow measurement), utilizes a modified Canny edge detection algorithm with connected-component analysis and principal component analysis to detect streak ends in complex imaging conditions. A linear solution for the air velocity direction is then implemented with a random sample consensus (RANSAC) solution approach. A single-DOF non-linear, non-convex optimization problem is then solved for the air speed through an iterative approach. The technique was tested through simulation and wind tunnel tests, yielding angular accuracies under 0.2 degrees, superior to the performance of existing commercial systems. Air speed error standard deviations varied from 1.6 to 2.2 m/s depending on the implementation technique. While air speed sensing is secondary to accurate flow direction measurement, the air speed results were in line with commercial pitot-static systems at low speeds.
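The RANSAC step for the direction solution can be illustrated in miniature: hypothesize a direction from a single streak vector, count angular inliers, and refit on the consensus set. The sketch below is a toy under stated assumptions (2-D, synthetic streaks, sign-insensitive matching), not the thesis's full PSA pipeline.

```python
import numpy as np

def ransac_direction(vecs, tol_deg=2.0, trials=200, rng=None):
    """Estimate a common flow direction from streak vectors by random
    sample consensus: hypothesize from one vector, count angular inliers,
    then refit on the consensus set."""
    rng = rng or np.random.default_rng()
    unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    cos_tol = np.cos(np.radians(tol_deg))
    best_inliers = np.zeros(len(vecs), dtype=bool)
    for _ in range(trials):
        h = unit[rng.integers(len(unit))]        # minimal sample: 1 vector
        inliers = np.abs(unit @ h) > cos_tol     # sign-insensitive streaks
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit: dominant singular vector of the inlier unit vectors.
    _, _, vt = np.linalg.svd(unit[best_inliers])
    return vt[0], best_inliers

rng = np.random.default_rng(4)
true_dir = np.array([np.cos(0.3), np.sin(0.3)])
streaks = 5.0 * np.outer(rng.uniform(0.5, 2.0, 80), true_dir)
streaks += rng.normal(0.0, 0.05, streaks.shape)  # imaging noise
streaks[:15] = rng.normal(0.0, 3.0, (15, 2))     # spurious edges / background
d, inl = ransac_direction(streaks, rng=rng)
print("estimated direction:", d.round(4), " inliers:", inl.sum())
```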
NASA Astrophysics Data System (ADS)
Seneviratne, Sashieka
With the growth of smart phones, the demand for more broadband, data-centric technologies is being driven higher. As mobile operators worldwide plan and deploy 4th generation (4G) networks such as LTE to support the relentless growth in mobile data demand, strategically positioned pico-sized cellular base stations known as 'pico-cells' are gaining traction. In addition to having to fit a transceiver into a much more compact footprint, pico-cells must still face the technical challenges presented by the new 4G systems, such as reduced power consumption and linear amplification of the signals. The RF power amplifier (PA) that amplifies the output signals of 4G pico-cell systems faces challenges to minimize size and achieve high average efficiencies and broader bandwidths while maintaining linearity and operating at higher frequencies. 4G standards such as LTE use non-constant-envelope modulation techniques with high peak-to-average ratios. Power amplifiers implemented in such applications are forced to operate backed off from saturation. Therefore, in order to reduce power consumption, a high-efficiency PA design that can maintain its efficiency over a wide range of radio frequency signals is required. The primary focus of this thesis is to enhance the efficiency of a compact RF amplifier suitable for a 4G pico-cell base station. To this end, an integrated two-way Doherty amplifier design in a compact 10 mm × 11.5 mm monolithic microwave integrated circuit using GaN device technology is presented. Using non-linear GaN HFET models, the design achieves high efficiencies of over 50% at both back-off and peak power regions without compromising the stringent linearity requirements of 4G LTE standards. This demonstrates a 17% increase in power-added efficiency at 6 dB back-off from peak power compared to conventional Class AB amplifier performance. Performance optimization techniques to select between high-efficiency and high-linearity operation are also presented. Overall, this thesis demonstrates the feasibility of an integrated HFET Doherty amplifier for LTE band 7, which spans the frequencies from 2.62-2.69 GHz. The realization of the layout and various issues related to the PA design are discussed and addressed.
Lee, Chang Jun
2015-01-01
In research on plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. What previous studies lack, however, is the transformation of various heuristics and safety regulations into mathematical equations. For example, proper safety distances between pieces of equipment have to be maintained to prevent dangerous accidents in a complex plant. Moreover, most studies have handled single-floor plants, whereas many multi-floor plants have been constructed over the last decade. Therefore, an algorithm handling various regulations and multi-floor plants should be developed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is formulated based on mathematical equations. The objective function is the sum of pipeline and pumping costs, and various safety and maintenance issues are transformed into inequality or equality constraints. However, this problem is very hard to solve due to its complex nonlinear constraints, making conventional derivative-based MINLP solvers inapplicable. In this study, the Particle Swarm Optimization (PSO) technique is employed instead. An ethylene oxide plant case study illustrates the efficacy of this approach.
Grid generation and adaptation via Monge-Kantorovich optimization in 2D and 3D
NASA Astrophysics Data System (ADS)
Delzanno, Gian Luca; Chacon, Luis; Finn, John M.
2008-11-01
In a recent paper [1], Monge-Kantorovich (MK) optimization was proposed as a method of grid generation/adaptation in two dimensions (2D). The method is based on the minimization of the L2 norm of grid point displacement, constrained to producing a given positive-definite cell volume distribution (equidistribution constraint). The procedure gives rise to the Monge-Ampère (MA) equation: a single, non-linear scalar equation with no free parameters. The MA equation was solved in Ref. [1] with the Jacobian-free Newton-Krylov technique, and several challenging test cases were presented in square domains in 2D. Here, we extend the work of Ref. [1]. We first formulate the MK approach in physical domains with curved boundary elements and in 3D. We then show the results of applying it to these more general cases. We show that MK optimization produces optimal grids in which the constraint is satisfied numerically to truncation error. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, submitted to Journal of Computational Physics (2008).
Two alternative ways for solving the coordination problem in multilevel optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
Two techniques are presented for formulating the coupling between levels in multilevel optimization by linear decomposition. They are proposed as improvements over the original formulation, now several years old, which relied on explicit equality constraints that application experience showed to occasionally cause numerical difficulties. The two new techniques represent the coupling without using explicit equality constraints, thus avoiding the above difficulties and also reducing the computational cost of the procedure. The old and new formulations are presented in detail and illustrated by a structural optimization example. A generic version of the improved algorithm is also developed for applications to multidisciplinary systems not limited to structures.
Linear Quantum Systems: Non-Classical States and Robust Stability
2016-06-29
... has a history going back some 50 years, to the birth of modern control theory with Kalman's foundational work on filtering and LQG optimal control ... analysis and control of quantum linear systems and their interactions with non-classical quantum fields by developing control theoretic concepts exploiting ...
Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun
2012-01-01
How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.
Kumar, M; Mishra, S K
2017-01-01
Clinical magnetic resonance imaging (MRI) images may be corrupted by a mixture of different types of noise, such as Rician, Gaussian, and impulse noise. Most of the available filtering algorithms are noise-specific, linear, and non-adaptive. There is a need for a nonlinear adaptive filter that adapts itself according to the requirement and can be effectively applied to suppress mixed noise from different MRI images. In view of this, a novel nonlinear neural-network-based adaptive filter, the functional link artificial neural network (FLANN), whose weights are trained by a recently developed derivative-free meta-heuristic technique, teaching-learning-based optimization (TLBO), is proposed and implemented. The performance of the proposed filter is compared with five other adaptive filters and analyzed by considering quantitative metrics and a nonparametric statistical test. Convergence curves and computational times are also included to investigate the efficiency of the proposed as well as the competitive filters. In simulations, the proposed filter outperforms the other adaptive filters. The proposed filter can be hybridized with other evolutionary techniques and utilized to remove various noise and artifacts from other medical images even more competently.
NASA Astrophysics Data System (ADS)
Sahoo, N. K.; Thakur, S.; Senthilkumar, M.; Das, N. C.
2005-02-01
Thickness-dependent index non-linearity in thin films has been a thought-provoking as well as intriguing topic in the field of optical coatings. The characterization and analysis of such inhomogeneous index profiles pose several degrees of challenge to thin-film researchers, depending upon the availability of relevant experimental and process-monitoring-related information. In the present work, a variety of novel experimental non-linear index profiles have been observed in thin films of MgO-Al2O3-ZrO2 ternary composites in solid solution under various electron-beam deposition parameters. Analysis and derivation of these non-linear spectral index profiles have been carried out by an inverse-synthesis approach using a real-time optical monitoring signal and post-deposition transmittance and reflection spectra. Most of the non-linear index functions are observed to fit polynomial equations of order seven or eight very well. In this paper, the application of such a non-linear index function has also been demonstrated in designing electric-field-optimized high-damage-threshold multilayer coatings such as normal- and oblique-incidence edge filters and a broadband beam splitter for p-polarized light. Such designs can also advantageously maintain the microstructural stability of the multilayer structure due to the low stress factor of the non-linear ternary composite layers.
Using artificial intelligence to predict permeability from petrographic data
NASA Astrophysics Data System (ADS)
Ali, Maqsood; Chawathé, Adwait
2000-10-01
Petrographic data collected during thin section analysis can be invaluable for understanding the factors that control permeability distribution. Reliable prediction of permeability is important for reservoir characterization. The petrographic elements (mineralogy, porosity types, cements and clays, and pore morphology) interact with each other uniquely to generate a specific permeability distribution. It is difficult to accurately quantify this interaction and its consequent effect on permeability, which emphasizes the non-linear nature of the process. To capture these non-linear interactions, neural networks were used to predict permeability from petrographic data. The neural net was used as a multivariate correlative tool because of its ability to learn the non-linear relationships between multiple input and output variables. The study was conducted on the upper Queen formation called the Shattuck Member (Permian age). The Shattuck Member is composed of very fine-grained arkosic sandstone. The core samples were available from the Sulimar Queen and South Lucky Lake fields located in Chaves County, New Mexico. Nineteen petrographic elements were collected for each permeability value using a combined minipermeameter-petrographic technique. To reduce noise and avoid overfitting the permeability model, these petrographic elements were screened, and their control (ranking) with respect to permeability was determined using fuzzy logic. Since the fuzzy logic algorithm provides unbiased ranking, it was used to reduce the dimensionality of the input variables. Based on the fuzzy logic ranking, only the most influential petrographic elements were selected as inputs for permeability prediction. The neural net was trained and tested using data from Well 1-16 in the Sulimar Queen field. Relying on the ranking obtained from the fuzzy logic analysis, the net was trained using the most influential three, five, and ten petrographic elements. A fast algorithm (the scaled conjugate gradient method) was used to optimize the network weight matrix. The net was then successfully used to predict the permeability in the nearby South Lucky Lake field, also in the Shattuck Member. This study underscored various important aspects of using neural networks as non-linear estimators. The neural network learnt the complex relationships between petrographic controls and permeability. By predicting permeability in a remotely located yet geologically similar field, the generalizing capability of the neural network was also demonstrated. In old fields, where conventional petrographic analysis was routine, this technique may be used to supplement core permeability estimates.
NASA Astrophysics Data System (ADS)
Chandran, Senthilkumar; Paulraj, Rajesh; Ramasamy, P.
2017-05-01
Semi-organic lithium hydrogen oxalate monohydrate non-linear optical single crystals have been grown by the slow evaporation solution growth technique at 35 °C. A single-crystal X-ray diffraction study showed that the grown crystal belongs to the triclinic system with space group P1. The mechanical strength decreases with increasing load. The piezoelectric coefficient is found to be 1.41 pC/N. The nonlinear optical property was measured using the Kurtz-Perry powder technique, and the SHG efficiency was almost equal to that of KDP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hager, Robert, E-mail: rhager@pppl.gov; Yoon, E.S., E-mail: yoone@rpi.edu; Ku, S., E-mail: sku@pppl.gov
2016-06-15
Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. In this article, the non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. The finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. The collision operator's good weak and strong scaling behavior is shown.
An efficient multilevel optimization method for engineering design
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.
1988-01-01
An efficient multilevel design optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system-level and subsystem-level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.
Li, Ruiying; Ma, Wenting; Huang, Ning; Kang, Rui
2017-01-01
A sophisticated node deployment method can efficiently reduce the energy consumption of a Wireless Sensor Network (WSN) and prolong the corresponding network lifetime. Many node-deployment-based lifetime optimization methods for WSNs have been proposed; however, previous studies often neglect the retransmission mechanism and assume a continuous rather than discrete power control strategy, although both are widely used in practice and have a large effect on network energy consumption. In this paper, both retransmission and discrete power control are considered together, and a more realistic energy-consumption-based network lifetime model for linear WSNs is provided. Using this model, we then propose a generic deployment-based optimization model that maximizes network lifetime under coverage, connectivity and transmission rate success constraints. The more accurate lifetime evaluation leads to a longer optimal network lifetime in the realistic situation. To illustrate the effectiveness of our method, both one-tiered and two-tiered, uniformly and non-uniformly distributed linear WSNs are optimized in our case studies, and comparisons between our optimal results and those based on relatively inaccurate lifetime evaluation show the advantage of our method when investigating WSN lifetime optimization problems.
Tam, James; Ahmad, Imad A Haidar; Blasko, Andrei
2018-06-05
A four parameter optimization of a stability indicating method for non-chromophoric degradation products of 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC), 1-stearoyl-sn-glycero-3-phosphocholine and 2-stearoyl-sn-glycero-3-phosphocholine was achieved using a reverse phase liquid chromatography-charged aerosol detection (RPLC-CAD) technique. Using the hydrophobic subtraction model of selectivity, a core-shell, polar embedded RPLC column was selected followed by gradient-temperature optimization, resulting in ideal relative peak placements for a robust, stability indicating separation. The CAD instrument parameters, power function value (PFV) and evaporator temperature were optimized for lysophosphatidylcholines to give UV absorbance detector-like linearity performance within a defined concentration range. The two lysophosphatidylcholines gave the same response factor in the selected conditions. System specific power function values needed to be set for the two RPLC-CAD instruments used. A custom flow-divert profile, sending only a portion of the column effluent to the detector, was necessary to mitigate detector response drifting effects. The importance of the PFV optimization for each instrument of identical build and how to overcome recovery issues brought on by the matrix effects from the lipid-RP stationary phase interaction is reported. Copyright © 2018 Elsevier B.V. All rights reserved.
Non-Linear Dynamics and Emergence in Laboratory Fusion Plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hnat, B.
2011-09-22
The turbulent behaviour of a laboratory fusion plasma system is modelled using the extended Hasegawa-Wakatani equations. The model is solved numerically using finite difference techniques. We discuss non-linear effects in such a system in the presence of micro-instabilities, specifically a drift wave instability. We explore particle dynamics in different ranges of parameters and show that the transport changes from diffusive to non-diffusive when large directional flows develop.
CORDIC-based digital signal processing (DSP) element for adaptive signal processing
NASA Astrophysics Data System (ADS)
Bolstad, Gregory D.; Neeld, Kenneth B.
1995-04-01
The High Performance Adaptive Weight Computation (HAWC) processing element is a CORDIC-based, application-specific DSP element that, when connected in a linear array, can perform extremely high throughput (100s of GFLOPS) matrix arithmetic operations on linear systems of equations in real time. In particular, it very efficiently performs the numerically intense computation of optimal least squares solutions for large, over-determined linear systems. Most techniques for computing solutions to these types of problems have used either a hard-wired, non-programmable systolic array approach or, more commonly, programmable DSP or microprocessor approaches. The custom logic methods can be efficient but are generally inflexible. Approaches using multiple programmable generic DSP devices are very flexible, but suffer from poor efficiency and high computation latencies, primarily due to the large number of DSP devices that must be utilized to achieve the necessary arithmetic throughput. The HAWC processor is implemented as a highly optimized systolic array, yet retains some of the flexibility of a programmable data-flow system, allowing efficient implementation of algorithm variations. This provides flexible matrix processing capabilities that are one to three orders of magnitude less expensive and more dense than the current state of the art and, more importantly, allows a realizable solution to matrix processing problems that were previously considered impractical to physically implement. HAWC has direct applications in RADAR, SONAR, communications, and image processing, as well as in many other types of systems.
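The numerical kernel such CORDIC arrays implement is the Givens rotation, which triangularizes an over-determined system so the least-squares solution falls out by back-substitution. A loose software analogue (floating-point numpy in place of shift-add CORDIC stages, with invented data):

    import numpy as np

    def givens_qr_solve(A, b):
        # Zero each sub-diagonal entry of A with a 2x2 Givens rotation --
        # the elementary operation a CORDIC cell evaluates iteratively.
        R, y = A.astype(float).copy(), b.astype(float).copy()
        m, n = R.shape
        for j in range(n):
            for i in range(j + 1, m):
                r = np.hypot(R[j, j], R[i, j])
                if r == 0.0:
                    continue
                c, s = R[j, j] / r, R[i, j] / r
                G = np.array([[c, s], [-s, c]])
                R[[j, i], j:] = G @ R[[j, i], j:]
                y[[j, i]] = G @ y[[j, i]]
        return np.linalg.solve(R[:n, :n], y[:n])   # back-substitution

    rng = np.random.default_rng(1)
    A = rng.normal(size=(200, 5))
    x_true = np.arange(1.0, 6.0)
    b = A @ x_true + 0.01 * rng.normal(size=200)
    print(givens_qr_solve(A, b))                   # close to [1 2 3 4 5]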
ERIC Educational Resources Information Center
Findorff, Irene K.
This document summarizes the results of a project at Tulane University that was designed to adapt, test, and evaluate a computerized information and menu planning system utilizing linear programming techniques for use in school lunch food service operations. The objectives of the menu planning were to formulate menu items into a palatable,…
Five-Junction Solar Cell Optimization Using Silvaco Atlas
2017-09-01
Parameters are drawn from experimental sources [1], [4], [6]. A numerical method is selected for solving the non-linear equations that make up the simulation. Optimization of solar cell efficiency is carried out via a nearly orthogonal balanced design of experiments methodology; Silvaco ATLAS is utilized to model the five-junction cell and maximize its efficiency.
Kinjo, Ken; Uchibe, Eiji; Doya, Kenji
2013-01-01
The Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space, or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit at a higher total cost. We then perform real robot experiments on a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and size of a battery in its camera view and two neck joint angles. The action is the velocities of the two wheels, while the neck joints are controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
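In the discrete setting referred to above, the whole solution is one eigenvalue problem. The sketch below (a made-up five-state chain, not the robot task) follows the cited LMDP formulation: the desirability z = exp(-v) is the principal eigenvector of diag(exp(-q))·P, and the optimal controlled dynamics reweight the passive dynamics by z:

    import numpy as np

    # Passive random-walk dynamics P and state costs q on a 5-state chain.
    n = 5
    P = np.zeros((n, n))
    for s in range(n):
        P[s, max(s - 1, 0)] += 0.5
        P[s, min(s + 1, n - 1)] += 0.5
    q = np.array([1.0, 1.0, 0.0, 1.0, 2.0])        # cheap middle state

    # Average-cost LMDP: lambda * z = diag(exp(-q)) @ P @ z, solved by
    # power iteration for the principal (positive) eigenvector.
    G = np.diag(np.exp(-q)) @ P
    z = np.ones(n)
    for _ in range(500):
        z = G @ z
        z /= np.linalg.norm(z)
    v = -np.log(z / z.max())                       # values, shifted to min 0

    # Optimal controlled transitions: u*(s'|s) proportional to P[s, s'] z[s'].
    u = P * z
    u /= u.sum(axis=1, keepdims=True)
    print(np.round(v, 3))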
New techniques for positron emission tomography in the study of human neurological disorders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, D.E.
1992-07-01
The general goals of the physics and kinetic modeling projects are to: (1) improve the quantitative information extractable from PET images, and (2) develop, implement and optimize tracer kinetic models for new PET neurotransmitter/receptor ligands aided by computer simulations. Work towards improving PET quantification has included projects evaluating: (1) iterative reconstruction algorithms using supplemental boundary information, (2) automated registration of dynamic PET emission and transmission data using sinogram edge detection, and (3) automated registration of multiple subjects to a common coordinate system, including the use of non-linear warping methods. Simulation routines have been developed providing more accurate representation of data generated from neurotransmitter/receptor studies. Routines consider data generated from complex compartmental models, high or low specific activity administrations, non-specific binding, pre- or post-injection of cold or competing ligands, temporal resolution of the data, and radiolabeled metabolites. Computer simulations and human PET studies have been performed to optimize kinetic models for four new neurotransmitter/receptor ligands, [{sup 11}C]TRB (muscarinic), [{sup 11}C]flumazenil (benzodiazepine), [{sup 18}F]GBR12909 (dopamine), and [{sup 11}C]NMPB (muscarinic).
NASA Technical Reports Server (NTRS)
Pavarini, C.
1974-01-01
Work in two somewhat distinct areas is presented. First, the optimal system design problem for a Mars-roving vehicle is attacked by creating static system models and a system evaluation function and optimizing via nonlinear programming techniques. The second area concerns the problem of perturbed-optimal solutions. Given an initial perturbation in an element of the solution to a nonlinear programming problem, a linear method is determined to approximate the optimal readjustments of the other elements of the solution. Then, the sensitivity of the Mars rover designs is described by application of this method.
PAPR reduction in FBMC using an ACE-based linear programming optimization
NASA Astrophysics Data System (ADS)
van der Neut, Nuan; Maharaj, Bodhaswar TJ; de Lange, Frederick; González, Gustavo J.; Gregorio, Fernando; Cousseau, Juan
2014-12-01
This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. The approach extends current active constellation extension (ACE) PAPR reduction methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation as the main contribution. The four techniques introduced can be split into two groups: linear programming optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The linear programming (LP)-based techniques compensate for the symbol overlaps by utilizing a frame-based approach and provide a theoretical upper bound on achievable performance for the overlapping ACE techniques. The overlapping ACE techniques, on the other hand, can handle symbol-by-symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve on, current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated, and it is shown that the interference can be compensated for whilst still maintaining decent PAPR performance. Additional results are provided by means of a study of the PAPR reduction of the proposed techniques at a fixed clipping probability. The bit error rate (BER) degradation is investigated to ensure that the trade-off in terms of BER degradation is not too severe. As illustrated by exhaustive simulations, the SGP ACE-based techniques proposed are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and increase the feasibility of FBMC as a replacement modulation system for OFDM.
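For context on the metric being minimized, the PAPR of a block is simply peak power over mean power. The snippet below measures it for a toy oversampled multicarrier symbol; a plain OFDM symbol is used for brevity, since a full FBMC polyphase chain is beyond a sketch:

    import numpy as np

    def papr_db(x):
        # Peak-to-average power ratio of a complex baseband block, in dB.
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    rng = np.random.default_rng(8)
    sym = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64)  # QPSK
    x = np.fft.ifft(sym, n=256)          # 4x oversampled time-domain block
    print(round(papr_db(x), 2), "dB before any ACE-style extension")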
Boon, K H; Khalil-Hani, M; Malarvili, M B
2018-01-01
This paper presents a method that is able to predict paroxysmal atrial fibrillation (PAF). The method uses shorter heart rate variability (HRV) signals than existing methods while achieving good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing the heart and preventing the onset of atrial arrhythmias with different pacing techniques. We propose a multi-objective optimization algorithm based on the non-dominated sorting genetic algorithm III for optimizing the baseline PAF prediction system, which consists of pre-processing, HRV feature extraction, and support vector machine (SVM) model stages. The pre-processing stage comprises heart rate correction, interpolation, and signal detrending. Time-domain, frequency-domain, and non-linear HRV features are then extracted from the pre-processed data in the feature extraction stage. These features are used as input to the SVM for predicting the PAF event. The proposed optimization algorithm is used to optimize the parameters and settings of the various HRV feature extraction algorithms, select the best feature subsets, and tune the SVM parameters simultaneously for maximum prediction performance. The proposed method achieves an accuracy rate of 87.7%, which significantly outperforms most previous works. This accuracy rate is achieved even with the HRV signal length reduced from the typical 30 min to just 5 min (a reduction of 83%). Furthermore, the sensitivity rate, which is considered more important than the other performance metrics in this paper, can be improved at the cost of lower specificity. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Fleming, P.
1985-01-01
A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a non-linear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer-aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer.
NASA Astrophysics Data System (ADS)
Goyal, Deepak
Textile composites have a wide variety of applications in the aerospace, sports, automobile, marine and medical industries. Due to the availability of a variety of textile architectures and the numerous parameters associated with each, optimal design through extensive experimental testing is not practical. Predictive tools are needed to perform virtual experiments of various options. The focus of this research is to develop a better understanding of the linear elastic response, the plasticity and material damage induced nonlinear behavior, and the mechanics of load flow in textile composites. Textile composites exhibit multiple scales of complexity. The various textile behaviors are analyzed using a two-scale finite element modeling approach. A framework that allows use of a wide variety of damage initiation and growth models is proposed. Plasticity-induced non-linear behavior of 2x2 braided composites is investigated using a modeling approach based on Hill's yield function for orthotropic materials. The mechanics of load flow in textile composites is demonstrated using special non-standard postprocessing techniques that not only highlight the important details, but also transform the extensive amount of output data into comprehensible modes of behavior. The investigations show that the damage models differ from each other in terms of the amount of degradation as well as the properties to be degraded under a particular failure mode. When compared with experimental data, predictions of some models match well for glass/epoxy composites whereas those of others match well for carbon/epoxy composites. However, all the models predicted very similar responses when the damage factors were made similar, which shows that the magnitudes of the damage factors are very important. Full 3D as well as equivalent tape laminate predictions lie within the range of the experimental data for a wide variety of braided composites with different material systems, which validated the plasticity analysis. Conclusions about the effect of fiber type on the degree of plasticity-induced non-linearity in a +/-25° braid depend on the measure of non-linearity. Investigations into the mechanics of load flow in textile composites bring new insights into textile behavior. For example, the reasons for the existence of transverse shear stress under uni-axial loading and the occurrence of stress concentrations at certain locations were explained.
Galaxy Redshifts from Discrete Optimization of Correlation Functions
NASA Astrophysics Data System (ADS)
Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi
2016-12-01
We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. The novel formalism introduced here not only transforms the otherwise hopelessly expensive, brute-force combinatorial search into a linear system with integer constraints, but is also readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. The preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.
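The core of such a formulation is a linear objective over binary assignment variables with one-bin-per-source constraints. A miniature stand-in (scipy's milp, available from SciPy 1.9, in place of the Gurobi solver the authors use; the cost vector is invented rather than derived from correlation functions):

    import numpy as np
    from scipy.optimize import Bounds, LinearConstraint, milp

    # Assign each of 4 sources to one of 3 redshift bins; x[i, j] = 1
    # if source i is placed in bin j.
    n_src, n_bin = 4, 3
    rng = np.random.default_rng(2)
    cost = rng.uniform(size=(n_src, n_bin))

    c = cost.ravel()                               # objective: sum c_ij x_ij
    A = np.kron(np.eye(n_src), np.ones(n_bin))     # sum_j x_ij = 1 per source
    assign_once = LinearConstraint(A, lb=1, ub=1)

    res = milp(c, constraints=[assign_once],
               integrality=np.ones(c.size),        # all variables integer...
               bounds=Bounds(0, 1))                # ...and binary
    print(res.x.reshape(n_src, n_bin).round().astype(int))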
Quad-rotor flight path energy optimization
NASA Astrophysics Data System (ADS)
Kemper, Edward
Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP430 and the Raspberry Pi. Path-energy optimization is an area that is well developed for linear systems. In this thesis, the idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model derived for the problem at hand, yielding a set of partial differential equations and boundary value conditions with which to solve them. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used. This method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using Ziegler-Nichols' proportional integral derivative (PID) controller tuning technique. Finally, a brute-force look-up-table-based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
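For reference, the Ziegler-Nichols ultimate-sensitivity rules used in the heuristic stage reduce to a few constants once the ultimate gain Ku and oscillation period Tu have been measured; the values below are placeholders, not the thesis's:

    def ziegler_nichols_pid(Ku, Tu):
        # Classic ZN tuning: Ku = gain at sustained oscillation,
        # Tu = period of that oscillation.
        Kp = 0.6 * Ku
        Ti, Td = Tu / 2.0, Tu / 8.0
        return Kp, Kp / Ti, Kp * Td                # (Kp, Ki, Kd)

    Kp, Ki, Kd = ziegler_nichols_pid(Ku=8.0, Tu=0.5)
    print(Kp, Ki, Kd)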
Nonlinear Control of a Reusable Rocket Engine for Life Extension
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Holmes, Michael S.; Ray, Asok
1998-01-01
This paper presents the conceptual development of a life-extending control system where the objective is to achieve high performance and structural durability of the plant. A life-extending controller is designed for a reusable rocket engine via damage mitigation in both the fuel (H2) and oxidizer (O2) turbines while achieving high performance for transient responses of the combustion chamber pressure and the O2/H2 mixture ratio. The design procedure makes use of a combination of linear and nonlinear controller synthesis techniques and also allows adaptation of the life-extending controller module to augment a conventional performance controller of the rocket engine. The nonlinear aspect of the design is achieved using non-linear parameter optimization of a prescribed control structure. Fatigue damage in fuel and oxidizer turbine blades is primarily caused by stress cycling during start-up, shutdown, and transient operations of a rocket engine. Fatigue damage in the turbine blades is one of the most serious causes for engine failure.
Noninvasive and fast measurement of blood glucose in vivo by near infrared (NIR) spectroscopy
NASA Astrophysics Data System (ADS)
Jintao, Xue; Liming, Ye; Yufei, Liu; Chunyan, Li; Han, Chen
2017-05-01
The aim of this research was to develop a method for noninvasive and fast blood glucose assay in vivo. Near-infrared (NIR) spectroscopy, a more promising technique compared with other methods, was investigated in rats with diabetes and normal rats. Calibration models were generated by two different multivariate strategies: partial least squares (PLS) as a linear regression method and artificial neural networks (ANN) as a non-linear regression method. The PLS model was optimized by considering the spectral range, spectral pretreatment methods and number of model factors, while the ANN model was tuned by selecting the spectral pretreatment methods, network topology parameters, number of hidden neurons, and number of epochs. The results of the validation showed the two models were robust, accurate and repeatable. Compared with the ANN model, the performance of the PLS model was much better, with a lower root mean square error of validation (RMSEP) of 0.419 and a higher correlation coefficient (R) of 96.22%.
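A minimal scikit-learn rendering of the linear-versus-non-linear calibration comparison above, on synthetic spectra; the matrix sizes, target relation, and model settings are placeholders rather than the paper's:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # Stand-in for NIR spectra (500 wavelengths) vs. blood glucose level.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(120, 500))
    y = X[:, 40] - 0.5 * X[:, 300] + 0.1 * rng.normal(size=120)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
    ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                       random_state=0).fit(X_tr, y_tr)

    for name, model in (("PLS", pls), ("ANN", ann)):
        rmsep = mean_squared_error(y_te, np.ravel(model.predict(X_te))) ** 0.5
        print(name, "RMSEP =", round(rmsep, 3))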
Reference governors for controlled belt restraint systems
NASA Astrophysics Data System (ADS)
van der Laan, E. P.; Heemels, W. P. M. H.; Luijten, H.; Veldpaus, F. E.; Steinbuch, M.
2010-07-01
Today's restraint systems typically include a number of airbags, and a three-point seat belt with load limiter and pretensioner. For the class of real-time controlled restraint systems, the restraint actuator settings are continuously manipulated during the crash. This paper presents a novel control strategy for these systems. The control strategy developed here is based on a combination of model predictive control and reference management, in which a non-linear device - a reference governor (RG) - is added to a primal closed-loop controlled system. This RG determines an optimal setpoint in terms of injury reduction and constraint satisfaction by solving a constrained optimisation problem. Prediction of the vehicle motion, required to predict future constraint violation, is included in the design and is based on past crash data, using linear regression techniques. Simulation results with MADYMO models show that, with ideal sensors and actuators, a significant reduction (45%) of the peak chest acceleration can be achieved, without prior knowledge of the crash. Furthermore, it is shown that the algorithms are sufficiently fast to be implemented online.
Second Law of Thermodynamics Applied to Metabolic Networks
NASA Technical Reports Server (NTRS)
Nigam, R.; Liang, S.
2003-01-01
We present a simple algorithm, based on linear programming, that combines Kirchhoff's flux and potential laws and applies them to metabolic networks to predict thermodynamically feasible reaction fluxes. These laws represent the mass conservation and energy feasibility conditions widely used in electrical circuit analysis. Formulating Kirchhoff's potential law around a reaction loop in terms of the null space of the stoichiometric matrix leads to a simple representation of the law of entropy that can be readily incorporated into traditional flux balance analysis without resorting to non-linear optimization. Our technique is new in that it can easily check the fluxes obtained by flux balance analysis for thermodynamic feasibility and modify them if they are infeasible so that they satisfy the law of entropy. We illustrate our method by applying it to the network describing the central metabolism of Escherichia coli. Due to its simplicity, this algorithm will be useful in studying large-scale complex metabolic networks in the cells of different organisms.
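Since the method layers an entropy constraint on top of an ordinary flux balance linear program, the LP layer itself is easy to sketch. The three-metabolite toy network below is invented; the thermodynamic check described above would add further linear constraints built from null-space loop vectors:

    import numpy as np
    from scipy.optimize import linprog

    # Toy pathway A -> B -> C with uptake of A and export of C.
    # Columns: v_uptake, v1 (A->B), v2 (B->C), v_export; rows: A, B, C.
    S = np.array([[1, -1,  0,  0],
                  [0,  1, -1,  0],
                  [0,  0,  1, -1]])
    bounds = [(0, 10), (0, None), (0, None), (0, None)]

    # Maximize export (linprog minimizes, hence the sign flip); S v = 0 is
    # the Kirchhoff-style mass balance at every internal metabolite.
    res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=np.zeros(3), bounds=bounds)
    print(res.x)    # every flux pinned to the uptake limit: [10 10 10 10]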
Phase-matched generation of coherent soft and hard X-rays using IR lasers
Popmintchev, Tenio V.; Chen, Ming-Chang; Bahabad, Alon; Murnane, Margaret M.; Kapteyn, Henry C.
2013-06-11
Phase-matched high-order harmonic generation of soft and hard X-rays is accomplished using infrared driving lasers in a high-pressure non-linear medium. The pressure of the non-linear medium is increased to multiple atmospheres, and a mid-IR (or higher) laser device provides the driving pulse. Based on this scaling, a general method for globally optimizing the flux of phase-matched high-order harmonic generation at a desired wavelength is also designed.
SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotoku, J; Nakabayashi, S; Kumagai, S
Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple view stereo technique, which is known from 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction model was built by locating SIFT features, which are robust to rotation and shift variance, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error by use of the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin is reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128).
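The geometric core of that pipeline, the RANSAC-wrapped eight-point estimate of the fundamental matrix, is a single call in OpenCV. The two camera views below are synthesized so the sketch runs stand-alone; in the workflow above, the matched points come from SIFT:

    import cv2
    import numpy as np

    # Synthetic two-view geometry: random 3D points seen by two cameras.
    rng = np.random.default_rng(4)
    X = rng.uniform(-1, 1, size=(60, 3)) + [0.0, 0.0, 5.0]    # in front
    K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    R = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))[0]           # small rotation
    P2 = K @ np.hstack([R, np.array([[0.5], [0.0], [0.0]])])  # plus baseline

    def project(P, pts3d):
        h = np.hstack([pts3d, np.ones((len(pts3d), 1))])
        x = (P @ h.T).T
        return (x[:, :2] / x[:, 2:]).astype(np.float32)

    pts1, pts2 = project(P1, X), project(P2, X)
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    print(F)                                  # 3x3, rank-2 fundamental matrix
    print(int(mask.sum()), "inliers kept for bundle adjustment")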
NASA Astrophysics Data System (ADS)
Shojaeefard, Mohammad Hassan; Khalkhali, Abolfazl; Faghihian, Hamed; Dahmardeh, Masoud
2018-03-01
Unlike conventional approaches, where optimization is performed on a unique component of a specific product, optimum design of a set of components for use across a product family can significantly reduce costs. Increasing the commonality and performance of the product platform simultaneously is a multi-objective optimization problem (MOP). Several optimization methods have been reported to solve such MOPs. What is less discussed, however, is how to find the trade-off points among the obtained non-dominated optimum points. This article investigates the optimal design of a product family using the non-dominated sorting genetic algorithm II (NSGA-II) and proposes employing the technique for order of preference by similarity to ideal solution (TOPSIS) to find the trade-off points among the obtained non-dominated results while compromising all objective functions together. A case study for a family of suspension systems is presented, considering performance and commonality. The results indicate the effectiveness of the proposed method in obtaining trade-off points with the best possible performance while maximizing the common parts.
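TOPSIS itself is compact enough to show in full: vector-normalize the decision matrix, weight it, and rank alternatives by relative closeness to the ideal point. The candidate designs and equal weights below are hypothetical, not the suspension case study's:

    import numpy as np

    def topsis(scores, weights, benefit):
        # Rank alternatives (rows) over criteria (columns);
        # benefit[j] is True when larger is better for criterion j.
        Z = scores / np.linalg.norm(scores, axis=0)
        Z = Z * weights
        ideal = np.where(benefit, Z.max(axis=0), Z.min(axis=0))
        worst = np.where(benefit, Z.min(axis=0), Z.max(axis=0))
        d_best = np.linalg.norm(Z - ideal, axis=1)
        d_worst = np.linalg.norm(Z - worst, axis=1)
        return d_worst / (d_best + d_worst)        # closeness in [0, 1]

    # Hypothetical non-dominated designs: (performance, commonality).
    pareto = np.array([[0.90, 0.40],
                       [0.80, 0.60],
                       [0.65, 0.85]])
    closeness = topsis(pareto, weights=np.array([0.5, 0.5]),
                       benefit=np.array([True, True]))
    print(pareto[np.argmax(closeness)])            # the trade-off design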
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Scott A; Catalfamo, Simone; Brake, Matthew R. W.
2017-01-01
In the study of the dynamics of nonlinear systems, experimental measurements often convolute the response of the nonlinearity of interest and the effects of the experimental setup. To reduce the influence of the experimental setup on the deduction of the parameters of the nonlinearity, the response of a mechanical joint is investigated under various experimental setups. These experiments first focus on quantifying how support structures and measurement techniques affect the natural frequency and damping of a linear system. The results indicate that support structures created from bungees have a negligible influence on the system in terms of frequency and damping ratio variations. The study then focuses on the effects of the excitation technique on the response of a linear system. The findings suggest that thinner stingers should not be used, because under the high force requirements the stinger bending modes are excited, adding unwanted torsional coupling. The optimal configuration for testing the linear system is then applied to a nonlinear system in order to assess the robustness of the test configuration. Finally, recommendations are made for conducting experiments on nonlinear systems using conventional/linear testing techniques.
Cotter, Meghan M.; Whyms, Brian J.; Kelly, Michael P.; Doherty, Benjamin M.; Gentry, Lindell R.; Bersu, Edward T.; Vorperian, Houri K.
2015-01-01
The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared to corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. PMID:25810349
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or another suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
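A toy rendering of the overall recipe: solve the continuous relaxation, round it to a feasible integer start, then make Hooke-Jeeves-style exploratory moves over unit neighborhoods. The two-variable problem is invented and the loop is a drastic simplification of IESIP, not its Pascal source:

    import numpy as np
    from scipy.optimize import linprog

    # Maximize 3x + 2y s.t. 2x + y <= 10, x + 3y <= 15, x, y >= 0 integer.
    c = np.array([-3.0, -2.0])                   # linprog minimizes
    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([10.0, 15.0])
    feasible = lambda x: np.all(A @ x <= b) and np.all(x >= 0)

    # Step 1: continuous solution, floored to a feasible integer start.
    x = np.floor(linprog(c, A_ub=A, b_ub=b).x + 1e-9)

    # Step 2: exploratory moves over the unit neighborhood until no
    # coordinate step improves the objective.
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            for step in (1.0, -1.0):
                trial = x.copy()
                trial[i] += step
                if feasible(trial) and c @ trial < c @ x:
                    x, improved = trial, True
    print(x, "objective =", -(c @ x))            # (3, 4), objective 17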
Human Performance on Hard Non-Euclidean Graph Problems: Vertex Cover
ERIC Educational Resources Information Center
Carruthers, Sarah; Masson, Michael E. J.; Stege, Ulrike
2012-01-01
Recent studies on a computationally hard visual optimization problem, the Traveling Salesperson Problem (TSP), indicate that humans are capable of finding close to optimal solutions in near-linear time. The current study is a preliminary step in investigating human performance on another hard problem, the Minimum Vertex Cover Problem, in which…
Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing
2017-04-20
The problem of finding the number and optimal positions of relay nodes for restoring the network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard) and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial time heuristic algorithm, namely, Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on Minimum Spanning Tree (MST), Euclidean Steiner Minimal Tree (ESMT) or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes, and then linear programming is applied for choosing the optimal relay nodes and computing their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes, by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals, as well as any density distribution of terminals. The performance and complexity of RPSNC are analyzed and its performance is validated through simulation experiments.
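Of the stages listed above, the candidate-generation step is the easiest to sketch: scipy's Delaunay triangulation of the terminals yields triangles whose interior points (centroids here, as a simplification of the paper's non-uniform partitioning) serve as candidate relay sites for the subsequent linear program:

    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(9)
    terminals = rng.uniform(0, 100, size=(12, 2))        # made-up WSN terminals

    tri = Delaunay(terminals)
    candidates = terminals[tri.simplices].mean(axis=1)   # triangle centroids
    print(len(candidates), "candidate relay positions")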
NASA Astrophysics Data System (ADS)
Kumar, Devendra; Singh, Jagdev; Baleanu, Dumitru
2018-02-01
The mathematical model of breaking of non-linear dispersive water waves with memory effect is very important in mathematical physics. In the present article, we examine a novel fractional extension of the non-linear Fornberg-Whitham equation occurring in wave breaking. We consider the most recent theory of differentiation involving the non-singular kernel based on the extended Mittag-Leffler-type function to modify the Fornberg-Whitham equation. We examine the existence of the solution of the non-linear Fornberg-Whitham equation of fractional order. Further, we show the uniqueness of the solution. We obtain the numerical solution of the new arbitrary order model of the non-linear Fornberg-Whitham equation with the aid of the Laplace decomposition technique. The numerical outcomes are displayed in the form of graphs and tables. The results indicate that the Laplace decomposition algorithm is a very user-friendly and reliable scheme for handling such type of non-linear problems of fractional order.
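For reference, the classical integer-order Fornberg-Whitham equation has the standard form below (quoted from the general literature rather than from this paper); the fractional model discussed above replaces the time derivative with an Atangana-Baleanu derivative built on the Mittag-Leffler kernel:

    u_t - u_{xxt} + u_x = u\,u_{xxx} - u\,u_x + 3\,u_x u_{xx},

with the fractional extension substituting ${}^{ABC}_{\;\;0}D_t^{\alpha}u$ for $u_t$, where $0 < \alpha \le 1$.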
Galerkin v. discrete-optimal projection in nonlinear model reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir
Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.
Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa
2018-02-01
Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
Wang, Yu; Zhang, Yaonan; Yao, Zhaomin; Zhao, Ruixue; Zhou, Fengfeng
2016-01-01
Non-lethal macular diseases greatly impact patients' quality of life and cause vision loss at the late stages. Visual inspection of optical coherence tomography (OCT) images by experienced clinicians is the main diagnostic technique. We proposed a computer-aided diagnosis (CAD) model to discriminate age-related macular degeneration (AMD), diabetic macular edema (DME) and healthy maculae. The linear configuration pattern (LCP) based features of the OCT images were screened by the Correlation-based Feature Subset (CFS) selection algorithm. The best model, based on the sequential minimal optimization (SMO) algorithm, achieved 99.3% overall accuracy for the three classes of samples. PMID:28018716
NASA Technical Reports Server (NTRS)
Devasia, Santosh
1996-01-01
A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics is presented. This approach integrates stable inversion techniques, which achieve exact tracking, with approximation techniques, which modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics is used (1) to remove non-hyperbolicity, which is an obstruction to applying stable inversion techniques, and (2) to reduce the large pre-actuation time needed to apply stable inversion in near non-hyperbolic cases. The method is applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics to illustrate the trade-off between exact tracking and reduction of pre-actuation time.
NASA Astrophysics Data System (ADS)
Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.
1992-10-01
The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates radiological measurements with atmospheric dispersion modeling, resulting in more accurate estimates of the particle-size distribution and particle injection height when compared with measurements of high-explosive dispersal of (239)Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimating ADPIC model input parameters such as the particle size mean aerodynamic diameter, the geometric standard deviation, and the largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model-predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainty. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, which is very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation of the particle size, as well as a slightly weaker particle-to-cloud coupling than previously reported.
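The estimation loop described above is a bounded non-linear least-squares fit: adjust source-term parameters within their uncertainty ranges until modeled concentrations match the measurements. The sketch below substitutes a simple log-normal response for the ADPIC dispersion model, with wholly invented parameters, bounds, and data:

    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical forward model: concentration vs. sampler coordinate x for
    # parameters (mean aerodynamic diameter d, geometric std s, coupling k).
    def predicted_conc(theta, x):
        d, s, k = theta
        return k * np.exp(-(np.log(x) - np.log(d)) ** 2 / (2 * s ** 2))

    rng = np.random.default_rng(5)
    x_obs = np.geomspace(0.1, 100, 20)
    y_obs = predicted_conc((5.0, 0.8, 2.0), x_obs) \
            * (1 + 0.05 * rng.normal(size=20))

    # Bounds keep each parameter inside its assumed uncertainty range,
    # mirroring the constrained regression in the abstract.
    fit = least_squares(lambda t: predicted_conc(t, x_obs) - y_obs,
                        x0=[1.0, 0.5, 1.0],
                        bounds=([0.1, 0.1, 0.1], [50.0, 3.0, 10.0]))
    print(np.round(fit.x, 2))                    # near (5.0, 0.8, 2.0)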
Quantum state engineering of light with continuous-wave optical parametric oscillators.
Morin, Olivier; Liu, Jianli; Huang, Kun; Barbosa, Felippe; Fabre, Claude; Laurat, Julien
2014-05-30
Engineering non-classical states of the electromagnetic field is a central quest for quantum optics(1,2). Beyond their fundamental significance, such states are the resources for implementing various protocols, ranging from enhanced metrology to quantum communication and computing. A variety of devices can be used to generate non-classical states, such as single emitters, light-matter interfaces or non-linear systems(3). We focus here on the use of a continuous-wave optical parametric oscillator(3,4). This system is based on a non-linear χ(2) crystal inserted inside an optical cavity and is now well known as a very efficient source of non-classical light, such as single-mode or two-mode squeezed vacuum, depending on the crystal phase matching. Squeezed vacuum is a Gaussian state, as its quadrature distributions follow Gaussian statistics. However, it has been shown that a number of protocols require non-Gaussian states(5). Generating such states directly is a difficult task and would require strong χ(3) non-linearities. Another procedure, probabilistic but heralded, consists in using a measurement-induced non-linearity via a conditional preparation technique operated on Gaussian states. Here, we detail this generation protocol for two non-Gaussian states, the single-photon state and a superposition of coherent states, using two differently phase-matched parametric oscillators as primary resources. This technique enables high fidelity with the targeted state and generation of the state in a well-controlled spatiotemporal mode.
Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein
2018-09-15
A novel hybrid approach is presented that can more accurately predict monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. This new hybrid method was evaluated under four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case, while in the other three scenarios one-step and two-step procedures are utilized to make the model predictions more precise. These scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, which is a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11013 models (10785 linear models, 4 nonlinear models, and 224 hybrid models). The uncertainty of the linear, nonlinear and hybrid models is examined by the Monte Carlo technique. The best preprocessing technique is the Johnson normality transform followed by seasonal standardization (R 2 = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1, MARE = 0.06, UI = 0.03 & UII = 0.05). The results of the uncertainty analysis indicated the good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology were compared with an evolutionary hybrid of the adaptive neuro-fuzzy inference system with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid method outperformed the ANFIS-FFA method. Copyright © 2018 Elsevier Ltd. All rights reserved.
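A condensed sketch of the two-step preprocessing used in the best scenario: a normality transform followed by seasonal standardization. SciPy ships Box-Cox rather than the Johnson transform the authors found best, so Box-Cox stands in here, and the monthly rainfall series is synthetic:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    months = np.arange(240) % 12
    rain = 50 + 30 * np.sin(2 * np.pi * months / 12) + rng.gamma(2.0, 10.0, 240)

    rain_bc, lam = stats.boxcox(rain)      # needs strictly positive data
    mu = np.array([rain_bc[months == m].mean() for m in range(12)])
    sd = np.array([rain_bc[months == m].std(ddof=1) for m in range(12)])
    rain_std = (rain_bc - mu[months]) / sd[months]   # input to the linear model

    print(round(lam, 2), round(rain_std.mean(), 3), round(rain_std.std(), 3))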
Morozova, Maria; Koschutnig, Karl; Klein, Elise; Wood, Guilherme
2016-01-15
Non-linear effects of age on white matter integrity are ubiquitous in the brain and indicate that these effects are more pronounced in certain brain regions at specific ages. Box-Cox analysis is a technique to increase the log-likelihood of linear relationships between variables by means of monotonic non-linear transformations. Here we employ Box-Cox transformations to flexibly and parsimoniously determine the degree of non-linearity of age-related effects on white matter integrity by means of model comparisons using a voxel-wise approach. Analysis of white matter integrity in a sample of adults between 20 and 89 years of age (n = 88) revealed that considerable portions of the white matter in the corpus callosum, cerebellum, pallidum, brainstem, superior occipito-frontal fascicle and optic radiation show non-linear effects of age. Global analyses revealed an increase in the average non-linearity from fractional anisotropy to radial diffusivity, axial diffusivity, and mean diffusivity. These results suggest that Box-Cox transformations are a useful and flexible tool to investigate more complex non-linear effects of age on white matter integrity and extend the functionality of the Box-Cox analysis in neuroimaging. Copyright © 2015 Elsevier Inc. All rights reserved.
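As a concrete miniature of this approach, the sketch below profiles a Box-Cox exponent for the predictor against the Gaussian log-likelihood of a linear fit, using scipy.stats.boxcox; an optimal exponent far from 1 flags a non-linear age effect. The data are synthetic stand-ins, not the study's diffusion measures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
age = rng.uniform(20, 89, 88)
fa = 0.55 - 3e-5 * (age - 40) ** 2 + rng.normal(0, 0.01, 88)  # synthetic non-linear decline

def loglik(lmb):
    """Profile log-likelihood of a linear fit after Box-Cox transforming age."""
    x = stats.boxcox(age, lmbda=lmb)              # monotone non-linear transform
    slope, icpt, *_ = stats.linregress(x, fa)
    resid = fa - (icpt + slope * x)
    n = fa.size
    return -n / 2 * np.log(np.sum(resid ** 2) / n)

lambdas = np.linspace(-3, 3, 121)
best_lambda = max(lambdas, key=loglik)            # far from 1 => non-linear age effect
```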
Single-machine common/slack due window assignment problems with linear decreasing processing times
NASA Astrophysics Data System (ADS)
Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia
2017-08-01
This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.
Narayanan, Neethu; Gupta, Suman; Gajbhiye, V T; Manjaiah, K M
2017-04-01
A carboxymethyl cellulose-nano organoclay (nano montmorillonite modified with 35-45 wt % dimethyl dialkyl (C14-C18) amine (DMDA)) composite was prepared by the solution intercalation method. The prepared composite was characterized by infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM). The composite was evaluated for its sorption efficiency for the pesticides atrazine, imidacloprid and thiamethoxam. The sorption data were fitted to Langmuir and Freundlich isotherms using linear and non-linear methods. The linear regression method suggested that the sorption data fitted best to the Type II Langmuir and Freundlich isotherms. To avoid the bias resulting from linearization, seven different error parameters were also analyzed by the non-linear regression method. The non-linear error analysis suggested that the sorption data fitted the Langmuir model better than the Freundlich model. The maximum sorption capacity, Q0 (μg/g), was highest for imidacloprid (2000), followed by thiamethoxam (1667) and atrazine (1429). The study suggests that the coefficient of determination of linear regression alone cannot be used for comparing the fit of the Langmuir and Freundlich models, and that non-linear error analysis needs to be done to avoid inaccurate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
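The linear-versus-non-linear fitting contrast is easy to reproduce. The sketch below fits both isotherms directly in their non-linear forms with scipy.optimize.curve_fit and compares a sum-of-squared-errors criterion; the equilibrium data are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, Q0, b):
    return Q0 * b * C / (1 + b * C)        # q = Q0 b C / (1 + b C)

def freundlich(C, Kf, n):
    return Kf * C ** (1.0 / n)             # q = Kf C^(1/n)

# Hypothetical equilibrium data (C in ug/L, q in ug/g); not the paper's values.
C = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
q = np.array([310.0, 560.0, 990.0, 1350.0, 1650.0, 1850.0])

for model, p0, name in [(langmuir, (2000.0, 0.01), "Langmuir"),
                        (freundlich, (100.0, 2.0), "Freundlich")]:
    popt, _ = curve_fit(model, C, q, p0=p0, maxfev=10000)
    sse = np.sum((q - model(C, *popt)) ** 2)   # one of several possible error criteria
    print(name, popt, sse)
```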
Sum-of-Squares-Based Region of Attraction Analysis for Gain-Scheduled Three-Loop Autopilot
NASA Astrophysics Data System (ADS)
Seo, Min-Won; Kwon, Hyuck-Hoon; Choi, Han-Lim
2018-04-01
A conventional method of designing a missile autopilot is to linearize the original nonlinear dynamics at several trim points, then to determine linear controllers for each linearized model, and finally to implement a gain-scheduling technique. The validation of such a controller is often based on linear system analysis for the linear closed-loop system at the trim conditions. Although this type of gain-scheduled linear autopilot works well in practice, validation based solely on linear analysis may not be sufficient to fully characterize the closed-loop system, especially when the aerodynamic coefficients exhibit substantial nonlinearity with respect to the flight condition. The purpose of this paper is to present a methodology for analyzing the stability of a gain-scheduled controller in a setting close to the original nonlinear one. The method is based on sum-of-squares (SOS) optimization, which can be used to characterize the region of attraction of a polynomial system by solving convex optimization problems. The applicability of the proposed SOS-based methodology is verified on a short-period autopilot of a skid-to-turn missile.
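SOS programming itself needs a dedicated solver, so as a conceptual stand-in the sketch below estimates a region of attraction for a toy polynomial system by growing a quadratic Lyapunov level set and sampling it, checking that the Lyapunov derivative stays negative. This naive sampling check only illustrates what an SOS certificate establishes rigorously; the dynamics and all values are invented, not the missile model.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy polynomial system: x1' = -x1 + x2, x2' = -x2 + x1**2 (illustrative only).
A = np.array([[-1.0, 1.0], [0.0, -1.0]])            # linearization at the origin
P = solve_continuous_lyapunov(A.T, -np.eye(2))      # V(x) = x'Px, A'P + PA = -I

def vdot(x):
    f = np.array([-x[0] + x[1], -x[1] + x[0] ** 2])  # full nonlinear vector field
    return 2 * x @ P @ f                             # d/dt of x'Px along trajectories

# Grow the level set V(x) <= c while Vdot < 0 holds at sampled level-set points.
rng = np.random.default_rng(0)
c_est = 0.0
for c in np.linspace(0.05, 10.0, 200):
    dirs = rng.normal(size=(500, 2))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    scale = np.sqrt(c / np.einsum('ij,jk,ik->i', dirs, P, dirs))
    pts = dirs * scale[:, None]                      # points on the ellipse x'Px = c
    if all(vdot(p) < 0 for p in pts):
        c_est = c                                    # certified (heuristically) so far
    else:
        break
```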
Electrochemical degradation, kinetics & performance studies of solid oxide fuel cells
NASA Astrophysics Data System (ADS)
Das, Debanjan
Linear and non-linear electrochemical characterization techniques and equivalent circuit modelling were carried out on miniature and sub-commercial Solid Oxide Fuel Cell (SOFC) stacks as an in-situ diagnostic approach to evaluate and analyze their performance under simulated alternative fuel conditions. The main focus of the study was to track changes in cell behavior and response live, as the cell was generating power. Electrochemical Impedance Spectroscopy (EIS) was the most important linear AC technique used for the study. The distinct effects on SOFC behavior of inorganic components usually present in hydrocarbon fuel reformates were determined, allowing identification of possible "fingerprint" impedance behavior corresponding to specific fuel conditions and reaction mechanisms. Critical electrochemical processes and degradation mechanisms that might affect cell performance were identified and quantified. Sulfur and siloxane cause the most prominent degradation, and the associated electrochemical cell parameters, the Gerischer and Warburg elements respectively, were applied for a better understanding of the degradation processes. Electrochemical Frequency Modulation (EFM) was applied for kinetic studies in SOFCs for the very first time, estimating the exchange current density and transfer coefficients. EFM is a non-linear in-situ electrochemical technique conceptually different from EIS; it is used extensively in corrosion work but has rarely been used on fuel cells until now. EFM is based on exploring the non-linear higher harmonic contributions arising from potential perturbations of electrochemical systems, information not obtainable by EIS. The baseline fuel was 3% humidified hydrogen, and a 5-cell sub-commercial planar SOFC stack was used to perform the analysis. Traditional methods such as EIS and Tafel analysis were carried out at similar operating conditions to verify and correlate with the EFM data and ensure the validity of the obtained information. The obtained values range from around 11 mA cm-2 to 16 mA cm-2 with reasonable repeatability and excellent accuracy. The potential advantages of EFM compared to traditional methods were realized, and our demonstration of this technique on an SOFC system is presented, which can act as a starting point for future research efforts in this area. Finally, an approach based on in-situ State of Health tests by EIS was formulated and investigated to understand the most efficient fuel conditions for long-term operation of a solid oxide fuel cell stack under power generation conditions. The procedure helped to isolate the individual effects of the three most important fuel characteristics (CO/H2 volumetric ratio, S/C ratio and fuel utilization) under a simulated alternative fuel at 0.4 A cm-2. Variation tests helped to identify the corresponding electrochemical/chemical processes and to narrow down the optimal operating regimes, considering the practical behavior of reformer-SOFC system arrangements. At the end, 8 different combinations of the optimized parameters were tested long term with the stack, and the most efficient blend was determined.
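Equivalent-circuit fitting of an impedance spectrum, the workhorse of the EIS analysis described above, can be sketched in a few lines. The example fits a simple R0 + (R1 ∥ C1) circuit to a synthetic spectrum with scipy.optimize.least_squares; a real SOFC analysis would swap in Gerischer/Warburg elements and measured data.

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(p, f):
    """Impedance of R0 + (R1 || C1): a minimal Randles-type stand-in for the
    Gerischer/Warburg circuits used in the actual analysis."""
    R0, R1, C1 = p
    w = 2 * np.pi * f
    return R0 + R1 / (1 + 1j * w * R1 * C1)

def residuals(p, f, z):
    d = z_model(p, f) - z
    return np.concatenate([d.real, d.imag])   # fit real and imaginary parts jointly

rng = np.random.default_rng(0)
f = np.logspace(-1, 4, 40)                    # frequencies in Hz
z_meas = z_model([0.1, 0.5, 0.05], f) + 0.002 * (rng.normal(size=40)
                                                 + 1j * rng.normal(size=40))
fit = least_squares(residuals, x0=[0.05, 0.3, 0.01], args=(f, z_meas))
R0, R1, C1 = fit.x                            # recovered circuit parameters
```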
Faruque, Imraan A; Muijres, Florian T; Macfarlane, Kenneth M; Kehlenbeck, Andrew; Humbert, J Sean
2018-06-01
This paper presents "optimal identification," a framework for using experimental data to identify the optimality conditions associated with the feedback control law implemented in the measurements. The technique compares closed loop trajectory measurements against a reduced order model of the open loop dynamics, and uses linear matrix inequalities to solve an inverse optimal control problem as a convex optimization that estimates the controller optimality conditions. In this study, the optimal identification technique is applied to two examples, that of a millimeter-scale micro-quadrotor with an engineered controller on board, and the example of a population of freely flying Drosophila hydei maneuvering about forward flight. The micro-quadrotor results show that the performance indices used to design an optimal flight control law for a micro-quadrotor may be recovered from the closed loop simulated flight trajectories, and the Drosophila results indicate that the combined effect of the insect longitudinal flight control sensing and feedback acts principally to regulate pitch rate.
NASA Astrophysics Data System (ADS)
J, Joy Sebastian Prakash; G, Vinitha; Ramachandran, Murugesan; Rajamanickam, Karunanithi
2017-10-01
Three different stabilizing agents, namely L-cysteine, thioglycolic acid and cysteamine hydrochloride, were used to synthesize Cd(Zn)Se quantum dots (QDs). The QDs were characterized using UV-vis spectroscopy, X-ray diffraction (XRD) and transmission electron microscopy (TEM). The non-linear optical properties (non-linear absorption and non-linear refraction) of the synthesized Cd(Zn)Se quantum dots were studied with the z-scan technique using a diode-pumped continuous-wave laser system at a wavelength of 532 nm. Our synthesized (organic) quantum dots showed optical properties similar to those of the inorganic materials reported elsewhere.
Automatic design of synthetic gene circuits through mixed integer non-linear programming.
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.
Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical models based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.
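To make the NLP reformulation concrete, the sketch below fits the parameters of a two-step cascade of normalized Hill transfer functions, the kind of building block used in constrained fuzzy logic models, to toy response data with scipy.optimize.minimize. Everything here, data included, is an invented miniature of the real problem.

```python
import numpy as np
from scipy.optimize import minimize

def hill(x, k, n):
    """Normalized Hill transfer function, a common logic-model building block."""
    return x ** n / (k ** n + x ** n)

# Toy two-step pathway: ligand -> kinase -> readout; the data are hypothetical.
ligand = np.array([0.0, 0.1, 0.25, 0.5, 1.0])
readout = np.array([0.02, 0.15, 0.42, 0.71, 0.88])

def sse(theta):
    k1, n1, k2, n2 = np.abs(theta)               # keep parameters positive
    pred = hill(hill(ligand, k1, n1), k2, n2)    # composed transfer functions
    return np.sum((pred - readout) ** 2)

res = minimize(sse, x0=[0.3, 2.0, 0.3, 2.0], method="Nelder-Mead")
k1, n1, k2, n2 = np.abs(res.x)
```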
NASA Astrophysics Data System (ADS)
Morén, B.; Larsson, T.; Carlsson Tedgren, Å.
2018-03-01
High dose-rate brachytherapy is a method for cancer treatment where the radiation source is placed within the body, inside or close to a tumour. For dose planning, mathematical optimization techniques are being used in practice and the most common approach is to use a linear model which penalizes deviations from specified dose limits for the tumour and for nearby organs. This linear penalty model is easy to solve, but its weakness lies in the poor correlation of its objective value and the dose-volume objectives that are used clinically to evaluate dose distributions. Furthermore, the model contains parameters that have no clear clinical interpretation. Another approach for dose planning is to solve mixed-integer optimization models with explicit dose-volume constraints which include parameters that directly correspond to dose-volume objectives, and which are therefore tangible. The two mentioned models take the overall goals for dose planning into account in fundamentally different ways. We show that there is, however, a mathematical relationship between them by deriving a linear penalty model from a dose-volume model. This relationship has not been established before and improves the understanding of the linear penalty model. In particular, the parameters of the linear penalty model can be interpreted as dual variables in the dose-volume model.
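A minimal version of the linear penalty model can be written directly as an LP. The sketch below penalizes tumour voxels falling under a prescribed dose L, with dwell times as variables, using scipy.optimize.linprog; the dose kernel and limits are random placeholders, and a clinical model would add organ-at-risk terms and weights.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_vox, n_dwell = 60, 8
D = rng.uniform(0.1, 1.0, (n_vox, n_dwell))   # hypothetical dose-rate kernel
L = 10.0                                      # prescribed tumour dose level

# Variables: [t (dwell times), s (per-voxel underdose slack)].
# minimize eps*sum(t) + sum(s)  s.t.  D t + s >= L,  D t <= 1.5 L,  t, s >= 0
c = np.concatenate([0.01 * np.ones(n_dwell), np.ones(n_vox)])
A_ub = np.vstack([np.hstack([-D, -np.eye(n_vox)]),             # -(D t + s) <= -L
                  np.hstack([D, np.zeros((n_vox, n_vox))])])   # D t <= 1.5 L
b_ub = np.concatenate([-L * np.ones(n_vox), 1.5 * L * np.ones(n_vox)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (n_dwell + n_vox))
dwell_times = res.x[:n_dwell]
```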
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabibian, A; Kim, A; Rose, J
Purpose: A novel optimization technique was developed for field-in-field (FIF) chestwall radiotherapy using bolus every other day. The dosimetry was compared to the currently used optimization. Methods: The prior five patients treated at our clinic to the chestwall and supraclavicular nodes with a mono-isocentric four-field arrangement were selected for this study. The prescription was 5040 cGy in 28 fractions, 5 mm bolus every other day on the tangent fields, 6 and/or 10 MV x-rays, and multileaf collimation. Novelly, the tangent FIF segments were forward-planned and optimized based on the composite bolus and non-bolus dose distributions simultaneously. The prescription was split into 14 fractions each for the bolus and non-bolus tangents. The same segments and monitor units were used for the bolus and non-bolus treatment. The plan was optimized until the desired coverage was achieved, 105% hotspots were minimized, and the maximum dose was below 108%. Each tangential field had fewer than 5 segments. Comparison plans were generated using FIF optimization with the same dosimetric goals, but using only the non-bolus calculation for FIF optimization. The non-bolus fields were then copied and bolus was applied. The same segments and monitor units were used for the bolus and non-bolus segments. Results: The prescription coverage of the chestwall, as defined by RTOG guidelines, was on average 51.8% for the plans that optimized bolus and non-bolus treatments simultaneously (SB) and 43.8% for the plans optimized to the non-bolus treatments (NB). Chestwall coverage at 90% of the prescription averaged 80.4% for SB and 79.6% for NB plans. The volume receiving 105% of the prescription was 1.9% for SB and 0.8% for NB plans on average. Conclusion: Simultaneously optimizing for bolus and non-bolus treatments noticeably improves prescription coverage of the chestwall while maintaining similar hotspots and 90% prescription coverage in comparison to optimizing only to non-bolus treatments.
NASA Astrophysics Data System (ADS)
Masian, Y.; Sivak, A.; Sevostianov, D.; Vassiliev, V.; Velichansky, V.
The paper presents results of studies of small-size rubidium cells with argon and neon buffer gases, produced by a patent-pending technique of laser welding [Fishman et al. (2014)]. The cells were designed for a miniature frequency standard. The temperature dependence of the frequency of the coherent population trapping (CPT) resonance was measured and used to optimize the ratio of partial pressures of the buffer gases. The influence of the duration and regime of annealing on the CPT-resonance frequency drift was investigated. The parameters of the FM modulation of the laser current were determined for two cases, corresponding to the highest amplitude of the CPT resonance and to the smallest light shifts of the resonance frequency. The temperature dependences of the CPT resonance frequency were found to be surprisingly different in the two cases. A non-linear dependence of the CPT resonance frequency on cell temperature, with two extrema, was revealed for one of these cases.
Application of genetic algorithms to focal mechanism determination
NASA Astrophysics Data System (ADS)
Kobayashi, Reiji; Nakanishi, Ichiro
1994-04-01
Genetic algorithms are a new class of methods for global optimization. They resemble Monte Carlo techniques, but search for solutions more efficiently than uniform Monte Carlo sampling. In the field of geophysics, genetic algorithms have recently been used to solve some non-linear inverse problems (e.g., earthquake location, waveform inversion, migration velocity estimation). We present an application of genetic algorithms to focal mechanism determination from first-motion polarities of P-waves and apply our method to two recent large events, the Kushiro-oki earthquake of January 15, 1993 and the SW Hokkaido (Japan Sea) earthquake of July 12, 1993. An initial solution and the curvature information of the objective function that gradient methods need are not required in our approach. Moreover, globally optimal solutions can be obtained efficiently. Calculation of polarities based on double-couple models is the most time-consuming part of the source mechanism determination. The amount of calculation required by the method designed in this study is much less than that of previous grid search methods.
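The genetic-algorithm machinery itself is compact. The sketch below evolves (strike, dip, rake) triples with truncation selection, arithmetic crossover and Gaussian mutation against a smooth toy misfit; a real implementation would replace the fitness with the polarity-agreement count described above. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Stand-in misfit: the count of matched P-wave polarities would go here;
    we use a smooth toy function of (strike, dip, rake) instead."""
    return -np.sum((params - np.array([30.0, 45.0, 90.0])) ** 2)

# Initial population sampled over the (strike, dip, rake) box.
pop = rng.uniform([0, 0, -180], [360, 90, 180], size=(50, 3))
for gen in range(100):
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)[::-1][:25]]           # truncation selection
    mates = parents[rng.permutation(25)]
    alpha = rng.uniform(size=(25, 1))
    children = alpha * parents + (1 - alpha) * mates  # arithmetic crossover
    children += rng.normal(0, 2.0, children.shape)    # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
```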
USDA-ARS?s Scientific Manuscript database
Immunohistochemical (IHC) and immunofluorescent (IF) techniques were optimized for the detection of foot-and-mouth disease virus (FMDV) structural and non-structural proteins in frozen and paraformaldehyde-fixed paraffin embedded (PFPE) tissues of bovine and porcine origin. Immunohistochemical local...
Estimating monotonic rates from biological data using local linear regression.
Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R
2017-03-01
Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
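The core idea, scanning windows of a noisy time series and keeping the best local linear fit, can be sketched as below. This is a simplified Python stand-in for the package's weighting and diagnostics (LoLinR itself is an R package), with synthetic data and a deliberately crude window-selection criterion.

```python
import numpy as np

def best_local_rate(t, y, min_window=15):
    """Scan contiguous windows, fit a line to each, and keep the window with
    the lowest residual variance per point (a crude criterion; LoLinR's own
    weighting and diagnostics are more careful)."""
    best_score, best_slope = np.inf, None
    n = t.size
    for i in range(n - min_window):
        for j in range(i + min_window, n + 1):
            slope, icpt = np.polyfit(t[i:j], y[i:j], 1)
            score = np.mean((y[i:j] - (icpt + slope * t[i:j])) ** 2) / (j - i)
            if score < best_score:
                best_score, best_slope = score, slope
    return best_slope

# Toy oxygen-type trace: a linear decline bracketed by short non-linear plateaus.
t = np.linspace(0, 60, 120)
y = np.where(t < 5, 7.5, np.where(t > 55, 2.5, 8.0 - 0.1 * t))
y += np.random.default_rng(0).normal(0, 0.02, t.shape)
rate = best_local_rate(t, y)   # should recover roughly -0.1
```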
Chandrasekhar equations for infinite dimensional systems
NASA Technical Reports Server (NTRS)
Ito, K.; Powers, R. K.
1985-01-01
Chandrasekhar equations are derived for linear time invariant systems defined on Hilbert spaces using a functional analytic technique. An important consequence of this is that the solution to the evolutional Riccati equation is strongly differentiable in time and one can define a strong solution of the Riccati differential equation. A detailed discussion on the linear quadratic optimal control problem for hereditary differential systems is also included.
An Interactive Method to Solve Infeasibility in Linear Programming Test Assembling Models
ERIC Educational Resources Information Center
Huitzing, Hiddo A.
2004-01-01
In optimal assembly of tests from item banks, linear programming (LP) models have proved to be very useful. Assembly by hand has become nearly impossible, but these LP techniques are able to find the best solutions, given the demands and needs of the test to be assembled and the specifics of the item bank from which it is assembled. However,…
NASA Astrophysics Data System (ADS)
Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan
2016-12-01
In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, can help engineers design a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and the modelling of such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex non-linear processes. In in-situ bioremediation management, a physically-based model is used for the simulation, and the simulated data are utilized by the optimization model to minimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, and thus there is a need for a simulator that can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (benzene, toluene, ethylbenzene, and xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. The selection of ELM follows a comparative analysis with the Artificial Neural Network (ANN) and Support Vector Machine (SVM), as these were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. The total cost obtained by the ELM-PSO approach is held to a minimum while successfully satisfying all the regulatory constraints of the contaminated site.
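The optimization half of such a design loop is a standard particle swarm. The sketch below is a minimal PSO minimizer applied to a made-up remediation-style cost with a penalized constraint; in the paper's setting, the cost function would instead wrap the trained ELM proxy simulator. All names, bounds and coefficients are assumptions.

```python
import numpy as np

def pso(cost, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (an illustrative stand-in for the
    ELM-PSO design loop, not the authors' implementation)."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                    # keep particles in bounds
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Toy cost: pumping rates q with quadratic cost plus a penalty when a
# stand-in treatment-capacity constraint is exceeded.
cost = lambda q: np.sum(q ** 2) + 100 * max(0.0, q.sum() - 5.0)
best_q, best_f = pso(cost, lo=np.zeros(3), hi=np.full(3, 4.0))
```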
Sharmin, Sifat; Glass, Kathryn; Viennet, Elvina; Harley, David
2018-04-01
Determining the relation between climate and dengue incidence is challenging due to under-reporting of disease and consequent biased incidence estimates. Non-linear associations between climate and incidence compound this. Here, we introduce a modelling framework to estimate dengue incidence from passive surveillance data while incorporating non-linear climate effects. We estimated the true number of cases per month using a Bayesian generalised linear model, developed in stages to adjust for under-reporting. A semi-parametric thin-plate spline approach was used to quantify non-linear climate effects. The approach was applied to data collected from the national dengue surveillance system of Bangladesh. The model estimated that only 2.8% (95% credible interval 2.7-2.8) of all cases in the capital Dhaka were reported through passive case reporting. The optimal mean monthly temperature for dengue transmission is 29℃ and average monthly rainfall above 15 mm decreases transmission. Our approach provides an estimate of true incidence and an understanding of the effects of temperature and rainfall on dengue transmission in Dhaka, Bangladesh.
Understanding a Normal Distribution of Data (Part 2).
Maltenfort, Mitchell
2016-02-01
Completing the discussion of data normality, advanced techniques for analysis of non-normal data are discussed including data transformation, Generalized Linear Modeling, and bootstrapping. Relative strengths and weaknesses of each technique are helpful in choosing a strategy, but help from a statistician is usually necessary to analyze non-normal data using these methods.
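Of the techniques named above, bootstrapping is the easiest to show in a few lines: resample the data with replacement and read confidence limits off the resampling distribution, with no normality assumption. The sketch below uses a synthetic skewed sample; the sample size and replicate count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.lognormal(mean=1.0, sigma=0.8, size=80)   # skewed, non-normal sample

# Percentile bootstrap for the mean: resample with replacement many times.
boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                       for _ in range(10000)])
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
```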
Performance evaluation of matrix gradient coils.
Jia, Feng; Schultz, Gerrit; Testud, Frederik; Welz, Anna Masako; Weber, Hans; Littin, Sebastian; Yu, Huijun; Hennig, Jürgen; Zaitsev, Maxim
2016-02-01
In this paper, we present a new performance measure of a matrix coil (also known as multi-coil) from the perspective of efficient, local, non-linear encoding without explicitly considering target encoding fields. An optimization problem based on a joint optimization for the non-linear encoding fields is formulated. Based on the derived objective function, a figure of merit of a matrix coil is defined, which is a generalization of a previously known resistive figure of merit for traditional gradient coils. A cylindrical matrix coil design with a high number of elements is used to illustrate the proposed performance measure. The results are analyzed to reveal novel features of matrix coil designs, which allowed us to optimize coil parameters, such as number of coil elements. A comparison to a scaled, existing multi-coil is also provided to demonstrate the use of the proposed performance parameter. The assessment of a matrix gradient coil profits from using a single performance parameter that takes the local encoding performance of the coil into account in relation to the dissipated power.
An efficient algorithm for function optimization: modified stem cells algorithm
NASA Astrophysics Data System (ADS)
Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad Hadi
2013-03-01
In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm can give solutions to linear and non-linear problems near the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).
Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna Roberts
Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance, resulting in less residual non-uniformity, and will reduce the need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher-order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before and after images taken with short-wave infrared cameras. The initial results show, using a third-order polynomial with 16-bit precision, significant improvement over the one- and two-point correction algorithms. All algorithms have been implemented in software with satisfactory results, and the third-order gain equalization non-uniformity correction algorithm has been implemented in hardware.
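Per-pixel polynomial calibration, the core of the higher-order NUC idea, is straightforward to sketch: expose the array to several uniform illumination levels, fit a third-order polynomial per pixel from measured response back to true irradiance, and apply it. The simulated photoresponses below are invented, not measured detector data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 1000
levels = np.linspace(0.1, 1.0, 8)                    # calibration illumination levels

# Simulate per-pixel nonlinear photoresponse: gain, offset and mild curvature.
gain = rng.normal(1.0, 0.05, n_pix)
offset = rng.normal(0.0, 0.02, n_pix)
curve = rng.normal(0.1, 0.02, n_pix)
response = offset[:, None] + gain[:, None] * levels + curve[:, None] * levels ** 2

# Third-order polynomial NUC: map each pixel's response back to true irradiance.
corrected = np.empty_like(response)
for i in range(n_pix):
    coeffs = np.polyfit(response[i], levels, deg=3)  # per-pixel calibration fit
    corrected[i] = np.polyval(coeffs, response[i])

residual_nu = corrected.std(axis=0).max()            # residual non-uniformity
```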
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1974-01-01
Digital multiplication of two waveforms using delta modulation (DM) is discussed. It is shown that while conventional multiplication of two N-bit words requires complexity of order N^2, multiplication using DM requires complexity which increases linearly with N. Bounds on the signal-to-quantization-noise ratio (SNR) resulting from this multiplication are determined and compared with the SNR obtained using standard multiplication techniques. The phase-locked loop (PLL) system, consisting of a phase detector, voltage-controlled oscillator, and a linear loop filter, is discussed in terms of its design and system advantages. Areas requiring further research are identified.
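Delta modulation itself is a one-bit tracking quantizer, easy to sketch: the encoder emits ±1 depending on whether the input lies above or below a staircase approximation, and the decoder simply integrates the bit stream. The step size and test signal below are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

def dm_encode(signal, delta=0.05):
    """1-bit delta modulation: emit +1/-1 as the staircase tracks the input."""
    bits, approx = [], 0.0
    for s in signal:
        bit = 1 if s >= approx else -1
        approx += bit * delta          # staircase takes one step per sample
        bits.append(bit)
    return np.array(bits)

def dm_decode(bits, delta=0.05):
    return np.cumsum(bits) * delta     # decoding = integrating the bit stream

t = np.linspace(0, 1, 500)
x = 0.5 * np.sin(2 * np.pi * 5 * t)
x_hat = dm_decode(dm_encode(x))
sqnr = 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))  # quantization SNR
```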
Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme
2015-01-01
The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses was compared using a multispecies dataset. The most reliable prediction models of the BMP were those based on the near infrared (NIR) spectrum, compared to those based on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate the BMP quantitatively, rapidly, cheaply and easily. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable than those of linear models. The presentation form (green-dried, silage-dried and silage-wet) of biomasses to the NIR spectrometer did not influence the performance of the NIR prediction models. The accuracy of the BMP method should be improved to further enhance the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.
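Spectrum-to-property calibration of this kind is commonly prototyped with partial least squares, the familiar linear baseline for NIR work (the paper's best models were local and non-linear; PLS is shown here only as the standard starting point). The sketch cross-validates a PLS model on synthetic spectra with scikit-learn; all shapes and names are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
# Hypothetical NIR spectra (200 samples x 500 wavelengths) and a "measured" BMP
# built from two wavelength bands; none of this is the paper's data.
spectra = rng.normal(size=(200, 500)).cumsum(axis=1)   # smooth, spectrum-like rows
bmp = 0.8 * spectra[:, 120] - 0.3 * spectra[:, 300] + rng.normal(0, 1, 200)

pls = PLSRegression(n_components=10)
pred = cross_val_predict(pls, spectra, bmp, cv=10).ravel()
r2 = 1 - np.sum((bmp - pred) ** 2) / np.sum((bmp - bmp.mean()) ** 2)
```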
A new adaptive multiple modelling approach for non-linear and non-stationary systems
NASA Astrophysics Data System (ADS)
Chen, Hao; Gong, Yu; Hong, Xia
2016-07-01
This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window, and apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is the best. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
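The per-sub-model update at the heart of this scheme is the standard exponentially-weighted RLS recursion, sketched below on a toy time-varying regression; the forgetting factor and data are illustrative choices, not the paper's settings.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.98):
    """One recursive-least-squares step with forgetting factor lam."""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a-priori prediction error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam  # covariance update
    return w, P, e

# Toy: track a slowly drifting linear sub-model d = a(t)*u1 + b*u2 + noise.
rng = np.random.default_rng(0)
w, P = np.zeros(2), np.eye(2) * 100.0
for t in range(500):
    u = rng.normal(size=2)
    a, b = 1.0 + 0.002 * t, -0.5     # time-varying "true" coefficients
    d = a * u[0] + b * u[1] + rng.normal(0, 0.01)
    w, P, e = rls_update(w, P, u, d)
```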
Robust L1-norm two-dimensional linear discriminant analysis.
Li, Chun-Na; Shao, Yuan-Hai; Deng, Nai-Yang
2015-05-01
In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Different from the conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transferred to a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple justifiable iterative technique, and its convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noises since the L1-norm is used. This is supported by our preliminary experiments on toy example and face datasets, which show the improvement of our L1-2DLDA over L2-2DLDA. Copyright © 2015 Elsevier Ltd. All rights reserved.
Constraining the atmosphere of GJ 1214b using an optimal estimation technique
NASA Astrophysics Data System (ADS)
Barstow, J. K.; Aigrain, S.; Irwin, P. G. J.; Fletcher, L. N.; Lee, J.-M.
2013-09-01
We explore cloudy, extended H2-He atmosphere scenarios for the warm super-Earth GJ 1214b using an optimal estimation retrieval technique. This planet, orbiting an M4.5 star only 13 pc from the Earth, is of particular interest because it lies between the Earth and Neptune in size and may be a member of a new class of planet that is neither terrestrial nor gas giant. Its relatively flat transmission spectrum has so far made atmospheric characterization difficult. The Non-linear optimal Estimator for MultivariatE spectral analySIS (NEMESIS) algorithm is used to explore the degenerate model parameter space for a cloudy, H2-He-dominated atmosphere scenario. Optimal estimation is a data-led approach that allows solutions beyond the range permitted by ab initio equilibrium model atmosphere calculations, and as such prevents restriction from prior expectations. We show that optimal estimation retrieval is a powerful tool for this kind of study, and present an exploration of the degenerate atmospheric scenarios for GJ 1214b. Whilst we find a family of solutions that provide a very good fit to the data, the quality and coverage of these data are insufficient for us to more precisely determine the abundances of cloud and trace gases given an H2-He atmosphere, and we also cannot rule out the possibility of a high molecular weight atmosphere. Future ground- and space-based observations will provide the opportunity to confirm or rule out an extended H2-He atmosphere, but more precise constraints will be limited by intrinsic degeneracies in the retrieval problem, such as variations in cloud top pressure and temperature.
Optimized Controller Design for a 12-Pulse Voltage Source Converter Based HVDC System
NASA Astrophysics Data System (ADS)
Agarwal, Ruchi; Singh, Sanjeev
2017-12-01
The paper proposes an optimized controller design scheme for power quality improvement in a 12-pulse voltage source converter based high voltage direct current system. The proposed scheme is a hybrid combination of the golden section search and successive linear search methods. The paper aims at reducing the current sensor requirement and optimizing the controller. The voltage and current controller parameters are selected for optimization due to their impact on power quality. The proposed algorithm optimizes an objective function composed of current harmonic distortion, power factor, and DC voltage ripple. The detailed design and modeling of the complete system are discussed, and its simulation is carried out in the MATLAB-Simulink environment. The obtained results are presented to demonstrate the effectiveness of the proposed scheme under different transient conditions such as load perturbation, non-linear load, voltage sag, and a tapped load fault under a one-phase-open condition at both points of common coupling.
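Golden section search, one half of the hybrid named above, is a derivative-free bracketing method for one-dimensional minimization. A minimal sketch follows; the quadratic objective merely stands in for the controller's composite power-quality cost and is purely illustrative.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Golden section search for the minimum of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                      # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                      # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

# Toy stand-in for the THD/ripple composite cost as a function of one gain.
kp_opt = golden_section(lambda kp: (kp - 2.2) ** 2 + 0.5, 0.0, 10.0)
```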
Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method
NASA Astrophysics Data System (ADS)
D'Ambra, Pasqua; Tartaglione, Gaetano
2015-03-01
Image segmentation addresses the problem of partitioning a given image into its constituent objects and then identifying the boundaries of the objects. This problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and poorly scalable with respect to image size. In this work we focus on generalized relaxation methods, also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that non-linear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem cell targeted differentiation, where one of the main goals is the segmentation of thousands of images to analyze cell colony morphology.
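The relaxation kernel discussed above reduces, in its simplest form, to pointwise Gauss-Seidel sweeps on a linear system. The sketch below applies it to a 1D finite-difference Laplacian; the accelerated, non-linear variants in the paper build on exactly this loop. The test problem is a generic stand-in, not the Ambrosio-Tortorelli system.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iters=100):
    """Pointwise Gauss-Seidel relaxation for Ax = b (the linear smoother that
    generalized relaxation and multigrid schemes build on)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(iters):
        for i in range(n):
            # Use already-updated entries x[:i] within the same sweep.
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# 1D Laplacian test problem, as arises from finite-difference discretization.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = gauss_seidel(A, b, iters=500)
```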
Nonlinear programming extensions to rational function approximations of unsteady aerodynamics
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1987-01-01
This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.
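The "free linear parameters via least squares with equality constraints" step has a closed form through the KKT system of the Lagrange-multiplier formulation. The sketch below solves a toy instance; the matrices are random stand-ins for the aerodynamic fitting problem, and the constraint is an invented example.

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimize ||Ax - b||^2 subject to Cx = d via the KKT system
    (the Lagrange-multiplier formulation mentioned in the abstract)."""
    n, p = A.shape[1], C.shape[0]
    K = np.block([[2 * A.T @ A, C.T],
                  [C, np.zeros((p, p))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                       # x; sol[n:] are the Lagrange multipliers

# Toy: fit linear coefficients of a rational approximation, pinning a DC value.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 4))
b = rng.normal(size=40)
C = np.ones((1, 4))                      # e.g., the coefficients must sum to 1
d = np.array([1.0])
x = constrained_lstsq(A, b, C, d)
```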
NASA Astrophysics Data System (ADS)
Teye, Ernest; Huang, Xingyi; Dai, Huang; Chen, Quansheng
2013-10-01
Quick, accurate and reliable techniques for discriminating cocoa beans according to geographical origin are essential for quality control and traceability management. This study presents the application of the near infrared spectroscopy technique and multivariate classification for the differentiation of Ghana cocoa beans. A total of 194 cocoa bean samples from seven cocoa growing regions were used. Principal component analysis (PCA) was used to extract relevant information from the spectral data, and this gave visible cluster trends. The performance of four multivariate classification methods was compared: linear discriminant analysis (LDA), K-nearest neighbors (KNN), back propagation artificial neural network (BPANN) and support vector machine (SVM). The performances of the models were optimized by cross-validation. The results revealed that the SVM model was superior to all the other methods, with a discrimination rate of 100% in both the training and prediction sets after preprocessing with mean centering (MC). BPANN had a discrimination rate of 99.23% for the training set and 96.88% for the prediction set, while the LDA model had 96.15% and 90.63% for the training and prediction sets respectively. The KNN model had 75.01% for the training set and 72.31% for the prediction set. The non-linear classification methods used were superior to the linear ones. Generally, the results revealed that NIR spectroscopy coupled with an SVM model could be used successfully to discriminate cocoa beans according to their geographical origins for effective quality assurance.
Fast global image smoothing based on weighted least squares.
Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N
2014-12-01
This paper presents an efficient technique for performing a spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves high-quality results, like the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
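The 1D subsystem at the core of this approach is a three-point (tridiagonal) linear solve. The sketch below smooths a 1D signal by solving (I + λL)u = f with scipy's banded solver; it ignores the spatially varying guidance weights of the full method, and all values are illustrative.

```python
import numpy as np
from scipy.linalg import solve_banded

def smooth_1d(f, lam=10.0):
    """Solve (I + lam*L) u = f, where L is the three-point Laplacian; one such
    pass per dimension approximates the global multi-dimensional solve."""
    n = len(f)
    ab = np.zeros((3, n))                 # banded storage: super-, main, sub-diagonal
    ab[0, 1:] = -lam                      # superdiagonal
    ab[1, :] = 1 + 2 * lam                # main diagonal
    ab[1, 0] = ab[1, -1] = 1 + lam        # boundary rows (one neighbor only)
    ab[2, :-1] = -lam                     # subdiagonal
    return solve_banded((1, 1), ab, f)    # linear-time tridiagonal solve

sig = np.r_[np.zeros(50), np.ones(50)] + np.random.default_rng(0).normal(0, 0.1, 100)
out = smooth_1d(sig, lam=20.0)
```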
Optimal Output Trajectory Redesign for Invertible Systems
NASA Technical Reports Server (NTRS)
Devasia, S.
1996-01-01
Given a desired output trajectory, inversion-based techniques find the input-state trajectories required to exactly track the output. These inversion-based techniques have been successfully applied to the endpoint tracking control of multijoint flexible manipulators and to aircraft control. The specified output trajectory uniquely determines the required input and state trajectories that are found through inversion. These input-state trajectories exactly track the desired output; however, they might not meet acceptable performance requirements. For example, during slewing maneuvers of flexible structures, the structural deformations, which depend on the required state trajectories, may be unacceptably large. Further, the required inputs might cause actuator saturation during an exact tracking maneuver, for example, in the flight control of conventional takeoff and landing aircraft. In such situations, a compromise is desired between the tracking requirement and other goals such as reduction of internal vibrations and prevention of actuator saturation; the desired output trajectory needs to be redesigned. Here, we pose the trajectory redesign problem as an optimization of a general quadratic cost function and solve it in the context of linear systems. The solution is obtained as an off-line prefilter of the desired output trajectory. An advantage of our technique is that the prefilter is independent of the particular trajectory. The prefilter can therefore be precomputed, which is a major advantage over other optimization approaches. Previous works have addressed the issue of preshaping inputs to minimize residual and in-maneuver vibrations for flexible structures, with the command preshaping computed off-line. Minimization of optimal quadratic cost functions has also been used previously to preshape command inputs for disturbance rejection. All of these approaches are applicable when the inputs to the system are known a priori. Typically, outputs (not inputs) are specified in tracking problems, and hence the input trajectories have to be computed. The inputs to the system are, however, difficult to determine for non-minimum phase systems like flexible structures. One approach to solve this problem is to (1) choose a tracking controller (the desired output trajectory is now an input to the closed-loop system) and (2) redesign this input to the closed-loop system. Thus we effectively perform output redesign. These redesigns are, however, dependent on the choice of the tracking controllers. Thus the controller optimization and trajectory redesign problems become coupled; this coupled optimization is still an open problem. In contrast, we decouple the trajectory redesign problem from the choice of feedback-based tracking controller. It is noted that our approach remains valid when a particular tracking controller is chosen. In addition, the formulation of our problem not only allows for the minimization of residual vibration as in available techniques but also allows for the optimal reduction of vibrations during the maneuver, e.g., in the attitude control of flexible spacecraft. We begin by formulating the optimal output trajectory redesign problem and then solve it in the context of general linear systems. This theory is then applied to an example flexible structure, and simulation results are provided.
An Optimized Integrator Windup Protection Technique Applied to a Turbofan Engine Control
NASA Technical Reports Server (NTRS)
Watts, Stephen R.; Garg, Sanjay
1995-01-01
This paper introduces a new technique for providing memoryless integrator windup protection which utilizes readily available optimization software tools. This integrator windup protection synthesis provides a concise methodology for creating integrator windup protection for each actuation system loop independently while assuring both controller and closed-loop system stability. The individual actuation system loops' integrator windup protection can then be combined to provide integrator windup protection for the entire system. This technique is applied to an H∞-based multivariable control designed for a linear model of an advanced afterburning turbofan engine. The resulting transient characteristics are examined for the integrated system while encountering single and multiple actuation limits.
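The simplest memoryless form of windup protection is conditional integration, sketched below for a single PI loop: integration is suspended whenever it would push a saturated actuator further into its limit. This illustrates the problem the paper's optimized synthesis addresses, not its H∞ methodology; gains, limits and the toy plant are arbitrary.

```python
import numpy as np

def pi_step(e, integ, kp=1.0, ki=0.5, dt=0.01, u_max=1.0):
    """PI step with conditional (memoryless) integrator windup protection:
    stop integrating when the actuator is saturated and the error would
    drive it further into the limit."""
    u_unsat = kp * e + ki * integ
    u = np.clip(u_unsat, -u_max, u_max)
    if u == u_unsat or np.sign(e) != np.sign(u):   # integrate only when useful
        integ += e * dt
    return u, integ

# Step-response toy: first-order plant y' = -y + u with a saturated actuator.
y, integ = 0.0, 0.0
for _ in range(2000):
    e = 1.0 - y
    u, integ = pi_step(e, integ)
    y += (-y + u) * 0.01
```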
NASA Technical Reports Server (NTRS)
Burns, John A.; Marrekchi, Hamadi
1993-01-01
The problem of using reduced order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order-finite-dimensional control laws by minimizing certain energy functionals. Then these laws were applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used is based on the finite dimensional Bernstein/Hyland optimal projection theory which yields a fixed-finite-order controller.
A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles
NASA Technical Reports Server (NTRS)
Eldred, C. H.; Gordon, S. V.
1976-01-01
A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.
Design of an optimal preview controller for linear discrete-time descriptor systems with state delay
NASA Astrophysics Data System (ADS)
Cao, Mengjuan; Liao, Fucheng
2015-04-01
In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.
Singular optimal control and the identically non-regular problem in the calculus of variations
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.
1985-01-01
A small but interesting class of optimal control problems featuring a scalar control appearing linearly is equivalent to the class of identically nonregular problems in the Calculus of Variations. It is shown that a condition due to Mancill (1950) is equivalent to the generalized Legendre-Clebsch condition for this narrow class of problems.
Determining the Optimal Values of Exponential Smoothing Constants--Does Solver Really Work?
ERIC Educational Resources Information Center
Ravinder, Handanhal V.
2013-01-01
A key issue in exponential smoothing is the choice of the values of the smoothing constants used. One approach that is becoming increasingly popular in introductory management science and operations management textbooks is the use of Solver, an Excel-based non-linear optimizer, to identify values of the smoothing constants that minimize a measure…
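What Solver does in the spreadsheet can be reproduced with any bounded scalar optimizer: minimize the sum of squared one-step-ahead forecast errors over the smoothing constant. A sketch with scipy follows; the demand series and error measure are made-up stand-ins for the textbook examples.

```python
import numpy as np
from scipy.optimize import minimize_scalar

demand = np.array([120., 132., 101., 97., 110., 125., 118., 140., 133., 129.])

def sse(alpha):
    """Sum of squared one-step-ahead errors for simple exponential smoothing."""
    forecast = demand[0]
    total = 0.0
    for y in demand[1:]:
        total += (y - forecast) ** 2
        forecast += alpha * (y - forecast)   # F(t+1) = F(t) + alpha * error
    return total

res = minimize_scalar(sse, bounds=(0.0, 1.0), method="bounded")
best_alpha = res.x    # plays the role Solver plays in the spreadsheet setup
```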
Software For Integer Programming
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1992-01-01
Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) program optimizes objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. Enables rapid solution of problems up to 10 variables in size. Integer programming required for accuracy in modeling systems containing small number of components, distribution of goods, scheduling operations on machine tools, and scheduling production in general. Written in Borland's TURBO Pascal.
NASA Technical Reports Server (NTRS)
Dzielski, John Edward
1988-01-01
Recent developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces can be used with nonlinear feedback to transform certain nonlinear ordinary differential equations into equivalent linear equations. These feedback linearization techniques are applied to resolve two problems arising in the control of spacecraft equipped with control moment gyroscopes (CMGs). The first application involves the computation of rate commands for the gimbals that rotate the individual gyroscopes to produce commanded torques on the spacecraft. The second application is the long-term management of stored momentum in the system of control moment gyroscopes using environmental torques acting on the vehicle. An approach to distributing control effort among a group of redundant actuators is described that uses feedback linearization techniques to parameterize sets of controls which influence a specified subsystem in a desired way. The approach is adapted for use in spacecraft control with double-gimballed gyroscopes to produce an algorithm that avoids problematic gimbal configurations by approximating sets of gimbal rates that drive CMG rotors into desirable configurations. The momentum management problem is stated as a trajectory optimization problem with a nonlinear dynamical constraint. Feedback linearization and collocation are used to transform this problem into an unconstrained nonlinear program. The approach to trajectory optimization is fast and robust. A number of examples are presented showing applications to the proposed NASA space station.
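For a single-input system x'' = f(x) + g(x)u, feedback linearization amounts to choosing u = (v - f(x)) / g(x), after which any linear law can shape v. The sketch below applies this to a pendulum toy problem; it illustrates the coordinate-and-feedback transformation idea only, not the CMG gimbal-rate equations, and all parameters are invented.

```python
import numpy as np

# Toy single-link example: theta'' = -(g/l) sin(theta) + u / (m l^2)
#                                  =  f(x) + g_x * u
g, l, m = 9.81, 1.0, 1.0
g_x = 1.0 / (m * l ** 2)

def control(theta, omega, theta_ref=0.5, kp=4.0, kd=3.0):
    v = kp * (theta_ref - theta) - kd * omega    # linear outer-loop (PD) law
    f = -(g / l) * np.sin(theta)                 # known nonlinear drift term
    return (v - f) / g_x                         # cancel it, so theta'' = v

# Simulate the linearized closed loop with simple Euler integration.
theta, omega, dt = 0.0, 0.0, 0.001
for _ in range(5000):
    u = control(theta, omega)
    alpha = -(g / l) * np.sin(theta) + g_x * u
    omega += alpha * dt
    theta += omega * dt
```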
Two-point T1 measurement: wide-coverage optimizations by stochastic simulations.
Lin, M S; Fletcher, J W; Donati, R M
1986-08-01
Stochastic reliability of T1 measurement from image signal ratios is examined in the ideal case by stochastic simulations in the context of wide-coverage optimizations. Precise measurements prove to be accurate, and accurate ones precise. Sign-preserved inversion-recovery (IR)/non-IR techniques are the best ratio method, reciprocal non-IR/IR ones being equivalent, but inconvenient. Wide-coverage optima are relatively unsharp. Suggested guidelines for covering the 150- to 1500-ms T1 band are minimal relevant TE; TI about 400 ms; effective repetition times about in the ratio, TR2(IR)/TR1 (non-IR) = 2.5-3.0, and in a sum as long as possible up to about TR1 + TR2 = 3.5-4.0 s; signal-averaging after and only after TR1 + TR2 has been lengthened to the said region. Also suggested are different guidelines for covering T1 bands, 120-1200 and 200-1800 ms. Typically, precisions and accuracies improve linearly or faster with increasing S/N and (S/N)2, respectively. Unnecessarily high pixel resolutions or thin slicings exact great penalties in accuracies. Progressively shortening TR1 eventually transforms a wide coverage into a sharp targeting with small potential gains in a narrow T1 locality and large compromises almost everywhere else. The simulations yield an insight into applicabilities of standard error propagation analyses in two-point T1 measurement.
A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces
NASA Astrophysics Data System (ADS)
Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.
2006-06-01
The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
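A Wiener filter decoder of the kind used as the baseline here is just a least-squares regression from time-embedded spike counts to kinematics. The sketch below is a generic illustration under that assumption (the array shapes and lag count are arbitrary), not the authors' tuned models.

```python
import numpy as np

def wiener_decoder(spikes, kinematics, lags=10):
    """Least-squares (Wiener) filter from lagged spike counts to kinematics.

    spikes: (T, n_neurons) binned counts; kinematics: (T, n_dims) hand state.
    """
    T, _ = spikes.shape
    # stack the current bin and the previous `lags - 1` bins for every time step
    X = np.hstack([spikes[lags - k : T - k] for k in range(lags)])
    X = np.hstack([np.ones((X.shape[0], 1)), X])   # bias term
    Y = kinematics[lags:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W                                        # apply with X_new @ W
```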
Postprocessing techniques for 3D non-linear structures
NASA Technical Reports Server (NTRS)
Gallagher, Richard S.
1987-01-01
This paper reviews how graphics postprocessing techniques are currently used to examine the results of 3-D nonlinear analyses, presents some new techniques which take advantage of recent technology, and discusses how these results relate to both the finite element model and its geometric parent.
Carvalho, Vitor Oliveira; Guimarães, Guilherme Veiga; Bocchi, Edimar Alcides
2008-01-01
BACKGROUND The relationship between the percentage of oxygen consumption reserve and percentage of heart rate reserve in heart failure patients either on non-optimized or off beta-blocker therapy is known to be unreliable. The aim of this study was to evaluate the relationship between the percentage of oxygen consumption reserve and percentage of heart rate reserve in heart failure patients receiving optimized and non-optimized beta-blocker treatment during a treadmill cardiopulmonary exercise test. METHODS A total of 27 sedentary heart failure patients (86% male, 50±12 years) on optimized beta-blocker therapy with a left ventricle ejection fraction of 33±8% and 35 sedentary non-optimized heart failure patients (75% male, 47±10 years) with a left ventricle ejection fraction of 30±10% underwent the treadmill cardiopulmonary exercise test (Naughton protocol). Resting and peak effort values of both the percentage of oxygen consumption reserve and percentage of heart rate reserve were, by definition, 0 and 100, respectively. RESULTS The heart rate slope for the non-optimized group was derived from the points 0.949±0.088 (0 intercept) and 1.055±0.128 (1 intercept), p<0.0001. The heart rate slope for the optimized group was derived from the points 1.026±0.108 (0 intercept) and 1.012±0.108 (1 intercept), p=0.47. Linear regression plots of the heart rate slope for each patient yielded a slope of 0.986 for the optimized group (almost perfect, which occurs at 1), whereas the slope for the non-optimized group was 0.030 (far from perfect). CONCLUSION The relationship between the percentage of oxygen consumption reserve and percentage of heart rate reserve in patients on optimized beta-blocker therapy was reliable, but this relationship was unreliable in non-optimized heart failure patients. PMID:19060991
Estimating cosmic velocity fields from density fields and tidal tensors
NASA Astrophysics Data System (ADS)
Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan
2012-10-01
In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor using second-order Lagrangian perturbation theory based upon an estimate of the linear component of the non-linear density field significantly improves the estimate of the cosmic flow in comparison to linear theory, not only in the low-density, but also and more dramatically in the high-density regions. In particular we test two estimates of the linear component: the lognormal model and the iterative Lagrangian linearization. The present approach relies on a rigorous higher-order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations, being in this sense parameter-free; it is independent of statistical-geometrical optimization and it is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≳5 h-1 Mpc with closely Gaussian distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field is extremely well recovered, showing a good agreement with the true one from N-body simulations. The typical errors of about 10 km s-1 (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h-1 Mpc in high-density regions (δ > 2). We also find that iterative Lagrangian linearization is significantly superior in the low-density regime with respect to the lognormal model.
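The linear-theory baseline that this approach improves upon fits in a few lines: in Fourier space the linear peculiar velocity is v(k) = i a H f delta(k) k / k^2. The sketch below implements only that baseline on a periodic grid (with a = 1 and illustrative values for H and the growth rate f), not the second-order Lagrangian and tidal-tensor corrections the authors develop.

```python
import numpy as np

def linear_velocity(delta, box, f_growth=0.5, H0=100.0):
    """Linear-theory peculiar velocity v(k) = i*H0*f*delta(k)*k/k^2 on a grid.

    delta: (n, n, n) density contrast field; box: side length (e.g. Mpc/h)."""
    n = delta.shape[0]
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                         # avoid division by zero at k = 0
    dk = np.fft.fftn(delta)
    return [np.fft.ifftn(1j * H0 * f_growth * dk * kc / k2).real
            for kc in (kx, ky, kz)]           # [vx, vy, vz]
```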
NASA Astrophysics Data System (ADS)
Nuh, M. Z.; Nasir, N. F.
2017-08-01
Biodiesel is a fuel comprised of mono alkyl esters of long chain fatty acids derived from renewable lipid feedstock, such as vegetable oil and animal fat. Biodiesel production is a complex process which needs systematic design and optimization. However, no case study has applied the process system engineering (PSE) elements to superstructure optimization of the batch process, which involves complex problems and uses mixed-integer nonlinear programming (MINLP). PSE offers a solution to complex engineering systems by enabling the use of viable tools and techniques to better manage and comprehend the complexity of the system. This study aims, first, to apply PSE tools to the simulation and optimization of the biodiesel process and to develop mathematical models for the components of the plant for cases A, B and C using published kinetic data; secondly, to determine an economic analysis for biodiesel production, focusing on heterogeneous catalysts; and finally, to develop the superstructure for biodiesel production using a heterogeneous catalyst. The mathematical models are developed from the superstructure, and the resulting mixed-integer non-linear model is solved and the economic analysis estimated using MATLAB software. The optimization, with the objective function of minimizing the annual production cost of the batch process, yields 23.2587 million USD for case C. Overall, the implementation of process system engineering (PSE) has optimized the modelling, design and cost estimation, and helps solve the complex batch production and processing of biodiesel.
NASA Astrophysics Data System (ADS)
Deng, R.; Davies, P.; Bajaj, A. K.
2003-05-01
A hereditary model and a fractional derivative model for the dynamic properties of flexible polyurethane foams used in automotive seat cushions are presented. Non-linear elastic and linear viscoelastic properties are incorporated into these two models. A polynomial function of compression is used to represent the non-linear elastic behavior. The viscoelastic property is modelled by a hereditary integral with a relaxation kernel consisting of two exponential terms in the hereditary model and by a fractional derivative term in the fractional derivative model. The foam is used as the only viscoelastic component in a foam-mass system undergoing uniaxial compression. One-term harmonic balance solutions are developed to approximate the steady state response of the foam-mass system to the harmonic base excitation. System identification procedures based on the direct non-linear optimization and a sub-optimal method are formulated to estimate the material parameters. The effects of the choice of the cost function, frequency resolution of data and imperfections in experiments are discussed. The system identification procedures are also applied to experimental data from a foam-mass system. The performances of the two models for data at different compression and input excitation levels are compared, and modifications to the structure of the fractional derivative model are briefly explored. The role of the viscous damping term in both types of model is discussed.
Optimal linear-quadratic control of coupled parabolic-hyperbolic PDEs
NASA Astrophysics Data System (ADS)
Aksikas, I.; Moghadam, A. Alizadeh; Forbes, J. F.
2017-10-01
This paper focuses on the optimal control design for a system of coupled parabolic-hyperbolic partial differential equations by using the infinite-dimensional state-space description and the corresponding operator Riccati equation. Some dynamical properties of the coupled system of interest are analysed to guarantee the existence and uniqueness of the solution of the linear-quadratic (LQ)-optimal control problem. A state LQ-feedback operator is computed by solving the operator Riccati equation, which is converted into a set of algebraic and differential Riccati equations, thanks to the eigenvalues and the eigenvectors of the parabolic operator. The results are applied to a non-isothermal packed-bed catalytic reactor. The LQ-optimal controller designed in the early portion of the paper is implemented for the original nonlinear model. Numerical simulations are performed to show the performance of the controller.
Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T
2018-03-01
Gafchromic EBT3 film is widely used for patient specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and red channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated, to identify whether these methods produce better results than the commonly-used non-linear, netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: The Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results when irradiated to the standard treatment doses (< 400 cGy), however none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy when using EBT3 film.
Issues in the digital implementation of control compensators. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Moroney, P.
1979-01-01
Techniques developed for the finite-precision implementation of digital filters were used, adapted, and extended for digital feedback compensators, with particular emphasis on steady state, linear-quadratic-Gaussian compensators. Topics covered include: (1) the linear-quadratic-Gaussian problem; (2) compensator structures; (3) architectural issues: serialism, parallelism, and pipelining; (4) finite wordlength effects: quantization noise, quantizing the coefficients, and limit cycles; and (5) the optimization of structures.
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1988-01-01
The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
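The Lagrange multiplier treatment of the free linear parameters mentioned above amounts to equality-constrained least squares, which can be solved directly through the KKT system. The sketch below is a generic illustration of that formulation, not the authors' aerodynamic fitting code.

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimise ||A x - b||^2 subject to C x = d via the KKT linear system:

        [2 A^T A   C^T] [x     ]   [2 A^T b]
        [C         0  ] [lambda] = [d      ]
    """
    n, m = A.shape[1], C.shape[0]
    KKT = np.block([[2 * A.T @ A, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.r_[2 * A.T @ b, d]
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]          # the Lagrange multipliers sol[n:] are discarded
```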
Optimal exponential synchronization of general chaotic delayed neural networks: an LMI approach.
Liu, Meiqin
2009-09-01
This paper investigates the optimal exponential synchronization problem of general chaotic neural networks with or without time delays by virtue of Lyapunov-Krasovskii stability theory and the linear matrix inequality (LMI) technique. This general model, which is the interconnection of a linear delayed dynamic system and a bounded static nonlinear operator, covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks (CNNs), bidirectional associative memory (BAM) networks, and recurrent multilayer perceptrons (RMLPs) with or without delays. Using the drive-response concept, time-delay feedback controllers are designed to synchronize two identical chaotic neural networks as quickly as possible. The control design equations are shown to be a generalized eigenvalue problem (GEVP) which can be easily solved by various convex optimization algorithms to determine the optimal control law and the optimal exponential synchronization rate. Detailed comparisons with existing results are made and numerical simulations are carried out to demonstrate the effectiveness of the established synchronization laws.
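LMI feasibility problems of the kind described are routinely handled by convex optimization toolboxes. The sketch below solves the simplest such problem, a Lyapunov LMI certifying stability of a test matrix, using cvxpy (an SDP solver such as SCS must be installed); it illustrates the LMI technique generically, not the paper's generalized eigenvalue formulation for synchronization.

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])        # stable test matrix (illustrative)
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),             # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]  # Lyapunov decrease condition
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print(P.value)   # a Lyapunov certificate V(x) = x^T P x
```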
Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming
2016-01-01
Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible to model the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method to analyse non-linear relationships and in combination with sensitivity analysis superior to linear methods.
NASA Astrophysics Data System (ADS)
Montealegre Rubio, Wilfredo; Paulino, Glaucio H.; Nelli Silva, Emilio Carlos
2011-02-01
Tailoring specified vibration modes is a requirement for designing piezoelectric devices aimed at dynamic-type applications. A technique for designing the shape of specified vibration modes is the topology optimization method (TOM) which finds an optimum material distribution inside a design domain to obtain a structure that vibrates according to specified eigenfrequencies and eigenmodes. Nevertheless, when the TOM is applied to dynamic problems, the well-known grayscale or intermediate material problem arises which can invalidate the post-processing of the optimal result. Thus, a more natural way for solving dynamic problems using TOM is to allow intermediate material values. This idea leads to the functionally graded material (FGM) concept. In fact, FGMs are materials whose properties and microstructure continuously change along a specific direction. Therefore, in this paper, an approach is presented for tailoring user-defined vibration modes, by applying the TOM and FGM concepts to design functionally graded piezoelectric transducers (FGPT) and non-piezoelectric structures (functionally graded structures—FGS) in order to achieve maximum and/or minimum vibration amplitudes at certain points of the structure, by simultaneously finding the topology and material gradation function. The optimization problem is solved by using sequential linear programming. Two-dimensional results are presented to illustrate the method.
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.
2016-10-01
Meta-heuristic algorithms are problem-solving methods which try to find good-enough solutions to very hard optimization problems, at a reasonable computation time, where classical approaches fail, or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on nonlinear physics processes modeling have been proposed and applied with success. Non-linear physics processes, modeled as optimization algorithms, are able to produce completely new search procedures, with extremely effective exploration capabilities in many cases, which are able to outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from specific modeling of a real phenomenon, and also their novelty in terms of comparison with alternative existing algorithms for optimization. We first review important concepts on optimization problems, search spaces and problems' difficulty. Then, the usefulness of heuristic and meta-heuristic approaches to face hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to review in detail the most important meta-heuristics based on them. A discussion on the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation is carried out to complete the review of these techniques. We also describe some of the most important application areas, in a broad sense, of meta-heuristics, and describe freely accessible software frameworks which can be used to ease the implementation of these algorithms.
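Simulated annealing, a classic physics-inspired meta-heuristic of the family reviewed here, illustrates the pattern: a random walk whose acceptance rule is controlled by a slowly cooled temperature. The sketch below is a textbook implementation on the Rastrigin test function; all step-size and cooling parameters are illustrative.

```python
import numpy as np

def simulated_annealing(f, x0, steps=5000, T0=1.0, cooling=0.999, step=0.5, rng=None):
    """Minimise f by a random walk with Metropolis acceptance and geometric cooling."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, float)
    fx, T = f(x), T0
    best_x, best_f = x.copy(), fx
    for _ in range(steps):
        cand = x + rng.normal(scale=step, size=x.shape)
        fc = f(cand)
        # always accept improvements; accept uphill moves with probability exp(-dF/T)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        T *= cooling
    return best_x, best_f

rastrigin = lambda x: 10 * len(x) + sum(xi**2 - 10 * np.cos(2 * np.pi * xi) for xi in x)
print(simulated_annealing(rastrigin, [3.0, -2.5]))
```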
IMNN: Information Maximizing Neural Networks
NASA Astrophysics Data System (ADS)
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). As compressing large data sets vastly simplifies both frequentist and Bayesian inference, important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.
Comparison of time-series registration methods in breast dynamic infrared imaging
NASA Astrophysics Data System (ADS)
Riyahi-Alam, S.; Agostini, V.; Molinari, F.; Knaflitz, M.
2015-03-01
Automated motion reduction in dynamic infrared imaging is in demand in clinical applications, since movement disarranges the time-temperature series of each pixel, thus originating thermal artifacts that might bias the clinical decision. All previously proposed registration methods are feature-based algorithms requiring manual intervention. The aim of this work is to optimize the registration strategy specifically for Breast Dynamic Infrared Imaging and to make it user-independent. We implemented and evaluated 3 different 3D time-series registration methods: 1. linear affine, 2. non-linear Bspline, 3. Demons, applied to 12 datasets of healthy breast thermal images. The results are evaluated through normalized mutual information, with average values of 0.70 ±0.03, 0.74 ±0.03 and 0.81 ±0.09 (out of 1) for Affine, Bspline and Demons registration, respectively, as well as breast boundary overlap and the Jacobian determinant of the deformation field. The statistical analysis of the results showed that the symmetric diffeomorphic Demons registration method outperforms the others, with the best breast alignment and non-negative Jacobian values which guarantee image similarity and anatomical consistency of the transformation, due to homologous forces enforcing the pixel geometric disparities to be shortened on all the frames. We propose Demons registration as an effective technique for time-series dynamic infrared registration, to stabilize the local temperature oscillation.
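Normalized mutual information, the similarity measure quoted above, can be estimated from a joint histogram. The sketch below uses the normalization 2*I(A;B)/(H(A)+H(B)), which lies in [0, 1]; whether this matches the exact variant used in the paper is an assumption.

```python
import numpy as np

def nmi(a, b, bins=64):
    """Normalised mutual information 2*I(A;B) / (H(A) + H(B)), in [0, 1]."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    hxy = -np.sum(pxy[nz] * np.log(pxy[nz]))        # joint entropy H(A,B)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))   # marginal entropy H(A)
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))   # marginal entropy H(B)
    return 2 * (hx + hy - hxy) / (hx + hy)
```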
Inflammatory activity in Crohn disease: ultrasound findings.
Migaleddu, Vincenzo; Quaia, Emilio; Scano, Domenico; Virgilio, Giuseppe
2008-01-01
In recent years, ultrasound examination of bowel disease has benefited from the introduction of new technologies: high-frequency probes (US), highly sensitive color or power Doppler units (CD-US), and new non-linear technologies that optimize the detection of contrast agents. Contrast-enhanced ultrasound (CE-US) most importantly improves the sonographic evaluation of Crohn disease inflammatory activity. CE-US has become an imaging modality routinely employed in clinical practice for the evaluation of parenchymal organs due to the introduction of new generation microbubble contrast agents which persist in the bloodstream for several minutes after intravenous injection. The availability of high-frequency, dedicated contrast-specific US techniques provides accurate depiction of small bowel wall perfusion due to the extremely high sensitivity to non-linear signals produced by microbubble insonation. In Crohn's disease, CE-US may characterize the bowel wall thickness by differentiating fibrosis from edema and may grade the inflammatory disease activity by assessing the presence and distribution of vascularity within the layers of the bowel wall (submucosa alone or the entire bowel wall). Peri-intestinal inflammatory involvement can also be characterized. CE-US can provide prognostic data concerning clinical recurrence of the inflammatory disease and evaluate the efficacy of drug treatments.
Detection and description of non-linear interdependence in normal multichannel human EEG data.
Breakspear, M; Terry, J R
2002-05-01
This study examines human scalp electroencephalographic (EEG) data for evidence of non-linear interdependence between posterior channels. The spectral and phase properties of those epochs of EEG exhibiting non-linear interdependence are studied. Scalp EEG data were collected from 40 healthy subjects. A technique for the detection of non-linear interdependence was applied to 2.048 s segments of posterior bipolar electrode data. Amplitude-adjusted phase-randomized surrogate data were used to statistically determine which EEG epochs exhibited non-linear interdependence. Statistically significant evidence of non-linear interactions was evident in 2.9% (eyes open) to 4.8% (eyes closed) of the epochs. In the eyes-open recordings, these epochs exhibited a peak in the spectral and cross-spectral density functions at about 10 Hz. Two types of EEG epochs are evident in the eyes-closed recordings; one type exhibits a peak in the spectral density and cross-spectrum at 8 Hz. The other type has increased spectral and cross-spectral power across faster frequencies. Epochs identified as exhibiting non-linear interdependence display a tendency towards phase interdependencies across and between a broad range of frequencies. Non-linear interdependence is detectable in a small number of multichannel EEG epochs, and makes a contribution to the alpha rhythm. Non-linear interdependence produces spatially distributed activity that exhibits phase synchronization between oscillations present at different frequencies. The possible physiological significance of these findings is discussed with reference to the dynamical properties of neural systems and the role of synchronous activity in the neocortex.
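Phase-randomized surrogate data of the kind used here preserve a signal's power spectrum while destroying any non-linear structure. The sketch below generates such surrogates, plus a simplified amplitude-adjustment step that remaps the original sample values onto the surrogate's ranks; the full AAFT algorithm adds a Gaussianization step that is omitted here.

```python
import numpy as np

def phase_randomized_surrogate(x, rng=None):
    """Surrogate with the same power spectrum as x but randomised Fourier phases."""
    rng = rng or np.random.default_rng()
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, size=X.shape)
    phases[0] = 0.0                      # keep the DC term real (preserve the mean)
    if len(x) % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist term real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def amplitude_adjusted_surrogate(x, rng=None):
    """Remap the original amplitudes onto the ranks of a phase-randomised surrogate."""
    s = phase_randomized_surrogate(x, rng)
    return np.sort(x)[np.argsort(np.argsort(s))]
```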
Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Schwartz, C. S.
2017-12-01
Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of a gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeastern United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for a GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performance of the post-processing technique in a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real-time.
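At its core, BLR post-processing replaces point estimates of the regression coefficients with a Gaussian posterior over them. A minimal sketch under standard conjugate assumptions (known noise variance, isotropic Gaussian prior) follows; the wind-speed numbers are invented for illustration, and the actual GBLR system involves gridded interpolation not shown here.

```python
import numpy as np

def blr_posterior(X, y, noise_var=1.0, prior_var=10.0):
    """Posterior mean and covariance of regression weights, Gaussian prior N(0, prior_var*I)."""
    n_feat = X.shape[1]
    A = X.T @ X / noise_var + np.eye(n_feat) / prior_var
    cov = np.linalg.inv(A)
    mean = cov @ X.T @ y / noise_var
    return mean, cov

# correct a raw wind-speed forecast; design matrix columns = [1, raw forecast]
raw = np.array([5.2, 7.1, 9.8, 12.3, 6.4])     # invented model forecasts (m/s)
obs = np.array([4.8, 6.0, 8.9, 11.0, 5.9])     # invented observations (m/s)
X = np.column_stack([np.ones_like(raw), raw])
w_mean, w_cov = blr_posterior(X, obs)
corrected = X @ w_mean                          # bias-corrected forecasts
```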
NASA Astrophysics Data System (ADS)
Clenet, A.; Ravera, L.; Bertrand, B.; den Hartog, R.; Jackson, B.; van Leeuwen, B.-J.; van Loon, D.; Parot, Y.; Pointecouteau, E.; Sournac, A.
2014-11-01
IRAP is developing the readout electronics of the SPICA-SAFARI TES bolometer arrays. Based on the frequency domain multiplexing technique, the readout electronics provides the AC signals to voltage-bias the detectors, demodulates the data, and computes a feedback to linearize the detection chain. The feedback is computed with a specific technique, so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e. several μs) and with fast signals (i.e. frequency carriers of the order of 5 MHz). To optimize the power consumption we took advantage of the reduced science signal bandwidth to decouple the signal sampling frequency and the data processing rate. This technique allowed a reduction of the power consumption of the circuit by a factor of 10. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters. Indeed, to operate a TES array one has to properly define about 21000 parameters. We defined a set of procedures to automatically characterize these parameters and find out the optimal settings.
NASA Technical Reports Server (NTRS)
Houts, R. C.; Burlage, D. W.
1972-01-01
A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
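Designing a preset equalizer by linear programming can be posed as a minimax problem: choose filter taps h minimizing the largest deviation of the equalized impulse response from a delayed unit pulse. The sketch below sets this up for scipy.optimize.linprog with an invented channel; it illustrates the LP formulation generically rather than reproducing the paper's frequency-sampling designs.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import linprog

def lp_equalizer(channel, n_taps=15, delay=8):
    """Minimax FIR equalizer: min over h of max_k |(channel * h)[k] - delta[k - delay]|."""
    L = len(channel) + n_taps - 1
    C = toeplitz(np.r_[channel, np.zeros(n_taps - 1)],
                 np.r_[channel[0], np.zeros(n_taps - 1)])   # L x n_taps convolution matrix
    d = np.zeros(L)
    d[delay] = 1.0
    # variables [h, t]: minimise t subject to -t <= C h - d <= t
    cost = np.r_[np.zeros(n_taps), 1.0]
    A_ub = np.block([[C, -np.ones((L, 1))],
                     [-C, -np.ones((L, 1))]])
    b_ub = np.r_[d, -d]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n_taps + [(0, None)])
    return res.x[:n_taps]

h = lp_equalizer(channel=np.array([1.0, 0.5, 0.2]))   # invented channel impulse response
```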
Efficient dense blur map estimation for automatic 2D-to-3D conversion
NASA Astrophysics Data System (ADS)
Vosters, L. P. J.; de Haan, G.
2012-03-01
Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can be only reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization based approach to propagate focus to non-edge image portions, for single image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques, are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low latency, line scanning based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends to faces, our solution solves the most distracting errors. A subjective assessment by paired comparison on a set of challenging low-depth-of-field images shows that the proposed approach achieves equal 3D image quality as optimization based approaches, and that facial blur compensation results in a significant improvement.
A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
Bayesian integration and non-linear feedback control in a full-body motor task.
Stevenson, Ian H; Fernandes, Hugo L; Vilares, Iris; Wei, Kunlin; Körding, Konrad P
2009-12-01
A large number of experiments have asked to what degree human reaching movements can be understood as being close to optimal in a statistical sense. However, little is known about whether these principles are relevant for other classes of movements. Here we analyzed movement in a task that is similar to surfing or snowboarding. Human subjects stand on a force plate that measures their center of pressure. This center of pressure affects the acceleration of a cursor that is displayed in a noisy fashion (as a cloud of dots) on a projection screen while the subject is incentivized to keep the cursor close to a fixed position. We find that salient aspects of observed behavior are well-described by optimal control models where a Bayesian estimation model (Kalman filter) is combined with an optimal controller (either a Linear-Quadratic-Regulator or Bang-bang controller). We find evidence that subjects integrate information over time taking into account uncertainty. However, behavior in this continuous steering task appears to be a highly non-linear function of the visual feedback. While the nervous system appears to implement Bayes-like mechanisms for a full-body, dynamic task, it may additionally take into account the specific costs and constraints of the task.
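The estimator-plus-controller structure described can be condensed to a scalar toy model: a Kalman filter tracks the cursor from noisy observations, and a feedback law drives the estimate toward the target. The sketch below is a generic one-dimensional illustration with invented dynamics and noise parameters, not the authors' fitted model.

```python
import numpy as np

def kalman_control_step(x_est, P, z, u, a=1.0, b=0.1, q=0.01, r=1.0, k_gain=0.8):
    """One predict/update cycle of a scalar Kalman filter plus a proportional feedback law."""
    # predict through the dynamics x' = a x + b u with process noise variance q
    x_pred = a * x_est + b * u
    P_pred = a * P * a + q
    # update with the noisy measurement z (the cloud-of-dots centroid), noise variance r
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    # steer the estimate toward the target at 0 (LQR-like proportional law)
    u_next = -k_gain * x_new
    return x_new, P_new, u_next
```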
Muthukkumaran, A; Aravamudan, K
2017-12-15
Adsorption, a popular technique for removing azo dyes from aqueous streams, is influenced by several factors such as pH, initial dye concentration, temperature and adsorbent dosage. Any strategy that seeks to identify optimal conditions involving these factors should take into account both kinetic and equilibrium aspects, since they influence the rate and extent of removal by adsorption. Hence rigorous kinetics and accurate equilibrium models are required. In this work, the experimental investigations pertaining to adsorption of acid orange 10 dye (AO10) on activated carbon were carried out using a Central Composite Design (CCD) strategy. The significant factors that affected adsorption were identified to be solution temperature, solution pH, adsorbent dosage and initial solution concentration. Thermodynamic analysis showed the endothermic nature of the dye adsorption process. The kinetics of adsorption has been rigorously modeled using the Homogeneous Surface Diffusion Model (HSDM) after incorporating the non-linear Freundlich adsorption isotherm. Optimization was performed for the kinetic parameters (color removal time and surface diffusion coefficient) as well as the equilibrium-affected response, viz. percentage removal. Finally, the optimum conditions predicted were experimentally validated.
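The non-linear Freundlich isotherm mentioned above, q = K c^(1/n), can be fitted directly in its non-linear form rather than via the usual log-log linearization. A minimal sketch with invented concentration data follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c_eq, K, n):
    """Freundlich isotherm: adsorbed amount q = K * c_eq**(1/n)."""
    return K * c_eq ** (1.0 / n)

c_eq = np.array([5.0, 12.0, 25.0, 50.0, 90.0])   # equilibrium conc. (mg/L), illustrative
q_eq = np.array([8.1, 12.4, 17.0, 22.9, 28.6])   # adsorbed amount (mg/g), illustrative
(K, n), _ = curve_fit(freundlich, c_eq, q_eq, p0=[1.0, 2.0])
print(f"K = {K:.2f}, n = {n:.2f}")
```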
Computer-intensive simulation of solid-state NMR experiments using SIMPSON.
Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas
2014-09-01
Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scan, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations.
Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment
NASA Astrophysics Data System (ADS)
Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty
2017-12-01
Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can solve both classification and regression problems with linear or nonlinear kernels. However, SVM also has a weakness: it is difficult to determine the optimal parameter values. SVM calculates the best linear separator on the input feature space according to the training data. To classify data which are non-linearly separable, SVM uses kernel tricks to transform the data into linearly separable data in a higher-dimensional feature space. The kernel trick uses various kernel functions, such as linear, polynomial, radial basis function (RBF) and sigmoid. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, genetic algorithms are proposed as a search algorithm for the optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selected kernel parameters. The best accuracy was improved over the per-kernel baselines of 85.12% (linear), 81.76% (polynomial), 77.22% (RBF) and 78.70% (sigmoid). However, for bigger data sizes, this method is not practical because it takes a lot of time.
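The SVM-plus-GA loop is straightforward to prototype. The sketch below runs a toy (mu+lambda)-style genetic search over log-scaled C and gamma for an RBF-kernel SVC, scoring individuals by cross-validation on synthetic data; the population size, mutation scale and generation count are arbitrary, and the synthetic dataset stands in for the Australian Credit Approval data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def fitness(log_C, log_gamma):
    """Cross-validated accuracy of an RBF SVC at the given (log10 C, log10 gamma)."""
    clf = SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma, kernel='rbf')
    return cross_val_score(clf, X, y, cv=3).mean()

pop = rng.uniform(-3, 3, size=(10, 2))           # genes = [log10 C, log10 gamma]
for _ in range(15):                               # generations
    scores = np.array([fitness(*ind) for ind in pop])
    parents = pop[np.argsort(scores)[-5:]]        # selection: keep the best half
    children = parents + rng.normal(scale=0.3, size=parents.shape)  # Gaussian mutation
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(*ind) for ind in pop])]
print(f"best C = {10**best[0]:.3g}, best gamma = {10**best[1]:.3g}")
```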
Inference with minimal Gibbs free energy in information field theory.
Ensslin, Torsten A; Weig, Cornelius
2010-11-01
Non-linear and non-gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a gaussian signal with unknown spectrum, and (iii) inference of a poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-gaussian posterior.
Farrar, Christian T; Dai, Guangping; Novikov, Mikhail; Rosenzweig, Anthony; Weissleder, Ralph; Rosen, Bruce R; Sosnovik, David E
2008-06-01
Off-resonance imaging (ORI) techniques are being increasingly used to image iron oxide imaging agents such as monocrystalline iron oxide nanoparticles (MION). However, the diagnostic accuracy, linearity, and field dependence of ORI have not been fully characterized. In this study, the sensitivity, specificity, and linearity of ORI were thus examined as a function of both MION concentration and magnetic field strength (4.7 and 14 T). MION phantoms with and without an air interface as well as MION uptake in a mouse model of healing myocardial infarction were imaged. MION-induced resonance shifts were shown to increase linearly with MION concentration. In contrast, the ORI signal/sensitivity was highly non-linear, initially increasing with MION concentration until T2 became comparable to the TE and decreasing thereafter. The specificity of ORI to distinguish MION-induced resonance shifts from on-resonance water was found to decrease with increasing field because of the increased on-resonance water linewidths (15 Hz at 4.7 T versus 45 Hz at 14 T). Large resonance shifts (approximately 300 Hz) were observed at air interfaces at 4.7 T, both in vitro and in vivo, and led to poor ORI specificity for MION concentrations less than 150 μg Fe/mL. The in vivo ORI sensitivity was sufficient to detect the accumulation of MION in macrophages infiltrating healing myocardial infarcts, but the specificity was limited by non-specific areas of positive contrast at the air/tissue interfaces of the thoracic wall and the descending aorta. Improved specificity and linearity can, however, be expected at lower fields where decreased on-resonance water linewidths, reduced air-induced resonance shifts, and longer T2 relaxation times are observed. The optimal performance of ORI will thus likely be seen at low fields, with moderate MION concentrations and with sequences containing very short TEs.
NASA Astrophysics Data System (ADS)
Venkata, Santhosh Krishnan; Roy, Binoy Krishna
2016-03-01
Design of an intelligent flow measurement technique using venturi flow meter is reported in this paper. The objectives of the present work are: (1) to extend the linearity range of measurement to 100 % of full scale input range, (2) to make the measurement technique adaptive to variations in discharge coefficient, diameter ratio of venturi nozzle and pipe (β), liquid density, and liquid temperature, and (3) to achieve the objectives (1) and (2) using an optimized neural network. The output of venturi flow meter is differential pressure. It is converted to voltage by using a suitable data conversion unit. A suitable optimized artificial neural network (ANN) is added, in place of conventional calibration circuit. ANN is trained, tested with simulated data considering variations in discharge coefficient, diameter ratio between venturi nozzle and pipe, liquid density, and liquid temperature. The proposed technique is then subjected to practical data for validation. Results show that the proposed technique has fulfilled the objectives.
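Before any neural-network correction, a venturi meter's raw reading is converted to flow through the classical Bernoulli-based relation Q = Cd A_t sqrt(2 dp / (rho (1 - beta^4))). The sketch below implements only this conventional calculation (with an assumed discharge coefficient); the paper's contribution is to replace the fixed calibration with an optimized ANN, which is not reproduced here.

```python
import numpy as np

def venturi_flow(dp, d_throat, d_pipe, rho, cd=0.98):
    """Volumetric flow rate from differential pressure via the classical venturi equation.

    Q = Cd * A_throat * sqrt(2*dp / (rho * (1 - beta**4))),  beta = d_throat / d_pipe
    """
    beta = d_throat / d_pipe
    a_throat = np.pi * d_throat ** 2 / 4
    return cd * a_throat * np.sqrt(2 * dp / (rho * (1 - beta ** 4)))

# e.g. water (997 kg/m^3) at a 25 kPa differential pressure, illustrative geometry
print(venturi_flow(dp=25e3, d_throat=0.05, d_pipe=0.10, rho=997.0))  # m^3/s
```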
Application of higher-order cepstral techniques in problems of fetal heart signal extraction
NASA Astrophysics Data System (ADS)
Sabry-Rizk, Madiha; Zgallai, Walid; Hardiman, P.; O'Riordan, J.
1996-10-01
Recently, cepstral analysis based on second-order statistics and homomorphic filtering techniques has been used in the adaptive decomposition of overlapping, or otherwise, and noise-contaminated ECG complexes of mothers and fetuses obtained by transabdominal surface electrodes connected to a monitoring instrument, an interface card, and a PC. Differential time delays of fetal heart beats measured from a reference point located on the mother complex after transformation to cepstra domains are first obtained, and this is followed by fetal heart rate variability computations. Homomorphic filtering in the complex cepstral domain and the subsequent transformation to the time domain results in fetal complex recovery. However, three problems have been identified with second-order based cepstral techniques that needed rectification in this paper. These are (1) errors resulting from the phase unwrapping algorithms and leading to fetal complex perturbation, (2) the unavoidable conversion of noise statistics from Gaussianess to non-Gaussianess due to the highly non-linear nature of the homomorphic transform, which does warrant stringent noise cancellation routines, and (3) due to the aforementioned problems in (1) and (2), it is difficult to adaptively optimize windows to include all individual fetal complexes in the time domain based on amplitude thresholding routines in the complex cepstral domain (i.e. the task of `zooming' in on weak fetal complexes requires more processing time). The use of the third-order based high resolution differential cepstrum technique results in recovery of delays of the order of 120 milliseconds.
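The cepstra referred to above are obtained by taking an inverse Fourier transform of the log spectrum; the complex cepstrum retains phase and therefore requires the phase unwrapping that the paper identifies as a failure point. A minimal sketch of both quantities follows (a production complex cepstrum would also remove the linear phase trend before the inverse transform, which is omitted here).

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum (no phase needed)."""
    return np.fft.ifft(np.log(np.abs(np.fft.fft(x)) + 1e-12)).real

def complex_cepstrum(x):
    """Complex cepstrum: requires unwrapping the phase, a known source of errors."""
    X = np.fft.fft(x)
    log_X = np.log(np.abs(X) + 1e-12) + 1j * np.unwrap(np.angle(X))
    return np.fft.ifft(log_X).real
```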
Optimal feedback control infinite dimensional parabolic evolution systems: Approximation techniques
NASA Technical Reports Server (NTRS)
Banks, H. T.; Wang, C.
1989-01-01
A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regulator problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.
NASA Astrophysics Data System (ADS)
Pipkins, Daniel Scott
Two diverse topics of relevance in modern computational mechanics are treated. The first involves the modeling of linear and non-linear wave propagation in flexible, lattice structures. The technique used combines the Laplace Transform with the Finite Element Method (FEM). The procedure is to transform the governing differential equations and boundary conditions into the transform domain where the FEM formulation is carried out. For linear problems, the transformed differential equations can be solved exactly, hence the method is exact. As a result, each member of the lattice structure is modeled using only one element. In the non-linear problem, the method is no longer exact. The approximation introduced is a spatial discretization of the transformed non-linear terms. The non-linear terms are represented in the transform domain by making use of the complex convolution theorem. A weak formulation of the resulting transformed non-linear equations yields a set of element level matrix equations. The trial and test functions used in the weak formulation correspond to the exact solution of the linear part of the transformed governing differential equation. Numerical results are presented for both linear and non-linear systems. The linear systems modeled are longitudinal and torsional rods and Bernoulli-Euler and Timoshenko beams. For non-linear systems, a viscoelastic rod and Von Karman type beam are modeled. The second topic is the analysis of plates and shallow shells undergoing finite deflections by the Field/Boundary Element Method. Numerical results are presented for two plate problems. The first is the bifurcation problem associated with a square plate having free boundaries which is loaded by four, self equilibrating corner forces. The results are compared to two existing numerical solutions of the problem which differ substantially.
Optimal energy growth in a stably stratified shear flow
NASA Astrophysics Data System (ADS)
Jose, Sharath; Roy, Anubhab; Bale, Rahul; Iyer, Krithika; Govindarajan, Rama
2018-02-01
Transient growth of perturbations by a linear non-modal evolution is studied here in a stably stratified bounded Couette flow. The density stratification is linear. Classical inviscid stability theory states that a parallel shear flow is stable to exponentially growing disturbances if the Richardson number (Ri) is greater than 1/4 everywhere in the flow. Experiments and numerical simulations at higher Ri show however that algebraically growing disturbances can lead to transient amplification. The complexity of a stably stratified shear flow stems from its ability to combine this transient amplification with propagating internal gravity waves (IGWs). The optimal perturbations associated with maximum energy amplification are numerically obtained at intermediate Reynolds numbers. It is shown that in this wall-bounded flow, the three-dimensional optimal perturbations are oblique, unlike in unstratified flow. A partitioning of energy into kinetic and potential helps in understanding the exchange of energies and how it modifies the transient growth. We show that the apportionment between potential and kinetic energy depends, in an interesting manner, on the Richardson number, and on time, as the transient growth proceeds from an optimal perturbation. The oft-quoted stabilizing role of stratification is also probed in the non-diffusive limit in the context of disturbance energy amplification.
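For a linear non-modal analysis of this kind, the maximum energy growth at time t over all unit-energy initial conditions is the squared 2-norm of the matrix exponential, G(t) = ||exp(At)||^2 (assuming the energy is the standard Euclidean norm of the state). The sketch below evaluates this for an invented non-normal stable matrix, whose transient growth despite decaying eigenvalues mirrors the mechanism studied here.

```python
import numpy as np
from scipy.linalg import expm

def max_energy_growth(A, times):
    """G(t) = ||exp(A t)||_2^2: largest energy amplification over all
    unit-energy initial perturbations of the linear system x' = A x."""
    return [np.linalg.norm(expm(A * t), 2) ** 2 for t in times]

# non-normal stable matrix: both eigenvalues negative, yet transient growth occurs
A = np.array([[-0.1, 1.0],
              [ 0.0, -0.2]])
print(max_energy_growth(A, times=[0.0, 2.0, 5.0, 10.0]))
```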
Discriminative Learning of Receptive Fields from Responses to Non-Gaussian Stimulus Ensembles
Meyer, Arne F.; Diepenbrock, Jan-Philipp; Happel, Max F. K.; Ohl, Frank W.; Anemüller, Jörn
2014-01-01
Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design. PMID:24699631
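The CbRF idea, training a linear large-margin classifier on spike versus non-spike stimuli and reading the learned weights as the receptive field, can be mimicked on synthetic data. The sketch below simulates a linear-threshold neuron driven by a non-Gaussian (Laplacian) stimulus ensemble and recovers its filter with scikit-learn's LinearSVC; all sizes and thresholds are invented, and this is not the authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
true_rf = rng.normal(size=100)                    # hidden receptive-field filter
stimuli = rng.laplace(size=(5000, 100))           # non-Gaussian stimulus ensemble
spikes = (stimuli @ true_rf + rng.normal(size=5000) > 2.0).astype(int)

clf = LinearSVC(C=0.1, max_iter=5000).fit(stimuli, spikes)  # large-margin classifier
rf_estimate = clf.coef_.ravel()                   # classifier weights ~ receptive field

corr = np.corrcoef(rf_estimate, true_rf)[0, 1]
print(f"filter recovery correlation: {corr:.2f}")
```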
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying the variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, both to obtain a starting solution and to speed up the algorithm by making use of the information obtained from the solution of the expected value problem. We have devised a new decomposition scheme to improve the convergence of this algorithm.
Modelling Schumann resonances from ELF measurements using non-linear optimization methods
NASA Astrophysics Data System (ADS)
Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo
2017-04-01
Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements were obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station located in the national park of Sierra Nevada. The first three modes, contained in the frequency band between 6 and 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated by using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of non-linear unconstrained optimization methods applied to the estimation of the Schumann resonances will be presented. Non-linear fitting, also known as the optimization process, is the procedure followed to obtain the Schumann resonances from the natural electromagnetic noise. The optimization methods analysed are: Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The function fitted to the data by the different methods is three Lorentzian curves plus a straight line; Gaussian curves have also been considered. The conclusions of this study are the following: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the Gradient method converges least often and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE between the parameters that define the fit function, and an interval from 1% to 5% has been found.
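The fitting step can be sketched as follows; this is an illustrative Python reconstruction with a synthetic spectrum and made-up mode parameters, not the station's processing code. It fits three Lorentzian curves plus a straight line using the Levenberg-Marquardt method:

    import numpy as np
    from scipy.optimize import least_squares

    def model(p, f):
        # three Lorentzians plus a straight line, as in the fit described above
        y = p[9] + p[10] * f
        for k in range(3):
            A, f0, w = p[3 * k], p[3 * k + 1], p[3 * k + 2]
            y = y + A / (1.0 + ((f - f0) / w) ** 2)
        return y

    f = np.linspace(6.0, 25.0, 400)   # ELF band containing the first three modes
    p_true = np.array([1.0, 7.8, 1.0, 0.7, 14.1, 1.5, 0.5, 20.3, 2.0, 0.1, 0.002])
    rng = np.random.default_rng(1)
    spec = model(p_true, f) + 0.02 * rng.standard_normal(f.size)  # synthetic spectrum

    p0 = np.array([0.8, 8.0, 1.2, 0.8, 14.0, 1.2, 0.4, 20.0, 1.8, 0.0, 0.0])
    fit = least_squares(lambda p: model(p, f) - spec, p0, method="lm")  # Levenberg-Marquardt
    print("centre frequencies (Hz):", fit.x[[1, 4, 7]])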
Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly to observations through theoretical models, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
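A minimal sketch of the mixed linear-non-linear idea, under simplifying assumptions (a toy exponential model, a single non-linear parameter, fixed data weights; none of this is from the paper): the linear parameters are resolved analytically by least squares inside a Metropolis-type Monte Carlo loop over the non-linear parameter.

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(0, 10, 60)
    y_obs = 3.0 * np.exp(-x / 2.5) + 0.5 + 0.05 * rng.standard_normal(x.size)
    sigma = 0.05                                   # assumed known data noise

    def conditional_fit(tau):
        # linear parameters (amplitude, offset) solved analytically by least squares
        G = np.column_stack([np.exp(-x / tau), np.ones_like(x)])
        m, *_ = np.linalg.lstsq(G, y_obs, rcond=None)
        r = y_obs - G @ m
        return m, -0.5 * np.sum((r / sigma) ** 2)  # log-likelihood at the LS solution

    tau, (_, logp) = 1.0, conditional_fit(1.0)
    samples = []
    for _ in range(5000):                          # Metropolis sampling of the non-linear parameter
        tau_new = tau + 0.1 * rng.standard_normal()
        if tau_new > 0:                            # crude positivity prior
            m_new, logp_new = conditional_fit(tau_new)
            if np.log(rng.random()) < logp_new - logp:
                tau, logp = tau_new, logp_new
        samples.append(tau)
    print("posterior mean of tau:", np.mean(samples[1000:]))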
Optimization of freeform surfaces using intelligent deformation techniques for LED applications
NASA Astrophysics Data System (ADS)
Isaac, Annie Shalom; Neumann, Cornelius
2018-04-01
For many years, optical designers have shown great interest in designing efficient optimization algorithms to bring significant improvement to their initial designs. However, the optimization is limited by the large number of parameters present in Non-Uniform Rational B-Spline (NURBS) surfaces. This limitation was overcome by an indirect technique known as optimization using freeform deformation (FFD). In this approach, the optical surface is placed inside a cubical grid. The vertices of this grid are modified, which deforms the underlying optical surface during the optimization. One of the challenges in this technique is the selection of appropriate vertices of the cubical grid, because these vertices share no relationship with the optical performance. When irrelevant vertices are selected, the computational complexity increases. Moreover, the surfaces created by them are not always feasible to manufacture, a problem faced by any optimization technique that creates freeform surfaces. Therefore, this research addresses these two important issues and provides feasible design techniques to solve them. Finally, the proposed techniques are validated using two different illumination examples: a street lighting lens and a stop lamp for automobiles.
NASA Astrophysics Data System (ADS)
Hu, K. M.; Li, Hua
2018-07-01
A novel technique for the multi-parameter optimization of distributed piezoelectric actuators is presented in this paper. The proposed method is designed to improve the performance of multi-mode vibration control in cylindrical shells. The optimization parameters of the actuator patch configuration include position, size, and tilt angle. The modal control force of tilted orthotropic piezoelectric actuators is derived and the multi-parameter cylindrical shell optimization model is established. The linear quadratic energy index is employed as the optimization criterion. A geometric constraint is proposed to prevent overlap between tilted actuators, which is plugged into a genetic algorithm to search for the optimal configuration parameters. A simply-supported closed cylindrical shell with two actuators serves as a case study. The vibration control efficiencies of various parameter sets are evaluated via frequency response and transient response simulations. The results show that the linear quadratic energy indexes of position and size optimization decreased by 14.0% compared to position-only optimization; those of position and tilt angle optimization decreased by 16.8%; and those of position, size, and tilt angle optimization decreased by 25.9%. This indicates that adding configuration optimization parameters is an efficient approach to improving the vibration control performance of piezoelectric actuators on shells.
Remote detection of electronic devices
Judd, Stephen L [Los Alamos, NM; Fortgang, Clifford M [Los Alamos, NM; Guenther, David C [Los Alamos, NM
2012-09-25
An apparatus and method for detecting solid-state electronic devices are described. Non-linear junction detection techniques are combined with spread-spectrum encoding and cross correlation to increase the range and sensitivity of the non-linear junction detection and to permit the determination of the distances of the detected electronics. Nonlinear elements are detected by transmitting a signal at a chosen frequency and detecting higher harmonic signals that are returned from responding devices.
NASA Astrophysics Data System (ADS)
Moroni, Giovanni; Syam, Wahyudin P.; Petrò, Stefano
2014-08-01
Product quality is a main concern today in manufacturing; it drives competition between companies. To ensure high quality, a dimensional inspection to verify the geometric properties of a product must be carried out. High-speed non-contact scanners help with this task, by both speeding up acquisition and increasing accuracy through a more complete description of the surface. The algorithms for the management of the measurement data play a critical role in ensuring both the measurement accuracy and speed of the device. One of the most fundamental parts of the algorithm is the procedure for fitting the substitute geometry to a cloud of points. This article addresses this challenge. Three relevant geometries are selected as case studies: non-linear least-squares fitting of a circle, a sphere and a cylinder. These geometries are chosen in consideration of their common use in practice; for example, the sphere is often adopted as a reference artifact for performance verification of a coordinate measuring machine (CMM), and the cylinder is the most relevant geometry for a pin-hole relation as an assembly feature in a complete functioning product. In this article, an improvement of the initial point guess for the Levenberg-Marquardt (LM) algorithm by employing a chaos optimization (CO) method is proposed. This improves the performance of the optimization of the non-linear function fitting the three geometries. The results show that, with this combination, a higher quality of fit, i.e. a smaller norm of the residuals, can be obtained while preserving the computational cost. Fitting an ‘incomplete point cloud’, a situation where the point cloud does not cover the complete feature, e.g. only half of the total part surface, is also investigated. Finally, a case study of fitting a hemisphere is presented.
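A rough sketch of the proposed combination for the circle case, with an invented half-circle point cloud standing in for an incomplete scan: a logistic-map chaos search supplies the initial guess, which the Levenberg-Marquardt algorithm then refines. The parameter box and iteration counts are arbitrary illustrative choices.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(3)
    t = rng.uniform(0, np.pi, 200)       # half circle: an "incomplete point cloud"
    pts = np.column_stack([2.0 + 1.5 * np.cos(t), 1.0 + 1.5 * np.sin(t)])
    pts += 0.01 * rng.standard_normal(pts.shape)

    def residuals(p):
        cx, cy, r = p
        return np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r

    # chaos optimization: a logistic-map sequence explores the parameter box
    lo, hi = np.array([-5.0, -5.0, 0.1]), np.array([5.0, 5.0, 5.0])
    z = np.array([0.21, 0.47, 0.73])     # distinct seeds so components decouple
    best, best_cost = None, np.inf
    for _ in range(300):
        z = 4.0 * z * (1.0 - z)          # chaotic but deterministic values in [0, 1]
        cand = lo + z * (hi - lo)
        cost = np.sum(residuals(cand) ** 2)
        if cost < best_cost:
            best, best_cost = cand, cost

    fit = least_squares(residuals, best, method="lm")  # Levenberg-Marquardt refinement
    print("centre and radius:", fit.x)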
Progress Towards Improved Analysis of TES X-ray Data Using Principal Component Analysis
NASA Technical Reports Server (NTRS)
Busch, S. E.; Adams, J. S.; Bandler, S. R.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Fixsen, D. J.; Kelley, R. L.; Kilbourne, C. A.; Lee, S.-J.;
2015-01-01
The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs, which is based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used a 55Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than that achievable with digital optimal filters.
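The PCA step can be sketched as below, assuming a toy pulse shape and invented noise levels rather than real TES records: pulse records are centred and decomposed by SVD, and projections onto the leading components provide the descriptive parameters that are later combined into an energy estimate.

    import numpy as np

    rng = np.random.default_rng(4)
    t = np.arange(512)

    def pulse(E, t0):   # toy TES pulse: fast rise, slow decay, mildly non-linear gain
        s = np.clip(t - t0, 0, None)
        return (E - 0.05 * E**2) * (np.exp(-s / 80.0) - np.exp(-s / 8.0)) * (t >= t0)

    # synthetic records with two line energies, jittered arrival times, and noise
    records = np.array([pulse(rng.choice([1.0, 1.2]), rng.integers(20, 40))
                        + 0.01 * rng.standard_normal(t.size) for _ in range(1000)])

    records -= records.mean(axis=0)               # centre the pulse records
    U, S, Vt = np.linalg.svd(records, full_matrices=False)
    scores = records @ Vt[:3].T                   # projections onto 3 leading components
    print("variance captured by 3 components:", (S[:3] ** 2).sum() / (S ** 2).sum())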
Stanciu, Stefan G.; Xu, Shuoyu; Peng, Qiwen; Yan, Jie; Stanciu, George A.; Welsch, Roy E.; So, Peter T. C.; Csucs, Gabor; Yu, Hanry
2014-01-01
The accurate staging of liver fibrosis is of paramount importance to determine the state of disease progression, therapy responses, and to optimize disease treatment strategies. Non-linear optical microscopy techniques such as two-photon excitation fluorescence (TPEF) and second harmonic generation (SHG) can image the endogenous signals of tissue structures and can be used for fibrosis assessment on non-stained tissue samples. While image analysis of collagen in SHG images has been consistently addressed until now, cellular and tissue information included in TPEF images, such as inflammatory and hepatic cell damage, equally important as the collagen deposition imaged by SHG, remains poorly exploited to date. We address this situation by experimenting with liver fibrosis quantification and scoring using a combined approach based on TPEF liver surface imaging of a Thioacetamide-induced rat model and a gradient-based Bag-of-Features (BoF) image classification strategy. We report the assessed performance results and discuss the influence of specific BoF parameters on the performance of the fibrosis scoring framework. PMID:24717650
A controls engineering approach for analyzing airplane input-output characteristics
NASA Technical Reports Server (NTRS)
Arbuckle, P. Douglas
1991-01-01
An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.
Optimal fabrication processes for unidirectional metal-matrix composites: A computational simulation
NASA Technical Reports Server (NTRS)
Saravanos, D. A.; Murthy, P. L. N.; Morel, M.
1990-01-01
A method is proposed for optimizing the fabrication process of unidirectional metal matrix composites. The temperature and pressure histories are optimized such that the residual microstresses of the composite at the end of the fabrication process are minimized and the material integrity throughout the process is ensured. The response of the composite during the fabrication is simulated based on a nonlinear micromechanics theory. The optimal fabrication problem is formulated and solved with non-linear programming. Application cases regarding the optimization of the fabrication cool-down phases of unidirectional ultra-high modulus graphite/copper and silicon carbide/titanium composites are presented.
NASA Astrophysics Data System (ADS)
Bordovsky, Michal; Catrysse, Peter; Dods, Steven; Freitas, Marcio; Klein, Jackson; Kotacka, Libor; Tzolov, Velko; Uzunov, Ivan M.; Zhang, Jiazong
2004-05-01
We present the state of the art for commercial design and simulation software in the 'front end' of photonic circuit design. One recent advance is to extend the flexibility of the software by using more than one numerical technique on the same optical circuit. There are a number of popular and proven techniques for analysis of photonic devices. Examples of these techniques include the Beam Propagation Method (BPM), the Coupled Mode Theory (CMT), and the Finite Difference Time Domain (FDTD) method. For larger photonic circuits, it may not be practical to analyze the whole circuit by any one of these methods alone, but often some smaller part of the circuit lends itself to at least one of these standard techniques. Later the whole problem can be analyzed on a unified platform. This kind of approach can enable analysis for cases that would otherwise be cumbersome, or even impossible. We demonstrate solutions for more complex structures ranging from the sub-component layout, through the entire device characterization, to the mask layout and its editing. We also present recent advances in the above well established techniques. This includes the analysis of nano-particles, metals, and non-linear materials by FDTD, photonic crystal design and analysis, and improved models for high concentration Er/Yb co-doped glass waveguide amplifiers.
Integrated Sensing Processor, Phase 2
2005-12-01
performance analysis for several baseline classifiers including neural nets, linear classifiers, and kNN classifiers. Use of CCDR as a preprocessing step ... below the level of the benchmark non-linear classifier for this problem (kNN). Furthermore, the CCDR-preconditioned kNN achieved a 10% improvement over the benchmark kNN without CCDR. Finally, we found an important connection between intrinsic dimension estimation via entropic graphs and the optimal
Constrained optimization of image restoration filters
NASA Technical Reports Server (NTRS)
Riemer, T. E.; Mcgillem, C. D.
1973-01-01
A linear shift-invariant preprocessing technique is described which requires no specific knowledge of the image parameters and which is sufficiently general to allow the effective radius of the composite imaging system to be minimized while constraining other system parameters to remain within specified limits.
Geometry-based ensembles: toward a structural characterization of the classification boundary.
Pujol, Oriol; Masip, David
2009-06-01
This paper introduces a novel binary discriminative learning technique based on the approximation of the nonlinear decision boundary by a piecewise linear smooth additive model. The decision border is geometrically defined by means of the characterizing boundary points: points that belong to the optimal boundary under a certain notion of robustness. Based on these points, a set of locally robust linear classifiers is defined and assembled by means of a Tikhonov regularized optimization procedure in an additive model to create a final lambda-smooth decision rule. As a result, a very simple and robust classifier with a strong geometrical meaning and nonlinear behavior is obtained. The simplicity of the method allows its extension to cope with some of today's machine learning challenges, such as online learning, large-scale learning or parallelization, with linear computational complexity. We validate our approach on the UCI database, comparing with several state-of-the-art classification techniques. Finally, we apply our technique in online and large-scale scenarios and in six real-life computer vision and pattern recognition problems: gender recognition based on face images, intravascular ultrasound tissue classification, speed traffic sign detection, Chagas' disease myocardial damage severity detection, old musical scores clef classification, and action recognition using 3D accelerometer data from a wearable device. The results are promising and this paper opens a line of research that deserves further attention.
Influence of a Levelness Defect in a Thrust Bearing on the Dynamic Behaviour of AN Elastic Shaft
NASA Astrophysics Data System (ADS)
BERGER, S.; BONNEAU, O.; FRÊNE, J.
2002-01-01
This paper examines the non-linear dynamic behaviour of a flexible shaft. The shaft is mounted on two journal bearings and the axial load is supported by a defective hydrodynamic thrust bearing at one end. The defect is a levelness defect of the rotor. The thrust bearing behaviour must be considered to be non-linear because of the effects of the defect. The shaft is modelled with typical beam finite elements including gyroscopic effects. A modal technique is used to reduce the number of degrees of freedom. Results show that the thrust bearing defects introduce supplementary critical speeds, which are obtained only by using non-linear analysis; the linear approach is unable to reveal them.
Soft tissue modelling through autowaves for surgery simulation.
Zhong, Yongmin; Shirinzadeh, Bijan; Alici, Gursel; Smith, Julian
2006-09-01
Modelling of soft tissue deformation is of great importance to virtual reality based surgery simulation. This paper presents a new methodology for simulation of soft tissue deformation by drawing an analogy between autowaves and soft tissue deformation. The potential energy stored in a soft tissue as a result of a deformation caused by an external force is propagated among mass points of the soft tissue by non-linear autowaves. The novelty of the methodology is that (i) autowave techniques are established to describe the potential energy distribution of a deformation for extrapolating internal forces, and (ii) non-linear materials are modelled with non-linear autowaves rather than geometric non-linearity alone. Integration with a haptic device has been achieved to simulate soft tissue deformation with force feedback. The proposed methodology not only deals with large-range deformations, but also accommodates isotropic, anisotropic and inhomogeneous materials by simply changing diffusion coefficients.
NASA Astrophysics Data System (ADS)
Dar, Aasif Bashir; Jha, Rakesh Kumar
2017-03-01
Various dispersion compensation units are presented and evaluated in this paper. These dispersion compensation units include dispersion compensation fiber (DCF), DCF merged with fiber Bragg grating (FBG) (the joint technique), and linear, square root, and cube root chirped tanh apodized FBG. For the performance evaluation, a 10 Gb/s NRZ transmission system over a 100-km-long single-mode fiber is used. The three chirped FBGs are optimized individually to yield pulse width reduction percentages (PWRP) of 86.66, 79.96 and 62.42% for the linear, square root, and cube root cases, respectively. The DCF and the joint technique both provide a remarkable PWRP of 94.45 and 96.96%, respectively. The performance of the optimized linear chirped tanh apodized FBG and the DCF is compared for a long-haul transmission system on the basis of the quality factor of the received signal. For both systems, the maximum transmission distance is calculated such that the quality factor is ≥ 6 at the receiver; the results show that the performance of the FBG is comparable to that of the DCF, with the advantages of very low cost, small size and reduced nonlinear effects.
USDA-ARS?s Scientific Manuscript database
Parametric non-linear regression (PNR) techniques commonly are used to develop weed seedling emergence models. Such techniques, however, require statistical assumptions that are difficult to meet. To examine and overcome these limitations, we compared PNR with a nonparametric estimation technique. F...
TH-EF-BRB-04: 4π Dynamic Conformal Arc Therapy (DCAT) for SBRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiu, T; Long, T; Tian, Z.
2016-06-15
Purpose: To develop an efficient and effective trajectory optimization methodology for 4π dynamic conformal arc treatment (4π DCAT) with synchronized gantry and couch motion, and to investigate potential clinical benefits for stereotactic body radiation therapy (SBRT) to breast, lung, liver and spine tumors. Methods: The optimization framework for 4π DCAT inverse planning consists of two parts: 1) an integer programming algorithm and 2) a particle swarm optimization (PSO) algorithm. The integer programming is designed to find an optimal solution for the arc delivery trajectory with both couch and gantry rotation, while PSO minimizes a non-convex objective function based on the selected trajectory and dose-volume constraints. In this study, control point interaction is explicitly taken into account. The beam trajectory was modeled as a series of control points connected together to form a deliverable path. With linear treatment planning objectives, a mixed-integer program (MIP) was formulated; under mild assumptions, the MIP is tractable. Assigning monitor units to control points along the path can be integrated into the model and is done by PSO. The developed 4π DCAT inverse planning strategy was evaluated on SBRT cases and compared to clinically treated plans. Results: The resultant dose distribution of this technique was compared against a 3D conformal treatment plan generated by the Pinnacle treatment planning system for a lung SBRT patient case. Both plans used a similar number of MUs: 3038 for the 3D conformal plan and 2822 for 4π DCAT. The mean doses for most OARs were greatly reduced: by 32% (cord), 70% (esophagus), 2.8% (lung) and 42.4% (stomach). Conclusion: Initial results in this study show the proposed 4π DCAT treatment technique can achieve better OAR sparing and lower MUs, which indicates that the developed technique is promising for high dose SBRT to reduce the risk of secondary cancer.
Brown, A M
2001-06-01
The objective of the present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user-input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
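A Python analogue of the described spreadsheet workflow (illustrative only; SOLVER itself is an Excel add-in): scipy's curve_fit applies an iterative least-squares routine to a user-defined function y=f(x), here an arbitrary toy saturation curve.

    import numpy as np
    from scipy.optimize import curve_fit

    # user-defined function y = f(x); this dose-response form is an illustrative choice
    def f(x, top, ec50, hill):
        return top * x**hill / (ec50**hill + x**hill)

    x = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
    y = np.array([0.08, 0.22, 0.50, 0.78, 0.92, 0.98])

    popt, pcov = curve_fit(f, x, y, p0=[1.0, 1.0, 1.0])  # iterative least squares
    print("best-fit parameters:", popt)
    print("standard errors:", np.sqrt(np.diag(pcov)))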
Non-linear multi-objective model for planning water-energy modes of Novosibirsk Hydro Power Plant
NASA Astrophysics Data System (ADS)
Alsova, O. K.; Artamonova, A. V.
2018-05-01
This paper presents a non-linear multi-objective model for planning and optimizing the water-energy modes of the Novosibirsk Hydro Power Plant (HPP). Developing a strategy to improve the scheduling of water-power modes and ensure the effective operation of hydropower plants is an important problem. It is necessary to determine the methods and criteria for the optimal distribution of water resources, to develop a set of models and to apply them in the software implementation of a DSS (decision-support system) for managing the Novosibirsk HPP modes. One possible version of the model is presented and investigated in this paper. An experimental study of the model was carried out with 2017 data, solving the ten-day-period planning task from April to July (12 ten-day periods in total).
NASA Astrophysics Data System (ADS)
Serrat-Capdevila, A.; Valdes, J. B.
2005-12-01
An optimization approach for the operation of international multi-reservoir systems is presented. The approach uses Stochastic Dynamic Programming (SDP) algorithms, both steady-state and real-time, to develop two models. In the first model, the reservoirs and flows of the system are aggregated to yield an equivalent reservoir, and the obtained operating policies are disaggregated using a non-linear optimization procedure for each reservoir and for each nation's water balance. In the second model, a multi-reservoir approach is applied, disaggregating the releases for each country's water share in each reservoir. The non-linear disaggregation algorithm uses SDP-derived operating policies as boundary conditions for a local time-step optimization. Finally, the performance of the different approaches and methods is compared. These models are applied to the Amistad-Falcon International Reservoir System as part of a binational dynamic modeling effort to develop a decision support system tool for better management of the water resources in the Lower Rio Grande Basin, currently enduring a severe drought.
Kuldeep, B; Singh, V K; Kumar, A; Singh, G K
2015-01-01
In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient-based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of the QMF bank. The 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norms of the errors in the passband, stopband and transition band at the quadrature frequency. The contribution of this work is the novel hybrid combination of gradient-based optimization (the Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the effectiveness of the proposed method. Results are also compared with other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error, while it is comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of the various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique.
Non-linear eigensolver-based alternative to traditional SCF methods
NASA Astrophysics Data System (ADS)
Gavin, B.; Polizzi, E.
2013-05-01
The self-consistent procedure in electronic structure calculations is revisited using a highly efficient and robust algorithm for solving the non-linear eigenvector problem, i.e., H(ψ)ψ = Eψ. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm to account for the non-linearity of the Hamiltonian with respect to the occupied eigenvectors. Using a series of numerical examples and the density functional theory Kohn-Sham model, it will be shown that our approach can outperform the traditional SCF mixing-scheme techniques by providing a higher convergence rate, convergence to the correct solution regardless of the choice of the initial guess, and a significant reduction of the eigenvalue solve time in simulations.
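For contrast with the approach above, here is a minimal sketch of the traditional SCF fixed-point iteration with linear mixing that the non-linear eigensolver is said to outperform, using a toy Hamiltonian whose potential depends on the occupied density (all numbers illustrative, not from the paper):

    import numpy as np

    # toy non-linear eigenproblem H(psi) psi = E psi with a density-dependent potential
    n = 50
    T = np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

    def hamiltonian(rho):
        return T + np.diag(0.5 * rho)        # potential depends on the occupied density

    rho = np.zeros(n)
    for it in range(100):                    # plain SCF fixed-point iteration
        E, V = np.linalg.eigh(hamiltonian(rho))
        occ = V[:, :5]                       # occupy the 5 lowest eigenvectors
        rho_new = np.sum(occ**2, axis=1)
        if np.linalg.norm(rho_new - rho) < 1e-10:
            break
        rho = 0.5 * rho + 0.5 * rho_new      # linear mixing for stability
    print("converged after", it + 1, "iterations; lowest E =", E[0])

The mixing factor 0.5 is the kind of ad hoc choice whose failure modes (slow convergence, dependence on the initial guess) motivate the eigensolver-based alternative.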
Non-normal perturbation growth in idealised island and headland wakes
NASA Astrophysics Data System (ADS)
Aiken, C. M.; Moore, A. M.; Middleton, J. H.
2003-12-01
Generalised linear stability theory is used to calculate the linear perturbations that furnish the most rapid growth in energy in a model of a steady recirculating island wake. This optimal perturbation is found to be antisymmetric and to evolve into a von Kármán vortex street. Eigenanalysis of the linearised system reveals that the eigenmodes corresponding to vortex sheet formation are damped, so the growth of the perturbation is understood through the non-normality of the linearised system. Qualitatively similar perturbation growth is shown to occur in a non-linear model of stochastically-forced subcritical flow, resulting in transition to an unsteady wake. Free-stream variability with amplitude 8% of the mean inflow speed sustains vortex street structures in the non-linear model with perturbation velocities of the order of the inflow speed, suggesting that environmental stochastic forcing may similarly be capable of exciting growing disturbances in real island wakes. To support this, qualitatively similar perturbation growth is demonstrated in the straining wake of a realistic island obstacle. It is shown that for the case of an idealised headland, where the vortex street eigenmodes are lacking, vortex sheets are produced through a similar non-normal process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yarmand, H; Winey, B; Craft, D
2014-06-15
Purpose: To efficiently find quality-guaranteed treatment plans with the minimum number of beams for stereotactic body radiation therapy using RayStation. Methods: For a pre-specified pool of candidate beams, we use RayStation (a treatment planning software for clinical use) to identify the deliverable plan which uses all the beams, with the minimum dose to organs at risk (OARs) and doses to the tumor and other structures within specified ranges. We then use the dose matrix information for the generated apertures from RayStation to solve a linear program to find the ideal plan with the same objective and constraints, allowing use of all beams. Finally, we solve a mixed integer programming formulation of the beam angle optimization (BAO) problem with the objective of minimizing the number of beams while remaining within a predetermined epsilon-optimality of the ideal plan with respect to the dose to OARs. Since treatment plan optimization is a multicriteria optimization problem, the planner can exploit the multicriteria optimization capability of RayStation to navigate the ideal dose distribution Pareto surface and select a plan with the desired trade-off between target coverage and OAR sparing, and then use the proposed technique to reduce the number of beams while guaranteeing quality. For the numerical experiments, two liver cases and one lung case with 33 non-coplanar beams are considered. Results: The ideal plan uses an impractically large number of beams. The proposed technique reduces the number of beams to the range of practical application (5 to 9 beams) while remaining in the epsilon-optimal range of 1% to 5% optimality gap. Conclusion: The proposed method can be integrated into a general algorithm for fast navigation of the ideal dose distribution Pareto surface and finding the treatment plan with the minimum number of beams, which corresponds to the delivery time, within the epsilon-optimality range of the desired ideal plan. The project was supported by the Federal Share of program income earned by Massachusetts General Hospital on C06 CA059267, Proton Therapy Research and Treatment Center and partially by RaySearch Laboratories.
NASA Astrophysics Data System (ADS)
Petrov, Dimitar; Michielsen, Koen; Cockmartin, Lesley; Zhang, Gouzhi; Young, Kenneth; Marshall, Nicholas; Bosmans, Hilde
2016-03-01
Digital breast tomosynthesis (DBT) is a 3D mammography technique that promises better visualization of low contrast lesions than conventional 2D mammography. A wide range of parameters influence the diagnostic information in DBT images and a systematic means of DBT system optimization is needed. The gold standard for image quality assessment is to perform a human observer experiment with experienced readers. Using human observers for optimization is time consuming and not feasible for the large parameter space of DBT. Our goal was to develop a model observer (MO) that can predict human reading performance for standard detection tasks of target objects within a structured phantom and subsequently apply it in a first comparative study. The phantom consists of an acrylic semi-cylindrical container with acrylic spheres of different sizes and the remaining space filled with water. Three types of lesions were included: 3D printed spiculated and non-spiculated mass lesions along with calcification groups. The images of the two mass lesion types were reconstructed with 3 different reconstruction methods (FBP, FBP with SRSAR, MLTRpr) and read by human readers. A Channelized Hotelling model observer was created for the non-spiculated lesion detection task using five Laguerre-Gauss channels, tuned for better performance. For the non-spiculated mass lesions a linear relation between the MO and human observer results was found, with correlation coefficients of 0.956 for standard FBP, 0.998 for FBP with SRSAR and 0.940 for MLTRpr. Both the MO and human observer percentage correct results for the spiculated masses were close to 100%, and showed no difference from each other for every reconstruction algorithm.
Neighboring extremals of dynamic optimization problems with path equality constraints
NASA Technical Reports Server (NTRS)
Lee, A. Y.
1988-01-01
Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.
Linear and nonlinear stability of the Blasius boundary layer
NASA Technical Reports Server (NTRS)
Bertolotti, F. P.; Herbert, TH.; Spalart, P. R.
1992-01-01
Two new techniques for the study of the linear and nonlinear instability in growing boundary layers are presented. The first technique employs partial differential equations of parabolic type exploiting the slow change of the mean flow, disturbance velocity profiles, wavelengths, and growth rates in the streamwise direction. The second technique solves the Navier-Stokes equation for spatially evolving disturbances using buffer zones adjacent to the inflow and outflow boundaries. Results of both techniques are in excellent agreement. The linear and nonlinear development of Tollmien-Schlichting (TS) waves in the Blasius boundary layer is investigated with both techniques and with a local procedure based on a system of ordinary differential equations. The results are compared with previous work and the effects of non-parallelism and nonlinearity are clarified. The effect of non-parallelism is confirmed to be weak and, consequently, not responsible for the discrepancies between measurements and theoretical results for parallel flow.
Non linear predictive control of a LEGO mobile robot
NASA Astrophysics Data System (ADS)
Merabti, H.; Bouchemal, B.; Belarbi, K.; Boucherma, D.; Amouri, A.
2014-10-01
Metaheuristics are general-purpose heuristics which have shown great potential for the solution of difficult optimization problems. In this work, we apply a metaheuristic, namely particle swarm optimization (PSO), to the solution of the optimization problem arising in non-linear model predictive control (NLMPC). This algorithm is easy to code and may be considered an alternative to more classical solution procedures. The PSO-NLMPC is applied to control a mobile robot for trajectory tracking and obstacle avoidance. Experimental results show the strength of this approach.
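A bare-bones sketch of the PSO-for-NMPC idea, with an invented discrete-time plant, cost and tuning constants (inertia 0.7, acceleration coefficients 1.5); this is only the generic pattern of optimizing a control-move sequence by particle swarm, not the authors' controller:

    import numpy as np

    rng = np.random.default_rng(5)

    def cost(u):                       # toy NMPC cost over a control horizon (illustrative)
        x = np.array([1.0, 0.0])       # initial state
        J = 0.0
        for uk in u:                   # simple non-linear discrete-time plant
            x = np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * (np.sin(x[0]) + uk)])
            J += x @ x + 0.01 * uk**2
        return J

    n_p, horizon = 30, 10
    pos = rng.uniform(-1, 1, (n_p, horizon))          # particle positions = control moves
    vel = np.zeros_like(pos)
    pbest, pbest_c = pos.copy(), np.array([cost(u) for u in pos])
    gbest = pbest[np.argmin(pbest_c)]

    for _ in range(100):                              # standard PSO update loop
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -1, 1)               # respect actuator limits
        c = np.array([cost(u) for u in pos])
        better = c < pbest_c
        pbest[better], pbest_c[better] = pos[better], c[better]
        gbest = pbest[np.argmin(pbest_c)]
    print("best horizon cost:", pbest_c.min())

In a receding-horizon loop, only the first move of the best sequence would be applied before re-optimizing.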
Uplink Packet-Data Scheduling in DS-CDMA Systems
NASA Astrophysics Data System (ADS)
Choi, Young Woo; Kim, Seong-Lyun
In this letter, we consider the uplink packet scheduling for non-real-time data users in a DS-CDMA system. As an effort to jointly optimize throughput and fairness, we formulate a time-span minimization problem incorporating the time-multiplexing of different simultaneous transmission schemes. Based on simple rules, we propose efficient scheduling algorithms and compare them with the optimal solution obtained by linear programming.
Human motion planning based on recursive dynamics and optimal control techniques
NASA Technical Reports Server (NTRS)
Lo, Janzen; Huang, Gang; Metaxas, Dimitris
2002-01-01
This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.
NASA Astrophysics Data System (ADS)
Nair, S. P.; Righetti, R.
2015-05-01
Recent elastography techniques focus on imaging properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time relationships for tissues undergoing creep compression are non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), to provide non-linear LSE parameter estimate reliability. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
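The RoN idea, as described, lends itself to a short sketch; the creep model, noise level and repetition count below are illustrative assumptions, not the paper's protocol. A single fit supplies residual statistics, noise is resimulated, and the spread of the refitted time constants measures reliability.

    import numpy as np
    from scipy.optimize import curve_fit

    def creep(t, a, tau):                    # mono-exponential strain rise (toy model)
        return a * (1.0 - np.exp(-t / tau))

    rng = np.random.default_rng(6)
    t = np.linspace(0, 10, 100)
    data = creep(t, 1.0, 2.0) + 0.02 * rng.standard_normal(t.size)

    p, _ = curve_fit(creep, t, data, p0=[1.0, 1.0])   # fit the single realization
    resid_sd = np.std(data - creep(t, *p))            # noise level from the residuals

    taus = []
    for _ in range(200):                              # resimulate noise and refit
        fake = creep(t, *p) + resid_sd * rng.standard_normal(t.size)
        pk, _ = curve_fit(creep, t, fake, p0=p)
        taus.append(pk[1])
    print("tau estimate:", p[1], "+/- RoN spread:", np.std(taus))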
Identifing Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of area source pollutant strength is a relevant issue for the atmospheric environment. This characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed with the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem, whose objective function is given by the square difference between the measured pollutant concentration and that predicted by the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. The second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
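As a simplified stand-in for the regularized inversion described above (a zeroth-order Tikhonov operator and a fixed regularization parameter are assumed here, instead of the second-order maximum entropy term with L-curve selection), the source estimate reduces to one linear solve:

    import numpy as np

    rng = np.random.default_rng(7)
    m, n = 6, 20                       # 6 receptors, 20 area-source cells
    G = rng.random((m, n)) * 0.1       # source-receptor transition matrix (stand-in)
    s_true = np.zeros(n)
    s_true[8:11] = 5.0                 # hidden emitting cells
    c_obs = G @ s_true + 0.01 * rng.standard_normal(m)

    # Tikhonov-regularized inversion: minimize ||G s - c||^2 + lam * ||L s||^2
    L = np.eye(n)                      # zeroth-order operator for this sketch
    lam = 1e-2                         # in practice chosen e.g. by the L-curve
    s_est = np.linalg.solve(G.T @ G + lam * L.T @ L, G.T @ c_obs)
    print("estimated strongest cells:", np.argsort(s_est)[-3:])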
NASA Astrophysics Data System (ADS)
Wang, Liwei; Liu, Xinggao; Zhang, Zeyin
2017-02-01
An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization. PMID:27243005
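A minimal sketch of the core trick of solving steady-state equations for kinetic parameters, which turns them into a linear system, on an invented three-step mass-action chain rather than any network from the paper:

    import numpy as np

    # toy mass-action chain  S -> A -> B -> (degradation), with rates k1*S, k2*A, k3*B
    # steady state:  k1*S - k2*A = 0   and   k2*A - k3*B = 0
    S, A, B = 2.0, 1.0, 4.0             # measured steady-state concentrations
    k3 = 0.5                            # one rate fixed; the rest are solved linearly
    M = np.array([[S, -A], [0.0, A]])   # linear system in the unknowns (k1, k2)
    rhs = np.array([0.0, k3 * B])
    k1, k2 = np.linalg.solve(M, rhs)
    print("k1, k2 =", k1, k2, "(non-negative, as the paper's construction requires)")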
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, J; Chao, M
2016-06-15
Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections by a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancies, the enabling phenomenology in video compression and encoding techniques, are inherent in the dynamic properties of the diaphragm motion. These redundancies, together with the geometrical shape of the diaphragm boundary and the associated algebraic constraint, significantly reduced the search space of viable parabolic parameters, which can then be effectively optimized by a constrained linear regression approach on the subsequent projections. The innovative algebraic constraints stipulating the kinetic range of the motion, and the spatial constraint preventing any unphysical deviations, made it possible to obtain the optimal contour of the diaphragm with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired at a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. The automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with a standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with an SD of 0.64 mm over all enrolled patients. The submillimeter accuracy demonstrates the promise of the proposed constrained linear regression approach to track the diaphragm motion on rotational projection images. Conclusion: The new algorithm will provide a potential solution to rendering diaphragm motion and ultimately improving tumor motion management for radiation therapy of cancer patients.
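The constrained fit at the heart of the method can be sketched as follows, with invented pixel data and bound widths; because the parabola is linear in its parameters, bounding each parameter near its value from the previous projection yields a small constrained linear regression:

    import numpy as np
    from scipy.optimize import lsq_linear

    rng = np.random.default_rng(8)
    u = np.linspace(-50, 50, 120)                     # pixel coordinates along the boundary
    v = 0.004 * u**2 + 0.1 * u + 200 + rng.standard_normal(u.size)

    A = np.column_stack([u**2, u, np.ones_like(u)])   # parabola v = a*u^2 + b*u + c
    prev = np.array([0.004, 0.1, 200.0])              # parameters from the previous projection
    delta = np.array([0.002, 0.5, 5.0])               # kinetic range: limit frame-to-frame change
    res = lsq_linear(A, v, bounds=(prev - delta, prev + delta))
    print("fitted parabola parameters:", res.x)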
Using Log Linear Analysis for Categorical Family Variables.
ERIC Educational Resources Information Center
Moen, Phyllis
The Goodman technique of log linear analysis is ideal for family research, because it is designed for categorical (non-quantitative) variables. Variables are dichotomized (for example, married/divorced, childless/with children) or otherwise categorized (for example, level of permissiveness, life cycle stage). Contingency tables are then…
Revisiting Isotherm Analyses Using R: Comparison of Linear, Non-linear, and Bayesian Techniques
Extensive adsorption isotherm data exist for an array of chemicals of concern on a variety of engineered and natural sorbents. Several isotherm models exist that can accurately describe these data from which the resultant fitting parameters may subsequently be used in numerical ...
NASA Technical Reports Server (NTRS)
Seldner, K.
1976-01-01
The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multi-variable optimal controls research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed, and selected results from high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer, which will control the engine simulation over the desired flight envelope.
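The LQR design step referred to above reduces, for a given linear model (A, B), to solving a continuous algebraic Riccati equation. The sketch below uses a generic illustrative 2-state model, not the F100 engine model:

```python
# Sketch: compute the optimal state-feedback gain K for a linear model (A, B)
# by solving the continuous algebraic Riccati equation.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])        # illustrative reduced-order linear model
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])            # state weighting
R = np.array([[0.1]])               # control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # u = -K x minimizes the quadratic cost
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```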
Observation Impacts for Longer Forecast Lead-Times
NASA Astrophysics Data System (ADS)
Mahajan, R.; Gelaro, R.; Todling, R.
2013-12-01
Observation impacts on forecasts evaluated using adjoint-based techniques (e.g., Langland and Baker, 2004) are limited by the validity of the assumptions underlying the forecasting model adjoint. Most applications of this approach have focused on deriving observation impacts on short-range forecasts (e.g., 24-hour) in part to stay well within linearization assumptions. The most widely used measure of observation impact relies on the availability of the analysis for verifying the forecasts. As pointed out by Gelaro et al. (2007), and more recently by Todling (2013), this introduces undesirable correlations in the measure that are likely to affect the resulting assessment of the observing system. Stappers and Barkmeijer (2012) introduced a technique that, in principle, allows extending the validity of tangent linear and corresponding adjoint models to longer lead-times, thereby reducing the correlations in the measures used for observation impact assessments. The methodology provides the means to better represent linearized models by making use of Gaussian quadrature relations to handle various underlying non-linear model trajectories. The formulation is exact for particular bi-linear dynamics; it corresponds to an approximation for general-type nonlinearities and must be tested for large atmospheric models. The present work investigates the approach of Stappers and Barkmeijer (2012) in the context of NASA's Goddard Earth Observing System Version 5 (GEOS-5) atmospheric data assimilation system (ADAS). The goal is to calculate observation impacts in the GEOS-5 ADAS for forecast lead-times of at least 48 hours in order to reduce the potential for undesirable correlations that occur at shorter forecast lead-times. References: [1] Langland, R. H., and N. L. Baker, 2004: Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus, 56A, 189-201. [2] Gelaro, R., Y. Zhu, and R. M. Errico, 2007: Examination of various-order adjoint-based approximations of observation impact. Meteorologische Zeitschrift, 16, 685-692. [3] Stappers, R. J. J., and J. Barkmeijer, 2012: Optimal linearization trajectories for tangent linear models. Q. J. R. Meteorol. Soc., 138, 170-184. [4] Todling, R., 2013: Comparing two approaches for assessing observation impact. Mon. Wea. Rev., 141, 1484-1505.
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods on three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for the problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms the other sampling methods, especially when high-order ODE designs are employed and/or the oversampling ratio is low.
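The basic regression step common to all the sampling strategies reviewed can be sketched in a few lines. The sketch below is a one-dimensional, unweighted toy case with plain Monte Carlo sampling; the model function, order, and sample count are arbitrary choices for illustration:

```python
# Sketch: least-squares PCE of a model of a uniform input, using a Legendre
# basis; the reviewed sampling strategies differ mainly in how x is drawn.
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(2)
model = lambda x: np.exp(x) * np.sin(3.0 * x)   # illustrative quantity of interest

p = 8                        # total polynomial order
n = 40                       # number of samples (oversampling ratio n/(p+1))
x = rng.uniform(-1.0, 1.0, n)
V = legvander(x, p)          # n x (p+1) design matrix of Legendre polynomials

coef, *_ = np.linalg.lstsq(V, model(x), rcond=None)

# validate the surrogate on fresh points
xt = np.linspace(-1.0, 1.0, 5)
print("max validation error:", np.max(np.abs(legvander(xt, p) @ coef - model(xt))))
```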
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1985-01-01
In the optimal linear quadratic regulator problem for finite dimensional systems, the method known as an alpha-shift can be used to produce a closed-loop system whose spectrum lies to the left of some specified vertical line; that is, a closed-loop system with a prescribed degree of stability. This paper treats the extension of the alpha-shift to hereditary systems. In infinite dimensions, the shift can be accomplished by adding alpha times the identity to the open-loop semigroup generator and then solving an optimal regulator problem. However, this approach does not work with a new approximation scheme for hereditary control problems recently developed by Kappel and Salamon. Since this scheme is among the best to date for the numerical solution of the linear regulator problem for hereditary systems, an alternative method for shifting the closed-loop spectrum is needed. An alpha-shift technique that can be used with the Kappel-Salamon approximation scheme is developed. Both the continuous-time and discrete-time problems are considered. A numerical example which demonstrates the feasibility of the method is included.
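For the finite-dimensional case described in the opening sentences, the alpha-shift amounts to solving the regulator problem for the shifted generator A + alpha*I; the resulting gain places the closed-loop spectrum of the original system to the left of -alpha. A minimal sketch with an illustrative system:

```python
# Sketch: finite-dimensional alpha-shift. Solving the LQR problem for
# (A + alpha*I, B) yields a gain K such that Re(eig(A - B K)) < -alpha.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [1.0, -0.2]])          # illustrative open-loop unstable system
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
alpha = 1.5                          # prescribed degree of stability

P = solve_continuous_are(A + alpha * np.eye(2), B, Q, R)
K = np.linalg.solve(R, B.T @ P)

eigs = np.linalg.eigvals(A - B @ K)
print(eigs)                          # all real parts < -alpha
assert np.all(eigs.real < -alpha)
```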
Log-linear model based behavior selection method for artificial fish swarm algorithm.
Huang, Zhehuang; Chen, Yidong
2015-01-01
Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fish. In the past several years, AFSA has been successfully applied in many research and application areas. The behaviors of the fish have a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select these behaviors is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fish. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
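The log-linear selection step can be sketched directly: each behavior receives a score that is linear in a set of state features, and the softmax of the scores gives the selection probabilities. The features and weights below are illustrative placeholders, not values from the paper:

```python
# Sketch: log-linear (softmax) behavior selection for a swarm agent.
import numpy as np

behaviors = ["prey", "swarm", "follow", "random"]
features = np.array([0.7, 0.2, 0.5])          # e.g. fitness gain, crowding, diversity
weights = np.array([[1.2, -0.5, 0.1],         # one weight row per behavior (made up)
                    [0.3,  0.8, -0.2],
                    [0.6,  0.1,  0.4],
                    [0.0,  0.0,  1.0]])

scores = weights @ features                    # log-linear model
probs = np.exp(scores - scores.max())
probs /= probs.sum()                           # softmax selection probabilities

rng = np.random.default_rng(3)
choice = rng.choice(behaviors, p=probs)
print(dict(zip(behaviors, np.round(probs, 3))), "->", choice)
```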
YORP torques with 1D thermal model
NASA Astrophysics Data System (ADS)
Breiter, S.; Bartczak, P.; Czekaj, M.
2010-11-01
A numerical model of the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect for objects defined in terms of a triangular mesh is described. The algorithm requires that each surface triangle can be handled independently, which implies the use of a 1D thermal model. Insolation of each triangle is determined by an optimized ray-triangle intersection search. Surface temperature is modelled with a spectral approach; imposing a quasi-periodic solution, we replace the heat conduction equation by the Helmholtz equation. Non-linear boundary conditions are handled by an iterative, fast-Fourier-transform-based solver. The results resolve the question of the independence of the rotation-rate YORP effect from conductivity within the non-linear 1D thermal model, regardless of accuracy issues and homogeneity assumptions. A seasonal YORP effect in attitude is revealed for objects moving on elliptic orbits when a non-linear thermal model is used.
On Time Delay Margin Estimation for Adaptive Control and Optimal Control Modification
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2011-01-01
This paper presents methods for estimating the time delay margin for adaptive control of input delay systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent an adaptive law by a locally bounded linear approximation within a small time window. The time delay margin of this input delay system represents a local stability measure and is computed analytically by three methods: Pade approximation, the Lyapunov-Krasovskii method, and the matrix measure method. These methods are applied to the standard model-reference adaptive control, the s-modification adaptive law, and the optimal control modification adaptive law. The windowing analysis results in non-unique estimates of the time delay margin, since it depends on the length of the time window and on parameters which vary from one time window to the next. The optimal control modification adaptive law overcomes this limitation in that, as the adaptive gain tends to infinity and if the matched uncertainty is linear, the closed-loop input delay system tends to an LTI system. A lower bound on the time delay margin of this system can then be estimated uniquely without the need for the windowing analysis. Simulation results demonstrate the feasibility of the bounded linear stability method for time delay margin estimation.
Guidance of Nonlinear Nonminimum-Phase Dynamic Systems
NASA Technical Reports Server (NTRS)
Devasia, Santosh
1996-01-01
The research work has advanced the inversion-based guidance theory for: systems with non-hyperbolic internal dynamics; systems with parameter jumps; and systems where a redesign of the output trajectory is desired. A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics was developed. This approach integrated stable inversion techniques, that achieve exact-tracking, with approximation techniques, that modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics was used (a) to remove non-hyperbolicity which is an obstruction to applying stable inversion techniques and (b) to reduce large preactuation times needed to apply stable inversion for near non-hyperbolic cases. The method was applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics for illustrating the trade-off between exact tracking and reduction of preactuation time. Future work will extend these results to guidance of nonlinear non-hyperbolic systems. The exact output tracking problem for systems with parameter jumps was considered. Necessary and sufficient conditions were derived for the elimination of switching-introduced output transient. While previous works had studied this problem by developing a regulator that maintains exact tracking through parameter jumps (switches), such techniques are, however, only applicable to minimum-phase systems. In contrast, our approach is also applicable to nonminimum-phase systems and leads to bounded but possibly non-causal solutions. In addition, for the case when the reference trajectories are generated by an exosystem, we developed an exact-tracking controller which could be written in a feedback form. As in standard regulator theory, we also obtained a linear map from the states of the exosystem to the desired system state, which was defined via a matrix differential equation.
Linear and Non-linear Information Flows In Rainfall Field
NASA Astrophysics Data System (ADS)
Molini, A.; La Barbera, P.; Lanza, L. G.
The rainfall process is the result of a complex framework of non-linear dynamical interactions between the different components of the atmosphere. It preserves the complexity and the intermittent features of the generating system in space and time, as well as the strong dependence of these properties on the scale of observation. The understanding and quantification of how the non-linearity of the generating process comes to influence single rain events constitute relevant research issues in hydro-meteorology, especially in those applications where timely and effective forecasting of heavy rain events can reduce the risk of failure. This work focuses on the characterization of the non-linear properties of the observed rain process and on the influence of these features on hydrological models. Among the goals of such a survey are the search for regular structures in the rainfall phenomenon and the study of the information flows within the rain field. The research focuses on three basic evolution directions for the system: in time, in space, and between the different scales. In fact, the information flows that force the system to evolve represent, in general, a connection between different locations in space, different instants in time and, unless the hypothesis of scale invariance is verified a priori, the different characteristic scales. A first phase of the analysis is carried out by means of classic statistical methods; then a survey of the information flows within the field is developed by means of techniques borrowed from Information Theory; and finally an analysis of the rain signal in the time and frequency domains is performed, with particular reference to its intermittent structure. The methods adopted in this last part of the work are both the classic techniques of statistical inference and a few procedures for the detection of non-linear and non-stationary features within the process starting from measured data.
An Application to the Prediction of LOD Change Based on General Regression Neural Network
NASA Astrophysics Data System (ADS)
Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.
2011-07-01
Traditional prediction of the LOD (length of day) change was based on linear models, such as the least-squares model and the autoregressive technique. Due to the complex non-linear features of the LOD variation, the performance of linear model predictors is not fully satisfactory. This paper applies a non-linear neural network, the general regression neural network (GRNN) model, to forecast the LOD change, and the results are analyzed and compared with those obtained with the back propagation neural network and other models. The comparison shows that the performance of the GRNN model in the prediction of the LOD change is efficient and feasible.
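A GRNN is, at its core, Nadaraya-Watson kernel regression, so the predictor can be sketched compactly. The toy series and the autoregressive embedding below merely stand in for the LOD data and are not from the paper:

```python
# Sketch: GRNN prediction as a Gaussian-kernel-weighted average of training
# targets (Nadaraya-Watson form); sigma is the single smoothing parameter.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.05):
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

t = np.linspace(0.0, 1.0, 203)
lod = 2.0 + 0.3 * np.sin(8 * np.pi * t) + 0.05 * np.sin(40 * np.pi * t)  # toy series

# autoregressive embedding: predict lod[k] from the two previous values
X = np.column_stack([lod[:-2], lod[1:-1]])
y = lod[2:]
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

pred = grnn_predict(X_tr, y_tr, X_te)
print("max abs prediction error:", np.max(np.abs(pred - y_te)))
```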
Jan, Show-Li; Shieh, Gwowen
2016-08-31
The 2 × 2 factorial design is widely used for assessing the existence of an interaction and the extent of generalizability of two factors where each factor has only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with selected linear contrasts. To correct for potential heterogeneity of the variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization techniques and a screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. The suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
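The closed-form rule quoted above is easy to state in code. The sketch below simplifies to two groups with made-up standard deviations and costs; the paper's screening search would instead compare candidate allocations exhaustively under the power requirement:

```python
# Sketch of the "existing method" allocation rule: n1/n2 equals the ratio of
# standard deviations divided by the square root of the cost ratio.
import math

sigma = [4.0, 2.0]     # population standard deviations (illustrative)
cost = [1.0, 4.0]      # unit sampling costs (illustrative)
budget = 200.0

ratio = (sigma[0] / sigma[1]) / math.sqrt(cost[0] / cost[1])   # n1 / n2
n2 = budget / (cost[0] * ratio + cost[1])                      # spend the budget
n1 = ratio * n2
print(f"n1/n2 = {ratio:.2f}, n1 = {n1:.1f}, n2 = {n2:.1f}")
```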
Optimal helicopter trajectory planning for terrain following flight
NASA Technical Reports Server (NTRS)
Menon, P. K. A.
1990-01-01
Helicopters operating in high-threat areas have to fly close to the earth's surface to minimize the risk of being detected by adversaries. Techniques are presented for low-altitude helicopter trajectory planning. These methods are based on optimal control theory and appear to be implementable onboard in real time. Second-order necessary conditions are obtained to provide a criterion for finding the optimal trajectory when more than one extremal passes through a given point. A second trajectory planning method incorporating a quadratic performance index is also discussed. The trajectory planning problem is then formulated as a differential game, with the objective of synthesizing optimal trajectories in the presence of an actively maneuvering adversary. Numerical methods for obtaining solutions to these problems are outlined. As an alternative to numerical methods, feedback linearizing transformations are combined with linear quadratic game results to synthesize explicit nonlinear feedback strategies for helicopter pursuit-evasion. Some of the trajectories generated from this research are evaluated on a six-degree-of-freedom helicopter simulation incorporating an advanced autopilot. The optimal trajectory planning methods presented are also useful for autonomous land vehicle guidance.
Linear-Quadratic-Gaussian Regulator Developed for a Magnetic Bearing
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.
2002-01-01
Linear-Quadratic-Gaussian (LQG) control is a modern state-space technique for designing optimal dynamic regulators. It enables us to trade off regulation performance and control effort, and to take into account process and measurement noise. The Structural Mechanics and Dynamics Branch at the NASA Glenn Research Center has developed an LQG control for a fault-tolerant magnetic bearing suspension rig to optimize system performance and to reduce the sensor and processing noise. The LQG regulator consists of an optimal state-feedback gain and a Kalman state estimator. The first design step is to seek a state-feedback law that minimizes the cost function of regulation performance, which is measured by a quadratic performance criterion with user-specified weighting matrices, and to define the tradeoff between regulation performance and control effort. The next design step is to derive a state estimator using a Kalman filter because the optimal state feedback cannot be implemented without full state measurement. Since the Kalman filter is an optimal estimator when dealing with Gaussian white noise, it minimizes the asymptotic covariance of the estimation error.
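Both design steps reduce to algebraic Riccati equations, the estimator being the dual of the regulator. The sketch below uses an illustrative unstable 2-state plant and made-up noise intensities, not the Glenn rig model:

```python
# Sketch: the two LQG design steps -- optimal state-feedback gain K from one
# Riccati equation, Kalman estimator gain L from its dual.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [50.0, -0.1]])   # illustrative open-loop unstable plant
B = np.array([[0.0], [10.0]])
C = np.array([[1.0, 0.0]])                 # position measurement only

# 1) optimal state feedback u = -K x
Q, R = np.diag([100.0, 1.0]), np.array([[0.01]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# 2) Kalman estimator gain L from the dual Riccati equation
W = np.diag([0.1, 0.1])                    # process noise intensity (made up)
V = np.array([[0.01]])                     # measurement noise intensity (made up)
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)

print("regulator poles:", np.linalg.eigvals(A - B @ K))
print("estimator poles:", np.linalg.eigvals(A - L @ C))
```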
Technical note: Combining quantile forecasts and predictive distributions of streamflows
NASA Astrophysics Data System (ADS)
Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano
2017-11-01
The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the issue of how to optimally combine this wealth of information. In particular, the use of deterministic and probabilistic forecasts with sometimes widely divergent predicted future streamflow values makes it even more complicated for decision makers to sift out the relevant information. In this study, multiple sources of streamflow forecast information are aggregated based on several different predictive distributions and quantile forecasts. For this combination, the Bayesian model averaging (BMA) approach, the non-homogeneous Gaussian regression (NGR, also known as the ensemble model output statistics, EMOS, technique), and a novel method called beta-transformed linear pooling (BLP) are applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland, with about 5 years of forecast data, are compared, and the differences between the raw and optimally combined forecasts are highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.
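Two of the ingredients are easy to sketch: the quantile score (pinball loss) used for verification, and the plain linear pool of predictive CDFs that the beta-transformed linear pool generalizes. All numbers below are illustrative:

```python
# Sketch: pinball loss for a quantile forecast, and a traditional linear pool
# of two predictive CDFs evaluated on a grid.
import numpy as np
from scipy.stats import norm

def quantile_score(q_forecast, obs, tau):
    # pinball loss, averaged over observations
    u = obs - q_forecast
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

obs = np.array([12.0, 30.0, 18.0])          # observed streamflow (illustrative)
q90 = np.array([25.0, 28.0, 22.0])          # 90%-quantile forecasts (illustrative)
print("QS(0.9):", quantile_score(q90, obs, 0.9))

# linear pool of two predictive CDFs F1, F2 with weights w
x = np.linspace(0.0, 60.0, 601)
F1, F2 = norm.cdf(x, 20.0, 5.0), norm.cdf(x, 28.0, 8.0)
w = np.array([0.6, 0.4])
F_pool = w[0] * F1 + w[1] * F2              # combined predictive distribution
print("pooled median:", x[np.searchsorted(F_pool, 0.5)])
```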
Optimal Energy Measurement in Nonlinear Systems: An Application of Differential Geometry
NASA Technical Reports Server (NTRS)
Fixsen, Dale J.; Moseley, S. H.; Gerrits, T.; Lita, A.; Nam, S. W.
2014-01-01
Design of TES microcalorimeters requires a tradeoff between resolution and dynamic range. Often, experimenters will require linearity for the highest energy signals, which requires additional heat capacity be added to the detector. This results in a reduction of low energy resolution in the detector. We derive and demonstrate an algorithm that allows operation far into the nonlinear regime with little loss in spectral resolution. We use a least squares optimal filter that varies with photon energy to accommodate the nonlinearity of the detector and the non-stationarity of the noise. The fitting process we use can be seen as an application of differential geometry. This recognition provides a set of well-developed tools to extend our work to more complex situations. The proper calibration of a nonlinear microcalorimeter requires a source with densely spaced narrow lines. A pulsed laser multi-photon source is used here, and is seen to be a powerful tool for allowing us to develop practical systems with significant detector nonlinearity. The combination of our analysis techniques and the multi-photon laser source create a powerful tool for increasing the performance of future TES microcalorimeters.
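The core of the least-squares optimal filter is a noise-weighted amplitude estimate. The sketch below simplifies the noise to uncorrelated but non-stationary (a diagonal covariance), and uses a made-up pulse template; in the actual scheme the template itself would vary with the estimated photon energy:

```python
# Sketch: generalized-least-squares amplitude estimate with a diagonal
# (non-stationary) noise covariance N.
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(512)
s = np.exp(-t / 60.0) - np.exp(-t / 10.0)        # illustrative pulse template
sigma2 = 0.02 + 0.08 * (s / s.max())             # noise grows with signal level
d = 3.0 * s + rng.normal(0.0, np.sqrt(sigma2))   # data: pulse of amplitude 3

Ninv = 1.0 / sigma2                               # diagonal inverse covariance
amp = (s * Ninv) @ d / ((s * Ninv) @ s)           # weighted least-squares estimate
var = 1.0 / ((s * Ninv) @ s)                      # estimator variance
print(f"amplitude = {amp:.3f} +/- {np.sqrt(var):.3f}")
```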
DNS of Supersonic Turbulent Flows in a DLR Scramjet Intake
NASA Astrophysics Data System (ADS)
Li, Xinliang; Yu, Changping
2014-11-01
Direct numerical simulation (DNS) of supersonic/hypersonic flow through the DLR scramjet intake GK01 is performed. The free stream Mach numbers are 3, 5 and 7, and the angle of attack is zero degrees. The DNS cases are performed using an optimized MP scheme with adaptive dissipation (OMP-AD) developed by the authors, and blow-and-suction perturbations near the leading edge are used to trigger the transition. To stabilize the simulation, locally non-linear filtering is used in the high-Mach-number case. The transition, separation, and shock-turbulent boundary layer interaction are studied using both flow visualization and statistical analysis. The OMP-AD method itself is also addressed in this paper. The OMP-AD scheme is developed by combining the MP method with optimization techniques, and its coefficients are flexible, giving low dissipation in smooth regions and high robustness (but high dissipation) in large-gradient regions. Numerical tests show that OMP-AD is more robust than the original MP schemes, and its numerical dissipation is very low.
NASA Astrophysics Data System (ADS)
Giaccu, Gian Felice
2018-05-01
Pre-tensioned cable braces are widely used as bracing systems in various structural typologies. This technology is fundamentally utilized for stiffening purposes in steel and timber structures. The pre-stressing force imparted to the braces provides the system with a considerable increase in stiffness. On the other hand, the pre-tensioning force in the braces must be properly calibrated in order to satisfactorily meet both serviceability and ultimate limit states. The dynamic properties of these systems are, however, affected by non-linear behavior due to potential slackening of the pre-tensioned braces. In recent years the author has been working on a similar problem regarding the non-linear response of cables in cable-stayed bridges and braced structures. In the present paper a displacement-based approach is used to examine the non-linear behavior of a building system. The methodology operates through linearization and yields an equivalent linearized frequency that approximately characterizes, mode by mode, the dynamic behavior of the system. The equivalent frequency depends on the mechanical characteristics of the system, the pre-tensioning level assigned to the braces, and a characteristic vibration amplitude. The proposed approach can be used as a simplified technique capable of linearizing the response of structural systems whose non-linearity is induced by the slackening of pre-tensioned braces.
Non-Markovian optimal sideband cooling
NASA Astrophysics Data System (ADS)
Triana, Johan F.; Pachon, Leonardo A.
2018-04-01
Optimal control theory is applied to sideband cooling of nano-mechanical resonators. The formulation described here makes use of exact results derived by means of the path-integral approach to quantum dynamics, so that no approximation is invoked. It is demonstrated that the intricate interplay between time-dependent fields and a structured thermal bath may improve the results of sideband cooling by an order of magnitude. Cooling is quantified by means of the mean number of phonons of the mechanical modes as well as by the von Neumann entropy. Potential extension to non-linear systems, by means of semiclassical methods, is briefly discussed.
Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.
Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha
2017-03-01
This paper presents a new accelerated fMRI reconstruction method, namely the OptShrink LR + S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component is estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component is estimated using convex ℓ1 minimization. The performance of the proposed method is compared with existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR + S method yields good qualitative and quantitative results.
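The low-rank plus sparse split can be illustrated with the convex variant of the decomposition: alternating singular-value soft-thresholding for the low-rank part with entrywise soft-thresholding for the sparse part. Note the paper's OptShrink step replaces the convex singular-value shrinkage with a data-driven non-convex one; the sketch below only shows the overall structure:

```python
# Sketch: alternating minimization of 0.5*||M - L - S||_F^2 + lam_L*||L||_* +
# lam_S*||S||_1 via singular-value and entrywise soft-thresholding.
import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def lps_decompose(M, lam_L=1.0, lam_S=0.05, iters=50):
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * soft(sv, lam_L)) @ Vt        # singular-value thresholding
        S = soft(M - L, lam_S)                # entrywise sparsity
    return L, S

rng = np.random.default_rng(5)
M = np.outer(rng.normal(size=64), rng.normal(size=32))   # rank-1 background
M[rng.random(M.shape) < 0.05] += 5.0                     # sparse "activations"
L, S = lps_decompose(M)
print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-6),
      " nnz(S) =", int((np.abs(S) > 1e-6).sum()))
```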
Optimization of a pressure control valve for high power automatic transmission considering stability
NASA Astrophysics Data System (ADS)
Jian, Hongchao; Wei, Wei; Li, Hongcai; Yan, Qingdong
2018-02-01
The pilot-operated electrohydraulic clutch-actuator system is widely utilized in high power automatic transmissions because of the demand for a large flow rate and its excellent pressure regulating capability. However, a self-excited vibration, induced by the inherent non-linear characteristics of valve spool motion coupled with the fluid dynamics, can arise during operation of the hydraulic system when system parameters are chosen inappropriately; this causes sustained instability in the system and leads to unexpected performance deterioration and hardware damage. To ensure a stable and fast response of the clutch actuator system, an optimal design method for the pressure control valve that accounts for stability is proposed in this paper. A non-linear dynamic model of the clutch actuator system is established based on the motion of the valve spool and the coupled fluid dynamics in the system. The stability boundary in the parameter space is obtained by numerical stability analysis. The sensitivity of the stability boundary and the output pressure response time with respect to the valve parameters are identified using a design of experiments (DOE) approach. The pressure control valve is then optimized using a particle swarm optimization (PSO) algorithm with the stability boundary as a constraint. The simulation and experimental results reveal that the proposed optimization method improves the response characteristics while ensuring the stability of the clutch actuator system during the entire gear shift process.
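The optimization set-up, PSO over valve parameters with the stability boundary enforced as a constraint, can be sketched generically. The objective, the stability margin, and the penalty weight below are illustrative stand-ins for the valve model and DOE-derived boundary:

```python
# Sketch: particle swarm optimization with an exterior penalty standing in for
# the stability-boundary constraint.
import numpy as np

rng = np.random.default_rng(6)

def response_time(x):                 # surrogate objective (illustrative)
    return (x[..., 0] - 2.0) ** 2 + 0.5 * (x[..., 1] - 1.0) ** 2 + 1.0

def stability_margin(x):              # >= 0 means stable (illustrative boundary)
    return 2.5 - x[..., 0] - x[..., 1]

def cost(x):
    return response_time(x) + 1e3 * np.maximum(0.0, -stability_margin(x)) ** 2

n, dim = 30, 2
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 5.0])
x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), cost(x)
gbest = pbest[pbest_f.argmin()]

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = cost(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("best parameters:", gbest, "cost:", pbest_f.min())
```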
Optical measurement of the weak non-linearity in the eardrum vibration response to auditory stimuli
NASA Astrophysics Data System (ADS)
Aerts, Johan
The mammalian hearing organ consists of the external ear (auricle and ear canal) followed by the middle ear (eardrum and ossicles) and the inner ear (cochlea). Its function is to capture incoming sound waves and convert them into nerve pulses, which are processed in the final stage by the brain. The main task of the external and middle ear is to concentrate the incoming sound waves on a smaller surface to reduce the loss that would normally occur in transmission from air to the inner ear fluid. In the past it has been shown that this is a linear process, i.e., without serious distortion, for sound waves up to pressures of 130 dB SPL (~90 Pa). However, at large pressure changes of up to several kPa, the middle ear movement clearly shows non-linear behaviour. Thus, it is possible that some small non-linear distortions are also present in the middle ear vibration at lower sound pressures. In this thesis a sensitive measurement set-up is presented to detect this weak non-linear behaviour. Essentially, this set-up consists of a loudspeaker which excites the middle ear, and the resulting vibration is measured with a heterodyne vibrometer. The use of specially designed acoustic excitation signals (odd random phase multisines) enables the separation of the linear and non-linear response. The application of this technique to the middle ear demonstrates that non-linear distortions are already present in the vibration of the middle ear at a sound pressure of 93 dB SPL. This non-linear component also grows strongly with increasing sound pressure. Knowledge of this non-linear component can contribute to the improvement of modern hearing aids, which operate at higher sound pressures where the non-linearities could distort the signal considerably. It is also important to know the contribution of middle ear non-linearity to otoacoustic emissions. These are non-linearities caused by the active feedback amplifier in the inner ear, and they can be detected in the external and middle ear. These signals are used for diagnostic purposes, and it is therefore important to have an estimate of the non-linear middle ear contribution to them.
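The excitation design mentioned above can be sketched briefly: an odd random-phase multisine excites only odd harmonics (with some odd bins deliberately skipped), so that energy appearing at non-excited bins of the measured response can be attributed to non-linear distortion. The grid parameters and skipping pattern below are illustrative:

```python
# Sketch: generate an odd random-phase multisine excitation signal.
import numpy as np

rng = np.random.default_rng(7)
N = 8192                                   # samples per period (illustrative)
k = np.arange(1, N // 2)                   # harmonic numbers
excited = k[(k % 2 == 1) & (k % 8 != 7)]   # odd bins, every 4th odd one skipped

X = np.zeros(N // 2 + 1, dtype=complex)
X[excited] = np.exp(1j * 2.0 * np.pi * rng.random(excited.size))  # random phases
x = np.fft.irfft(X, n=N)
x /= np.abs(x).max()                       # normalized loudspeaker drive signal

# In the measured response Y = rfft(y): excited bins carry the linear response;
# even bins and the skipped odd bins reveal non-linear distortion plus noise.
```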
An improved exploratory search technique for pure integer linear programming problems
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1990-01-01
The development of a heuristic method for the solution of pure integer linear programming problems is documented. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since the method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it offers significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighting scheme for comparing the computational effort involved in an algorithm, the algorithm is compared to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC-compatible Pascal, is also presented and discussed.
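The two central moves, rounding the relaxed optimum to a feasible integer point and exploring the plus/minus-one neighborhood of each variable, can be sketched as follows. The tiny two-variable problem is illustrative, and flooring is used as a stand-in for the paper's more efficient rounding method:

```python
# Sketch: LP relaxation, rounding, then +/-1 neighborhood search for a pure
# integer linear program (maximize 3x + 2y subject to linear constraints).
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -2.0])                 # linprog minimizes, so negate
A_ub = np.array([[2.0, 1.0], [1.0, 3.0]])
b_ub = np.array([10.0, 15.0])

relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
x = np.floor(relaxed.x)                    # feasible integer starting point

def feasible(x):
    return np.all(x >= 0) and np.all(A_ub @ x <= b_ub)

improved = True
while improved:                            # +/-1 exploratory neighborhood search
    improved = False
    for i in range(x.size):
        for step in (+1.0, -1.0):
            cand = x.copy(); cand[i] += step
            if feasible(cand) and c @ cand < c @ x:
                x, improved = cand, True

print("integer solution:", x, "objective:", -(c @ x))
```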
Profiling a Mind Map User: A Descriptive Appraisal
ERIC Educational Resources Information Center
Tucker, Joanne M.; Armstrong, Gary R.; Massad, Victor J.
2010-01-01
Whether done manually or through the use of software, the non-linear information organization framework known as mind mapping offers an alternative to linear modes of capturing thoughts, ideas and information, such as outlining. Mind mapping is brainstorming, organizing, and problem solving. This paper examines mind mapping techniques,…
Overcoming Learning Barriers through Knowledge Management
ERIC Educational Resources Information Center
Dror, Itiel E.; Makany, Tamas; Kemp, Jonathan
2011-01-01
The ability to learn highly depends on how knowledge is managed. Specifically, different techniques for note-taking utilize different cognitive processes and strategies. In this paper, we compared dyslexic and control participants when using linear and non-linear note-taking. All our participants were professionals working in the banking and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
J Squire, A Bhattacharjee
We study the magnetorotational instability (MRI) (Balbus & Hawley 1998) using non-modal stability techniques. Despite the spectral instability of many forms of the MRI, this proves to be a natural method of analysis that is well-suited to deal with the non-self-adjoint nature of the linear MRI equations. We find that the fastest growing linear MRI structures on both local and global domains can look very different from the eigenmodes, invariably resembling waves shearing with the background flow (shear waves). In addition, such structures can grow many times faster than the least stable eigenmode over long time periods, and be localized in a completely different region of space. These ideas lead – for both axisymmetric and non-axisymmetric modes – to a natural connection between the global MRI and the local shearing box approximation. By illustrating that the fastest growing global structure is well described by the ordinary differential equations (ODEs) governing a single shear wave, we find that the shearing box is a very sensible approximation for the linear MRI, contrary to many previous claims. Since the shear wave ODEs are most naturally understood using non-modal analysis techniques, we conclude by analyzing local MRI growth over finite time-scales using these methods. The strong growth over a wide range of wave-numbers suggests that non-modal linear physics could be of fundamental importance in MRI turbulence (Squire & Bhattacharjee 2014).
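The essence of the non-modal machinery is that, for a non-normal linear operator, the maximum amplification at a finite time is the largest singular value of the propagator, which can vastly exceed what eigenvalues alone suggest. A minimal sketch with an illustrative 2x2 operator, not the MRI equations:

```python
# Sketch: non-modal transient growth. For dx/dt = A x, the maximum energy
# amplification at time t is the squared largest singular value of expm(A t).
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.1, 10.0],
              [0.0, -0.2]])          # stable eigenvalues, strongly non-normal

for t in (1.0, 5.0, 10.0, 20.0):
    G = np.linalg.svd(expm(A * t), compute_uv=False)[0] ** 2
    print(f"t={t:5.1f}  max growth G(t) = {G:8.2f}")
# eigenvalue analysis alone would predict monotonic decay like exp(-0.1 t)
```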
Hyperspectral scattering profiles for prediction of the microbial spoilage of beef
NASA Astrophysics Data System (ADS)
Peng, Yankun; Zhang, Jing; Wu, Jianhu; Hang, Hui
2009-05-01
Spoilage in beef is the result of decomposition and the formation of metabolites caused by the growth and enzymatic activity of microorganisms. There is still no technology for the rapid, accurate and non-destructive detection of bacterially spoiled or contaminated beef. In this study, a hyperspectral imaging technique was exploited to measure biochemical changes within fresh beef. Fresh beef rump steaks were purchased from a commercial plant and left to spoil in a refrigerator at 8°C. Every 12 hours, hyperspectral scattering profiles over the spectral region between 400 nm and 1100 nm were collected directly from the sample surface in reflection mode in order to develop an optimal model for prediction of beef spoilage; in parallel, the total viable count (TVC) per gram of beef was obtained by classical microbiological plating methods. The spectral scattering profiles at individual wavelengths were fitted accurately by a two-parameter Lorentzian distribution function. TVC prediction models were developed using multi-linear regression, relating individual Lorentzian parameters and their combinations at different wavelengths to the log10(TVC) value. The best predictions were obtained with r2 = 0.96 and SEP = 0.23 for log10(TVC). The research demonstrated that hyperspectral imaging is a valid tool for real-time and non-destructive detection of bacterial spoilage in beef.
A Novel Weighted Kernel PCA-Based Method for Optimization and Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.
2016-12-01
It has been demonstrated that machine learning methods can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint method coupled with kernel PCA-based optimization. In addition, it has been shown through weighted linear PCA how optimization with respect to both observation weights and feature space control variables can accelerate convergence of such methods. Linear machine learning methods, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA. With the aim of coupling the kernel-based and weighted methods discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian methods. In particular, we present a novel WKPCA-based optimization method that minimizes a given objective function with respect to both feature space random variables and observation weights through which optimal snapshot significance levels and optimal features are learned. We showcase how WKPCA can be applied to nonlinear optimal control problems involving channelized media, and in particular demonstrate an application of the method to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the method to stochastic inversion.
An ultra-low-power filtering technique for biomedical applications.
Zhang, Tan-Tan; Mak, Pui-In; Vai, Mang-I; Mak, Peng-Un; Wan, Feng; Martins, R P
2011-01-01
This paper describes an ultra-low-power filtering technique for biomedical applications such as T-wave sensing in heart-activity detection systems. The topology is based on a source-follower-based Biquad operating in the sub-threshold region. With the intrinsic advantages of simplicity and high linearity of the source follower, ultra-low-cutoff filtering can be achieved simultaneously with ultra-low power consumption and good linearity. An 8th-order 2.4-Hz lowpass filter example was designed in a 0.35-μm CMOS process, achieving over 85-dB dynamic range and 74-dB stopband attenuation while consuming only 0.36 nW from a 3-V supply.
Surface and Atmospheric Parameter Retrieval From AVIRIS Data: The Importance of Non-Linear Effects
NASA Technical Reports Server (NTRS)
Green Robert O.; Moreno, Jose F.
1996-01-01
AVIRIS data represent a new and important approach for the retrieval of atmospheric and surface parameters from optical remote sensing data. Not only as a test for future space systems, but also as an operational airborne remote sensing system, the development of algorithms to retrieve information from AVIRIS data is an important step to these new approaches and capabilities. Many things have been learned since AVIRIS became operational, and the successive technical improvements in the hardware and the more sophisticated calibration techniques employed have increased the quality of the data to the point of almost meeting optimum user requirements. However, the potential capabilities of imaging spectrometry over the standard multispectral techniques have still not been fully demonstrated. Reasons for this are the technical difficulties in handling the data, the critical aspect of calibration for advanced retrieval methods, and the lack of proper models with which to invert the measured AVIRIS radiances in all the spectral channels. To achieve the potential of imaging spectrometry, these issues must be addressed. In this paper, an algorithm to retrieve information about both atmospheric and surface parameters from AVIRIS data, by using model inversion techniques, is described. Emphasis is put on the derivation of the model itself as well as proper inversion techniques, robust to noise in the data and an inadequate ability of the model to describe natural variability in the data. The problem of non-linear effects is addressed, as it has been demonstrated to be a major source of error in the numerical values retrieved by more simple, linear-based approaches. Non-linear effects are especially critical for the retrieval of surface parameters where both scattering and absorption effects are coupled, as well as in the cases of significant multiple-scattering contributions. However, sophisticated modeling approaches can handle such non-linear effects, which are especially important over vegetated surfaces. All the data used in this study were acquired during the 1991 Multisensor Airborne Campaign (MAC-Europe), as part of the European Field Experiment on a Desertification-threatened Area (EFEDA), carried out in Spain in June-July 1991.
TH-B-207B-00: Pediatric Image Quality Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This imaging educational program will focus on solutions to common pediatric image quality optimization challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children’s hospitals. One of the most commonly encountered pediatric imaging requirements for the non-specialist hospital is pediatric CT in the emergency room setting. Thus, this educational program will begin with optimization of pediatric CT in the emergency department. Though pediatric cardiovascular MRI may be less common in the non-specialist hospitals, low pediatric volumes and unique cardiovascular anatomy make optimization of these techniques difficult. Therefore, our second speaker will review best practices in pediatric cardiovascular MRI based on experiences from a children’s hospital with a large volume of cardiac patients. Learning Objectives: To learn techniques for optimizing radiation dose and image quality for CT of children in the emergency room setting. To learn solutions for consistently high quality cardiovascular MRI of children.
TH-B-207B-01: Optimizing Pediatric CT in the Emergency Department
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dodge, C.
This imaging educational program will focus on solutions to common pediatric image quality optimization challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children’s hospitals. One of the most commonly encountered pediatric imaging requirements for the non-specialist hospital is pediatric CT in the emergency room setting. Thus, this educational program will begin with optimization of pediatric CT in the emergency department. Though pediatric cardiovascular MRI may be less common in the non-specialist hospitals, low pediatric volumes and unique cardiovascular anatomy make optimization of these techniques difficult. Therefore, our second speaker will review best practices in pediatric cardiovascular MRI based on experiences from a children’s hospital with a large volume of cardiac patients. Learning Objectives: To learn techniques for optimizing radiation dose and image quality for CT of children in the emergency room setting. To learn solutions for consistently high quality cardiovascular MRI of children.
Intensity Biased PSP Measurement
NASA Technical Reports Server (NTRS)
Subramanian, Chelakara S.; Amer, Tahani R.; Oglesby, Donald M.; Burkett, Cecil G., Jr.
2000-01-01
The current pressure sensitive paint (PSP) technique assumes a linear relationship (Stern-Volmer equation) between intensity ratio (I(sub 0)/I) and pressure ratio (P/P(sub 0)) over a wide range of pressures (vacuum to ambient or higher). Although this may be valid for some PSPs, in most PSPs the relationship is nonlinear, particularly at low pressures (less than 0.2 psia, when the oxygen level is low). This non-linearity can be attributed to variations in the oxygen quenching (de-activation) rates (which are otherwise assumed constant) at these pressures. Other studies suggest that some paints also have non-linear calibrations at high pressures because of heterogeneous (non-uniform) oxygen diffusion and quenching. Moreover, pressure sensitive paints require correction of the output intensity for light intensity variation, paint coating variation, model dynamics, wind-off reference pressure variation, and temperature sensitivity. Therefore, to minimize the measurement uncertainties due to these causes, an in-situ intensity correction method was developed. A non-oxygen-quenched paint (which provides a constant intensity at all pressures, called non-pressure sensitive paint, NPSP) was used for the reference intensity (I(sub NPSP)) with respect to which all the PSP intensities (I) were measured. The results of this study show that in order to fully reap the benefits of this technique, a totally oxygen impermeable NPSP must be available.
Fraser, Kirk A.; St-Georges, Lyne; Kiss, Laszlo I.
2014-01-01
Recognition of the friction stir welding process is growing in the aeronautical and aero-space industries. To make the process more available to the structural fabrication industry (buildings and bridges), being able to model the process to determine the highest speed of advance possible that will not cause unwanted welding defects is desirable. A numerical solution to the transient two-dimensional heat diffusion equation for the friction stir welding process is presented. A non-linear heat generation term based on an arbitrary piecewise linear model of friction as a function of temperature is used. The solution is used to solve for the temperature distribution in the Al 6061-T6 work pieces. The finite difference solution of the non-linear problem is used to perform a Monte-Carlo simulation (MCS). A polynomial response surface (maximum welding temperature as a function of advancing and rotational speed) is constructed from the MCS results. The response surface is used to determine the optimum tool speed of advance and rotational speed. The exterior penalty method is used to find the highest speed of advance and the associated rotational speed of the tool for the FSW process considered. We show that good agreement with experimental optimization work is possible with this simplified model. Using our approach an optimal weld pitch of 0.52 mm/rev is obtained for 3.18 mm thick AA6061-T6 plate. Our method provides an estimate of the optimal welding parameters in less than 30 min of calculation time. PMID:28788627
Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France
2016-10-01
Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and is difficult to use in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov Chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation with respect to the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance, with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to Adaptive Gaussian Quadrature, Laplace approximation, and FO. Our method is available in the R package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs.
D'Autry, Ward; Wolfs, Kris; Hoogmartens, Jos; Adams, Erwin; Van Schepdael, Ann
2011-07-01
Gas chromatography-mass spectrometry is a well-established analytical technique. However, mass spectrometers with electron ionization sources may suffer from signal drift, thereby negatively influencing quantitative performance. To demonstrate this phenomenon for a real application, a static headspace-gas chromatography method in combination with electron ionization-quadrupole mass spectrometry was optimized for the determination of residual dichloromethane in coronary stent coatings. In validating the method, the quantitative performance of an original stainless steel ion source was compared to that of a modified ion source. The ion source modification consisted of applying a gold coating to the repeller and exit plate. Several validation aspects such as limit of detection, limit of quantification, linearity and precision were evaluated using both ion sources. It was found that, as expected, the stainless steel ion source suffered from signal drift. As a consequence, non-linearity and high RSD values for repeated analyses were obtained. An additional experiment was performed to check whether an internal standard compound would lead to better results. It was found that the signal drift patterns of the analyte and internal standard were different, consequently leading to high RSD values for the response factor. With the modified ion source, however, a more stable signal was observed, resulting in acceptable linearity and precision. Moreover, sensitivity improved compared to the stainless steel ion source. Finally, the optimized method with the modified ion source was applied to determine residual dichloromethane in the coating of coronary stents. The solvent was detected but found to be below the limit of quantification.
Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel non-rigid volume registration method based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced in the objective function by selecting a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee a solution optimal up to a (user-defined) bound is used to recover the optimal registration parameters. Therefore, the method is gradient-free, can encode various similarity metrics (through simple changes in the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the strong potential of our approach.
Optimal Operation of a Thermal Energy Storage Tank Using Linear Optimization
NASA Astrophysics Data System (ADS)
Civit Sabate, Carles
In this thesis, an optimization procedure for minimizing the operating costs of a Thermal Energy Storage (TES) tank is presented. The facility on which the optimization is based is the combined cooling, heating, and power (CCHP) plant at the University of California, Irvine. TES tanks decouple the demand for chilled water from its generation by the refrigeration and air-conditioning plants over the course of a day. They can be used to perform demand-side management, and optimization techniques can help approach their optimal use. The proposed optimization approach provides a fast and reliable methodology for finding the optimal use of the TES tank to reduce energy costs, and it provides a tool for future implementation of optimal control laws on the system. Advantages of the proposed methodology are studied through simulation with historical data.
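A minimal version of this kind of linear program can be written with scipy.optimize.linprog. The prices, demand profile, and tank parameters below are hypothetical stand-ins, not the Irvine plant's data; the sketch only shows how storage dynamics become linear inequality constraints on the chiller schedule.

```python
import numpy as np
from scipy.optimize import linprog

# Schedule chiller output u_t over 24 h to meet cooling demand d_t at
# time-of-use prices p_t, with the TES tank absorbing the difference.
T = 24
p = np.where((np.arange(T) >= 12) & (np.arange(T) < 18), 0.30, 0.10)  # $/unit, peak midday
d = 80 + 40 * np.sin(np.pi * (np.arange(T) - 6) / 12).clip(0)         # hypothetical demand

cap, u_max, s0 = 400.0, 150.0, 200.0   # tank capacity, chiller limit, initial level

# Storage level: s_t = s0 + cumsum(u - d); enforce 0 <= s_t <= cap as A_ub @ u <= b_ub.
L = np.tril(np.ones((T, T)))           # cumulative-sum operator
A_ub = np.vstack([L, -L])              # s <= cap  and  -s <= 0
b_ub = np.concatenate([cap - s0 + L @ d, s0 - L @ d])

res = linprog(c=p, A_ub=A_ub, b_ub=b_ub, bounds=[(0, u_max)] * T)
print("optimal daily cost: $%.2f" % res.fun)
print("chiller schedule:", np.round(res.x, 1))
```

The optimizer shifts generation into the cheap hours and lets the tank carry the peak, which is exactly the demand-side-management behavior the thesis exploits.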
NASA Astrophysics Data System (ADS)
Demircioğlu, Zeynep; Özdemir, Fethi Ahmet; Dayan, Osman; Şerbetçi, Zafer; Özdemir, Namık
2018-06-01
Synthesized compounds of N-(2-aminophenyl)benzenesulfonamide 1 and (Z)-N-(2-((2-nitrobenzylidene)amino)phenyl)benzenesulfonamide 2 were characterized by antimicrobial activity, FT-IR, and 1H and 13C NMR. Two new Schiff base ligands containing an aromatic sulfonamide fragment, (Z)-N-(2-((3-nitrobenzylidene)amino)phenyl)benzenesulfonamide 3 and (Z)-N-(2-((4-nitrobenzylidene)amino)phenyl)benzenesulfonamide 4, were synthesized and investigated by spectroscopic techniques including 1H and 13C NMR, FT-IR, single crystal X-ray diffraction, Hirshfeld surface and theoretical method analyses, and by antimicrobial activity. The molecular geometry obtained from the X-ray structure determination was optimized using the Density Functional Theory (DFT/B3LYP) method with the 6-311++G(d,p) basis set in the ground state. From the optimized geometries of molecules 3 and 4, the geometric parameters, vibrational wavenumbers and chemical shifts were computed. The optimized geometries reproduced the X-ray data well, showing that the choice of DFT/B3LYP with the 6-311++G(d,p) basis set was a successful one. After successful optimization, frontier molecular orbitals, chemical activity, non-linear optical (NLO) properties, the molecular electrostatic potential (MEP), the Mulliken population method, natural population analysis (NPA) and natural bond orbital (NBO) analysis, which cannot be obtained experimentally, were calculated and investigated.
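For readers unfamiliar with this workflow, the optimization step can be reproduced in outline with an open-source quantum chemistry package. The sketch below assumes the Psi4 Python API and substitutes a small test molecule for ligands 3 and 4 (whose coordinates are not given here); it is a hedged illustration of a B3LYP/6-311++G(d,p) ground-state optimization, not the authors' actual calculation, which may have used different software and settings.

```python
import psi4  # assumes the Psi4 quantum chemistry package is installed

psi4.set_memory("2 GB")
psi4.geometry("""
0 1
O  0.000  0.000  0.000
H  0.757  0.586  0.000
H -0.757  0.586  0.000
""")  # stand-in geometry; the real ligands would be supplied here
psi4.set_options({"basis": "6-311++G(d,p)"})
e_opt = psi4.optimize("b3lyp")     # relaxed ground-state geometry and energy
print("optimized B3LYP energy (Hartree):", e_opt)
```

Vibrational wavenumbers, orbital energies, and population analyses are then derived from the converged wavefunction in follow-up calculations, as the abstract describes.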
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...
2016-12-12
In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently used as the initial value by the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, which combine global optimization and implicit filtering, address difficulties associated with the non-smooth response, and they are shown to significantly reduce computational time compared with the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
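The two-stage pattern, a derivative-free global search stopped early and handed off to a local refiner, is easy to demonstrate. The sketch below is a simplified stand-in: detector positions, the inverse-square response, and the background rate are all hypothetical, SciPy's dual annealing replaces the paper's three global methods, and Nelder-Mead stands in for implicit filtering as the gradient-free local stage.

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize

rng = np.random.default_rng(2)

# Hypothetical detectors on a 250 x 180 m domain record Poisson counts from a
# source at (x, y) with intensity I; expected counts fall off as 1/r^2 plus background.
det = np.array([[30., 40.], [200., 60.], [120., 160.], [60., 150.]])
true = np.array([150., 90., 5e4])
bg = 5.0

def expected(p):
    r2 = np.sum((det - p[:2])**2, axis=1) + 1.0
    return bg + p[2] / r2

counts = rng.poisson(expected(true))

def nll(p):                                   # Poisson negative log-likelihood
    lam = expected(p)
    return np.sum(lam - counts * np.log(lam))

bounds = [(0, 250), (0, 180), (1e3, 1e6)]

# Stage 1: global search, stopped early, using only function evaluations.
coarse = dual_annealing(nll, bounds, maxiter=50, seed=3)
# Stage 2: gradient-free local refinement from the pseudo-optimum.
fine = minimize(nll, coarse.x, method="Nelder-Mead")
print("estimated (x, y, I):", np.round(fine.x, 1))
```

The handoff is the point: the global stage only needs to land in the right basin, after which the cheap local stage finishes the search, which is why early stopping saves so much computation.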
Hunsicker, Mary E; Kappel, Carrie V; Selkoe, Kimberly A; Halpern, Benjamin S; Scarborough, Courtney; Mease, Lindley; Amrhein, Alisan
2016-04-01
Scientists and resource managers often use methods and tools that assume ecosystem components respond linearly to environmental drivers and human stressors. However, a growing body of literature demonstrates that many relationships are non-linear, where small changes in a driver prompt a disproportionately large ecological response. We aim to provide a comprehensive assessment of the relationships between drivers and ecosystem components to identify where and when non-linearities are likely to occur. We focused our analyses on one of the best-studied marine systems, pelagic ecosystems, which allowed us to apply robust statistical techniques to a large pool of previously published studies. In this synthesis, we (1) conduct a wide literature review on single driver-response relationships in pelagic systems, (2) use statistical models to identify the degree of non-linearity in these relationships, and (3) assess whether general patterns exist in the strengths and shapes of non-linear relationships across drivers. Overall, we found that non-linearities are common in pelagic ecosystems, comprising at least 52% of all driver-response relationships. This is likely an underestimate, as papers with higher quality data and analytical approaches reported non-linear relationships at a higher frequency (on average 11% more). Consequently, in the absence of evidence for a linear relationship, it is safer to assume a relationship is non-linear. Strong non-linearities can lead to greater ecological and socioeconomic consequences if they are unknown (and/or unanticipated), but if known they may provide clear thresholds to inform management targets. In pelagic systems, strongly non-linear relationships are often driven by climate and trophodynamic variables but are also associated with local stressors, such as overfishing and pollution, that can be more easily controlled by managers. Even when marine resource managers cannot influence ecosystem change, they can use information about threshold responses to guide how other stressors are managed and to adapt to new ocean conditions. As methods to detect and reduce uncertainty around threshold values improve, managers will be able to better understand and account for ubiquitous non-linear relationships.
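Classifying a driver-response relationship as linear or non-linear typically comes down to fitting both forms and comparing them with an information criterion. The sketch below uses simulated (hypothetical) data with a saturating response and a quadratic fit as a simple non-linear alternative; the review itself used more flexible models such as GAMs, for which the same comparison logic applies.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical driver-response data with a saturating (non-linear) relationship.
x = np.sort(rng.uniform(0, 10, 120))
y = 4 * x / (1 + x) + rng.normal(0, 0.3, x.size)

def aic(y, yhat, k):
    rss = np.sum((y - yhat)**2)
    return y.size * np.log(rss / y.size) + 2 * k

lin = np.polyval(np.polyfit(x, y, 1), x)      # linear fit
quad = np.polyval(np.polyfit(x, y, 2), x)     # simple non-linear alternative
print(f"AIC linear: {aic(y, lin, 2):.1f}, AIC quadratic: {aic(y, quad, 3):.1f}")
# A substantially lower AIC for the curved model flags a non-linear relationship.
```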
Parameter and Structure Inference for Nonlinear Dynamical Systems
NASA Technical Reports Server (NTRS)
Morris, Robin D.; Smelyanskiy, Vadim N.; Millonas, Mark
2006-01-01
A great many systems can be modeled in the non-linear dynamical systems framework as ẋ = f(x) + ξ(t), where f(·) is the potential function for the system and ξ is the excitation noise. Modeling the potential using a set of basis functions, we derive the posterior for the basis coefficients. A more challenging problem is to determine the set of basis functions required to model a particular system. We show that, using the Bayesian Information Criterion (BIC) to rank models together with the beam search technique, we can accurately determine the structure of simple non-linear dynamical system models, and the structure of the coupling between non-linear dynamical systems where the individual systems are known. This last case has important ecological applications.
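The structure-inference step can be illustrated with a toy version of this idea. The sketch below simulates (hypothetical) noisy derivative observations from a cubic system, fits the basis coefficients by least squares for every candidate basis subset, and selects the subset with the lowest BIC; with a small dictionary the subsets can be enumerated exhaustively, whereas the paper's beam search makes the same ranking tractable for larger dictionaries.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

# Toy system: x_dot = f(x) + noise, with true f(x) = 1.5*x - 0.5*x**3.
x = rng.uniform(-2, 2, 400)
xdot = 1.5 * x - 0.5 * x**3 + rng.normal(0, 0.2, x.size)

basis = {"1": np.ones_like(x), "x": x, "x^2": x**2, "x^3": x**3}

def bic(names):
    A = np.column_stack([basis[n] for n in names])
    coef, *_ = np.linalg.lstsq(A, xdot, rcond=None)
    rss = np.sum((xdot - A @ coef)**2)
    return x.size * np.log(rss / x.size) + len(names) * np.log(x.size)

models = [c for k in range(1, 4) for c in combinations(basis, k)]
best = min(models, key=bic)
print("BIC-selected basis:", best)   # expect ('x', 'x^3')
```

The BIC's complexity penalty is what prevents the search from always preferring the largest basis, which is the crux of recovering the true model structure rather than overfitting the noise.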