Sample records for find optimal parameters

  1. Design of Experiments for the Thermal Characterization of Metallic Foam

    NASA Technical Reports Server (NTRS)

    Crittenden, Paul E.; Cole, Kevin D.

    2003-01-01

    Metallic foams are being investigated for possible use in the thermal protection systems of reusable launch vehicles. As a result, the performance of these materials needs to be characterized over a wide range of temperatures and pressures. In this paper a radiation/conduction model is presented for heat transfer in metallic foams. Candidates for the optimal transient experiment to determine the intrinsic properties of the model are found by two methods. First, an optimality criterion is used to design a single heating event for estimating all of the parameters at once. Second, a pair of heating events is used, in which one event is optimal for finding the parameters related to conduction and the other is optimal for finding the parameters associated with radiation. Simulated data containing random noise were analyzed to determine the parameters using both methods. In all cases the parameter estimates could be improved by analyzing a larger data record than suggested by the optimality criterion.

  2. Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.

    PubMed

    Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis

    2008-10-01

    We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)

  3. Case study: Optimizing fault model input parameters using bio-inspired algorithms

    NASA Astrophysics Data System (ADS)

    Plucar, Jan; Grunt, Onřej; Zelinka, Ivan

    2017-07-01

    We present a case study that demonstrates a bio-inspired approach to finding optimal parameters for a GSM fault model. The model is constructed using a Petri net approach and represents a dynamic model of the GSM network environment in the suburban areas of Ostrava (Czech Republic). We were faced with the task of finding optimal parameters for an application that requires a high volume of data transfers between the application itself and secure servers located in a datacenter. In order to find the optimal set of parameters we employ bio-inspired algorithms such as Differential Evolution (DE) and the Self Organizing Migrating Algorithm (SOMA). In this paper we present the use of these algorithms, compare their results, and judge their performance in fault-probability mitigation.
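
    A minimal sketch of the differential-evolution step of such a search, using SciPy's implementation. The objective below is only a stand-in for the fault probability returned by the Petri-net GSM model, and the two parameters and their bounds are purely illustrative assumptions:

      import numpy as np
      from scipy.optimize import differential_evolution

      def fault_probability(x):
          # Stand-in objective; in the study this would be the fault
          # probability evaluated by the Petri-net model of the GSM network.
          retry_interval, packet_size = x
          return 0.3 * np.exp(-retry_interval) + 1e-4 * (packet_size - 512.0) ** 2

      bounds = [(0.1, 10.0),   # retry interval in seconds (hypothetical)
                (64, 1024)]    # packet size in bytes (hypothetical)

      result = differential_evolution(fault_probability, bounds, seed=1, maxiter=200)
      print(result.x, result.fun)   # best parameter set and its objective value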

  4. An optimal generic model for multi-parameters and big data optimizing: a laboratory experimental study

    NASA Astrophysics Data System (ADS)

    Utama, D. N.; Ani, N.; Iqbal, M. M.

    2018-03-01

    Optimization is the process of finding the parameter or parameters that deliver an optimal value of an objective function. The search for an optimal generic model for optimization, i.e., a model that can be applied to a wide variety of optimization problems, is a computer science topic pursued by numerous researchers. Using an object-oriented method, such a generic optimization model was constructed. Two optimization methods, simulated annealing and hill climbing, were implemented within the model and compared. The result was that both methods gave the same objective-function value, while the hill-climbing-based model consumed the shorter running time.
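
    The two methods compared in this record can be sketched in a few lines. The toy objective, step size, and cooling schedule below are illustrative assumptions, not the paper's model:

      import math, random

      def objective(x):
          return (x - 2.0) ** 2 + 1.0   # toy objective; minimum of 1.0 at x = 2

      def hill_climb(x, step=0.1, iters=5000):
          for _ in range(iters):
              cand = x + random.uniform(-step, step)
              if objective(cand) < objective(x):   # accept improvements only
                  x = cand
          return x

      def simulated_annealing(x, step=0.1, iters=5000, t0=1.0):
          for k in range(1, iters + 1):
              t = t0 / k                            # simple cooling schedule
              cand = x + random.uniform(-step, step)
              delta = objective(cand) - objective(x)
              if delta < 0 or random.random() < math.exp(-delta / t):
                  x = cand                          # may accept a worse point
          return x

      random.seed(0)
      print(objective(hill_climb(0.0)), objective(simulated_annealing(0.0)))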

  5. Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem

    NASA Astrophysics Data System (ADS)

    Skakov, E. S.; Malysh, V. N.

    2018-03-01

    The aim of this work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A systems analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem. An evolutionary metaheuristic has been chosen to perform the meta-optimization task; thus, the approach proposed in this work can be called “meta-metaheuristic”. A computational experiment proving the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.
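
    The meta-optimization loop itself is compact. In this sketch, plain random search stands in for the evolutionary meta-level of the paper, and a toy annealing-style heuristic with two control parameters (step size and initial temperature) stands in for the facility-location metaheuristic:

      import math, random

      def objective(x):                      # stand-in for the facility-location cost
          return sum((xi - 1.0) ** 2 for xi in x)

      def inner_heuristic(step, t0, iters=300):
          # Simple annealing-style metaheuristic whose control parameters we tune;
          # temperature at step k is t0 / k.
          x = [random.uniform(-5, 5) for _ in range(4)]
          best = objective(x)
          for k in range(1, iters + 1):
              cand = [xi + random.uniform(-step, step) for xi in x]
              delta = objective(cand) - objective(x)
              if delta < 0 or random.random() < math.exp(-delta * k / t0):
                  x, best = cand, min(best, objective(cand))
          return best

      def meta_optimize(trials=50, runs=5):
          # Outer (meta) loop: random search over the inner method's parameters,
          # scored by the average result over several inner runs.
          best_params, best_score = None, float("inf")
          for _ in range(trials):
              step, t0 = random.uniform(0.01, 2.0), random.uniform(0.1, 100.0)
              score = sum(inner_heuristic(step, t0) for _ in range(runs)) / runs
              if score < best_score:
                  best_params, best_score = (step, t0), score
          return best_params, best_score

      random.seed(1)
      print(meta_optimize())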

  6. Tuning Parameters in Heuristics by Using Design of Experiments Methods

    NASA Technical Reports Server (NTRS)

    Arin, Arif; Rabadi, Ghaith; Unal, Resit

    2010-01-01

    With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into finding the best parameter setting for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter settings, preliminary optimal solutions for multiple instances were found efficiently.
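
    The 2-level full factorial design mentioned here is simple to enumerate. In the sketch below the three GA parameters, their levels, and the run_ga response are illustrative placeholders, not the paper's settings:

      from itertools import product

      # Two levels per GA parameter (values are illustrative, not the paper's)
      levels = {
          "population_size": [50, 200],
          "crossover_rate":  [0.6, 0.9],
          "mutation_rate":   [0.01, 0.1],
      }

      def run_ga(population_size, crossover_rate, mutation_rate):
          # Placeholder response; in the study this would be the total weighted
          # tardiness achieved by the GA on a benchmark instance.
          return (200 - population_size) * 0.1 + (1 - crossover_rate) + mutation_rate

      # 2^k design: every combination of the k factors at both levels
      designs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
      best = min(designs, key=lambda d: run_ga(**d))
      print(best)   # best setting among the 2^3 = 8 runs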

  7. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim is to develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an attractive technique. The LSQR-type method overcomes the computationally expensive nature of the MRM-based automated search for the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for deployment in real time.
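
    The pairing of a damped LSQR solve with a simplex search can be sketched with SciPy. The forward model, noise level, and selection criterion below are placeholders (the paper uses its own residual-based figure of merit), but lsqr's damp argument and Nelder-Mead play the roles described in the abstract:

      import numpy as np
      from scipy.sparse.linalg import lsqr
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      A = rng.standard_normal((100, 50))                 # stand-in forward model
      x_true = rng.standard_normal(50)
      b = A @ x_true + 0.05 * rng.standard_normal(100)   # noisy measurements

      def criterion(log_lam):
          lam = 10.0 ** log_lam[0]
          x = lsqr(A, b, damp=lam)[0]                    # damped LSQR solve
          # Placeholder selection criterion, not the paper's figure of merit.
          return np.linalg.norm(A @ x - b) ** 2 + lam * np.linalg.norm(x) ** 2

      # Simplex (Nelder-Mead) search over log10(lambda), as in the abstract
      res = minimize(criterion, x0=[-2.0], method="Nelder-Mead")
      print("optimal lambda ~", 10.0 ** res.x[0])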

  8. Sensitivity Analysis of Genetic Algorithm Parameters for Optimal Groundwater Monitoring Network Design

    NASA Astrophysics Data System (ADS)

    Abdeh-Kolahchi, A.; Satish, M.; Datta, B.

    2004-05-01

    A state-of-the-art groundwater monitoring network design is introduced. The method combines groundwater flow and transport results with Genetic Algorithm (GA) optimization to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested optimal groundwater monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop three-dimensional groundwater flow and contamination transport simulations. The flow and contamination simulation results are introduced as input to the optimization model, which uses a GA to identify the optimal monitoring network design from several candidate monitoring locations. The design model uses a GA with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the non-linearity of the objective function also increases, which makes it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique capable of finding the optimal solution for many complex problems. In this study, the ability of the GA to find the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions will be discussed. However, to ensure the efficiency of the solution process and the global optimality of the solution obtained using the GA, appropriate GA parameter values must be specified. A sensitivity analysis of genetic algorithm parameters such as the random number seed, crossover probability, mutation probability, and elitism is discussed for the solution of the monitoring network design problem.
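
    The binary encoding described here is straightforward to sketch. Below, a toy GA selects well sites; the coverage fitness is a placeholder for the plume-detection probability computed from the MODFLOW/MT3DMS results, and the well-count constraint is enforced only at initialization (a real model would repair or penalize offspring):

      import random

      N_SITES, N_WELLS, POP, GENS = 20, 5, 60, 100

      def coverage(bits):
          # Placeholder fitness; in the study this is the plume-detection
          # probability derived from MODFLOW/MT3DMS transport results.
          return sum(i * b for i, b in enumerate(bits))

      def random_individual():
          bits = [0] * N_SITES
          for i in random.sample(range(N_SITES), N_WELLS):
              bits[i] = 1
          return bits

      def crossover(a, b):
          cut = random.randrange(1, N_SITES)       # one-point crossover
          return a[:cut] + b[cut:]

      def mutate(bits, p=0.02):
          return [b ^ (random.random() < p) for b in bits]

      random.seed(0)
      pop = [random_individual() for _ in range(POP)]
      for _ in range(GENS):
          pop.sort(key=coverage, reverse=True)
          elite = pop[:2]                          # elitism: keep the best two
          children = []
          while len(children) < POP - len(elite):
              a, b = random.sample(pop[:POP // 2], 2)   # select from better half
              children.append(mutate(crossover(a, b)))
          pop = elite + children
      print(max(pop, key=coverage))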

  9. Optimization Under Uncertainty for Electronics Cooling Design

    NASA Astrophysics Data System (ADS)

    Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.

    Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as the mean and standard deviation of the output quantities, auxiliary data from an uncertainty-based optimization, such as local and global sensitivities, help the designer decide which input parameter(s) the output quantity of interest is most sensitive to. This aids the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...

  10. Use of multilevel modeling for determining optimal parameters of heat supply systems

    NASA Astrophysics Data System (ADS)

    Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.

    2017-07-01

    The problem of finding the optimal parameters of a heat supply system (HSS) consists in ensuring the required throughput capacity of a heat network by determining pipeline diameters and the characteristics and locations of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at the Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods make it possible to determine the optimal parameters of various types of piping systems owing to the flexible adaptability of the calculation procedure to the intricate nonlinear mathematical models describing the features of the equipment used and the methods of its construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of the heat network model, which makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems of reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; and the SOSNA software system for determining the optimal parameters of intricate heat supply systems, which implements the developed methodological foundation. The proposed procedure and algorithm make it possible to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems of large (real-world) dimensionality, and they are applied in solving urgent problems related to the optimal development and reconstruction of these systems. The developed methodological foundation and software can be used for designing heat supply systems in the Central and Admiralty regions of St. Petersburg, the city of Bratsk, and the Magistral'nyi settlement.

  11. Multiple-Objective Optimal Designs for Studying the Dose Response Function and Interesting Dose Levels

    PubMed Central

    Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We construct an optimal design to simultaneously estimate three common interesting features in a dose-finding trial with possibly different emphasis on each feature. These features are (1) the shape of the dose-response curve, (2) the median effective dose and (3) the minimum effective dose level. A main difficulty of this task is that an optimal design for a single objective may not perform well for other objectives. There are optimal designs for dual objectives in the literature but we were unable to find optimal designs for 3 or more objectives to date with a concrete application. A reason for this is that the approach for finding a dual-objective optimal design does not work well for a 3 or more multiple-objective design problem. We propose a method for finding multiple-objective optimal designs that estimate the three features with user-specified higher efficiencies for the more important objectives. We use the flexible 4-parameter logistic model to illustrate the methodology but our approach is applicable to find multiple-objective optimal designs for other types of objectives and models. We also investigate robustness properties of multiple-objective optimal designs to mis-specification in the nominal parameter values and to a variation in the optimality criterion. We also provide computer code for generating tailor made multiple-objective optimal designs. PMID:26565557

  12. Multiple-Objective Optimal Designs for Studying the Dose Response Function and Interesting Dose Levels.

    PubMed

    Hyun, Seung Won; Wong, Weng Kee

    2015-11-01

    We construct an optimal design to simultaneously estimate three common interesting features in a dose-finding trial with possibly different emphasis on each feature. These features are (1) the shape of the dose-response curve, (2) the median effective dose and (3) the minimum effective dose level. A main difficulty of this task is that an optimal design for a single objective may not perform well for other objectives. There are optimal designs for dual objectives in the literature but we were unable to find optimal designs for 3 or more objectives to date with a concrete application. A reason for this is that the approach for finding a dual-objective optimal design does not work well for a 3 or more multiple-objective design problem. We propose a method for finding multiple-objective optimal designs that estimate the three features with user-specified higher efficiencies for the more important objectives. We use the flexible 4-parameter logistic model to illustrate the methodology but our approach is applicable to find multiple-objective optimal designs for other types of objectives and models. We also investigate robustness properties of multiple-objective optimal designs to mis-specification in the nominal parameter values and to a variation in the optimality criterion. We also provide computer code for generating tailor made multiple-objective optimal designs.

  13. Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface

    NASA Astrophysics Data System (ADS)

    Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.

    2016-12-01

    Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that will also preserve the corresponding physics being simulated in the model. In order to effectively simulate real-world processes, the model's output data must be close to the observed measurements. To achieve this optimal simulation, input parameters are tuned until we have minimized the objective function, which is the error between the simulation model outputs and the observed measurements. We developed an auxiliary package, which serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for the heat flow model, which is commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum; otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence in order to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a `black box' scientific model more efficiently than using Dakota alone.
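
    The calibration loop the interface automates can be illustrated without Dakota itself. The sketch below uses SciPy's Nelder-Mead in place of Dakota's optimizers; heat_model is a stand-in for the permafrost heat-flow simulation, and the observations are made-up values:

      import numpy as np
      from scipy.optimize import minimize

      observed = np.array([1.9, 3.1, 4.2])     # stand-in measured temperatures

      def heat_model(conductivities):
          # Placeholder for the heat-flow simulation: maps layer conductivities
          # to predicted temperatures at the sensor locations.
          k1, k2, k3 = conductivities
          return np.array([k1 + 0.5 * k2, k2 + 0.5 * k3, k1 + k3])

      def misfit(params):
          # Objective: squared error between simulated and observed values
          return np.sum((heat_model(params) - observed) ** 2)

      # Gradient-free local search; suitable when the objective has one minimum
      res = minimize(misfit, x0=[1.0, 1.0, 1.0], method="Nelder-Mead")
      print(res.x, res.fun)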

  14. Estimating parameters with pre-specified accuracies in distributed parameter systems using optimal experiment design

    NASA Astrophysics Data System (ADS)

    Potters, M. G.; Bombois, X.; Mansoori, M.; Hof, Paul M. J. Van den

    2016-08-01

    Estimation of physical parameters in dynamical systems driven by linear partial differential equations is an important problem. In this paper, we introduce the least costly experiment design framework for these systems. It enables parameter estimation with an accuracy that is specified by the experimenter prior to the identification experiment, while at the same time minimising the cost of the experiment. We show how to adapt the classical framework for these systems and take into account scaling and stability issues. We also introduce a progressive subdivision algorithm that further generalises the experiment design framework in the sense that it returns the lowest cost by finding the optimal input signal, and optimal sensor and actuator locations. Our methodology is then applied to a relevant problem in heat transfer studies: estimation of conductivity and diffusivity parameters in front-face experiments. We find good correspondence between numerical and theoretical results.

  15. Improving hot region prediction by parameter optimization of density clustering in PPI.

    PubMed

    Hu, Jing; Zhang, Xiaolong

    2016-11-01

    This paper proposes an optimized algorithm that combines density clustering with parameter selection and feature-based classification for hot region prediction. First, all residues are classified by an SVM to remove non-hot-spot residues; then density clustering with parameter selection is used to find hot regions. For the density clustering, this paper studies how to select the input parameters. There are two parameters, radius and density, in density-based incremental clustering. We first fix density and enumerate radius to find the pair of parameters that leads to the maximum number of clusters, and then fix radius and enumerate density to find another pair of parameters that leads to the maximum number of clusters. Experimental results show that the proposed method using both pairs of parameters provides better prediction performance than the alternative; comparing the two, fixing radius and enumerating density gives slightly higher prediction accuracy than fixing density and enumerating radius. Copyright © 2016. Published by Elsevier Inc.
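
    The enumeration scheme described here is easy to mimic with an off-the-shelf density-based clusterer. In this sketch scikit-learn's DBSCAN stands in for the paper's density-based incremental clustering, with eps as the radius parameter and min_samples as the density parameter; the random point cloud is a placeholder for residue coordinates:

      import numpy as np
      from sklearn.cluster import DBSCAN

      rng = np.random.default_rng(0)
      points = rng.random((200, 3))          # placeholder residue coordinates

      def n_clusters(eps, min_samples):
          labels = DBSCAN(eps=eps, min_samples=min_samples).fit(points).labels_
          return len(set(labels)) - (1 if -1 in labels else 0)   # ignore noise

      # Step 1: fix density (min_samples), enumerate radius (eps)
      fixed_density = 4
      best_eps = max(np.arange(0.05, 0.5, 0.05),
                     key=lambda e: n_clusters(e, fixed_density))

      # Step 2: fix radius, enumerate density
      best_density = max(range(2, 12), key=lambda m: n_clusters(best_eps, m))
      print(best_eps, best_density)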

  16. A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process

    NASA Astrophysics Data System (ADS)

    Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa

    2017-06-01

    High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage and slurry from one place to another. Hence, these pipes must withstand tremendous pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters, which results in less-than-optimal settings. Hence, there arises a need to determine optimal process control parameters for the pipe extrusion process that can ensure robust pipe quality and process reliability. In the proposed optimization strategy, design of experiments (DoE) is conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and the optimum values of the process control parameters are obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis; its values proved to be in agreement with the main experimental findings, and the withstanding pressure showed a significant improvement, from 0.60 to 1.004 MPa.
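
    For reference, the larger-the-better S/N ratio used in such Taguchi studies has a standard closed form, sketched below with made-up replicate pressure readings:

      import numpy as np

      def sn_larger_is_better(y):
          # Taguchi S/N ratio for a larger-the-better response (e.g. pressure):
          # SN = -10 * log10( mean(1 / y^2) )
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(1.0 / y ** 2))

      # Replicated withstanding-pressure readings for one trial (made-up values)
      print(sn_larger_is_better([0.98, 1.01, 1.00]))   # higher S/N is better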

  17. Investigation into the influence of laser energy input on selective laser melted thin-walled parts by response surface method

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Zhang, Jian; Pang, Zhicong; Wu, Weihui

    2018-04-01

    Selective laser melting (SLM) provides a feasible way to manufacture complex thin-walled parts directly; however, the energy input during the SLM process, derived from the laser power, scanning speed, layer thickness, scanning space, etc., has a great influence on the thin wall's quality. The aim of this work is to relate the thin wall's parameters (responses), namely track width, surface roughness and hardness, to the process parameters considered in this research (laser power, scanning speed and layer thickness) and to find the optimal manufacturing conditions. Design of experiments (DoE) was used, implementing a central composite design, to achieve better manufacturing quality. Mathematical models derived from the statistical analysis were used to establish the relationships between the process parameters and the responses, and the effects of the process parameters on each response were determined. Then, a numerical optimization was performed to find the optimal process settings at which the quality features reach their desired values. Based on this study, the relationship between the process parameters and the SLM-built thin-walled structure was revealed, and the corresponding optimal process parameters can thus be used to manufacture thin-walled parts of high quality.

  18. Autopilot for frequency-modulation atomic force microscopy.

    PubMed

    Kuchuk, Kfir; Schlesinger, Itai; Sivan, Uri

    2015-10-01

    One of the most challenging aspects of operating an atomic force microscope (AFM) is finding optimal feedback parameters. This statement applies particularly to frequency-modulation AFM (FM-AFM), which utilizes three feedback loops to control the cantilever excitation amplitude, cantilever excitation frequency, and z-piezo extension. These loops are regulated by a set of feedback parameters, tuned by the user to optimize stability, sensitivity, and noise in the imaging process. Optimization of these parameters is difficult due to the coupling between the frequency and z-piezo feedback loops by the non-linear tip-sample interaction. Four proportional-integral (PI) parameters and two lock-in parameters regulating these loops require simultaneous optimization in the presence of a varying unknown tip-sample coupling. Presently, this optimization is done manually in a tedious process of trial and error. Here, we report on the development and implementation of an algorithm that computes the control parameters automatically. The algorithm reads the unperturbed cantilever resonance frequency, its quality factor, and the z-piezo driving signal power spectral density. It analyzes the poles and zeros of the total closed loop transfer function, extracts the unknown tip-sample transfer function, and finds four PI parameters and two lock-in parameters for the frequency and z-piezo control loops that optimize the bandwidth and step response of the total system. Implementation of the algorithm in a home-built AFM shows that the calculated parameters are consistently excellent and rarely require further tweaking by the user. The new algorithm saves the precious time of experienced users, facilitates utilization of FM-AFM by casual users, and removes the main hurdle on the way to fully automated FM-AFM.

  19. Autopilot for frequency-modulation atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Kuchuk, Kfir; Schlesinger, Itai; Sivan, Uri

    2015-10-01

    One of the most challenging aspects of operating an atomic force microscope (AFM) is finding optimal feedback parameters. This statement applies particularly to frequency-modulation AFM (FM-AFM), which utilizes three feedback loops to control the cantilever excitation amplitude, cantilever excitation frequency, and z-piezo extension. These loops are regulated by a set of feedback parameters, tuned by the user to optimize stability, sensitivity, and noise in the imaging process. Optimization of these parameters is difficult due to the coupling between the frequency and z-piezo feedback loops by the non-linear tip-sample interaction. Four proportional-integral (PI) parameters and two lock-in parameters regulating these loops require simultaneous optimization in the presence of a varying unknown tip-sample coupling. Presently, this optimization is done manually in a tedious process of trial and error. Here, we report on the development and implementation of an algorithm that computes the control parameters automatically. The algorithm reads the unperturbed cantilever resonance frequency, its quality factor, and the z-piezo driving signal power spectral density. It analyzes the poles and zeros of the total closed loop transfer function, extracts the unknown tip-sample transfer function, and finds four PI parameters and two lock-in parameters for the frequency and z-piezo control loops that optimize the bandwidth and step response of the total system. Implementation of the algorithm in a home-built AFM shows that the calculated parameters are consistently excellent and rarely require further tweaking by the user. The new algorithm saves the precious time of experienced users, facilitates utilization of FM-AFM by casual users, and removes the main hurdle on the way to fully automated FM-AFM.

  20. Autopilot for frequency-modulation atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuchuk, Kfir; Schlesinger, Itai; Sivan, Uri, E-mail: phsivan@tx.technion.ac.il

    2015-10-15

    One of the most challenging aspects of operating an atomic force microscope (AFM) is finding optimal feedback parameters. This statement applies particularly to frequency-modulation AFM (FM-AFM), which utilizes three feedback loops to control the cantilever excitation amplitude, cantilever excitation frequency, and z-piezo extension. These loops are regulated by a set of feedback parameters, tuned by the user to optimize stability, sensitivity, and noise in the imaging process. Optimization of these parameters is difficult due to the coupling between the frequency and z-piezo feedback loops by the non-linear tip-sample interaction. Four proportional-integral (PI) parameters and two lock-in parameters regulating these loops require simultaneous optimization in the presence of a varying unknown tip-sample coupling. Presently, this optimization is done manually in a tedious process of trial and error. Here, we report on the development and implementation of an algorithm that computes the control parameters automatically. The algorithm reads the unperturbed cantilever resonance frequency, its quality factor, and the z-piezo driving signal power spectral density. It analyzes the poles and zeros of the total closed loop transfer function, extracts the unknown tip-sample transfer function, and finds four PI parameters and two lock-in parameters for the frequency and z-piezo control loops that optimize the bandwidth and step response of the total system. Implementation of the algorithm in a home-built AFM shows that the calculated parameters are consistently excellent and rarely require further tweaking by the user. The new algorithm saves the precious time of experienced users, facilitates utilization of FM-AFM by casual users, and removes the main hurdle on the way to fully automated FM-AFM.

  1. Optimization of a Small Scale Linear Reluctance Accelerator

    NASA Astrophysics Data System (ADS)

    Barrera, Thor; Beard, Robby

    2011-11-01

    Reluctance accelerators are an extremely promising future method of transportation, but several problems still plague these devices, most prominently low efficiency. The variables involved in overcoming the efficiency problems are many, and it is difficult to correlate how they affect the accelerator. The study examined several variables that present potential challenges in optimizing the efficiency of reluctance accelerators, including coil and projectile design, power supplies, switching, and the elusive gradient-inductance problem. Extensive research in these areas has been performed, ranging from computational and theoretical to experimental. The findings show that these parameters share significant similarity with transformer design elements; the currently optimized parameters are therefore suggested as a baseline for further research and design. A demonstration of these findings will be offered at the time of presentation.

  2. Optimized positioning of autonomous surgical lamps

    NASA Astrophysics Data System (ADS)

    Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel

    2017-03-01

    We consider the problem of automatically finding optimal positions for surgical lamps throughout the whole surgical procedure, under the assumption that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of those robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, in part conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distracting the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real time during the entire surgery. Due to the conflicting objectives, there is usually not a single optimal solution for such problems, but a set of solutions that forms a Pareto front. When our algorithm selects a solution from this set, it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization for the selection of the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures the surgical site as well as the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; the data set is available for scientific purposes. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.
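
    A small helper that any such MOO pipeline needs is a non-dominated (Pareto) filter. The sketch below is a generic minimization-based filter; the two objective columns (lamp movement, obstruction) and their values are illustrative assumptions:

      import numpy as np

      def pareto_front(costs):
          # Return a boolean mask of non-dominated rows (all objectives minimized)
          costs = np.asarray(costs, dtype=float)
          mask = np.ones(len(costs), dtype=bool)
          for i in range(len(costs)):
              if mask[i]:
                  # i is dominated if some point is <= everywhere and < somewhere
                  dominated = (np.all(costs <= costs[i], axis=1)
                               & np.any(costs < costs[i], axis=1))
                  mask[i] = not dominated.any()
          return mask

      # Columns: lamp movement, obstruction of the surgical site (made-up values)
      solutions = [[0.1, 0.9], [0.4, 0.4], [0.9, 0.1], [0.6, 0.6]]
      print(pareto_front(solutions))   # -> [ True  True  True False]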

  3. Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor

    PubMed Central

    Lee, Jong Kwang; Kim, Kiho; Lee, Yongseok; Jeong, Taikyeong

    2011-01-01

    In this paper, we propose simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system; it measures range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least-squares optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are obtained simultaneously based on the least-squares criterion. The simulation and experimental results show that the parameter identification problem considered is characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization can be a promising tool for identifying the model parameters of an HMLVS, while the nonlinear least-squares optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge to a very stable solution, and it can be applied to kinematically dissimilar robot systems without loss of generality. PMID:22164104

  4. Optimal convergence in naming game with geography-based negotiation on small-world networks

    NASA Astrophysics Data System (ADS)

    Liu, Run-Ran; Wang, Wen-Xu; Lai, Ying-Cheng; Chen, Guanrong; Wang, Bing-Hong

    2011-01-01

    We propose a negotiation strategy to address the effect of geography on the dynamics of naming games over small-world networks. Communication and negotiation frequencies between two agents are determined by their geographical distance, in terms of a parameter characterizing the correlation between interaction strength and distance. A key finding is that there exists an optimal parameter value leading to the fastest convergence to global consensus on naming. Numerical computations and a theoretical analysis are provided to substantiate our findings.

  5. Measurement of damping and temperature: Precision bounds in Gaussian dissipative channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monras, Alex; Illuminati, Fabrizio

    2011-01-15

    We present a comprehensive analysis of the performance of different classes of Gaussian states in the estimation of Gaussian phase-insensitive dissipative channels. In particular, we investigate the optimal estimation of the damping constant and reservoir temperature. We show that, for two-mode squeezed vacuum probe states, the quantum-limited accuracy of both parameters can be achieved simultaneously. Moreover, we show that for both parameters two-mode squeezed vacuum states are more efficient than coherent, thermal, or single-mode squeezed states. This suggests that at high-energy regimes, two-mode squeezed vacuum states are optimal within the Gaussian setup. This optimality result indicates a stronger form of compatibility for the estimation of the two parameters. Indeed, not only the minimum variance can be achieved at fixed probe states, but also the optimal state is common to both parameters. Additionally, we explore numerically the performance of non-Gaussian states for particular parameter values to find that maximally entangled states within d-dimensional cutoff subspaces (d ≤ 6) perform better than any randomly sampled states with similar energy. However, we also find that states with very similar performance and energy exist with much less entanglement than the maximally entangled ones.

  6. SPECT System Optimization Against A Discrete Parameter Space

    PubMed Central

    Meng, L. J.; Li, N.

    2013-01-01

    In this paper, we present an analytical approach for optimizing the design of a static SPECT system, or for optimizing the sampling strategy of variable/adaptive SPECT imaging hardware, against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced the artificial concept of a virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of virtual detectors, one can convert the task of system optimization into the process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) can be solved with a block-iterative approach or other nonlinear optimization algorithms. In essence, the resultant optimum ITD provides a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and helps to identify the system configuration or sampling strategy that leads to optimum imaging performance. Although we use SPECT imaging as the platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609

  7. Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback

    NASA Astrophysics Data System (ADS)

    Bruni, Renato; Celani, Fabio

    2016-10-01

    The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law which has four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose here an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem, for several reasons: 1) the convergence time cannot be expressed in analytical form as a function of the parameters and initial conditions; 2) the design parameters may range over very wide intervals; 3) the convergence time depends also on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. These algorithms do not need the objective function in analytical form: they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize the convergence time under the worst initial conditions. Results are very promising.
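
    A minimal sketch of the min-max idea under stated assumptions: a hypothetical convergence_time stands in for the attitude-control simulation, two gains stand in for the four design parameters, and the worst case is taken over a finite sample of initial conditions. Nelder-Mead serves as a representative derivative-free optimizer:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      # Sampled initial attitude conditions (stand-ins for the unknown worst case)
      initial_conditions = rng.uniform(-1.0, 1.0, size=(10, 3))

      def convergence_time(gains, x0):
          # Placeholder; in the paper this is the simulated time for the
          # magnetic attitude-control law to reach the desired attitude.
          k1, k2 = gains
          return np.sum(x0 ** 2) / (0.1 + k1) + k2 ** 2

      def worst_case(gains):
          # Inner maximization over initial conditions -> min-max robustness
          return max(convergence_time(gains, x0) for x0 in initial_conditions)

      # Derivative-free outer minimization, as the abstract suggests
      res = minimize(worst_case, x0=[1.0, 1.0], method="Nelder-Mead")
      print(res.x, res.fun)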

  8. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region, so a G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is natural to use the kriging variance as a measure of uncertainty for the estimates. However, the computation of the kriging variance, and even more so of the empirical kriging variance, is computationally very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that we cannot really find the G-optimal design in practice with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation: a D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield fundamentally different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.

  9. Optimal correction and design parameter search by modern methods of rigorous global optimization

    NASA Astrophysics Data System (ADS)

    Makino, K.; Berz, M.

    2011-07-01

    Frequently, the design of schemes for the correction of aberrations, or the determination of possible operating ranges for beamlines and cells in synchrotrons, exhibits multitudes of possibilities for correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, an abundance of optimization runs are frequently carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners of adjusting nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been a common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and in using the underestimators to rigorously and iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle optics for the computation of aberrations allow the determination of particularly sharp underestimators for large regions. As a consequence, the subsequent progressive pruning of the allowed search space as the optimization progresses is carried out particularly effectively. The end result is the rigorous determination of the single or multiple optimal solutions of the parameter optimization, regardless of their location, their number, and the starting values of the optimization. The methods are particularly powerful if executed in interplay with genetic optimizers generating their new populations within the currently active unpruned space: their current best guess provides rigorous upper bounds of the minima, which can then beneficially be used for better pruning. Examples of the method and its performance will be presented, including the determination of all operating points of desired tunes or chromaticities, etc. in storage ring lattices.
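
    The branch-and-bound interplay of underestimators and upper bounds can be illustrated in one dimension. The sketch below uses a crude Lipschitz-based underestimator in place of the paper's sharp Differential Algebraic bounds; the objective and the hand-checked Lipschitz constant are toy assumptions:

      import heapq

      def f(x):
          return (x - 1.0) ** 2 * (x + 2.0) ** 2 + 0.5   # two global minima of 0.5

      L = 700.0   # hand-checked Lipschitz bound for f on [-5, 5] (assumption)

      def lower_bound(lo, hi):
          # Underestimator: midpoint value minus the maximum possible drop
          # over half the interval width.
          return f(0.5 * (lo + hi)) - 0.5 * L * (hi - lo)

      def branch_and_bound(lo, hi, tol=1e-3):
          best = min(f(lo), f(hi))                   # known upper bound
          heap = [(lower_bound(lo, hi), lo, hi)]
          while heap:
              bound, a, b = heapq.heappop(heap)
              if bound > best - tol:                 # prune: cannot improve on best
                  continue
              m = 0.5 * (a + b)
              best = min(best, f(m))                 # tighten the upper bound
              heapq.heappush(heap, (lower_bound(a, m), a, m))
              heapq.heappush(heap, (lower_bound(m, b), m, b))
          return best

      print(branch_and_bound(-5.0, 5.0))   # converges to ~0.5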

  10. Analysis and optimization of machining parameters of laser cutting for polypropylene composite

    NASA Astrophysics Data System (ADS)

    Deepa, A.; Padmanabhan, K.; Kuppan, P.

    2017-11-01

    The present work explains the machining of a self-reinforced polypropylene composite fabricated using the hot compaction method. The objective of the experiment is to find the optimum machining parameters for polypropylene (PP). Laser power and machining speed were the parameters considered, with the tensile test and flexure test as responses. The Taguchi method is used for experimentation, Grey Relational Analysis (GRA) is used for multiple process parameter optimization, and ANOVA (Analysis of Variance) is used to find the impact of each process parameter. Polypropylene has great applicability in various fields: it is used in the form of foam in model aircraft and other radio-controlled vehicles, as thin sheets (∼2-20 μm) used as a dielectric, in piping systems, and in hernia and pelvic organ repair or to protect against new hernias in the same location.

  11. Ring rolling process simulation for microstructure optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the ring formed. One of the principal microstructural properties that most influences the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on the microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model of HRR has been developed in SFTC DEFORM V11, taking into account the microstructural development of the material used (the nickel superalloy Waspaloy). The Finite Element (FE) model has been used to formulate a proper optimization problem, and an optimization procedure has been developed in order to find the combination of process parameters that minimizes the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between the input and output parameters, using the exact values of the output parameters at the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. Then, an optimization procedure based on Genetic Algorithms has been applied, and the minimum value of the average grain size with respect to the input parameters has been found.

  12. Optimization of process parameters in drilling of fibre hybrid composite using Taguchi and grey relational analysis

    NASA Astrophysics Data System (ADS)

    Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.

    2017-03-01

    Nowadays, quality plays a vital role in all products; hence, developments in manufacturing focus on fabricating composites with high dimensional accuracy at low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters: drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters, and Analysis of Variance is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the largest effect on material removal rate and surface roughness, followed by feed rate.
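
    The grey relational step can be sketched directly. Below, the trial responses are made-up values; material removal rate is treated as larger-the-better and surface roughness as smaller-the-better, with the conventional distinguishing coefficient of 0.5:

      import numpy as np

      # Rows: experimental trials. MRR is larger-the-better, Ra smaller-the-better.
      mrr = np.array([12.0, 15.5, 9.8, 14.1])    # material removal rate (made-up)
      ra  = np.array([3.2, 2.1, 2.8, 1.9])       # surface roughness Ra (made-up)

      def normalize(y, larger_is_better):
          return ((y - y.min()) / (y.max() - y.min()) if larger_is_better
                  else (y.max() - y) / (y.max() - y.min()))

      def grey_coefficient(z, zeta=0.5):
          delta = np.abs(1.0 - z)                 # deviation from the ideal value
          return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

      coeffs = np.column_stack([grey_coefficient(normalize(mrr, True)),
                                grey_coefficient(normalize(ra, False))])
      grades = coeffs.mean(axis=1)                # grey relational grade per trial
      print("best trial:", int(np.argmax(grades)), grades)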

  13. Genetic Algorithm Optimizes Q-LAW Control Parameters

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard

    2008-01-01

    A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools; when such initial solutions are used, the high-fidelity tools quickly converge to a locally optimal solution near the initial one. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of the Q-law control parameters is evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With this ranking, the genetic algorithm encourages solutions with higher fitness values to participate in the reproduction process, improving the solutions over the course of evolution. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.

  14. Method for Household Refrigerators Efficiency Increasing

    NASA Astrophysics Data System (ADS)

    Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.

    2017-11-01

    The relevance of optimizing the working-process parameters of air conditioning systems is demonstrated in this work. The research is performed using simulation modeling. The parameter optimization criteria are considered, an analysis of the objective functions is given, and the key factors of technical and economic optimization are discussed. In the multi-objective optimization of the system, the optimal solution is found by minimizing a dual-target vector constructed by the Pareto method of linear weighted compromises from the objective functions of total capital costs and total operating costs. The tasks are solved in the MathCAD environment. The results show that the technical and economic parameters of air conditioning systems outside the optimal-solution areas deviate considerably from the minimum values, and these deviations grow significantly, in both capital investment and operating costs, as the technical parameters move away from their optimal values. The production and operation of conditioners with parameters that deviate considerably from the optimal values will therefore increase material and power costs. The research allows one to establish the boundaries of the region of optimal values for the technical and economic parameters in the design of air conditioning systems.

  15. Multiobjective optimization in a pseudometric objective space as applied to a general model of business activities

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2016-09-01

    It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best plan for the firm's activities. The solution of a particular problem of this type is presented.

  16. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee

    2015-08-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.
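
    A sketch of the design construction for a two-parameter logistic model, under stated assumptions: the dose grid and prior draws are illustrative, a small Monte Carlo average over prior draws stands in for the Gaussian quadrature used in the paper, and cvxpy handles the log-det (D-optimality) objective:

      import numpy as np
      import cvxpy as cp

      # Candidate dose levels (a discretised design space; the grid is an assumption)
      doses = np.linspace(0.0, 5.0, 21)

      # A few draws from the prior over the logistic parameters (a, b) - illustrative
      prior_draws = [(-2.0, 1.0), (-1.5, 0.8), (-2.5, 1.2)]

      def info_matrices(a, b):
          # Per-point Fisher information for the two-parameter logistic model
          mats = []
          for x in doses:
              p = 1.0 / (1.0 + np.exp(-(a + b * x)))
              f = np.array([1.0, x])
              mats.append(p * (1 - p) * np.outer(f, f))
          return mats

      w = cp.Variable(len(doses), nonneg=True)        # design weights
      objective = 0
      for a, b in prior_draws:                        # expectation over the prior
          M = sum(w[i] * Mi for i, Mi in enumerate(info_matrices(a, b)))
          objective += cp.log_det(M) / len(prior_draws)

      prob = cp.Problem(cp.Maximize(objective), [cp.sum(w) == 1])
      prob.solve()
      print(np.round(w.value, 3))   # support points receive positive weight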

  17. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach

    PubMed Central

    Duarte, Belmiro P. M.; Wong, Weng Kee

    2014-01-01

    Summary This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted. PMID:26512159

  18. ConvAn: a convergence analyzing tool for optimization of biochemical networks.

    PubMed

    Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils

    2012-01-01

    Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When such models are optimized for parameter estimation or for the design of new properties, mainly numerical methods are used. This makes the outcome of optimization hard to predict, as most numerical optimization methods have stochastic properties and convergence of the objective function to the global optimum is hardly predictable. Choosing a suitable optimization method and the necessary duration of optimization becomes critical when a large number of combinations of adjustable parameters must be evaluated or when dynamic models are large. The task is complex due to the variety of optimization methods and software tools and the nonlinearity of models in different parameter spaces. The software tool ConvAn analyzes the statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization method parameters, and number of adjustable model parameters. Convergence curves can be normalized automatically so that different methods and models can be compared on the same scale. With the biochemistry-adapted graphical user interface of ConvAn, different optimization methods can be compared in terms of their ability to find the global optimum (or values close to it) and the computational time needed to reach it. The optimization performance can be estimated for different numbers of adjustable parameters, and the necessary optimization time can be assessed statistically as a function of the required optimization accuracy. Optimization methods that are unsuitable for a particular optimization task, because of poor repeatability or convergence properties, can be rejected. The software ConvAn is freely available on www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
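
    The normalization of convergence curves that ConvAn automates can be sketched as follows; the traces here are synthetic and the statistics are illustrative, not ConvAn's actual implementation:

```python
import numpy as np

def best_so_far(trace):
    """Convert an objective trace into a monotone best-so-far convergence curve."""
    return np.minimum.accumulate(trace)

def normalize(curve):
    """Scale a convergence curve to [0, 1] so different methods/models share one scale."""
    lo, hi = curve.min(), curve.max()
    return (curve - lo) / (hi - lo) if hi > lo else np.zeros_like(curve)

# Illustrative: 20 stochastic runs of some optimizer, 200 evaluations each
rng = np.random.default_rng(1)
runs = [best_so_far(rng.exponential(1.0, 200) + 0.01 * np.arange(200)[::-1])
        for _ in range(20)]
curves = np.array([normalize(r) for r in runs])

median = np.median(curves, axis=0)                  # typical convergence behaviour
spread = curves.max(axis=0) - curves.min(axis=0)    # repeatability across runs
print("median curve at evaluations 10, 50, 199:", median[[10, 50, 199]])
print("worst-case spread between runs:", spread.max())
```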

  19. Numerical Parameter Optimization of the Ignition and Growth Model for HMX Based Plastic Bonded Explosives

    NASA Astrophysics Data System (ADS)

    Gambino, James; Tarver, Craig; Springer, H. Keo; White, Bradley; Fried, Laurence

    2017-06-01

    We present a novel method for optimizing parameters of the Ignition and Growth (I&G) reactive flow model for high explosives. The I&G model can yield accurate predictions of experimental observations, but calibrating it is a time-consuming task, especially with multiple experiments. In this study, we couple the differential evolution global optimization algorithm to simulations of shock initiation experiments in the multi-physics code ALE3D. We develop parameter sets for the HMX-based explosives LX-07 and LX-10. The optimization finds the I&G model parameters that globally minimize the difference between calculated and experimental shock time of arrival at embedded pressure gauges. This work was performed under the auspices of the U.S. DOE by LLNL under contract DE-AC52-07NA27344. LLNS, LLC LLNL-ABS-724898.
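
    A minimal sketch of such a calibration loop, with scipy's differential evolution and a toy arrival-time function standing in for the ALE3D shock-initiation simulations; the gauge data and run-up model below are invented for illustration:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Experimental shock time-of-arrival at embedded gauge positions (illustrative)
gauge_pos = np.array([1.0, 2.0, 3.0, 4.0])      # mm
toa_exp   = np.array([0.25, 0.48, 0.70, 0.91])  # microseconds

def simulate_toa(params, positions):
    """Stand-in for an ALE3D shock-initiation run; a real calibration would
    launch the hydrocode here and read gauge arrival times from its output."""
    g, b = params
    return positions / (g + b * positions)       # hypothetical run-up model

def misfit(params):
    """Objective minimized by DE: squared error between computed and measured arrivals."""
    return np.sum((simulate_toa(params, gauge_pos) - toa_exp) ** 2)

result = differential_evolution(misfit, bounds=[(1.0, 10.0), (0.0, 2.0)],
                                seed=0, tol=1e-8)
print("best-fit parameters:", result.x, "residual:", result.fun)
```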

  20. Utility of coupling nonlinear optimization methods with numerical modeling software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.J.

    1996-08-05

    Results of using GLO (Global Local Optimizer), a general-purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and the nonlinear optimization software modules GLOBAL & LOCAL. GLO is designed for controlling, and easy coupling to, any scientific software application. GLO runs the optimization module and the scientific software application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application over and over until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model is presented (Taylor cylinder impact test).
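
    The GLO-PUT/GLO-GET iteration described above amounts to a generic optimize-an-external-application loop; a minimal sketch, assuming a hypothetical `./taylor_sim` executable and invented input/output file formats:

```python
import subprocess
from pathlib import Path
from scipy.optimize import minimize

TEMPLATE = "density = {rho}\nyield_strength = {sy}\n"   # hypothetical input file

def run_application(params):
    """One iteration of the loop: put the parameters into the input file,
    run the scientific application, and get the result back out."""
    rho, sy = params
    Path("model.in").write_text(TEMPLATE.format(rho=rho, sy=sy))           # GLO-PUT role
    subprocess.run(["./taylor_sim", "model.in", "model.out"], check=True)  # the application
    return float(Path("model.out").read_text().split()[-1])               # GLO-GET role

TARGET = 18.7  # measured final cylinder length (illustrative)

def objective(params):
    """Mismatch between the simulated and measured result."""
    return (run_application(params) - TARGET) ** 2

# The optimizer drives the application over and over until the best fit is found
best = minimize(objective, x0=[8.9, 0.3], method="Nelder-Mead")
print(best.x)
```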

  1. Ring rolling process simulation for geometry optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Ring rolling is a complex hot forming process in which several rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; in industrial practice, a milling curve is usually introduced to monitor the shape of the workpiece during deformation in order to ensure correct ring production. In the present paper a ring rolling process is studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular speed of the main roll) on the geometrical features of the final ring is evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) was implemented in SFTC DEFORM V11 and used to formulate a proper optimization problem. The optimization procedure was implemented in the commercial software DS ISight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software builds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters at the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for any combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms was applied to minimize the error between each obtained dimension and its nominal value. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.
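
    The RSM-plus-GA workflow can be sketched compactly: fit a quadratic response surface to FEM results at the control points, then run an evolutionary search on the cheap surrogate. The data below are invented and scipy's differential evolution stands in for ISight's genetic algorithm:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import differential_evolution

# Control points of the design space: (mandrel feed rate, main roll speed),
# with a ring-dimension error (%) from FEM runs -- illustrative numbers.
X = np.array([[1.0, 10.0], [1.0, 20.0], [2.0, 10.0], [2.0, 20.0], [1.5, 15.0]])
y = np.array([2.1, 1.4, 1.8, 0.9, 0.7])

# Quadratic response surface fitted to the FEM results (the RSM step)
poly = PolynomialFeatures(degree=2)
surface = LinearRegression().fit(poly.fit_transform(X), y)

def predicted_error(params):
    """Evaluate the cheap surrogate instead of a full FEM simulation."""
    return float(surface.predict(poly.transform(np.atleast_2d(params)))[0])

# Evolutionary search on the surrogate (scipy's DE stands in for ISight's GA)
result = differential_evolution(predicted_error,
                                bounds=[(1.0, 2.0), (10.0, 20.0)], seed=0)
print("optimal feed rate and roll speed:", result.x)
```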

  2. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis.

    PubMed

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

    Research and diagnosis in medicine and biology often require the assessment of large amounts of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often get trapped in a local maximum.

  3. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis

    PubMed Central

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

    Introduction: Research and diagnosis in medicine and biology often require the assessment of large amounts of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often get trapped in a local maximum. PMID:23766941

  4. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of initial pressure distribution enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
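
    The computational trick is that LSQR solves the damped (Tikhonov) least-squares problem directly, so candidate regularization parameters can be screened cheaply. A toy sketch; note that the selection below compares against a known ground truth purely for illustration, whereas the paper derives the optimal parameter without one:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# Ill-conditioned toy system standing in for the photoacoustic forward model
A = rng.normal(size=(120, 80)) @ np.diag(1.0 / np.arange(1, 81) ** 2)
x_true = rng.normal(size=80)
b = A @ x_true + 0.01 * rng.normal(size=120)

# LSQR solves min ||Ax - b||^2 + damp^2 ||x||^2 directly, so candidate
# regularization parameters can be screened without forming normal equations.
candidates = np.logspace(-6, 0, 13)
errors = []
for damp in candidates:
    x = lsqr(A, b, damp=damp)[0]
    errors.append(np.linalg.norm(x - x_true))   # oracle error, illustration only

print("best damp on this toy problem:", candidates[int(np.argmin(errors))])
```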

  5. Parameter estimation of a pulp digester model with derivative-free optimization strategies

    NASA Astrophysics Data System (ADS)

    Seiça, João C.; Romanenko, Andrey; Fernandes, Florbela P.; Santos, Lino O.; Fernandes, Natércia C. P.

    2017-07-01

    This work concerns parameter estimation in the context of mechanistic modelling of a pulp digester. The problem is cast as a box-bounded nonlinear global optimization problem, in order to minimize the mismatch between the model outputs and the experimental data observed at a real pulp and paper plant. The MCSFilter and Simulated Annealing global optimization methods were used to solve the optimization problem. While the former took longer to converge to the global minimum, the latter terminated faster at a significantly higher value of the objective function and thus failed to find the global solution.
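
    A box-bounded global estimation problem of this kind can be set up in a few lines with scipy's `dual_annealing` (a simulated annealing variant); the digester model below is a hypothetical stand-in:

```python
import numpy as np
from scipy.optimize import dual_annealing

# Observed plant data and a stand-in digester model; the real study fits a
# mechanistic pulp digester model to pulp-and-paper plant measurements.
t = np.linspace(0.0, 10.0, 25)
observed = 42.0 * np.exp(-0.35 * t) + np.random.default_rng(2).normal(0, 0.5, t.size)

def model_output(params, t):
    k0, k1 = params                      # hypothetical kinetic parameters
    return k0 * np.exp(-k1 * t)

def mismatch(params):
    """Box-bounded global objective: squared model/data mismatch."""
    return np.sum((model_output(params, t) - observed) ** 2)

# Simulated annealing-type global search within the parameter box
result = dual_annealing(mismatch, bounds=[(1.0, 100.0), (0.01, 2.0)], seed=3)
print("estimated parameters:", result.x)
```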

  6. Metabolic regulation and maximal reaction optimization in the central metabolism of a yeast cell

    NASA Astrophysics Data System (ADS)

    Kasbawati, Gunawan, A. Y.; Hertadi, R.; Sidarto, K. A.

    2015-03-01

    Regulation of fluxes in a metabolic system aims to enhance the production rates of biotechnologically important compounds. Regulation is carried out by modifying the cellular activities of the metabolic system. In this study, we present a metabolic analysis of the ethanol fermentation process of a yeast cell in a continuous culture scheme. The metabolic regulation is based on a kinetic formulation combined with metabolic control analysis to indicate the key enzymes that can be modified to enhance ethanol production. The model is used to calculate the intracellular fluxes in the central metabolism of the yeast cell. Optimal control is then applied to the kinetic model to find the optimal regulation for the fermentation system. The sensitivity results show that there are external and internal control parameters that can be adjusted to enhance ethanol production. As an external control parameter, the glucose supply should be chosen appropriately so that optimal ethanol production can be achieved. For the internal control parameters, we find three enzymes as regulation targets, namely acetaldehyde dehydrogenase, pyruvate decarboxylase, and alcohol dehydrogenase, all residing in the acetaldehyde branch. Among the three enzymes, however, only acetaldehyde dehydrogenase has a significant effect on achieving optimal ethanol production efficiently.

  7. Review: Optimization methods for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.

  8. Launch Vehicle Propulsion Design with Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.

    2005-01-01

    The approach and techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors, and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters (chamber pressure, area ratio, and oxidizer/fuel ratio) is optimized in the seven design cases, and results are plotted to show impacts on engine mass and overall vehicle mass.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi

    ExoEarth yield is a critical science metric for future exoplanet imaging missions. Here we estimate exoEarth candidate yield using single visit completeness for a variety of mission design and astrophysical parameters. We review the methods used in previous yield calculations and show that the method choice can significantly impact yield estimates as well as how the yield responds to mission parameters. We introduce a method, called Altruistic Yield Optimization, that optimizes the target list and exposure times to maximize mission yield, adapts maximally to changes in mission parameters, and increases exoEarth candidate yield by up to 100% compared to previous methods. We use Altruistic Yield Optimization to estimate exoEarth candidate yield for a large suite of mission and astrophysical parameters using single visit completeness. We find that exoEarth candidate yield is most sensitive to telescope diameter, followed by coronagraph inner working angle, followed by coronagraph contrast, and finally coronagraph contrast noise floor. We find a surprisingly weak dependence of exoEarth candidate yield on exozodi level. Additionally, we provide a quantitative approach to defining a yield goal for future exoEarth-imaging missions.

  10. Parameterized Algorithmics for Finding Exact Solutions of NP-Hard Biological Problems.

    PubMed

    Hüffner, Falk; Komusiewicz, Christian; Niedermeier, Rolf; Wernicke, Sebastian

    2017-01-01

    Fixed-parameter algorithms are designed to efficiently find optimal solutions to some computationally hard (NP-hard) problems by identifying and exploiting "small" problem-specific parameters. We survey practical techniques to develop such algorithms. Each technique is introduced and supported by case studies of applications to biological problems, with additional pointers to experimental results.
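
    A classic example of the technique is the bounded search tree for vertex cover parameterized by the cover size k, with runtime O(2^k · |E|); a compact sketch:

```python
def vertex_cover(edges, k):
    """Bounded search tree deciding whether a vertex cover of size <= k exists.
    Classic fixed-parameter algorithm: O(2^k * |E|) time, so it stays practical
    whenever the parameter k is small, regardless of graph size."""
    if not edges:
        return set()                                  # all edges covered
    if k == 0:
        return None                                   # budget exhausted, edges remain
    u, v = edges[0]
    for chosen in (u, v):                             # any cover contains u or v
        rest = [e for e in edges if chosen not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {chosen}
    return None

edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
print(vertex_cover(edges, 2))                         # {1, 4} covers every edge
```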

  11. Sensitivity-Based Guided Model Calibration

    NASA Astrophysics Data System (ADS)

    Semnani, M.; Asadzadeh, M.

    2017-12-01

    A common practice in automatic calibration of hydrologic models is to apply sensitivity analysis prior to global optimization in order to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with sensitivity information is compared to the original version of DDS for different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
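
    One way to bias DDS sampling toward sensitive decision variables is to weight the per-variable inclusion probability, as sketched below; this is an illustration of the idea in the record, not the authors' exact algorithm:

```python
import numpy as np

def dds_sensitivity(objective, bounds, sens, max_iter=500, r=0.2, seed=0):
    """DDS with sensitivity-weighted variable selection (a sketch of the idea,
    not the authors' exact method): more sensitive decision variables are
    perturbed with higher probability."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x_best = rng.uniform(lo, hi)
    f_best = objective(x_best)
    weights = np.asarray(sens) / np.sum(sens)   # assumed precomputed sensitivities
    for i in range(1, max_iter):
        p = 1.0 - np.log(i) / np.log(max_iter)  # standard DDS inclusion probability
        select = rng.random(len(lo)) < p * weights * len(lo)
        if not select.any():
            select[rng.choice(len(lo), p=weights)] = True
        x = x_best.copy()
        x[select] += r * (hi - lo)[select] * rng.normal(size=select.sum())
        x = np.clip(x, lo, hi)                  # simplified boundary handling
        f = objective(x)
        if f < f_best:
            x_best, f_best = x, f
    return x_best, f_best

# Toy calibration problem: most of the sensitivity is on the first variable
obj = lambda x: (x[0] - 1.0) ** 2 + 0.1 * (x[1] + 2.0) ** 2
print(dds_sensitivity(obj, [(-5, 5), (-5, 5)], sens=[0.9, 0.1]))
```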

  12. Application of the advanced engineering environment for optimization energy consumption in designed vehicles

    NASA Astrophysics Data System (ADS)

    Monica, Z.; Sękala, A.; Gwiazda, A.; Banaś, W.

    2016-08-01

    Nowadays a key issue is reducing the energy consumption of road vehicles, and different energy optimization strategies can be found in particular solutions. The most popular, though not sophisticated, is so-called eco-driving, which emphasizes particular driver behaviour. In a more sophisticated variant, the driver is supported by a control system that measures driving parameters and suggests proper operation. Another strategy applies various engineering solutions that aid the optimization of energy consumption; such systems take into consideration different parameters measured in real time and then take proper action according to procedures loaded into the control computer of the vehicle. A third strategy is based on optimizing the designed vehicle itself, taking into account especially the main sub-systems of the technical means. In this approach, the optimal level of energy consumption of a vehicle is obtained through the synergetic result of individually optimizing particular constructional sub-systems. Three main sub-systems can be distinguished: the structural, the drive, and the control sub-system. For the structural sub-system, optimization of the energy consumption level concerns the weight parameter and the aerodynamic parameter, the result being an optimized vehicle body. For the drive sub-system, optimization of the energy consumption level concerns fuel or power consumption, using previously elaborated physical models. Finally, optimization of the control sub-system consists in determining optimal control parameters.

  13. Aircraft Engine Thrust Estimator Design Based on GSA-LSSVM

    NASA Astrophysics Data System (ADS)

    Sheng, Hanlin; Zhang, Tianhong

    2017-08-01

    In view of the need for a highly precise and reliable thrust estimator to achieve direct thrust control of an aircraft engine, a GSA-LSSVM-based thrust estimator design is proposed, built on support vector regression (SVR) and the least squares support vector machine (LSSVM) together with a new optimization algorithm, the gravitational search algorithm (GSA), through integrated modelling and parameter optimization. The results show that, compared to the particle swarm optimization (PSO) algorithm, GSA finds the unknown optimization parameters better and gives the developed model better prediction and generalization ability. The model can better predict aircraft engine thrust and thus fulfils the need for direct thrust control of the aircraft engine.

  14. Solving fuel-optimal low-thrust orbital transfers with bang-bang control using a novel continuation technique

    NASA Astrophysics Data System (ADS)

    Zhu, Zhengfan; Gan, Qingbo; Yang, Xin; Gao, Yang

    2017-08-01

    We have developed a novel continuation technique to solve optimal bang-bang control for low-thrust orbital transfers considering the first-order necessary optimality conditions derived from Lawden's primer vector theory. Continuation on the thrust amplitude is mainly described in this paper. Firstly, a finite-thrust transfer with an "On-Off-On" thrusting sequence is modeled using a two-impulse transfer as initial solution, and then the thrust amplitude is decreased gradually to find an optimal solution with minimum thrust. Secondly, the thrust amplitude is continued from its minimum value to positive infinity to find the optimal bang-bang control, and a thrust switching principle is employed to determine the control structure by monitoring the variation of the switching function. In the continuation process, a bifurcation of bang-bang control is revealed and the concept of critical thrust is proposed to illustrate this phenomenon. The same thrust switching principle is also applicable to the continuation on other parameters, such as transfer time, orbital phase angle, etc. By this continuation technique, fuel-optimal orbital transfers with variable mission parameters can be found via an automated algorithm, and there is no need to provide an initial guess for the costate variables. Moreover, continuation is implemented in the solution space of bang-bang control that is either optimal or non-optimal, which shows that a desired solution of bang-bang control is obtained via continuation on a single parameter starting from an existing solution of bang-bang control. Finally, numerical examples are presented to demonstrate the effectiveness of the proposed continuation technique. Specifically, this continuation technique provides an approach to find multiple solutions satisfying the first-order necessary optimality conditions to the same orbital transfer problem, and a continuation strategy is presented as a preliminary approach for solving the bang-bang control of many-revolution orbital transfers.

  15. Automatic design optimization tool for passive structural control systems

    NASA Astrophysics Data System (ADS)

    Mojolic, Cristian; Hulea, Radu; Parv, Bianca Roxana

    2017-07-01

    The present paper proposes an automatic dynamic process for finding the parameters of seismic isolation systems applied to large-span structures. Three seismic isolation solutions are proposed for the model of the new Slatina Sport Hall: the first uses a friction pendulum system (FP), the second uses High Damping Rubber Bearings (HDRB), and the third uses Lead Rubber Bearings (LRB). The isolation level is placed at the top end of the roof-supporting columns. The aim is to calculate the parameters of each isolation system so that the structure's first vibration period is the one desired by the user. The model is computed with the use of SAP2000 software. In order to find the best solution to the optimization problem, an optimization process based on Genetic Algorithms (GA) has been developed in Matlab. With the use of the API (Application Programming Interface) libraries, a two-way link is created between the two programs in order to exchange results and link parameters. The main goal is to find the best seismic isolation method for each desired modal period so that the bending moment in the supporting columns is minimized.

  16. Parameter optimization for surface flux transport models

    NASA Astrophysics Data System (ADS)

    Whitbread, T.; Yeates, A. R.; Muñoz-Jaramillo, A.; Petrie, G. J. D.

    2017-11-01

    Accurate prediction of solar activity calls for precise calibration of solar cycle models. Consequently we aim to find optimal parameters for models which describe the physical processes on the solar surface, which in turn act as proxies for what occurs in the interior and provide source terms for coronal models. We use a genetic algorithm to optimize surface flux transport models using National Solar Observatory (NSO) magnetogram data for Solar Cycle 23. This is applied to both a 1D model that inserts new magnetic flux in the form of idealized bipolar magnetic regions, and also to a 2D model that assimilates specific shapes of real active regions. The genetic algorithm searches for parameter sets (meridional flow speed and profile, supergranular diffusivity, initial magnetic field, and radial decay time) that produce the best fit between observed and simulated butterfly diagrams, weighted by a latitude-dependent error structure which reflects uncertainty in observations. Due to the easily adaptable nature of the 2D model, the optimization process is repeated for Cycles 21, 22, and 24 in order to analyse cycle-to-cycle variation of the optimal solution. We find that the ranges and optimal solutions for the various regimes are in reasonable agreement with results from the literature, both theoretical and observational. The optimal meridional flow profiles for each regime are almost entirely within observational bounds determined by magnetic feature tracking, with the 2D model being able to accommodate the mean observed profile more successfully. Differences between models appear to be important in deciding values for the diffusive and decay terms. In like fashion, differences in the behaviours of different solar cycles lead to contrasts in parameters defining the meridional flow and initial field strength.

  17. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

    A nonlinear optimization algorithm helps in finding the best-fit curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a description of the quadratic expansion of the chi-square (χ²) statistic. It utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function so that χ² is minimized. It provides the user with such statistical information as goodness of fit and estimated values of the parameters producing the highest degree of correlation between the experimental data and the mathematical model. Written in FORTRAN 77.
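
    The same statistically weighted chi-square fitting is available today through scipy's `curve_fit`; a short sketch with synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fitting function and synthetic weighted data (illustrative)
def model(x, a, b, c):
    return a * np.exp(-b * x) + c

rng = np.random.default_rng(4)
x = np.linspace(0, 5, 40)
sigma = 0.05 + 0.02 * x                      # per-point measurement uncertainty
y = model(x, 2.0, 1.3, 0.5) + sigma * rng.normal(size=x.size)

# curve_fit minimizes the weighted chi-square sum(((y - model)/sigma)**2)
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0, 0.0], sigma=sigma,
                       absolute_sigma=True)

chi2 = np.sum(((y - model(x, *popt)) / sigma) ** 2)
dof = x.size - len(popt)
print("parameters:", popt)
print("reduced chi-square (goodness of fit):", chi2 / dof)
print("parameter uncertainties:", np.sqrt(np.diag(pcov)))
```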

  18. Extremal Optimization: Methods Derived from Co-Evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boettcher, S.; Percus, A.G.

    1999-07-13

    We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components. In contrast to Genetic Algorithms, which operate on an entire "gene-pool" of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
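
    A minimal tau-EO sketch on a max-cut style toy problem (standing in for graph partitioning): rank components by local fitness and mutate a near-worst one chosen from a power law over ranks, with tau as the method's single adjustable parameter:

```python
import numpy as np

rng = np.random.default_rng(5)

# Random graph for a max-cut style toy problem
n = 30
upper = np.triu((rng.random((n, n)) < 0.2).astype(int), 1)
adj = upper + upper.T

def local_fitness(state, i):
    """Fraction of vertex i's edges that are cut; low values mark 'bad' components."""
    nbrs = np.flatnonzero(adj[i])
    return np.mean(state[nbrs] != state[i]) if nbrs.size else 1.0

def cut_size(state):
    return np.sum(upper * (state[:, None] != state[None, :]))

state = rng.integers(0, 2, n)
best_cut = cut_size(state)

tau = 1.4                                        # the single adjustable parameter
rank_probs = np.arange(1, n + 1, dtype=float) ** (-tau)
rank_probs /= rank_probs.sum()                   # P(rank k) ~ k^-tau, worst rank first

for _ in range(5000):
    order = np.argsort([local_fitness(state, i) for i in range(n)])  # worst first
    i = order[rng.choice(n, p=rank_probs)]
    state[i] ^= 1                                # unconditionally mutate that component
    best_cut = max(best_cut, cut_size(state))

print("best cut size found:", best_cut)
```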

  19. Global Parameter Optimization of CLM4.5 Using Sparse-Grid Based Surrogates

    NASA Astrophysics Data System (ADS)

    Lu, D.; Ricciuto, D. M.; Gu, L.

    2016-12-01

    Calibration of the Community Land Model (CLM) is challenging because of its model complexity, large parameter sets, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time. The goal of this study is to calibrate some of the CLM parameters in order to improve model projection of carbon fluxes. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first use advanced sparse grid (SG) interpolation to construct a surrogate system of the actual CLM model, and then we calibrate the surrogate model in the optimization process. As the surrogate model is a polynomial whose evaluation is fast, it can be evaluated a sufficiently large number of times in the optimization, which facilitates the global search. We calibrate five parameters against 12 months of GPP, NEP, and TLAI data from the U.S. Missouri Ozark (US-MOz) tower. The results indicate that an accurate surrogate model can be created for CLM4.5 with a relatively small number of SG points (i.e., CLM4.5 simulations), and the application of the optimized parameters leads to a higher predictive capacity than the default parameter values in CLM4.5 for the US-MOz site.

  20. Importance of double-pole CFS-PML for broad-band seismic wave simulation and optimal parameters selection

    NASA Astrophysics Data System (ADS)

    Feng, Haike; Zhang, Wei; Zhang, Jie; Chen, Xiaofei

    2017-05-01

    The perfectly matched layer (PML) is an efficient absorbing technique for numerical wave simulation. The complex frequency-shifted PML (CFS-PML) introduces two additional parameters in the stretching function to make the absorption frequency dependent. This can help to suppress converted evanescent waves from near-grazing incident waves, but it does not efficiently absorb low-frequency waves below the cut-off frequency. To absorb both the evanescent waves and the low-frequency waves, the double-pole CFS-PML, having two poles in the coordinate stretching function, was developed in computational electromagnetism. Several studies have investigated the performance of the double-pole CFS-PML for seismic wave simulations in the case of a narrowband seismic wavelet and did not find a significant difference compared with the CFS-PML. Another difficulty in applying the double-pole CFS-PML to real problems is that a practical strategy for setting optimal parameter values has not been established. In this work, we study the performance of the double-pole CFS-PML for broad-band seismic wave simulation. We find that when the maximum-to-minimum frequency ratio is larger than 16, the CFS-PML will either fail to suppress the converted evanescent waves for grazing incident waves or produce visible low-frequency reflections, depending on the value of α. In contrast, the double-pole CFS-PML can simultaneously suppress the converted evanescent waves and avoid low-frequency reflections with proper parameter values. We analyse the different roles of the double-pole CFS-PML parameters and propose optimal selections of these parameters. Numerical tests show that the double-pole CFS-PML with the optimal parameters generates satisfactory results for broad-band seismic wave simulations.

  1. Optimal error functional for parameter identification in anisotropic finite strain elasto-plasticity

    NASA Astrophysics Data System (ADS)

    Shutov, A. V.; Kaygorodtseva, A. A.; Dranishnikov, N. S.

    2017-10-01

    A problem of parameter identification for a model of finite strain elasto-plasticity is discussed. The utilized phenomenological material model accounts for nonlinear isotropic and kinematic hardening; the model kinematics is described by a nested multiplicative split of the deformation gradient. A hierarchy of optimization problems is considered. First, following the standard procedure, the material parameters are identified through minimization of a certain least square error functional. Next, the focus is placed on finding optimal weighting coefficients which enter the error functional. Toward that end, a stochastic noise with systematic and non-systematic components is introduced to the available measurement results; a superordinate optimization problem seeks to minimize the sensitivity of the resulting material parameters to the introduced noise. The advantage of this approach is that no additional experiments are required; it also provides an insight into the robustness of the identification procedure. As an example, experimental data for the steel 42CrMo4 are considered and a set of weighting coefficients is found, which is optimal in a certain class.

  2. LDRD Final Report: Global Optimization for Engineering Science Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HART,WILLIAM E.

    1999-12-01

    For a wide variety of scientific and engineering problems the desired solution corresponds to an optimal set of objective function parameters, where the objective function measures a solution's quality. The main goal of the LDRD ''Global Optimization for Engineering Science Problems'' was the development of new robust and efficient optimization algorithms that can be used to find globally optimal solutions to complex optimization problems. This SAND report summarizes the technical accomplishments of this LDRD, discusses lessons learned and describes open research issues.

  3. Automated parameter tuning applied to sea ice in a global climate model

    NASA Astrophysics Data System (ADS)

    Roach, Lettie A.; Tett, Simon F. B.; Mineter, Michael J.; Yamazaki, Kuniko; Rae, Cameron D.

    2018-01-01

    This study investigates the hypothesis that a significant portion of spread in climate model projections of sea ice is due to poorly-constrained model parameters. New automated methods for optimization are applied to historical sea ice in a global coupled climate model (HadCM3) in order to calculate the combination of parameters required to reduce the difference between simulation and observations to within the range of model noise. The optimized parameters result in a simulated sea-ice time series which is more consistent with Arctic observations throughout the satellite record (1980-present), particularly in the September minimum, than the standard configuration of HadCM3. Divergence from observed Antarctic trends and mean regional sea ice distribution reflects broader structural uncertainty in the climate model. We also find that the optimized parameters do not cause adverse effects on the model climatology. This simple approach provides evidence for the contribution of parameter uncertainty to spread in sea ice extent trends and could be customized to investigate uncertainties in other climate variables.

  4. Optimal design and control of an electromechanical transfemoral prosthesis with energy regeneration.

    PubMed

    Rohani, Farbod; Richter, Hanz; van den Bogert, Antonie J

    2017-01-01

    In this paper, we present the design of an electromechanical above-knee active prosthesis with energy storage and regeneration. The system consists of geared knee and ankle motors, parallel springs for each motor, an ultracapacitor, and controllable four-quadrant power converters. The goal is to maximize the performance of the system by finding optimal controls and design parameters. A model of the system dynamics was developed, and used to solve a combined trajectory and design optimization problem. The objectives of the optimization were to minimize tracking error relative to human joint motions, as well as energy use. The optimization problem was solved by the method of direct collocation, based on joint torque and joint angle data from ten subjects walking at three speeds. After optimization of controls and design parameters, the simulated system could operate at zero energy cost while still closely emulating able-bodied gait. This was achieved by controlled energy transfer between knee and ankle, and by controlled storage and release of energy throughout the gait cycle. Optimal gear ratios and spring parameters were similar across subjects and walking speeds.

  5. Warpage investigation on side arms using response surface methodology (RSM) and glow-worm swarm optimizations (GSO)

    NASA Astrophysics Data System (ADS)

    Sow, C. K.; Fathullah, M.; Nasir, S. M.; Shayfull, Z.; Shazzuan, S.

    2017-09-01

    This paper discusses an injection moulding analysis for determining the optimum processing parameters for manufacturing catheter side arms while minimizing warpage. The optimization method used was RSM; in addition, this research tries to find the most significant factors affecting warpage. From the literature review, the most significant parameters for the warpage defect were selected: melt temperature, packing time, packing pressure, mould temperature, and cooling time. First, the side arm was drawn using CATIA V5 software. Then, the Moldflow and Design Expert software packages were employed to analyse the warpage. After that, the GSO artificial intelligence method was applied, using the mathematical model from Design Expert, for further optimization of the RSM result. Recommended parameter settings from the simulation work were then compared with the optimization work of RSM and GSO. The results show that the warpage on the side arm was improved by 3.27%.

  6. SPOTting model parameters using a ready-made Python package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is affected by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected, and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters, some of them closed source. Because of this, the choice of a specific parameter estimation method sometimes depends more on its availability than on its performance. A toolbox with a large set of methods can support users in deciding on the most suitable method, and it also makes it possible to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood Estimation, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for a wide range of applications. We apply all algorithms of the SPOT package in three case studies. First, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Second, we study the Griewank function, which has a challenging response surface for optimization methods; here simple algorithms like MCMC struggle to find the global optimum, while algorithms like SCE-UA and DE-MCZ show their strengths. Third, we apply an uncertainty analysis to a one-dimensional physically based hydrological model built with the Catchment Modelling Framework (CMF). The model is driven by meteorological and groundwater data from a Free Air Carbon Enrichment (FACE) experiment in Linden (Hesse, Germany), and simulation results are evaluated against measured soil moisture data. We search for optimal parameter sets of the van Genuchten-Mualem function and find different, equally optimal solutions with some of the algorithms. The case studies reveal that the implemented SPOT methods work sufficiently well and show the benefit of having one tool at hand that includes a number of parameter search methods, likelihood functions, and a priori parameter distributions within one platform-independent package.
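
    The model-independent structure the authors describe can be sketched as a setup object that bundles priors, a model run, observations, and a likelihood, so that any sampler can drive it; this is a schematic of the pattern, not SPOT's actual API, and the storage model below is invented:

```python
import numpy as np
from scipy.stats import uniform

class Setup:
    """Schematic model-independent setup: priors + model + data + likelihood."""
    def __init__(self, observed):
        self.observed = observed
        self.priors = [uniform(0.01, 0.5), uniform(1.0, 20.0)]  # a priori distributions

    def sample_params(self, rng):
        return np.array([p.rvs(random_state=rng) for p in self.priors])

    def simulate(self, params):
        k, s0 = params                      # hypothetical storage model
        t = np.arange(self.observed.size)
        return s0 * np.exp(-k * t)

    def objective(self, sim):
        """Nash-Sutcliffe efficiency as the likelihood function."""
        o = self.observed
        return 1.0 - np.sum((sim - o) ** 2) / np.sum((o - o.mean()) ** 2)

# Plain Monte Carlo sampling, the simplest algorithm such a toolbox offers
rng = np.random.default_rng(6)
setup = Setup(observed=10.0 * np.exp(-0.2 * np.arange(30)))
best = max(setup.objective(setup.simulate(setup.sample_params(rng)))
           for _ in range(2000))
print("best NSE found by MC sampling:", round(best, 4))
```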

  7. Streamflow Prediction based on Chaos Theory

    NASA Astrophysics Data System (ADS)

    Li, X.; Wang, X.; Babovic, V. M.

    2015-12-01

    Chaos theory is a popular method in hydrologic time series prediction. The local model (LM) based on this theory utilizes time-delay embedding to reconstruct the phase-space diagram. The efficacy of this method depends on the embedding parameters, i.e. embedding dimension, time lag, and number of nearest neighbors, so optimal estimation of these parameters is critical to the application of the local model. However, these embedding parameters are conventionally estimated using Average Mutual Information (AMI) and False Nearest Neighbors (FNN) separately, which may lead to local optima and thus limit the prediction accuracy. Considering these limitations, this paper applies a local model combined with simulated annealing (SA) to find the global optimum of the embedding parameters. It is also compared with another global optimization approach, the Genetic Algorithm (GA). These proposed hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization helps the local model provide more accurate predictions than local optimization. The LM combined with SA shows additional advantages in terms of computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
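
    A minimal sketch of the local model: reconstruct the phase space by time-delay embedding, then predict from the k nearest neighbours; the embedding parameters (m, tau, k) are exactly what SA or GA would tune globally:

```python
import numpy as np

def embed(series, m, tau):
    """Phase-space reconstruction: delay vectors [x(t), x(t+tau), ..., x(t+(m-1)tau)]."""
    n = len(series) - (m - 1) * tau
    return np.array([series[i + np.arange(m) * tau] for i in range(n)])

def local_model_predict(series, m, tau, k):
    """Predict the next value as the average next-step of the k nearest neighbours."""
    vectors = embed(series, m, tau)
    query, history = vectors[-1], vectors[:-1]
    dists = np.linalg.norm(history - query, axis=1)
    nbrs = np.argsort(dists)[:k]
    next_idx = nbrs + (m - 1) * tau + 1          # index of each neighbour's successor
    return series[next_idx].mean()

# Toy chaotic series (logistic map) standing in for daily streamflow
x = np.empty(500); x[0] = 0.4
for i in range(499):
    x[i + 1] = 3.9 * x[i] * (1.0 - x[i])

print("prediction:", local_model_predict(x[:-1], m=3, tau=1, k=5),
      "actual:", x[-1])
```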

  8. Development of mathematical models and optimization of the process parameters of laser surface hardened EN25 steel using elitist non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Vignesh, S.; Dinesh Babu, P.; Surya, G.; Dinesh, S.; Marimuthu, P.

    2018-02-01

    The ultimate goal of all production entities is to select the process parameters that deliver maximum strength and minimum wear and friction. Friction and wear are serious problems in most industries; they are influenced by the working set of parameters, the oxidation characteristics, and the mechanism involved in the formation of wear. The experimental input parameters, namely sliding distance, applied load, and temperature, are utilized in finding the optimized solution for the desired output responses: coefficient of friction, wear rate, and volume loss. The optimization is performed with the help of a novel method, the Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II), based on an evolutionary algorithm. The regression equations obtained using Response Surface Methodology (RSM) are used in determining the optimum process parameters. Further, the results achieved through the desirability approach in RSM are compared with the optimized solution obtained through NSGA-II. The results show that the proposed evolutionary technique is more effective and faster than the desirability approach.
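
    The core of NSGA-II is non-dominated sorting of the population into Pareto fronts; a compact sketch with invented friction/wear objective values:

```python
import numpy as np

def non_dominated_sort(F):
    """Non-dominated sorting (the core of NSGA-II): split objective vectors
    (rows of F, all minimized) into successive Pareto fronts."""
    n = len(F)
    dominated_by = [set() for _ in range(n)]
    dom_count = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].add(j)           # i dominates j
    for i in range(n):
        for j in dominated_by[i]:
            dom_count[j] += 1
    fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

# Two objectives to minimize, e.g. coefficient of friction and wear rate
F = np.array([[0.30, 5.0], [0.25, 6.0], [0.40, 4.0], [0.35, 5.5], [0.28, 4.8]])
print(non_dominated_sort(F))   # first list is the Pareto-optimal front
```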

  9. Research on intrusion detection based on Kohonen network and support vector machine

    NASA Astrophysics Data System (ADS)

    Shuai, Chunyan; Yang, Hengcheng; Gong, Zeweiyi

    2018-05-01

    Support vector machines applied directly to network intrusion detection suffer from low detection accuracy and long detection times. Optimizing the SVM parameters can greatly improve the detection accuracy, but the long detection time still prevents application to high-speed networks. A method based on Kohonen neural network feature selection is therefore proposed to reduce the parameter optimization time of the support vector machine. First, the weights of the KDD99 network intrusion data are calculated by a Kohonen network and features are selected by weight. Then, after feature selection, a genetic algorithm (GA) and the grid search method are used for parameter optimization to find appropriate parameters, and classification is performed with support vector machines. Comparative experiments show that feature selection can reduce the parameter optimization time with little influence on classification accuracy. The experiments suggest that the support vector machine can be used in network intrusion detection systems and can reduce the miss rate.
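
    A sketch of the select-then-tune pipeline using scikit-learn on synthetic data; univariate feature scoring stands in for the Kohonen-weight ranking (an assumption), and the grid search covers the usual SVM parameters C and gamma:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for KDD99; shrinking the feature space first reduces the
# cost of the (C, gamma) search, which is the speed-up the record reports.
X, y = make_classification(n_samples=2000, n_features=41, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),   # keep the 10 highest-scoring features
    ("svm", SVC(kernel="rbf")),
])
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10, 100],
                           "svm__gamma": [0.001, 0.01, 0.1]}, cv=3)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_, "test accuracy:", grid.score(X_te, y_te))
```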

  10. Parameter optimization for the visco-hyperelastic constitutive model of tendon using FEM.

    PubMed

    Tang, C Y; Ng, G Y F; Wang, Z W; Tsui, C P; Zhang, G

    2011-01-01

    Numerous constitutive models describing the mechanical properties of tendons have been proposed during the past few decades. However, few are widely used, owing to the lack of implementation in general finite element (FE) software, and very few systematic studies have been done on selecting the most appropriate parameters for these constitutive laws. In this work, a visco-hyperelastic constitutive model of the tendon, implemented through a three-parameter Mooney-Rivlin form and a sixty-four-parameter Prony series, was first analyzed using the ANSYS FE software. Afterwards, an integrated optimization scheme was developed by coupling the two optimization toolboxes (OPTs) of ANSYS and MATLAB for estimating the unknown constitutive parameters of the tendon. Finally, a group of Sprague-Dawley rat tendons was used for experimental and numerical simulation investigation. The simulated results showed good agreement with the experimental data. An important finding was that a large number of Maxwell elements is not necessary for assuring the accuracy of the model, a point often neglected in the open literature. All this confirmed that the constitutive parameter optimization scheme is reliable and highly efficient. Furthermore, the approach can be extended to study other tendons or ligaments, as well as any visco-hyperelastic solid materials.

  11. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition

    PubMed Central

    Sánchez, Daniela; Melin, Patricia

    2017-01-01

    A grey wolf optimizer for modular neural networks (MNNs) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures to perform human recognition; to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists in finding optimal parameters of its architecture; these parameters are the number of subgranules, percentage of data for the training phase, learning algorithm, goal error, number of hidden layers, and their number of neurons. Nowadays, there is a great variety of approaches and new techniques within the evolutionary computing area; these approaches and techniques have emerged to help find optimal solutions to problems or models, and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to know which of these techniques provides better results when applied to human recognition. PMID:28894461

  12. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition.

    PubMed

    Sánchez, Daniela; Melin, Patricia; Castillo, Oscar

    2017-01-01

    A grey wolf optimizer for modular neural networks (MNNs) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures to perform human recognition; to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists in finding optimal parameters of its architecture; these parameters are the number of subgranules, percentage of data for the training phase, learning algorithm, goal error, number of hidden layers, and their number of neurons. Nowadays, there is a great variety of approaches and new techniques within the evolutionary computing area; these approaches and techniques have emerged to help find optimal solutions to problems or models, and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to know which of these techniques provides better results when applied to human recognition.
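
    A minimal grey wolf optimizer for a continuous toy problem, following the standard update toward the three best wolves; in practice the MGNN architecture parameters would replace the toy objective below:

```python
import numpy as np

def grey_wolf_optimizer(f, bounds, n_wolves=20, n_iter=200, seed=0):
    """Minimal GWO: the pack moves toward the three best wolves (alpha, beta,
    delta), with the exploration coefficient a decaying from 2 to 0."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(n_iter):
        fitness = np.apply_along_axis(f, 1, wolves)
        leaders = wolves[np.argsort(fitness)[:3]]        # alpha, beta, delta
        a = 2.0 * (1.0 - t / n_iter)
        for i in range(n_wolves):
            steps = []
            for leader in leaders:
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                steps.append(leader - A * D)
            wolves[i] = np.clip(np.mean(steps, axis=0), lo, hi)
    fitness = np.apply_along_axis(f, 1, wolves)
    return wolves[np.argmin(fitness)], fitness.min()

# Toy continuous search standing in for architecture hyperparameters
sphere = lambda x: np.sum((x - np.array([0.3, -1.2])) ** 2)
print(grey_wolf_optimizer(sphere, [(-5, 5), (-5, 5)]))
```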

  13. An optimal autonomous microgrid cluster based on distributed generation droop parameter optimization and renewable energy sources using an improved grey wolf optimizer

    NASA Astrophysics Data System (ADS)

    Moazami Goodarzi, Hamed; Kazemi, Mohammad Hosein

    2018-05-01

    Microgrid (MG) clustering is regarded as an important driver in improving the robustness of MGs. However, little research has been conducted on providing appropriate MG clustering. This article addresses this shortfall. It proposes a novel multi-objective optimization approach for finding optimal clustering of autonomous MGs by focusing on variables such as distributed generation (DG) droop parameters, the location and capacity of DG units, renewable energy sources, capacitors and powerline transmission. Power losses are minimized and voltage stability is improved while virtual cut-set lines with minimum power transmission for clustering MGs are obtained. A novel chaotic grey wolf optimizer (CGWO) algorithm is applied to solve the proposed multi-objective problem. The performance of the approach is evaluated by utilizing a 69-bus MG in several scenarios.

  14. Prediction and optimization of the laccase-mediated synthesis of the antimicrobial compound iodine (I2).

    PubMed

    Schubert, M; Fey, A; Ihssen, J; Civardi, C; Schwarze, F W M R; Mourad, S

    2015-01-10

    An artificial neural network (ANN) and a genetic algorithm (GA) were applied to improve the laccase-mediated oxidation of iodide (I(-)) to elemental iodine (I2). Biosynthesis of iodine (I2) was studied with a 5-level-4-factor central composite design (CCD). The generated ANN was mathematically evaluated by several statistical indices and gave better results than a classical quadratic response surface (RS) model. The relative significance of the model input parameters was determined by sensitivity analysis, ranking the process parameters in order of importance (pH > laccase > mediator > iodide). The ANN-GA methodology was used to optimize the input space of the neural network model to find optimal settings for the laccase-mediated synthesis of iodine. The ANN-GA optimized parameters resulted in a 9.9% increase in the conversion rate. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    The Scale-Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and less prone to scene changes over time; this constitutes a first approach to automating processes in mapping applications such as geometric correction, creation of orthophotos, and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored for different images and parameter values, finding optimized values that are corroborated using independent validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
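
    OpenCV exposes five SIFT input parameters matching the tuning described; a sketch with illustrative values (not the study's optimized ones) and ratio-test matching to extract tie-point candidates, assuming hypothetical image files:

```python
import cv2

# The five key SIFT parameters exposed by OpenCV; values here are illustrative.
sift = cv2.SIFT_create(nfeatures=0,            # keep all features
                       nOctaveLayers=3,
                       contrastThreshold=0.02, # lower -> more (weaker) keypoints
                       edgeThreshold=15,
                       sigma=1.6)

img1 = cv2.imread("epoch1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread("epoch2.png", cv2.IMREAD_GRAYSCALE)

kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching keeps only distinctive tie-point candidates
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(kp1)} / {len(kp2)} keypoints, {len(good)} tie-point candidates")
```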

  16. Automated parameterization of intermolecular pair potentials using global optimization techniques

    NASA Astrophysics Data System (ADS)

    Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk

    2014-12-01

    In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.

  17. Valence electronic structure of cobalt phthalocyanine from an optimally tuned range-separated hybrid functional.

    PubMed

    Brumboiu, Iulia Emilia; Prokopiou, Georgia; Kronik, Leeor; Brena, Barbara

    2017-07-28

    We analyse the valence electronic structure of cobalt phthalocyanine (CoPc) by means of optimally tuning a range-separated hybrid functional. The tuning is performed by modifying both the amount of short-range exact exchange (α) included in the hybrid functional and the range-separation parameter (γ), with two strategies employed for finding the optimal γ for each α. The influence of these two parameters on the structural, electronic, and magnetic properties of CoPc is thoroughly investigated. The electronic structure is found to be very sensitive to the amount and range in which the exact exchange is included. The electronic structure obtained using the optimal parameters is compared to gas-phase photo-electron data and GW calculations, with the unoccupied states additionally compared with inverse photo-electron spectroscopy measurements. The calculated spectrum with tuned γ, determined for the optimal value of α = 0.1, yields a very good agreement with both experimental results and with GW calculations that well-reproduce the experimental data.

  18. Optimal Control for Fast and Robust Generation of Entangled States in Anisotropic Heisenberg Chains

    NASA Astrophysics Data System (ADS)

    Zhang, Xiong-Peng; Shao, Bin; Zou, Jian

    2017-05-01

    Motivated by some recent results of optimal control (OC) theory, we study anisotropic XXZ Heisenberg spin-1/2 chains with control fields acting on a single spin, with the aim of exploring how a maximally entangled state can be prepared. To achieve this goal, we use a numerical optimization algorithm (e.g., the Krotov algorithm, which was shown to be capable of reaching the quantum speed limit) to search for an optimal set of control parameters, and then obtain OC pulses corresponding to the target fidelity. We find that the minimum time for preparing our target state depends on the anisotropy parameter Δ of the model. Finally, we analyze the robustness of the obtained optimal fidelities and the effectiveness of the Krotov method under some realistic conditions.

  19. Optimum Parameters of a Tuned Liquid Column Damper in a Wind Turbine Subject to Stochastic Load

    NASA Astrophysics Data System (ADS)

    Alkmim, M. H.; de Morais, M. V. G.; Fabro, A. T.

    2017-12-01

    Parameter optimization for tuned liquid column dampers (TLCDs), a class of passive structural control devices, has previously been proposed in the literature for reducing vibration in wind turbines, among several other applications. However, most of the available work considers the wind excitation as either a deterministic harmonic load or a random load with a white noise spectrum. In this paper, a global direct search optimization algorithm for reducing the vibration of a TLCD is presented. The objective is to find optimized parameters for the TLCD under stochastic loads from different wind power spectral densities. A verification is made by considering the analytical solution of an undamped primary system under white noise excitation and comparing it with results from the literature. Finally, it is shown that different wind profiles can significantly affect the optimum TLCD parameters.

  20. Searching for the optimal synthesis parameters of InP/CdxZn1-xSe quantum dots when combined with a broad band phosphor to optimize color rendering and efficacy of a hybrid remote phosphor white LED

    NASA Astrophysics Data System (ADS)

    Ryckaert, Jana; Correia, António; Smet, Kevin; Tessier, Mickael D.; Dupont, Dorian; Hens, Zeger; Hanselaer, Peter; Meuret, Youri

    2017-09-01

    Combining traditional phosphors with a broad emission spectrum and non-scattering quantum dots with a narrow emission spectrum can have multiple advantages for white LEDs. It reduces the amount of scattering in the wavelength conversion element, increasing the efficiency of the complete system. Furthermore, the unique possibility of tuning the emission spectrum of quantum dots allows the resulting LED spectrum to be optimized in order to achieve optimal color rendering properties for the light source. However, finding the quantum dot properties that achieve optimal efficacy and color rendering is a non-trivial task. Instead of simply summing the emission spectra of the blue LED, phosphor and quantum dots, we propose a complete simulation tool that allows an accurate analysis of the final performance for a range of different quantum dot synthesis parameters. The recycling of the light reflected from the wavelength conversion element by the LED package is taken into account, as well as re-absorption and the associated red-shift. This simulation tool is used to vary two synthesis parameters (core size and cadmium fraction) of InP/CdxZn1-xSe quantum dots. We find general trends for the ideal quantum dot to combine with a specific YAG:Ce broad band phosphor to obtain optimal efficiency and color rendering for a white LED with a specific pumping LED and recycling cavity, at a desired CCT of 3500 K.

  1. Simultaneous detection of resolved glutamate, glutamine, and γ-aminobutyric acid at 4 T

    NASA Astrophysics Data System (ADS)

    Hu, Jiani; Yang, Shaolin; Xuan, Yang; Jiang, Quan; Yang, Yihong; Haacke, E. Mark

    2007-04-01

    A new approach is introduced to simultaneously detect resolved glutamate (Glu), glutamine (Gln), and γ-aminobutyric acid (GABA) using a standard STEAM localization pulse sequence with optimized sequence timing parameters. This approach exploits the dependence of the STEAM spectra of the strongly coupled spin systems of Glu, Gln, and GABA on the echo time TE and the mixing time TM at 4 T to find an optimized sequence parameter set, i.e., {TE, TM}, where the outer wings of the Glu C4 multiplet resonances around 2.35 ppm, the Gln C4 multiplet resonances around 2.45 ppm, and the GABA C2 multiplet resonance around 2.28 ppm are significantly suppressed and the three resonances simultaneously become virtual singlets and are thus resolved. Spectral simulation and optimization were conducted to find the optimized sequence parameters, and phantom and in vivo experiments (on normal human brains, one patient with traumatic brain injury, and one patient with a brain tumor) were carried out for verification. The results demonstrate that the Gln, Glu, and GABA signals at 2.2-2.5 ppm can be well resolved using a standard STEAM sequence with optimized sequence timing parameters around {82 ms, 48 ms} at 4 T, while the other main metabolites, such as N-acetyl aspartate (NAA), choline (tCho), and creatine (tCr), are still preserved in the same spectrum. The technique can be easily implemented and should prove to be a useful tool for basic and clinical studies associated with the metabolism of Glu, Gln, and/or GABA.

  2. Finding optimal vaccination strategies for pandemic influenza using genetic algorithms.

    PubMed

    Patel, Rajan; Longini, Ira M; Halloran, M Elizabeth

    2005-05-21

    In the event of pandemic influenza, only limited supplies of vaccine may be available. We use stochastic epidemic simulations, genetic algorithms (GA), and random mutation hill climbing (RMHC) to find optimal vaccine distributions to minimize the number of illnesses or deaths in the population, given limited quantities of vaccine. Due to the non-linearity, complexity and stochasticity of the epidemic process, it is not possible to solve for optimal vaccine distributions mathematically. However, we use GA and RMHC to find near optimal vaccine distributions. We model an influenza pandemic that has age-specific illness attack rates similar to the Asian pandemic in 1957-1958 caused by influenza A(H2N2), as well as a distribution similar to the Hong Kong pandemic in 1968-1969 caused by influenza A(H3N2). We find the optimal vaccine distributions given that the number of doses is limited over the range of 10-90% of the population. While GA and RMHC work well in finding optimal vaccine distributions, GA is significantly more efficient than RMHC. We show that the optimal vaccine distribution found by GA and RMHC is up to 84% more effective than random mass vaccination in the mid range of vaccine availability. GA is generalizable to the optimization of stochastic model parameters for other infectious diseases and population structures.
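
    A minimal sketch of the random mutation hill climbing (RMHC) idea on a toy age-structured allocation problem follows. The attack-rate model, group sizes, efficacy and herd-effect numbers are invented stand-ins for the stochastic epidemic simulations used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_groups, coverage = 5, 0.30          # doses for 30% of the population
    pop = np.array([0.19, 0.21, 0.25, 0.20, 0.15])          # group size fractions
    base_attack = np.array([0.45, 0.35, 0.20, 0.15, 0.10])  # baseline attack rates

    def illnesses(doses):
        # doses[i] = population-fraction worth of vaccine given to group i.
        alloc = np.minimum(doses / pop, 1.0)         # coverage within each group
        direct = base_attack * (1.0 - 0.8 * alloc)   # assumed 80% vaccine efficacy
        indirect = 1.0 - 0.3 * np.dot(pop, alloc)    # crude herd-effect factor
        return np.dot(pop, direct) * indirect

    def mutate(doses, step=0.02):
        # Shift a small amount of vaccine from one random group to another.
        d = doses.copy()
        i, j = rng.choice(n_groups, size=2, replace=False)
        amt = min(step, d[i])
        d[i] -= amt
        d[j] += amt
        return d

    doses = np.full(n_groups, coverage / n_groups)   # start from a uniform split
    best = illnesses(doses)
    for _ in range(20000):
        cand = mutate(doses)
        val = illnesses(cand)
        if val <= best:                              # keep mutations that help
            doses, best = cand, val
    print(doses.round(3), best)
    ```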

  3. Loss-resistant unambiguous phase measurement

    NASA Astrophysics Data System (ADS)

    Dinani, Hossein T.; Berry, Dominic W.

    2014-08-01

    Entangled multiphoton states have the potential to provide improved measurement accuracy, but are sensitive to photon loss. It is possible to calculate ideal loss-resistant states that maximize the Fisher information, but it is unclear how these could be experimentally generated. Here we propose a set of states that can be obtained by processing the output from parametric down-conversion. Although these states are not optimal, they provide performance very close to that of optimal states for a range of parameters. Moreover, we show how to use sequences of such states in order to obtain an unambiguous phase measurement that beats the standard quantum limit. We consider the optimization of parameters in order to minimize the final phase variance, and find that the optimum parameters are different from those that maximize the Fisher information.

  4. Combining gait optimization with passive system to increase the energy efficiency of a humanoid robot walking movement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereira, Ana I.; ALGORITMI, University of Minho; Lima, José

    There are several approaches to humanoid robot gait planning. This problem presents a large number of unknown parameters that must be found to make the humanoid robot walk. Optimization in simulation models can be used to find the gait based on several criteria, such as energy minimization, acceleration and step length, among others. The energy consumption can also be reduced with elastic elements coupled to each joint. The present paper addresses an optimization method, Stretched Simulated Annealing, that runs in an accurate and stable simulation model to find the optimal gait combined with elastic elements. Final results demonstrate that optimization is a valid gait planning technique.

  5. A Framework for Multifaceted Evaluation of Student Models

    ERIC Educational Resources Information Center

    Huang, Yun; González-Brenes, José P.; Kumar, Rohit; Brusilovsky, Peter

    2015-01-01

    Latent variable models, such as the popular Knowledge Tracing method, are often used to enable adaptive tutoring systems to personalize education. However, finding optimal model parameters is usually a difficult non-convex optimization problem when considering latent variable models. Prior work has reported that latent variable models obtained…

  6. The Quantum Approximation Optimization Algorithm for MaxCut: A Fermionic View

    NASA Technical Reports Server (NTRS)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2017-01-01

    Farhi et al. recently proposed a class of quantum algorithms, the Quantum Approximate Optimization Algorithm (QAOA), for approximately solving combinatorial optimization problems. A level-p QAOA circuit consists of steps in which a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm. As p increases, however, the parameter search space grows quickly. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here, we analytically and numerically study parameter setting for QAOA applied to MAXCUT. For level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MAXCUT, the Ring of Disagrees, or the 1D antiferromagnetic ring, we provide an analysis for arbitrarily high level. Using a Fermionic representation, the evolution of the system under QAOA translates into quantum optimal control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of QAOA for any p. It also greatly simplifies numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional sub-manifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
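
    The level-1 landscape for the Ring of Disagrees is easy to reproduce numerically. The sketch below builds the MaxCut cost operator for a small ring, applies the two QAOA unitaries by statevector simulation, and grid-searches the two parameters; the ring size and grid resolution are arbitrary choices, not the paper's.

    ```python
    import numpy as np

    n = 8
    edges = [(i, (i + 1) % n) for i in range(n)]        # 1D antiferromagnetic ring

    # Diagonal of the MaxCut cost operator C in the computational basis.
    bits = (np.arange(2 ** n)[:, None] >> np.arange(n)) & 1
    cost = np.zeros(2 ** n)
    for i, j in edges:
        cost += (bits[:, i] != bits[:, j]).astype(float)  # one unit per cut edge

    def qaoa_expectation(gamma, beta):
        psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)   # |+...+>
        psi = np.exp(-1j * gamma * cost) * psi                # phase separator
        c, s = np.cos(beta), -1j * np.sin(beta)
        rx = np.array([[c, s], [s, c]])                       # RX(2*beta) mixer
        psi = psi.reshape((2,) * n)
        for q in range(n):                                    # RX on every qubit
            psi = np.moveaxis(np.tensordot(rx, np.moveaxis(psi, q, 0), axes=1), 0, q)
        psi = psi.reshape(-1)
        return float(np.real(np.vdot(psi, cost * psi)))

    # Coarse grid over the two level-1 parameters (gamma, beta).
    gammas = np.linspace(0, np.pi, 61)
    betas = np.linspace(0, np.pi / 2, 61)
    vals = np.array([[qaoa_expectation(g, b) for b in betas] for g in gammas])
    gi, bi = np.unravel_index(np.argmax(vals), vals.shape)
    # For the ring, level-1 QAOA reaches <C> = 3n/4 (approximation ratio 3/4).
    print(gammas[gi], betas[bi], vals[gi, bi] / n)
    ```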

  7. Optimization of the monitoring of landfill gas and leachate in closed methanogenic landfills.

    PubMed

    Jovanov, Dejan; Vujić, Bogdana; Vujić, Goran

    2018-06-15

    Monitoring of the gas and leachate parameters in a closed landfill is a long-term activity defined by national legislation worldwide. The Serbian Waste Disposal Law requires monitoring of a landfill for at least 30 years after its closing, but its definition of the monitoring extent (number and type of parameters) is incomplete. In order to resolve these uncertainties, this research focuses on the process of monitoring optimization, using the closed landfill in Zrenjanin, Serbia, as the experimental model. The aim of the optimization was to find representative parameters that describe the physical, chemical and biological processes in the closed methanogenic landfill and to make the monitoring less expensive. The research included the development of five monitoring models with different numbers of gas and leachate parameters, and each model was processed in the open-source software GeoGebra, which is often used for solving optimization problems. The results of the optimization process identified the most favorable monitoring model, which fulfills all the defined criteria not only from the point of view of mathematical analysis but also from the point of view of environmental protection. The final outcome of this research is a precise definition of the minimal set of parameters that should be included in landfill monitoring. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Statistical optimization of bioprocess parameters for enhanced gallic acid production from coffee pulp tannins by Penicillium verrucosum.

    PubMed

    Bhoite, Roopali N; Navya, P N; Murthy, Pushpa S

    2013-01-01

    Gallic acid (3,4,5-trihydroxybenzoic acid) was produced by microbial biotransformation of coffee pulp tannins by Penicillium verrucosum. Gallic acid production was optimized using response surface methodology (RSM) based on a central composite rotatable design. Process parameters such as pH, moisture, and fermentation period were considered for optimization. Among the various fungi isolated from coffee by-products, Penicillium verrucosum produced 35.23 µg/g of gallic acid on coffee pulp as the sole carbon source in solid-state fermentation. The optimum values of the parameters obtained from the RSM were pH 3.32, moisture 58.40%, and a fermentation period of 96 hr. A 4.6-fold increase in gallic acid production was achieved upon optimization of the process parameters. The optimized conditions could be translated to 1-kg tray fermentation. High-performance liquid chromatography (HPLC) analysis and spectral studies such as mass spectroscopy (MS) and (1)H-nuclear magnetic resonance (NMR) confirmed that the bioactive compound isolated was gallic acid. Thus, coffee pulp, which is available in enormous quantities, could be used for the production of value-added products that can find avenues in the food, pharmaceutical, and chemical industries.

  9. Least-squares/parabolized Navier-Stokes procedure for optimizing hypersonic wind tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.; Kumar, Ajay; Singh, D. J.; Grossman, B.

    1991-01-01

    A new procedure is demonstrated for optimizing hypersonic wind-tunnel-nozzle contours. The procedure couples a CFD computer code to an optimization algorithm, and is applied to both conical and contoured hypersonic nozzles for the purpose of determining an optimal set of parameters to describe the surface geometry. A design-objective function is specified based on the deviation from the desired test-section flow-field conditions. The objective function is minimized by optimizing the parameters used to describe the nozzle contour based on the solution to a nonlinear least-squares problem. The effect of the changes in the nozzle wall parameters are evaluated by computing the nozzle flow using the parabolized Navier-Stokes equations. The advantage of the new procedure is that it directly takes into account the displacement effect of the boundary layer on the wall contour. The new procedure provides a method for optimizing hypersonic nozzles of high Mach numbers which have been designed by classical procedures, but are shown to produce poor flow quality due to the large boundary layers present in the test section. The procedure is demonstrated by finding the optimum design parameters for a Mach 10 conical nozzle and a Mach 6 and a Mach 15 contoured nozzle.

  10. Optimizing cosmological surveys in a crowded market

    NASA Astrophysics Data System (ADS)

    Bassett, Bruce A.

    2005-04-01

    Optimizing the major next-generation cosmological surveys (such as SNAP, KAOS, etc.) is a key problem given our ignorance of the physics underlying cosmic acceleration and the plethora of surveys planned. We propose a Bayesian design framework which (1) maximizes the discrimination power of a survey without assuming any underlying dark-energy model, (2) finds the best niche survey geometry given current data and future competing experiments, (3) maximizes the cross section for serendipitous discoveries and (4) can be adapted to answer specific questions (such as “is dark energy dynamical?”). Integrated parameter-space optimization (IPSO) is a design framework that integrates projected parameter errors over an entire dark energy parameter space and then extremizes a figure of merit (such as Shannon entropy gain which we show is stable to off-diagonal covariance matrix perturbations) as a function of survey parameters using analytical, grid or MCMC techniques. We discuss examples where the optimization can be performed analytically. IPSO is thus a general, model-independent and scalable framework that allows us to appropriately use prior information to design the best possible surveys.

  11. The Inverse Optimal Control Problem for a Three-Loop Missile Autopilot

    NASA Astrophysics Data System (ADS)

    Hwang, Donghyeok; Tahk, Min-Jea

    2018-04-01

    The autopilot must have a fast response in order to intercept a maneuvering target, and reasonable robustness for system stability under the effects of un-modeled dynamics and noise. In the conventional approach, the three-loop autopilot design is specified by a time constant, damping factor and open-loop crossover frequency to achieve the desired performance requirements. Note that general optimal control theory can also be used to obtain the same gains as the conventional approach. The key idea of using optimal control techniques for feedback gain design revolves around appropriate selection and interpretation of the performance index for which the control is optimal. This paper derives an explicit expression that relates the weight parameters appearing in the quadratic performance index to design parameters such as the open-loop crossover frequency, phase margin, damping factor or time constant. Since not every choice of design parameters guarantees the existence of an optimal control law, explicit inequalities, named the optimality criteria for the three-loop autopilot (OC3L), are derived to identify all sets of design parameters for which the control law is optimal. Finally, based on OC3L, an efficient gain selection procedure is developed, with the time constant set as the design objective and the open-loop crossover frequency and phase margin as design constraints. The effectiveness of the proposed technique is illustrated through numerical simulations.

  12. Optimal radiotherapy dose schedules under parametric uncertainty

    NASA Astrophysics Data System (ADS)

    Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin

    2016-01-01

    We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or sparing factor of the organs-at-risk (OAR) are not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
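
    For orientation, the following toy computation uses the linear-quadratic model's biologically effective dose (BED) to compare a few fractionation schedules under a fixed OAR sparing factor. All parameter values are illustrative, not those of the paper.

    ```python
    def bed(n, d, ab):
        """Biologically effective dose for n fractions of d Gy at ratio α/β = ab."""
        return n * d * (1.0 + d / ab)

    ab_tumor, ab_oar, sparing = 10.0, 3.0, 0.7   # OAR receives 0.7*d per fraction

    for n, d in [(35, 2.0), (15, 3.0), (5, 7.0)]:
        print(f"n={n:2d}, d={d:3.1f} Gy: tumor BED = {bed(n, d, ab_tumor):6.1f} Gy, "
              f"OAR BED = {bed(n, sparing * d, ab_oar):6.1f} Gy")
    # Parameter uncertainty enters through α/β and the sparing factor: a schedule
    # that is near-optimal for nominal values can violate the OAR cap when they vary.
    ```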

  13. Genetic algorithms for multicriteria shape optimization of induction furnace

    NASA Astrophysics Data System (ADS)

    Kůs, Pavel; Mach, František; Karban, Pavel; Doležel, Ivo

    2012-09-01

    In this contribution we deal with multi-criteria shape optimization of an induction furnace. We want to find shape parameters of the furnace such that two different criteria are optimized. Since they cannot be optimized simultaneously, instead of one optimum we find a set of partially optimal designs, the so-called Pareto front. We compare two different approaches to the optimization, one using the nonlinear conjugate gradient method and the second using a variation of a genetic algorithm. As can be seen from the numerical results, the genetic algorithm seems to be the right choice for this problem. The direct problem (a coupled problem consisting of the magnetic and heat fields) is solved using our own code Agros2D. It uses higher-order finite elements, leading to a fast and accurate solution of a relatively complicated coupled problem. It also provides advanced scripting support, allowing us to prepare a parametric model of the furnace and simply incorporate various types of optimization algorithms.

  14. Encapsulation of brewing yeast in alginate/chitosan matrix: lab-scale optimization of lager beer fermentation.

    PubMed

    Naydenova, Vessela; Badova, Mariyana; Vassilev, Stoyan; Iliev, Vasil; Kaneva, Maria; Kostov, Georgi

    2014-03-04

    Two mathematical models were developed for studying the effect of main fermentation temperature (TMF), immobilized cell mass (MIC) and original wort extract (OE) on beer fermentation with alginate-chitosan microcapsules with a liquid core. During the experiments, the investigated parameters were varied in order to find the optimal conditions for beer fermentation with immobilized cells. The basic beer characteristics, i.e. extract, ethanol, biomass concentration, pH and colour, as well as the concentration of aldehydes and vicinal diketones, were measured. The results suggested that the process parameters represented a powerful tool in controlling the fermentation time. Subsequently, the optimized process parameters were used to produce beer in laboratory batch fermentation. The system productivity was also investigated and the data were used for the development of another mathematical model.

  15. Encapsulation of brewing yeast in alginate/chitosan matrix: lab-scale optimization of lager beer fermentation

    PubMed Central

    Naydenova, Vessela; Badova, Mariyana; Vassilev, Stoyan; Iliev, Vasil; Kaneva, Maria; Kostov, Georgi

    2014-01-01

    Two mathematical models were developed for studying the effect of main fermentation temperature (TMF), immobilized cell mass (MIC) and original wort extract (OE) on beer fermentation with alginate-chitosan microcapsules with a liquid core. During the experiments, the investigated parameters were varied in order to find the optimal conditions for beer fermentation with immobilized cells. The basic beer characteristics, i.e. extract, ethanol, biomass concentration, pH and colour, as well as the concentration of aldehydes and vicinal diketones, were measured. The results suggested that the process parameters represented a powerful tool in controlling the fermentation time. Subsequently, the optimized process parameters were used to produce beer in laboratory batch fermentation. The system productivity was also investigated and the data were used for the development of another mathematical model. PMID:26019512

  16. Optimal control of multiphoton ionization dynamics of small alkali aggregates

    NASA Astrophysics Data System (ADS)

    Lindinger, A.; Bartelt, A.; Lupulescu, C.; Vajda, S.; Woste, Ludger

    2003-11-01

    We have performed transient multi-photon ionization experiments on small alkali clusters of different sizes in order to probe their wave packet dynamics, structural reorientations, charge transfers and dissociative events in different vibrationally excited electronic states, including the ground state. The observed processes were highly dependent on the irradiated pulse parameters, such as the wavelength range or the phase and amplitude, which motivated the use of a feedback control system for generating the optimum pulse shapes. Their spectral and temporal behavior reflects interesting properties of the investigated system and of the irradiated photo-chemical process. First, we present the vibrational dynamics of bound electronically excited states of alkali dimers and trimers. The scheme for observing the wave packet dynamics in the electronic ground state using stimulated Raman pumping is shown. Since the employed pulse parameters significantly influence the efficiency of the irradiated dynamic pathways, photo-induced ionization experiments were carried out. The controllability of 3-photon ionization pathways is investigated on the model-like systems NaK and K2. A closed learning loop for adaptive feedback control is used to find the optimal fs pulse shape. Sinusoidal parameterizations of the spectral phase modulation are investigated with regard to the obtained optimal field. By reducing the number of parameters, and thereby the complexity of the phase modulation, optimal pulse shapes can be generated that carry fingerprints of the molecule's dynamical properties. This makes it possible to find "understandable" optimal pulse forms and offers the possibility of gaining insight into the photo-induced control process. Characteristic motions of the involved wave packets are proposed to explain the optimized dynamic dissociation pathways.

  17. Optimal experimental design for assessment of enzyme kinetics in a drug discovery screening environment.

    PubMed

    Sjögren, Erik; Nyberg, Joakim; Magnusson, Mats O; Lennernäs, Hans; Hooker, Andrew; Bredberg, Ulf

    2011-05-01

    A penalized expectation of determinant (ED)-optimal design with a discrete parameter distribution was used to find an optimal experimental design for assessment of enzyme kinetics in a screening environment. A data set for enzyme kinetic data (V(max) and K(m)) was collected from previously reported studies, and every V(max)/K(m) pair (n = 76) was taken to represent a unique drug compound. The design was restricted to 15 samples, an incubation time of up to 40 min, and starting concentrations (C(0)) for the incubation between 0.01 and 100 μM. The optimization was performed by finding the sample times and C(0) returning the lowest uncertainty (S.E.) of the model parameter estimates. Individual optimal designs, one general optimal design, and one pragmatic optimal design (OD) suitable for laboratory practice were obtained. In addition, a standard design (STD-D), representing a commonly applied approach for metabolic stability investigations, was constructed. Simulations were performed for OD and STD-D by using the Michaelis-Menten (MM) equation, and enzyme kinetic parameters were estimated with both MM and a monoexponential decay. OD generated a better result (relative standard error) for 99% of the compounds and an equal or better result (root mean square error, RMSE) for 78% of the compounds in estimation of metabolic intrinsic clearance. Furthermore, high-quality estimates (RMSE < 30%) of both V(max) and K(m) could be obtained for a considerable number (26%) of the investigated compounds by using the suggested OD. The results presented in this study demonstrate that the output could generally be improved compared with that obtained from the standard approaches used today.
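
    A simplified sketch of the two compared estimation models follows: Michaelis-Menten depletion data are simulated at one hypothetical start concentration, then intrinsic clearance is estimated by fitting both the MM equation and a monoexponential decay. The kinetic constants and sampling scheme are invented, not the paper's optimal design.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import curve_fit

    vmax_true, km_true, c0 = 0.1, 5.0, 1.0       # µM/min, µM, µM (illustrative)

    def mm_rhs(t, c, vmax, km):
        return -vmax * c / (km + c)

    rng = np.random.default_rng(1)
    t_obs = np.linspace(0.0, 40.0, 15)           # 15 samples over 40 min
    c_true = solve_ivp(mm_rhs, (0, 40), [c0], t_eval=t_obs,
                       args=(vmax_true, km_true)).y[0]
    c_obs = c_true * (1.0 + 0.03 * rng.normal(size=t_obs.size))

    def mm_model(t, vmax, km):
        return solve_ivp(mm_rhs, (0, 40), [c0], t_eval=t, args=(vmax, km)).y[0]

    (vmax_fit, km_fit), _ = curve_fit(mm_model, t_obs, c_obs,
                                      p0=(0.05, 1.0), bounds=(1e-6, 50.0))
    (ke_fit,), _ = curve_fit(lambda t, ke: c0 * np.exp(-ke * t),
                             t_obs, c_obs, p0=(0.01,))
    # At C0 << Km the two estimates of CLint = Vmax/Km should roughly agree.
    print("CLint (MM fit):     ", vmax_fit / km_fit)
    print("CLint (monoexp fit):", ke_fit)
    ```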

  18. Inverse modeling of rainfall infiltration with a dual permeability approach using different matrix-fracture coupling variants.

    NASA Astrophysics Data System (ADS)

    Blöcher, Johanna; Kuraz, Michal

    2017-04-01

    In this contribution we propose implementations of the dual permeability model with different inter-domain exchange descriptions, together with metaheuristic optimization algorithms for parameter identification and mesh optimization. We compare variants of the coupling term with different numbers of parameters to test whether a reduction of parameters is feasible. This can reduce parameter uncertainty in inverse modeling, but also allows for different conceptual models of the domain and matrix coupling. The different variants of the dual permeability model are implemented in 1D and 2D in the open-source library DRUtES, written in FORTRAN 2003/2008. For parameter identification we use adaptations of particle swarm optimization (PSO) and teaching-learning-based optimization (TLBO), which are population-based metaheuristics with different learning strategies. These are high-level stochastic search algorithms that do not require gradient information or a convex search space. Despite increasing computing power and parallel processing, an overly fine mesh is not feasible for parameter identification. This creates the need to find a mesh that optimizes both accuracy and simulation time. We use a bi-objective PSO algorithm to generate a Pareto front of optimal meshes to account for both objectives. The dual permeability model and the optimization algorithms were tested on virtual data and field TDR sensor readings. The TDR sensor readings showed a very steep increase during rapid rainfall events and a subsequent steep decrease. This was theorized to be an effect of artificial macroporous envelopes surrounding the TDR sensors, creating an anomalous region with distinct local soil hydraulic properties. One of our objectives is to test how well the dual permeability model can describe this infiltration behavior and which coupling term is most suitable.

  19. A preliminary study to metaheuristic approach in multilayer radiation shielding optimization

    NASA Astrophysics Data System (ADS)

    Arif Sazali, Muhammad; Rashid, Nahrul Khair Alang Md; Hamzah, Khaidzir

    2018-01-01

    Metaheuristics are high-level algorithmic concepts that can be used to develop heuristic optimization algorithms. One of their applications is to find optimal or near-optimal solutions to combinatorial optimization problems (COPs) such as scheduling, vehicle routing, and timetabling. Combinatorial optimization deals with finding optimal combinations or permutations of a given set of problem components when exhaustive search is not feasible. A radiation shield made of several layers of different materials can be regarded as a COP. The time taken to optimize the shield may be too high when several parameters are involved, such as the number of materials, the thickness of the layers, and the arrangement of the materials. Metaheuristics can be applied to reduce the optimization time, trading guaranteed optimal solutions for near-optimal solutions found in a comparably short amount of time. The application of metaheuristics to radiation shield optimization is lacking. In this paper, we present a review of the suitability of metaheuristics for multilayer shielding design, specifically the genetic algorithm and the ant colony optimization (ACO) algorithm. We also propose an optimization model based on the ACO method.

  20. Low-thrust orbit transfer optimization with refined Q-law and multi-objective genetic algorithm

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Petropoulos, Anastassios E.; von Allmen, Paul

    2005-01-01

    An optimization method for low-thrust orbit transfers around a central body is developed using the Q-law and a multi-objective genetic algorithm. In the hybrid method, the Q-law generates candidate orbit transfers, and the multi-objective genetic algorithm optimizes the Q-law control parameters in order to simultaneously minimize both the consumed propellant mass and the flight time of the orbit transfer. This paper addresses the problem of finding optimal orbit transfers for low-thrust spacecraft.

  1. Reducing the uncertainty of parameters controlling seasonal carbon and water fluxes in Chinese forests and its implication for simulated climate sensitivities

    NASA Astrophysics Data System (ADS)

    Li, Yue; Yang, Hui; Wang, Tao; MacBean, Natasha; Bacour, Cédric; Ciais, Philippe; Zhang, Yiping; Zhou, Guangsheng; Piao, Shilong

    2017-08-01

    Reducing parameter uncertainty of process-based terrestrial ecosystem models (TEMs) is one of the primary targets for accurately estimating carbon budgets and predicting ecosystem responses to climate change. However, parameters in TEMs are rarely constrained by observations from Chinese forest ecosystems, which are an important carbon sink over the northern hemispheric land. In this study, eddy covariance data from six forest sites in China are used to optimize parameters of the ORganizing Carbon and Hydrology In Dynamic EcosystEms (ORCHIDEE) TEM. The model-data assimilation through parameter optimization largely reduces the prior model errors and improves the simulated seasonal cycle and summer diurnal cycle of net ecosystem exchange, latent heat fluxes, gross primary production and ecosystem respiration. Climate change experiments based on the optimized model indicate that forest net primary production (NPP) is suppressed in response to warming in southern China but stimulated in northeastern China. Altered precipitation has an asymmetric impact on forest NPP at sites in water-limited regions, with the optimization-induced reduction in the response of NPP to precipitation decline being as large as 61% at a deciduous broadleaf forest site. We find that seasonal optimization alters forest carbon cycle responses to environmental change, with the parameter optimization consistently reducing the simulated positive response of heterotrophic respiration to warming. Evaluations against independent observations suggest that improving model structure still matters most for long-term carbon stock and its changes, in particular nutrient- and age-related changes of photosynthetic rates, carbon allocation, and tree mortality.

  2. Adaptive photoacoustic imaging quality optimization with EMD and reconstruction

    NASA Astrophysics Data System (ADS)

    Guo, Chengwen; Ding, Yao; Yuan, Jie; Xu, Guan; Wang, Xueding; Carson, Paul L.

    2016-10-01

    Biomedical photoacoustic (PA) signals are characterized by an extremely low signal-to-noise ratio, which yields significant artifacts in photoacoustic tomography (PAT) images. Since PA signals acquired by ultrasound transducers are non-linear and non-stationary, traditional data analysis methods such as Fourier and wavelet methods cannot provide sufficient information for further research. In this paper, we introduce an adaptive method to improve the quality of PA imaging based on empirical mode decomposition (EMD) and reconstruction. Data acquired by ultrasound transducers are adaptively decomposed into several intrinsic mode functions (IMFs) after a sifting pre-process. Since noise is randomly distributed among the different IMFs, depressing the IMFs with more noise while enhancing those with less noise can effectively enhance the quality of reconstructed PAT images. However, searching for optimal parameters by brute-force search would cost too much time, preventing this method from practical use. To find the parameters within a reasonable time, heuristic algorithms, which are designed to find good solutions more efficiently when traditional methods are too slow, are adopted in our method. Two heuristic algorithms, the Simulated Annealing Algorithm, a probabilistic method for approximating the global optimum, and the Artificial Bee Colony Algorithm, an optimization method inspired by the foraging behavior of bee swarms, are selected to search for the optimal IMF parameters in this paper. The effectiveness of our proposed method is demonstrated both on simulated data and on PA signals from real biomedical tissue, which shows its potential for future clinical PA imaging de-noising.
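
    A minimal sketch of the decompose-weight-reconstruct idea follows, assuming the third-party PyEMD package (pip install EMD-signal). For brevity the IMF weight is chosen by a plain grid scan against a known clean signal, whereas the paper selects weights with simulated annealing or artificial bee colony on real data.

    ```python
    import numpy as np
    from PyEMD import EMD               # third-party package (assumption)

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 2000)
    clean = np.sin(2 * np.pi * 12 * t) * np.exp(-3 * t)   # toy PA-like transient
    noisy = clean + 0.4 * rng.normal(size=t.size)

    imfs = EMD()(noisy)                 # adaptive decomposition into IMFs

    # Noise concentrates in the first, highest-frequency IMFs: scan a single
    # attenuation weight for them and keep the best reconstruction.
    best_w, best_err = 1.0, np.inf
    for w in np.linspace(0.0, 1.0, 21):
        weights = np.ones(len(imfs))
        weights[:2] = w                 # depress the two noisiest IMFs
        recon = (weights[:, None] * imfs).sum(axis=0)
        err = np.mean((recon - clean) ** 2)   # oracle error, known here only
        if err < best_err:                    # because the signal is simulated
            best_w, best_err = w, err
    print(best_w, best_err)
    ```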

  3. Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.

    PubMed

    Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O

    2016-03-01

    An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information about the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator's skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x-y table and a robot arm, and experimental implementation results on a PR2 robot, confirm the suitability of the proposed method.
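
    The outer-loop design reduces to a standard LQR problem. The sketch below solves such a problem directly via the continuous-time algebraic Riccati equation for an illustrative double-integrator plant; the paper instead recovers the solution online with integral reinforcement learning, without model knowledge.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Double-integrator stand-in for the prescribed impedance/task dynamics.
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])   # weight on tracking error and velocity
    R = np.array([[0.1]])      # weight on human/robot effort

    P = solve_continuous_are(A, B, Q, R)   # steady-state Riccati solution
    K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain, u = -K x
    print("LQR gain:", K)
    ```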

  4. [The utility boiler low NOx combustion optimization based on ANN and simulated annealing algorithm].

    PubMed

    Zhou, Hao; Qian, Xinping; Zheng, Ligang; Weng, Anxin; Cen, Kefa

    2003-11-01

    With increasingly strict environmental protection requirements, more attention has been paid to low-NOx combustion optimization technology because it is cheap and easy to deploy. In this work, field experiments on the NOx emission characteristics of a 600 MW coal-fired boiler were carried out. On the basis of artificial neural network (ANN) modeling, the simulated annealing (SA) algorithm was employed to optimize the boiler combustion so as to achieve a low NOx emission concentration, and the corresponding combustion scheme was obtained. Two sets of SA parameters were tried in order to find a better SA scheme; the results show that the parameters T0 = 50 K and alpha = 0.6 lead to a better optimization process. This work lays the foundation for on-line control technology for low-NOx boiler combustion.
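
    A bare-bones simulated annealing loop with the cooling settings the paper reports as better (T0 = 50, alpha = 0.6, geometric cooling) is sketched below. The objective is a hypothetical smooth surrogate standing in for the trained ANN emission model; the trial count and step size are invented.

    ```python
    import math
    import random

    random.seed(0)

    def nox_surrogate(x):
        # Hypothetical smooth surrogate for the ANN's predicted NOx level.
        return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2 + 0.1 * math.sin(15 * x[0])

    x = [0.5, 0.5]                        # normalized combustion settings
    fx = nox_surrogate(x)
    T, alpha = 50.0, 0.6                  # the better-performing SA settings
    while T > 1e-3:
        for _ in range(100):              # trials per temperature level
            cand = [min(1.0, max(0.0, xi + random.uniform(-0.05, 0.05)))
                    for xi in x]
            fc = nox_surrogate(cand)
            if fc < fx or random.random() < math.exp((fx - fc) / T):
                x, fx = cand, fc          # accept improvements, or worse moves
        T *= alpha                        # with Boltzmann probability; then cool
    print(x, fx)
    ```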

  5. Stepped MS(All) Relied Transition (SMART): An approach to rapidly determine optimal multiple reaction monitoring mass spectrometry parameters for small molecules.

    PubMed

    Ye, Hui; Zhu, Lin; Wang, Lin; Liu, Huiying; Zhang, Jun; Wu, Mengqiu; Wang, Guangji; Hao, Haiping

    2016-02-11

    Multiple reaction monitoring (MRM) is a universal approach for quantitative analysis because of its high specificity and sensitivity. Nevertheless, optimization of MRM parameters remains a time- and labor-intensive task, particularly in the multiplexed quantitative analysis of small molecules in complex mixtures. In this study, we have developed an approach named Stepped MS(All) Relied Transition (SMART) to predict the optimal MRM parameters of small molecules. SMART first requires a rapid and high-throughput analysis of samples using a Stepped MS(All) technique (sMS(All)) on a Q-TOF, which consists of serial MS(All) events acquired from low CE to gradually stepped-up CE values in a cycle. The optimal CE values can then be determined by comparing the extracted ion chromatograms for the ion pairs of interest among the serial scans. The SMART-predicted parameters were found to agree well with the parameters optimized on a triple quadrupole from the same vendor using a mixture of standards. The parameters optimized on a triple quadrupole from a different vendor were also employed for comparison and were found to be linearly correlated with the SMART-predicted parameters, suggesting the potential applicability of the SMART approach across different instrumental platforms. The approach was further validated by applying it to the simultaneous quantification of 31 herbal components in the plasma of rats treated with a herbal prescription. Because the sMS(All) acquisition can be accomplished in a single run for multiple components, independent of standards, the SMART approach is expected to find wide application in the multiplexed quantitative analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Fitting Nonlinear Curves by use of Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Hill, Scott A.

    2005-01-01

    MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
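
    A compact sketch of the core operation under assumed inputs: fitting a nonlinear exponential model to data by minimizing the sum of squared residuals with the BFGS variable-metric method. The model form and data are invented for illustration; MULTIVAR itself is FORTRAN 77, and this is merely the same idea expressed with scipy.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    x = np.linspace(0.0, 10.0, 50)
    y = 2.5 * np.exp(-0.4 * x) + 0.3 + 0.02 * rng.normal(size=x.size)

    def sse(p):
        # Sum of squared residuals for the model a*exp(-k*x) + c.
        a, k, c = p
        return np.sum((y - (a * np.exp(-k * x) + c)) ** 2)

    res = minimize(sse, x0=np.array([1.0, 0.1, 0.0]), method="BFGS")
    print(res.x)   # recovered (a, k, c)
    ```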

  7. Methodology of Numerical Optimization for Orbital Parameters of Binary Systems

    NASA Astrophysics Data System (ADS)

    Araya, I.; Curé, M.

    2010-02-01

    The use of a numerical maximization (or minimization) method in optimization processes allows us to obtain a great number of solutions. We can therefore find a global maximum or minimum of the problem, but only if a suitable methodology is used. To obtain the globally optimal values, we use the genetic algorithm PIKAIA (P. Charbonneau) and four other algorithms implemented in Mathematica. We demonstrate that the orbital parameters of binary systems published in some papers, based on radial velocity measurements, are local minima instead of global ones.

  8. Optimizing transformations for automated, high throughput analysis of flow cytometry data

    PubMed Central

    2010-01-01

    Background: In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results: We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter-optimized transformations improve visualization, reduce variability in the location of discovered cell populations across samples, and decrease the misclassification (mis-gating) of individual events when compared to default-parameter counterparts. Conclusions: Our results indicate that the preferred transformation for fluorescence channels is a parameter-optimized biexponential or generalized Box-Cox, in accordance with current best practices. Interestingly, for populations in the scatter channels, we find that the optimized hyperbolic arcsine may be a better choice in a high-throughput setting than current standard practice of no transformation. However, generally speaking, the choice of transformation remains data-dependent. We have implemented our algorithm in the BioConductor package, flowTrans, which is publicly available. PMID:21050468

  9. Optimizing transformations for automated, high throughput analysis of flow cytometry data.

    PubMed

    Finak, Greg; Perez, Juan-Manuel; Weng, Andrew; Gottardo, Raphael

    2010-11-04

    In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter-optimized transformations improve visualization, reduce variability in the location of discovered cell populations across samples, and decrease the misclassification (mis-gating) of individual events when compared to default-parameter counterparts. Our results indicate that the preferred transformation for fluorescence channels is a parameter-optimized biexponential or generalized Box-Cox, in accordance with current best practices. Interestingly, for populations in the scatter channels, we find that the optimized hyperbolic arcsine may be a better choice in a high-throughput setting than current standard practice of no transformation. However, generally speaking, the choice of transformation remains data-dependent. We have implemented our algorithm in the BioConductor package, flowTrans, which is publicly available.
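
    As a simplified stand-in for the parameter-tuning idea (not the flowTrans likelihood criterion itself), the sketch below scans the arcsinh cofactor and keeps the value that makes a toy fluorescence channel most symmetric; the data-generating model and the skewness score are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(7)
    # Toy fluorescence channel: log-normal-ish positives plus instrument noise
    # centered near zero, which makes a plain log transform unusable.
    data = np.concatenate([rng.lognormal(4.0, 1.0, 5000),
                           rng.normal(0.0, 20.0, 1000)])

    cofactors = np.logspace(-1, 3, 50)
    scores = [abs(skew(np.arcsinh(data / b))) for b in cofactors]
    best = cofactors[int(np.argmin(scores))]
    print("selected arcsinh cofactor:", best)
    ```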

  10. Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.

    PubMed

    Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S

    2017-01-01

    Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.

  11. Architecture and settings optimization procedure of a TES frequency domain multiplexed readout firmware

    NASA Astrophysics Data System (ADS)

    Clenet, A.; Ravera, L.; Bertrand, B.; den Hartog, R.; Jackson, B.; van Leeuwen, B.-J.; van Loon, D.; Parot, Y.; Pointecouteau, E.; Sournac, A.

    2014-11-01

    IRAP is developing the readout electronics of the SPICA-SAFARI TES bolometer arrays. Based on the frequency domain multiplexing technique, the readout electronics provides the AC signals to voltage-bias the detectors, demodulates the data, and computes a feedback to linearize the detection chain. The feedback is computed with a specific technique, the so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e. several μs) and with fast signals (i.e. frequency carriers of the order of 5 MHz). To optimize the power consumption we took advantage of the reduced science signal bandwidth to decouple the signal sampling frequency and the data processing rate. This technique allowed a reduction of the power consumption of the circuit by a factor of 10. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters. Indeed, to operate a TES array one has to properly define about 21000 parameters. We defined a set of procedures to automatically characterize these parameters and find the optimal settings.

  12. Mice take calculated risks.

    PubMed

    Kheifets, Aaron; Gallistel, C R

    2012-05-29

    Animals successfully navigate the world despite having only incomplete information about behaviorally important contingencies. It is an open question to what degree this behavior is driven by estimates of stochastic parameters (brain-constructed models of the experienced world) and to what degree it is directed by reinforcement-driven processes that optimize behavior in the limit without estimating stochastic parameters (model-free adaptation processes, such as associative learning). We find that mice adjust their behavior in response to a change in probability more quickly and abruptly than can be explained by differential reinforcement. Our results imply that mice represent probabilities and perform calculations over them to optimize their behavior, even when the optimization produces negligible material gain.

  13. Mice take calculated risks

    PubMed Central

    Kheifets, Aaron; Gallistel, C. R.

    2012-01-01

    Animals successfully navigate the world despite having only incomplete information about behaviorally important contingencies. It is an open question to what degree this behavior is driven by estimates of stochastic parameters (brain-constructed models of the experienced world) and to what degree it is directed by reinforcement-driven processes that optimize behavior in the limit without estimating stochastic parameters (model-free adaptation processes, such as associative learning). We find that mice adjust their behavior in response to a change in probability more quickly and abruptly than can be explained by differential reinforcement. Our results imply that mice represent probabilities and perform calculations over them to optimize their behavior, even when the optimization produces negligible material gain. PMID:22592792

  14. Optimal resource diffusion for suppressing disease spreading in multiplex networks

    NASA Astrophysics Data System (ADS)

    Chen, Xiaolong; Wang, Wei; Cai, Shimin; Stanley, H. Eugene; Braunstein, Lidia A.

    2018-05-01

    Resource diffusion is a ubiquitous phenomenon, but how it impacts epidemic spreading has received little study. We propose a model that couples epidemic spreading and resource diffusion in multiplex networks. The spread of disease in a physical contact layer and the recovery of the infected nodes are both strongly dependent upon resources supplied by their counterparts in the social layer. The generation and diffusion of resources in the social layer are in turn strongly dependent upon the state of the nodes in the physical contact layer. Resources diffuse either preferentially or randomly in this model. To quantify the degree of preferential diffusion, a bias parameter that controls the resource diffusion is proposed. We conduct extensive simulations and find that preferential resource diffusion can change the type of phase transition of the fraction of infected nodes. When the degree of interlayer correlation is below a critical value, increasing the bias parameter changes the phase transition from double continuous to single continuous. When the degree of interlayer correlation is above a critical value, the phase transition changes from multiple continuous to first discontinuous and then to hybrid. We find hysteresis loops in the phase transition. We also find that there is an optimal resource strategy at each fixed degree of interlayer correlation under which the threshold reaches a maximum and the disease can be maximally suppressed. In addition, the optimal controlling parameter increases as the degree of interlayer correlation increases.

  15. Model predictive control of attitude maneuver of a geostationary flexible satellite based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    TayyebTaher, M.; Esmaeilzadeh, S. Majid

    2017-07-01

    This article presents an application of Model Predictive Control (MPC) to the attitude control of a geostationary flexible satellite. A SIMO model, derived using the Lagrange equations, is used for the geostationary satellite, and flexibility is included in the modelling equations. The state-space equations are expressed in a form that simplifies the controller. There is naturally no specific tuning rule for finding the MPC controller parameters that best fit the desired behaviour. Being an intelligent optimization method, a Genetic Algorithm is used to optimize the performance of the MPC controller by tuning the controller parameters with respect to the rise time, settling time, and overshoot of the target point of the flexible structure and its mode-shape amplitudes, making large attitude maneuvers possible. The model includes the geosynchronous orbit environment and geostationary satellite parameters. The simulation results for the flexible satellite with an attitude maneuver show the efficiency of the proposed optimization method in comparison with an LQR optimal controller.

  16. Estimating stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Chuan-Xin; Yuan, Yuan; Zhang, Hao-Wei; Shuai, Yong; Tan, He-Ping

    2016-09-01

    Considering features of stellar spectral radiation and sky surveys, we established a computational model for stellar effective temperatures, detected angular parameters and gray rates. Using known stellar flux data in some bands, we estimated stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization (SPSO). We first verified the reliability of SPSO, and then determined reasonable parameters that produced highly accurate estimates under certain gray deviation levels. Finally, we calculated 177,860 stellar effective temperatures and detected angular parameters using data from the Midcourse Space Experiment (MSX) catalog. The derived stellar effective temperatures proved accurate when compared with known values from the literature. This research makes full use of catalog data and presents an original technique for studying stellar characteristics. It proposes a novel method for calculating stellar effective temperatures and detecting angular parameters, and provides theoretical and practical data for finding information about radiation in any band.

  17. Maximum life spiral bevel reduction design

    NASA Technical Reports Server (NTRS)

    Savage, M.; Prasanna, M. G.; Coe, H. H.

    1992-01-01

    Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.

  18. An improved hybrid of particle swarm optimization and the gravitational search algorithm to produce a kinetic parameter estimation of aspartate biochemical pathways.

    PubMed

    Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal

    2017-12-01

    Mathematical modelling is fundamental to understand the dynamic behavior and regulation of the biochemical metabolisms and pathways that are found in biological systems. Pathways are used to describe complex processes that involve many parameters. It is important to have an accurate and complete set of parameters that describe the characteristics of a given model. However, measuring these parameters is typically difficult and even impossible in some cases. Furthermore, the experimental data are often incomplete and also suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that can represent the actual biological processes involved in biological systems. Computational approaches are required to estimate these parameters. The estimation is converted into multimodal optimization problems that require a global optimization algorithm that can avoid local solutions. These local solutions can lead to a bad fit when calibrating with a model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGSA) to improve the efficiency of a global optimum (the best set of kinetic parameter values) search. The findings suggest that the proposed algorithm is capable of narrowing down the search space by exploiting the feasible solution areas. Hence, the proposed algorithm is able to achieve a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated based on two aspartate pathways that were obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation. Nevertheless, the proposed algorithm is only expected to work well in small scale systems. In addition, the results of this study can be used to estimate kinetic parameter values in the stage of model selection for different experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Optimization to the Culture Conditions for Phellinus Production with Regression Analysis and Gene-Set Based Genetic Algorithm

    PubMed Central

    Li, Zhongwei; Xin, Yuezhen; Wang, Xun; Sun, Beibei; Xia, Shengyu; Li, Hui

    2016-01-01

    Phellinus is a fungus known as an elemental component of drugs for preventing cancer. With the purpose of finding optimized culture conditions for Phellinus production in the laboratory, a large number of single-factor experiments were conducted, generating a large volume of experimental data. In this work, we use the data collected from these experiments for regression analysis, obtaining a mathematical model that predicts Phellinus production. Subsequently, a gene-set based genetic algorithm is developed to optimize the values of the parameters involved in the culture conditions, including inoculum size, pH value, initial liquid volume, temperature, seed age, fermentation time, and rotation speed. The optimized parameter values are in accordance with biological experimental results, which indicates that our method has good predictive power for culture-condition optimization. PMID:27610365

  20. A hybrid Jaya algorithm for reliability-redundancy allocation problems

    NASA Astrophysics Data System (ADS)

    Ghavidel, Sahand; Azizivahed, Ali; Li, Li

    2018-04-01

    This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on the standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then tested on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence for its better and acceptable optimization performance compared to the original Jaya algorithm and other reported optimal results.

  1. Optimal Parameters for Intervertebral Disk Resection Using Aqua-Plasma Beams.

    PubMed

    Yoon, Sung-Young; Kim, Gon-Ho; Kim, Yushin; Kim, Nack Hwan; Lee, Sangheon; Kawai, Christina; Hong, Youngki

    2018-06-14

    A minimally invasive procedure for intervertebral disk resection using plasma beams has been developed. Conventional parameters for the plasma procedure, such as voltage and tip speed, mainly rely on the surgeon's personal experience, without adequate evidence from experiments. Our objective was to determine the optimal parameters for plasma disk resection. The rate of ablation was measured at different procedural tip speeds and voltages using porcine nuclei pulposi. The amount of heat formation under the experimental conditions was also measured to evaluate the thermal safety of the plasma procedure. The ablation rate increased at slower procedural speeds and higher voltages. However, for thermal safety, the optimal parameters for plasma procedures with minimal tissue damage were an electrical output of 280 volts root-mean-square (Vrms) and a procedural tip speed of 2.5 mm/s. Our findings provide useful information for an effective and safe plasma procedure for disk resection in a clinical setting. Georg Thieme Verlag KG Stuttgart · New York.

  2. Parameter Analysis of the VPIN (Volume synchronized Probability of Informed Trading) Metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jung Heon; Wu, Kesheng; Simon, Horst D.

    2014-03-01

    VPIN (Volume synchronized Probability of Informed trading) is a leading indicator of liquidity-induced volatility. It is best known for having produced a signal hours before the Flash Crash of 2010. On that day, the market saw the biggest one-day point decline in the Dow Jones Industrial Average, which culminated in $1 trillion of market value disappearing, only to recover those losses twenty minutes later (Lauricella 2010). The computation of VPIN requires the user to set up a handful of free parameters. The values of these parameters significantly affect the effectiveness of VPIN as measured by the false positive rate (FPR). An earlier publication reported that a brute-force search of simple parameter combinations yielded a number of parameter combinations with an FPR of 7%. This work is a systematic attempt to find an optimal parameter set using an optimization package, NOMAD (Nonlinear Optimization by Mesh Adaptive Direct Search) by Audet, Le Digabel, and Tribes (2009) and Le Digabel (2011). We have implemented a number of techniques to reduce the computation time with NOMAD. Tests show that we can reduce the FPR to only 2%. To better understand the parameter choices, we have conducted a series of sensitivity analyses via uncertainty quantification on the parameter spaces using UQTK (Uncertainty Quantification Toolkit). The results show the dominance of 2 parameters in the computation of the FPR. Using the outputs from the NOMAD optimization and the sensitivity analysis, we recommend a range of values for each of the free parameters that perform well on a large set of futures trading records.

  3. Optimal growth trajectories with finite carrying capacity.

    PubMed

    Caravelli, F; Sindoni, L; Caccioli, F; Ududec, C

    2016-08-01

    We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.

  4. Optimal growth trajectories with finite carrying capacity

    NASA Astrophysics Data System (ADS)

    Caravelli, F.; Sindoni, L.; Caccioli, F.; Ududec, C.

    2016-08-01

    We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.
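
    The Kelly setting invoked in these two records is easy to verify numerically. The sketch below (a minimal Python illustration under stated assumptions, not the authors' code) simulates a discretized multiplicative process without carrying capacity and scans the invested fraction f; the simulated growth-rate maximum lands near the analytic Kelly optimum f* = mu/sigma^2. All parameter values are illustrative.

      import numpy as np

      # Constant-fraction investing in a risky asset with drift mu and volatility
      # sigma: the long-run log-growth rate g(f) = f*mu - 0.5*f**2*sigma**2 is
      # maximized at f* = mu / sigma**2. One common noise realization is reused
      # across all fractions so the comparison is not dominated by sampling noise.
      rng = np.random.default_rng(0)
      mu, sigma, dt, n_steps = 0.05, 0.20, 0.01, 200_000
      returns = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)

      def avg_log_growth(f):
          # Mean log growth per unit time for a constant invested fraction f.
          return np.log(1.0 + f * returns).mean() / dt

      fractions = np.linspace(0.0, 2.5, 26)
      rates = [avg_log_growth(f) for f in fractions]
      f_best = fractions[int(np.argmax(rates))]
      print(f"simulated optimum f ~ {f_best:.2f}, analytic f* = {mu / sigma**2:.2f}")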

  5. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability of adopting configurations with worse objective function values), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space, to improve search efficiency by allowing fast fine-tuning of the continuous variables within the trust region at that configuration point.
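
    For readers unfamiliar with the baseline being improved upon, the following is a minimal sketch of the conventional single-trajectory SA loop described above, with geometric cooling and a shrinking proposal region, applied to a toy 2-D objective. It is not the RBSA algorithm itself (no branching, no trust-region gradient search); the test function and schedule constants are illustrative.

      import math
      import random

      def objective(x, y):
          # Rastrigin-style surface with many local minima; global minimum at (0, 0).
          return x*x + y*y + 10.0 * (2 - math.cos(2*math.pi*x) - math.cos(2*math.pi*y))

      random.seed(0)
      lo, hi = -5.0, 5.0
      cur = (random.uniform(lo, hi), random.uniform(lo, hi))
      cur_val = objective(*cur)
      best, best_val = cur, cur_val
      temp, radius = 10.0, (hi - lo) / 2

      for step in range(20_000):
          # Propose a new configuration inside the (shrinking) search region.
          cand = (min(hi, max(lo, cur[0] + random.uniform(-radius, radius))),
                  min(hi, max(lo, cur[1] + random.uniform(-radius, radius))))
          cand_val = objective(*cand)
          # Always accept improvements; accept worse moves with Boltzmann probability.
          if cand_val < cur_val or random.random() < math.exp((cur_val - cand_val) / temp):
              cur, cur_val = cand, cand_val
          if cand_val < best_val:
              best, best_val = cand, cand_val
          # Lower the temperature and shrink the proposal region as the search proceeds.
          temp *= 0.9995
          radius = max(0.01, radius * 0.9998)

      print(f"best found at ({best[0]:.3f}, {best[1]:.3f}), value {best_val:.4f}")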

  6. Preparation of pure chitosan film using ternary solvents and its super absorbency.

    PubMed

    Wang, Xuejun; Lou, Tao; Zhao, Wenhua; Song, Guojun

    2016-11-20

    Chemical modification and graft copolymerization are commonly adopted to prepare superabsorbent materials. In this study, however, the physical microstructure of pure chitosan film was optimized to improve the water-uptake capacity. Chitosan films with a micro-nanostructure were prepared using a ternary solvent system. The optimal process parameters are 1% acetic acid aqueous solution : dioxane : dimethyl sulfoxide = 90 : 2.5 : 7.5 (v/v/v) with a chitosan concentration of 1.25% (w/v). The water-uptake capacity of the chitosan film prepared under the optimal process parameters was 896 g/g. The prepared chitosan films also exhibited high water-uptake capacity in response to external stimuli such as temperature, pH, and salt. This finding may provide another route to improving water absorbency. The pure chitosan film may find potential applications especially in the fields of hygienic products and biomedicine due to its super water absorbency and nontoxicity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of the optimal sensors to predict PEM fuel cell performance is also studied using test data. A fuel cell model is developed to generate the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest-gap method and an exhaustive brute-force search, are applied to find the optimal sensors that provide reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set of minimum size. Furthermore, the performance of the optimal sensor set in predicting fuel cell performance is studied using test data from a PEM fuel cell system. The results demonstrate that with the optimal sensors, the performance of the PEM fuel cell can be predicted with good quality.
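
    The exhaustive search mentioned above can be sketched compactly: given a sensitivity matrix, score every k-sensor subset and keep the best. In the sketch below the score is the smallest singular value of the chosen submatrix, used as an assumed proxy for noise resistance; the matrix is random stand-in data, not the paper's fuel cell model.

      import itertools
      import numpy as np

      # Rows of S = candidate sensors, columns = health parameters. Choose the
      # k-sensor subset whose submatrix is best conditioned (largest minimum
      # singular value), so small measurement noise perturbs the parameter
      # estimates as little as possible.
      rng = np.random.default_rng(1)
      n_sensors, n_params, k = 8, 3, 4
      S = rng.standard_normal((n_sensors, n_params))

      best_score, best_subset = -np.inf, None
      for subset in itertools.combinations(range(n_sensors), k):
          score = np.linalg.svd(S[list(subset)], compute_uv=False).min()
          if score > best_score:
              best_score, best_subset = score, subset

      print(f"selected sensors {best_subset} (min singular value {best_score:.3f})")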

  8. The trade-off between morphology and control in the co-optimized design of robots.

    PubMed

    Rosendo, Andre; von Atzigen, Marco; Iida, Fumiya

    2017-01-01

    Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real-world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in the face of new search techniques.

  9. The trade-off between morphology and control in the co-optimized design of robots

    PubMed Central

    Iida, Fumiya

    2017-01-01

    Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real-world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in the face of new search techniques. PMID:29023482

  10. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  11. Quantum approximate optimization algorithm for MaxCut: A fermionic view

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2018-02-01

    Farhi et al. recently proposed a class of quantum algorithms, the quantum approximate optimization algorithm (QAOA), for approximately solving combinatorial optimization problems (E. Farhi et al., arXiv:1411.4028; arXiv:1412.6062; arXiv:1602.07674). A level-p QAOA circuit consists of p steps; in each step a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm, which are to be optimized classically for the best performance. As p increases, parameter optimization becomes inefficient due to the curse of dimensionality. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here we analytically and numerically study parameter setting for the QAOA applied to MaxCut. For the level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MaxCut, the "ring of disagrees," or the one-dimensional antiferromagnetic ring, we provide an analysis for an arbitrarily high level. Using a fermionic representation, the evolution of the system under the QAOA translates into quantum control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of the QAOA for any p. It also greatly simplifies the numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional submanifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
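
    The level-1 case can be checked by brute force for a small instance. The sketch below (an illustrative statevector simulation, not the paper's fermionic analysis) runs p = 1 QAOA on the six-qubit ring of disagrees and scans (gamma, beta) for the best expected cut; the known level-1 value of 3/4 per edge, here about 4.5 of 6 edges, should appear. Grid resolution and problem size are assumptions for the demo.

      from functools import reduce
      import numpy as np

      n = 6                                   # qubits / ring vertices
      dim = 2 ** n
      edges = [(i, (i + 1) % n) for i in range(n)]

      # Diagonal of the MaxCut cost operator: C(z) = sum over edges (1 - z_i z_j) / 2.
      z = np.array([[1 - 2 * ((idx >> q) & 1) for q in range(n)] for idx in range(dim)])
      cost = sum((1 - z[:, i] * z[:, j]) / 2 for i, j in edges)

      plus = np.full(dim, 1 / np.sqrt(dim), dtype=complex)   # |+>^n initial state

      def expected_cut(gamma, beta):
          state = np.exp(-1j * gamma * cost) * plus          # phase separator
          rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                         [-1j * np.sin(beta), np.cos(beta)]])
          mixer = reduce(np.kron, [rx] * n)                  # exp(-i beta X) per qubit
          state = mixer @ state
          return float(np.real(state.conj() @ (cost * state)))

      grid = np.linspace(0, np.pi, 40)
      best = max(((expected_cut(g, b), g, b) for g in grid for b in grid))
      print(f"best <C> = {best[0]:.3f} at gamma = {best[1]:.3f}, beta = {best[2]:.3f}")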

  12. OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. BOETTCHER; A. PERCUS

    2000-08-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by "self-organized criticality," a concept introduced to describe emergent complexity in many physical systems. In contrast to genetic algorithms, which operate on an entire "gene pool" of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called "avalanches," ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
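
    A minimal version of the update rule described above is easy to state in code. The sketch below implements the tau-EO variant (rank the variables by local fitness and flip one drawn from a power law over ranks, the exponent tau being the single adjustable parameter) on a random ±1 spin glass; the instance, tau, and step count are illustrative, and this is a paraphrase of the method, not the authors' code.

      import numpy as np

      rng = np.random.default_rng(2)
      n, tau, steps = 64, 1.4, 20_000

      # Random symmetric +-1 couplings with zero diagonal, random initial spins.
      J = rng.choice([-1, 1], size=(n, n))
      J = np.triu(J, 1); J = J + J.T
      s = rng.choice([-1, 1], size=n)

      def energy(s):
          return -0.5 * s @ J @ s

      best_s, best_e = s.copy(), energy(s)
      ranks = np.arange(1, n + 1, dtype=float)
      p = ranks ** (-tau); p /= p.sum()               # power law over fitness ranks

      for _ in range(steps):
          fitness = s * (J @ s)                       # local satisfaction per spin
          order = np.argsort(fitness)                 # worst (lowest) fitness first
          k = order[rng.choice(n, p=p)]               # rank 1 (worst) is most likely
          s[k] = -s[k]                                # unconditional replacement
          e = energy(s)
          if e < best_e:
              best_s, best_e = s.copy(), e

      print(f"best energy found: {best_e:.0f}")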

  13. New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration

    NASA Astrophysics Data System (ADS)

    Keshavarz, Kasra; Alizadeh, Hossein

    2017-04-01

    Water resource models are powerful tools for supporting the water management decision-making process, and they are developed to deal with a broad range of issues including land use and climate change impact analysis, water allocation, system design and operation, waste load control and allocation, etc. These models are divided into two categories, simulation and optimization models, whose calibration has been widely addressed in the literature: efforts in recent decades have led to two main families of auto-calibration methods, uncertainty-based algorithms such as GLUE, MCMC and PEST, and optimization-based algorithms, including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms that benefit from the capabilities of both types, such as SUFI-2, have also been developed, this paper proposes a new auto-calibration algorithm that is capable of both finding optimal parameter values with respect to multiple objectives, like optimization-based algorithms, and providing interval estimations of the parameters, like uncertainty-based algorithms. The algorithm is in fact developed to improve the quality of SUFI-2 results. Based on a single objective, e.g. NSE or RMSE, SUFI-2 provides a routine to find the best point and interval estimation of the parameters and the corresponding prediction intervals (95PPU) of the time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures: the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty; the analyst then has to select the point and interval estimation of the parameters that are non-dominated with respect to both uncertainty measures. These properties of SUFI-2 raise two important questions, the answering of which motivated our research: Given that the final selection in SUFI-2 is based on the two measures, or objectives, and knowing that there is no multi-objective optimization mechanism in SUFI-2, are the final estimations Pareto-optimal? And can systematic methods be applied to select the final estimations? To deal with these questions, a new auto-calibration algorithm was proposed in which the uncertainty measures are treated as two objectives, and non-dominated interval estimations of the parameters are found by coupling Monte Carlo simulation with Multi-Objective Particle Swarm Optimization. Both the proposed algorithm and SUFI-2 were applied to calibrate the parameters of a water resources planning model of the Helleh river basin, Iran. The model is a comprehensive water quantity-quality model developed in previous research using the WEAP software in order to analyze the impacts of different water resources management strategies including dam construction, increasing the cultivated area, utilization of more efficient irrigation technologies, changing the crop pattern, etc. Comparing the Pareto front obtained from the proposed auto-calibration algorithm with the SUFI-2 results revealed that the new algorithm leads to a better and also continuous Pareto front, even though it is more computationally expensive. Finally, the Nash and Kalai-Smorodinsky bargaining methods were used to choose a compromise interval estimation on the Pareto front.
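
    Since the two uncertainty measures above are defined operationally, a small helper makes them concrete. The sketch below computes the 95PPU band as the 2.5th-97.5th percentile envelope of a simulated ensemble, the p-factor as the fraction of observations inside the band, and the r-factor as the mean band width scaled by the standard deviation of the observations; normalization conventions vary between papers, and the data here are synthetic stand-ins.

      import numpy as np

      def p_and_r_factor(ensemble, observed):
          """ensemble: (n_runs, n_times) simulated series; observed: (n_times,)."""
          lower = np.percentile(ensemble, 2.5, axis=0)    # 95PPU lower bound
          upper = np.percentile(ensemble, 97.5, axis=0)   # 95PPU upper bound
          p_factor = np.mean((observed >= lower) & (observed <= upper))
          r_factor = np.mean(upper - lower) / np.std(observed, ddof=1)
          return p_factor, r_factor

      rng = np.random.default_rng(3)
      obs = np.sin(np.linspace(0, 6, 100)) + 0.1 * rng.standard_normal(100)
      sims = obs + 0.3 * rng.standard_normal((200, 100))
      print("p-factor = %.2f, r-factor = %.2f" % p_and_r_factor(sims, obs))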

  14. A method to optimize the processing algorithm of a computed radiography system for chest radiography.

    PubMed

    Moore, C S; Liney, G P; Beavis, A W; Saunderson, J R

    2007-09-01

    A test methodology using an anthropomorphic-equivalent chest phantom is described for the optimization of the Agfa computed radiography "MUSICA" processing algorithm for chest radiography. The contrast-to-noise ratio (CNR) in the lung, heart and diaphragm regions of the phantom, and the "system modulation transfer function" (sMTF) in the lung region, were measured using test tools embedded in the phantom. Using these parameters the MUSICA processing algorithm was optimized with respect to low-contrast detectability and spatial resolution. Two optimum "MUSICA parameter sets" were derived respectively for maximizing the CNR and sMTF in each region of the phantom. Further work is required to find the relative importance of low-contrast detectability and spatial resolution in chest images, from which the definitive optimum MUSICA parameter set can then be derived. Prior to this further work, a compromised optimum MUSICA parameter set was applied to a range of clinical images. A group of experienced image evaluators scored these images alongside images produced from the same radiographs using the MUSICA parameter set in clinical use at the time. The compromised optimum MUSICA parameter set was shown to produce measurably better images.

  15. Multiobjective robust design of the double wishbone suspension system based on particle swarm optimization.

    PubMed

    Cheng, Xianfu; Lin, Yuqun

    2014-01-01

    The performance of the suspension system is one of the most important factors in the vehicle design. For the double wishbone suspension system, the conventional deterministic optimization does not consider any deviations of design parameters, so design sensitivity analysis and robust optimization design are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established by software ADAMS. The sensitivity analysis is utilized to determine main design variables. Then, the simulation experiment is arranged and the Latin hypercube design is adopted to find the initial points. The Kriging model is employed for fitting the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization method based on simple PSO is applied and the tradeoff between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system.

  16. Optimization of structures undergoing harmonic or stochastic excitation. Ph.D. Thesis; [atmospheric turbulence and white noise]

    NASA Technical Reports Server (NTRS)

    Johnson, E. H.

    1975-01-01

    The optimal design of simple structures subjected to dynamic loads, with constraints on the structures' responses, was investigated. Optimal designs were examined for one-dimensional structures excited by harmonically oscillating loads, similar structures excited by white noise, and a wing in the presence of continuous atmospheric turbulence. The first places constraints on the maximum allowable stress, while the last two place bounds on the probability of failure of the structure. Approximations were made to replace the time parameter with a frequency parameter. For the first problem, this involved the steady-state response; in the remaining cases, power spectral techniques were employed to find the root-mean-square values of the responses. Optimal solutions were found by using computer algorithms that combined finite element methods with optimization techniques based on mathematical programming. It was found that the inertial loads for these dynamic problems result in optimal structures that are radically different from those obtained for structures loaded statically by forces of comparable magnitude.

  17. Optimal design of compact spur gear reductions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.

    1992-01-01

    The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.

  18. Spectral gap optimization of order parameters for sampling complex molecular systems

    PubMed Central

    Tiwary, Pratyush; Berne, B. J.

    2016-01-01

    In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365

  19. Method for using global optimization to the estimation of surface-consistent residual statics

    DOEpatents

    Reister, David B.; Barhen, Jacob; Oblow, Edward M.

    2001-01-01

    An efficient method for generating residual statics corrections to compensate for surface-consistent static time shifts in stacked seismic traces. The method includes a step of framing the residual static corrections as a global optimization problem in a parameter space. The method also includes decoupling the global optimization problem involving all seismic traces into several one-dimensional problems. The method further utilizes a Stochastic Pijavskij Tunneling search to eliminate regions in the parameter space where a global minimum is unlikely to exist so that the global minimum may be quickly discovered. The method finds the residual statics corrections by maximizing the total stack power. The stack power is a measure of seismic energy transferred from energy sources to receivers.

  20. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    NASA Astrophysics Data System (ADS)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper contributes to finding optimal PID controller parameters using particle swarm optimization (PSO), a Genetic Algorithm (GA), and the Simulated Annealing (SA) algorithm. The algorithms were developed through simulation of a chemical process and an electrical system, and the PID controller is tuned. Two different fitness functions, the Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA, and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled-tank system and a DC motor. Finally, a comparative study has been done of the different algorithms based on best cost, number of iterations, and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
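
    As an illustration of this workflow, the sketch below tunes (Kp, Ki, Kd) with a bare-bones PSO against an ITAE cost evaluated on a toy second-order plant integrated by Euler's method. The plant, gain bounds, and PSO constants are stand-ins, not the paper's benchmark models.

      import numpy as np

      def itae(gains, dt=0.01, t_end=5.0):
          # Closed-loop unit-step simulation of plant y'' + 3 y' + 2 y = u with a
          # PID controller; returns the integral of t * |e(t)|.
          kp, ki, kd = gains
          y = dy = integ = 0.0
          prev_e, cost = 1.0, 0.0
          for k in range(int(t_end / dt)):
              e = 1.0 - y
              integ += e * dt
              deriv = (e - prev_e) / dt
              prev_e = e
              u = kp * e + ki * integ + kd * deriv
              ddy = u - 3.0 * dy - 2.0 * y
              dy += ddy * dt
              y += dy * dt
              cost += (k * dt) * abs(e) * dt
          return cost

      rng = np.random.default_rng(4)
      n_particles, n_iter = 20, 60
      lo, hi = np.zeros(3), np.array([50.0, 50.0, 10.0])
      x = rng.uniform(lo, hi, (n_particles, 3))
      v = np.zeros_like(x)
      pbest, pbest_val = x.copy(), np.array([itae(p) for p in x])
      gbest = pbest[pbest_val.argmin()].copy()

      for _ in range(n_iter):
          r1, r2 = rng.random((2, n_particles, 3))
          # Standard PSO velocity update: inertia + cognitive + social terms.
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
          x = np.clip(x + v, lo, hi)
          vals = np.array([itae(p) for p in x])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = x[improved], vals[improved]
          gbest = pbest[pbest_val.argmin()].copy()

      print("tuned Kp, Ki, Kd:", np.round(gbest, 2), " ITAE:", round(pbest_val.min(), 4))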

  1. Performance and robustness of optimal fractional fuzzy PID controllers for pitch control of a wind turbine using chaotic optimization algorithms.

    PubMed

    Asgharnia, Amirhossein; Shahnazi, Reza; Jamali, Ali

    2018-05-11

    The most studied controller for pitch control of wind turbines is the proportional-integral-derivative (PID) controller. However, due to uncertainties in wind turbine modeling and wind speed profiles, the need for more effective controllers is inevitable. On the other hand, the parameters of a PID controller are usually unknown and must be selected by the designer, which is neither a straightforward task nor optimal. To cope with these drawbacks, in this paper two advanced controllers, called fuzzy PID (FPID) and fractional-order fuzzy PID (FOFPID), are proposed to improve pitch control performance. Meanwhile, chaotic evolutionary optimization methods are used to find the parameters of the controllers. Using evolutionary optimization methods not only gives us the unknown parameters of the controllers but also guarantees optimality with respect to the chosen objective function. To improve the performance of the evolutionary algorithms, chaotic maps are used. All the optimization procedures are applied to the 2-mass model of a 5-MW wind turbine. The proposed optimal controllers are validated using the FAST simulator developed by NREL. Simulation results demonstrate that the FOFPID controller can achieve better performance and robustness while guaranteeing less fatigue damage at different wind speeds in comparison with the FPID, fractional-order PID (FOPID) and gain-scheduled PID (GSPID) controllers. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Optimizing snake locomotion on an inclined plane

    NASA Astrophysics Data System (ADS)

    Wang, Xiaolin; Osborne, Matthew T.; Alben, Silas

    2014-01-01

    We develop a model to study the locomotion of snakes on inclined planes. We determine numerically which snake motions are optimal for two retrograde traveling-wave body shapes, triangular and sinusoidal waves, across a wide range of frictional parameters and incline angles. In the regime of large transverse friction coefficients, we find power-law scalings for the optimal wave amplitudes and corresponding costs of locomotion. We give an asymptotic analysis to show that the optimal snake motions are traveling waves with amplitudes given by the same scaling laws found in the numerics.

  3. Self-calibration of a noisy multiple-sensor system with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Brooks, Richard R.; Iyengar, S. Sitharama; Chen, Jianhua

    1996-01-01

    This paper explores an image processing application of optimization techniques which entails interpreting noisy sensor data. The application is a generalization of image correlation: we attempt to find the optimal congruence that matches two overlapping gray-scale images corrupted with noise. Both tabu search and genetic algorithms are used to find the parameters which match the two images. A genetic algorithm approach using an elitist reproduction scheme is found to provide significantly superior results. The presentation includes a graphical depiction of the paths taken by tabu search and genetic algorithms when trying to find the best possible match between two corrupted images.
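
    A toy version of this matching task fits in a few lines. The sketch below generates two noisy copies of a smooth synthetic image related by an integer shift and runs a small genetic algorithm with an elitist reproduction scheme over (dx, dy); the image content, noise level, and GA settings are illustrative, and a pure translation stands in for the full congruence considered in the paper.

      import numpy as np

      rng = np.random.default_rng(5)
      yy, xx = np.mgrid[0:64, 0:64]
      base = np.sin(xx / 6.0) + np.cos(yy / 9.0)       # smooth synthetic scene
      true_dx, true_dy = 7, -3
      img_a = base + 0.3 * rng.standard_normal(base.shape)
      img_b = (np.roll(np.roll(base, true_dx, axis=0), true_dy, axis=1)
               + 0.3 * rng.standard_normal(base.shape))

      def fitness(ind):
          # Correlation between img_b and img_a shifted by the candidate (dx, dy).
          dx, dy = int(ind[0]), int(ind[1])
          shifted = np.roll(np.roll(img_a, dx, axis=0), dy, axis=1)
          return float(np.corrcoef(shifted.ravel(), img_b.ravel())[0, 1])

      pop = rng.integers(-10, 11, size=(30, 2))
      for _ in range(40):
          scores = np.array([fitness(ind) for ind in pop])
          elite = pop[np.argsort(scores)[::-1][:6]]    # elitism: best six survive
          children = []
          while len(children) < len(pop) - len(elite):
              p1, p2 = elite[rng.integers(6)], elite[rng.integers(6)]
              child = np.array([p1[0], p2[1]])         # crossover: one gene from each
              if rng.random() < 0.3:
                  child[rng.integers(2)] += rng.integers(-2, 3)   # mutation jitter
              children.append(child)
          pop = np.vstack([elite, np.array(children)])

      best = max(pop, key=fitness)
      print("estimated shift:", best.tolist(), " true shift:", [true_dx, true_dy])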

  4. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.

    2015-01-01

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279

  5. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C

    2016-02-15

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice.
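
    For a flavor of what finding a D-optimal design on a discretized design space involves, the sketch below uses the classical multiplicative weight-update algorithm (a standard alternative, not the paper's SDP/NLP formulations) for a quadratic regression on [-1, 1]; it converges toward the known design supported on {-1, 0, 1} with equal weights. The grid and iteration count are illustrative.

      import numpy as np

      grid = np.linspace(-1, 1, 101)
      F = np.column_stack([np.ones_like(grid), grid, grid**2])   # regressors f(x)
      p = F.shape[1]
      w = np.full(len(grid), 1 / len(grid))                      # uniform start

      for _ in range(5000):
          M = F.T @ (w[:, None] * F)                             # information matrix
          # Prediction variance d(x, w) = f(x)^T M^{-1} f(x) at each grid point.
          var = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)
          w *= var / p      # multiplicative update; weights stay normalized

      keep = w > 1e-2
      print("support points:", np.round(grid[keep], 3))
      print("weights:", np.round(w[keep], 3))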

  6. Modelling and Optimization of Polycaprolactone Ultrafine-Fibres Electrospinning Process Using Response Surface Methodology

    PubMed Central

    Ruys, Andrew J.

    2018-01-01

    Electrospun fibres have gained broad interest in biomedical applications, including tissue engineering scaffolds, due to their potential in mimicking extracellular matrix and producing structures favourable for cell and tissue growth. The development of scaffolds often involves multivariate production parameters and multiple output characteristics to define product quality. In this study on electrospinning of polycaprolactone (PCL), response surface methodology (RSM) was applied to investigate the determining parameters and find optimal settings to achieve the desired properties of fibrous scaffold for acetabular labrum implant. The results showed that solution concentration influenced fibre diameter, while elastic modulus was determined by solution concentration, flow rate, temperature, collector rotation speed, and interaction between concentration and temperature. Relationships between these variables and outputs were modelled, followed by an optimization procedure. Using the optimized setting (solution concentration of 10% w/v, flow rate of 4.5 mL/h, temperature of 45 °C, and collector rotation speed of 1500 RPM), a target elastic modulus of 25 MPa could be achieved at a minimum possible fibre diameter (1.39 ± 0.20 µm). This work demonstrated that multivariate factors of production parameters and multiple responses can be investigated, modelled, and optimized using RSM. PMID:29562614
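
    At its core, the RSM workflow above reduces to fitting a second-order polynomial to experimental responses and solving for its stationary point. The sketch below does exactly that on synthetic data with two coded factors; the factors, true surface, and noise are stand-ins for the electrospinning variables, not the study's data.

      import numpy as np

      rng = np.random.default_rng(6)
      X = rng.uniform(-1, 1, size=(30, 2))             # coded factor levels
      x1, x2 = X[:, 0], X[:, 1]
      y = (25 - 4*(x1 - 0.3)**2 - 2*(x2 + 0.5)**2 + 0.5*x1*x2
           + 0.2 * rng.standard_normal(30))            # noisy "experiments"

      # Full quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2.
      D = np.column_stack([np.ones(30), x1, x2, x1**2, x2**2, x1*x2])
      b = np.linalg.lstsq(D, y, rcond=None)[0]

      # Stationary point: set the gradient of the fitted quadratic to zero.
      H = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
      x_star = np.linalg.solve(H, -b[1:3])
      print("fitted optimum (coded units):", np.round(x_star, 3))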

  7. Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink

    NASA Astrophysics Data System (ADS)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-06-01

    LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB to extend one's modeling with scripting programming in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and post-processing. First, the head script launches COMSOL with MATLAB, defines the initial values of all parameters, refers to the objective function J described in the objective-function file, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment through the API interface), changing the defined optimization parameters so that the objective function is minimized, using the fmincon function to find a local or global minimum of a constrained linear or nonlinear multivariable function. Once the minimum is found, it returns the exit flag, terminates the optimization, and returns the optimized values of the parameters. The cooperation with MATLAB via LiveLink enhances a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL for chosen case studies in the field of technical cybernetics and bioengineering.

  8. Optimal designs based on the maximum quasi-likelihood estimator

    PubMed Central

    Shen, Gang; Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform just as well or better than optimal designs based on any other asymptotically linear unbiased estimators such as the least square estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to mis-specification in the probability distribution of the responses. PMID:28163359

  9. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.

  10. Combining local search with co-evolution in a remarkably simple way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boettcher, S.; Percus, A.

    2000-05-01

    The authors explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. In contrast to genetic algorithms, which operate on an entire gene pool of possible solutions, extremal optimization successively replaces extremely undesirable elements of a single sub-optimal solution with new, random ones. Large fluctuations, or avalanches, ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements heuristics inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Phase transitions are found in many combinatorial optimization problems, and have been conjectured to occur in the region of parameter space containing the hardest instances. We demonstrate how extremal optimization can be implemented for a variety of hard optimization problems. We believe that this will be a useful tool in the investigation of phase transitions in combinatorial optimization, thereby helping to elucidate the origin of computational complexity.

  11. A method for generating reduced-order combustion mechanisms that satisfy the differential entropy inequality

    NASA Astrophysics Data System (ADS)

    Ream, Allen E.; Slattery, John C.; Cizmas, Paul G. A.

    2018-04-01

    This paper presents a new method for determining the Arrhenius parameters of a reduced chemical mechanism such that it satisfies the second law of thermodynamics. The strategy is to approximate the progress of each reaction in the reduced mechanism from the species production rates of a detailed mechanism by using a linear least squares method. A series of non-linear least squares curve fittings are then carried out to find the optimal Arrhenius parameters for each reaction. At this step, the molar rates of production are written such that they comply with a theorem that provides the sufficient conditions for satisfying the second law of thermodynamics. This methodology was used to modify the Arrhenius parameters for the Westbrook and Dryer two-step mechanism and the Peters and Williams three-step mechanism for methane combustion. Both optimized mechanisms showed good agreement with the detailed mechanism for species mole fractions and production rates of most major species. Both optimized mechanisms showed significant improvement over previous mechanisms in minor species production rate prediction. Both optimized mechanisms produced no violations of the second law of thermodynamics.
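
    The curve-fitting step above has a convenient linear form: taking logarithms of the modified Arrhenius law k = A T^n exp(-E/(R T)) gives ln k = ln A + n ln T - E/(R T), which is linear in (ln A, n, E). The sketch below recovers those parameters from synthetic rate data by ordinary least squares; it illustrates only the fitting idea, without the second-law constraints the paper enforces, and all numbers are stand-ins.

      import numpy as np

      R = 8.314                                  # J/(mol K)
      A_true, n_true, E_true = 2.0e9, 0.7, 1.1e5
      T = np.linspace(900.0, 2100.0, 25)
      rng = np.random.default_rng(7)
      # Synthetic rate constants with small multiplicative noise; note that ln T
      # and 1/T are strongly correlated, so low noise is assumed for a clean fit.
      k = (A_true * T**n_true * np.exp(-E_true / (R * T))
           * np.exp(0.01 * rng.standard_normal(T.size)))

      # Linear system: columns correspond to ln A, n, and E.
      X = np.column_stack([np.ones_like(T), np.log(T), -1.0 / (R * T)])
      coef, *_ = np.linalg.lstsq(X, np.log(k), rcond=None)
      print(f"A = {np.exp(coef[0]):.3g}, n = {coef[1]:.3f}, E = {coef[2]:.3g} J/mol")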

  12. A Comparative Analysis of Taguchi Methodology and Shainin System DoE in the Optimization of Injection Molding Process Parameters

    NASA Astrophysics Data System (ADS)

    Khavekar, Rajendra; Vasudevan, Hari, Dr.; Modi, Bhavik

    2017-08-01

    Two well-known Design of Experiments (DoE) methodologies, Taguchi Methods (TM) and the Shainin System (SS), are compared and analyzed in this study through their implementation in a plastic injection molding unit. Experiments were performed at a company manufacturing perfume bottle caps (made of acrylic) using TM and SS to find the root cause of defects and to optimize the process parameters for minimum rejection. The experiments reduced the rejection rate from approximately 40% during trial runs to 8.57%, representing a successful implementation of these DoE methods. The comparison showed that both methodologies identified the same set of variables as critical for defect reduction, but with a change in their order of significance. Also, Taguchi Methods require a larger number of experiments and consume more time compared with the Shainin System. The Shainin System is less complicated and easy to implement, whereas Taguchi Methods are statistically more reliable for the optimization of process parameters. Finally, the experimentation implied that DoE methods are robust and reliable in implementation as organizations attempt to improve quality through optimization.

  13. Quantum Optimization of Fully Connected Spin Glasses

    NASA Astrophysics Data System (ADS)

    Venturelli, Davide; Mandrà, Salvatore; Knysh, Sergey; O'Gorman, Bryan; Biswas, Rupak; Smelyanskiy, Vadim

    2015-07-01

    Many NP-hard problems can be seen as the task of finding a ground state of a disordered highly connected Ising spin glass. If solutions are sought by means of quantum annealing, it is often necessary to represent those graphs in the annealer's hardware by means of the graph-minor embedding technique, generating a final Hamiltonian consisting of coupled chains of ferromagnetically bound spins, whose binding energy is a free parameter. In order to investigate the effect of embedding on problems of interest, the fully connected Sherrington-Kirkpatrick model with random ±1 couplings is programmed on the D-Wave Two™ annealer using up to 270 qubits interacting on a Chimera-type graph. We present the best embedding prescriptions for encoding the Sherrington-Kirkpatrick problem in the Chimera graph. The results indicate that the optimal choice of embedding parameters could be associated with the emergence of the spin-glass phase of the embedded problem, whose presence was previously uncertain. This optimal parameter setting allows the performance of the quantum annealer to compete with (and potentially outperform, in the absence of analog control errors) optimized simulated annealing algorithms.

  14. An efficient hybrid approach for multiobjective optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2014-05-01

    An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.

  15. Optimization of dose and image quality in adult and pediatric computed tomography scans

    NASA Astrophysics Data System (ADS)

    Chang, Kwo-Ping; Hsu, Tzu-Kun; Lin, Wei-Ting; Hsu, Wen-Lin

    2017-11-01

    An exploration to maximize CT image quality and reduce radiation dose was conducted while controlling for multiple factors. The kVp, mAs, and iterative reconstruction (IR) affect the CT image quality and the radiation dose absorbed. The optimal protocols (kVp, mAs, IR) are derived by a figure of merit (FOM) based on CT image quality (CNR) and the CT dose index (CTDIvol). CT image quality metrics such as CT number accuracy, SNR, the CNR of low-contrast materials, and line-pair resolution were also analyzed as auxiliary assessments. CT protocols were carried out with an ACR accreditation phantom and a five-year-old pediatric head phantom. The threshold values of the adult CT scan parameters, 100 kVp and 150 mAs, were determined from the CT number test and line pairs in ACR phantom module 1 and module 4, respectively. The findings of this study suggest that the optimal scanning parameters for adults be set at 100 kVp and 150-250 mAs. However, for improved low-contrast resolution, 120 kVp and 150-250 mAs are optimal. Optimal settings for pediatric head CT scans were 80 kVp/50 mAs for the maxillary sinus and brain stem, and 80 kVp/300 mAs for the temporal bone. SNR is reliable neither as an independent image parameter nor as the metric for determining optimal CT scan parameters. The iterative reconstruction (IR) approach is strongly recommended for both adult and pediatric CT scanning as it markedly improves image quality without affecting radiation dose.
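
    The abstract does not spell out the figure of merit; a common dose-efficiency definition in the CT optimization literature, plausibly the one intended here, normalizes the squared contrast-to-noise ratio by dose:

        \mathrm{FOM} \;=\; \frac{\mathrm{CNR}^{2}}{\mathrm{CTDI}_{\mathrm{vol}}}

    Under this definition, maximizing the FOM selects the (kVp, mAs, IR) setting that delivers the most contrast-to-noise per unit dose.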

  16. Posterior uncertainty of GEOS-5 L-band radiative transfer model parameters and brightness temperatures after calibration with SMOS observations

    NASA Astrophysics Data System (ADS)

    De Lannoy, G. J.; Reichle, R. H.; Vrugt, J. A.

    2012-12-01

    Simulated L-band (1.4 GHz) brightness temperatures are very sensitive to the values of the parameters in the radiative transfer model (RTM). We assess the optimum RTM parameter values and their (posterior) uncertainty in the Goddard Earth Observing System (GEOS-5) land surface model using observations of multi-angular brightness temperature over North America from the Soil Moisture Ocean Salinity (SMOS) mission. Two different parameter estimation methods are compared: (i) a particle swarm optimization (PSO) approach, and (ii) a Markov chain Monte Carlo (MCMC) simulation procedure using the differential evolution adaptive Metropolis (DREAM) algorithm. Our results demonstrate that both methods provide similar "optimal" parameter values. Yet, DREAM exhibits better convergence properties, resulting in a reduced spread of the posterior ensemble. The posterior parameter distributions derived with both methods are used for predictive uncertainty estimation of brightness temperature. This presentation will highlight our model-data synthesis framework and summarize our initial findings.
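
    A generic, unconstrained PSO of the kind compared here can be sketched as follows; the inertia and acceleration coefficients are typical textbook values, and a Rosenbrock toy objective stands in for the brightness-temperature misfit, so none of this reflects the GEOS-5 calibration code itself.

        import numpy as np

        def pso(objective, bounds, n_particles=30, iters=200,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            """Generic particle swarm optimizer (minimization) sketch.

            bounds: array of shape (dim, 2) holding [low, high] per parameter.
            """
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_f = np.array([objective(p) for p in x])
            g = pbest[np.argmin(pbest_f)].copy()       # global best position
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, x.shape[1]))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)             # keep inside the bounds
                f = np.array([objective(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[np.argmin(pbest_f)].copy()
            return g, pbest_f.min()

        # Rosenbrock toy objective standing in for the RTM parameter misfit
        rosen = lambda p: (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2
        best, fbest = pso(rosen, np.array([[-2.0, 2.0], [-1.0, 3.0]]))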

  17. Optimization of digital breast tomosynthesis (DBT) acquisition parameters for human observers: effect of reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Zeng, Rongping; Badano, Aldo; Myers, Kyle J.

    2017-04-01

    We showed in our earlier work that the choice of reconstruction methods does not affect the optimization of DBT acquisition parameters (angular span and number of views) using simulated breast phantom images in detecting lesions with a channelized Hotelling observer (CHO). In this work we investigate whether the model-observer based conclusion is valid when using humans to interpret images. We used previously generated DBT breast phantom images and recruited human readers to find the optimal geometry settings associated with two reconstruction algorithms, filtered back projection (FBP) and simultaneous algebraic reconstruction technique (SART). The human reader results show that image quality trends as a function of the acquisition parameters are consistent between FBP and SART reconstructions. The consistent trends confirm that the optimization of DBT system geometry is insensitive to the choice of reconstruction algorithm. The results also show that humans perform better in SART reconstructed images than in FBP reconstructed images. In addition, we applied CHOs with three commonly used channel models, Laguerre-Gauss (LG) channels, square (SQR) channels and sparse difference-of-Gaussian (sDOG) channels. We found that LG channels predict human performance trends better than SQR and sDOG channel models for the task of detecting lesions in tomosynthesis backgrounds. Overall, this work confirms that the choice of reconstruction algorithm is not critical for optimizing DBT system acquisition parameters.

  18. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications.

    PubMed

    Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the expression of the distance for the fruit fly to find the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning for the SVM and feature selection to solve real-world classification problems. This method, called the chaotic improved fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem.

  19. An improved chaotic fruit fly optimization based on a mutation strategy for simultaneous feature selection and parameter optimization for SVM and its applications

    PubMed Central

    Lou, Xin Yuan; Sun, Lin Fu

    2017-01-01

    This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the expression of the distance for the fruit fly to find the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied in an SVM to perform both parameter tuning for the SVM and feature selection to solve real-world classification problems. This method, called the chaotic improved fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem. PMID:28369096

  20. Using evolutionary computation to optimize an SVM used in detecting buried objects in FLIR imagery

    NASA Astrophysics Data System (ADS)

    Paino, Alex; Popescu, Mihail; Keller, James M.; Stone, Kevin

    2013-06-01

    In this paper we describe an approach for optimizing the parameters of a Support Vector Machine (SVM) as part of an algorithm used to detect buried objects in forward looking infrared (FLIR) imagery captured by a camera installed on a moving vehicle. The overall algorithm consists of a spot-finding procedure (to look for potential targets) followed by the extraction of several features from the neighborhood of each spot. The features include local binary pattern (LBP) and histogram of oriented gradients (HOG) as these are good at detecting texture classes. Finally, we project and sum each hit into UTM space along with its confidence value (obtained from the SVM), producing a confidence map for ROC analysis. In this work, we use an Evolutionary Computation Algorithm (ECA) to optimize various parameters involved in the system, such as the combination of features used, parameters on the Canny edge detector, the SVM kernel, and various HOG and LBP parameters. To validate our approach, we compare results obtained from an SVM using parameters obtained through our ECA technique with those previously selected by hand through several iterations of "guess and check".

  1. Multi-objective optimization of piezoelectric circuitry network for mode delocalization and suppression of bladed disk

    NASA Astrophysics Data System (ADS)

    Yoo, David; Tang, J.

    2017-04-01

    Since weakly-coupled bladed disks are highly sensitive to the presence of uncertainties, they can easily undergo vibration localization. When vibration localization occurs, the vibration modes of the bladed disk become dramatically different from those under the perfectly periodic condition, and the dynamic response under engine-order excitation is drastically amplified. Previous studies have shown that the amplified vibration response can be suppressed by connecting piezoelectric circuitry to individual blades to induce a damped absorber effect, and that localized vibration modes can be alleviated by integrating a piezoelectric circuitry network. Delocalization of vibration modes and vibration suppression of the bladed disk, however, require different optimal sets of circuit parameters. In this research, a multi-objective optimization approach is developed to find the best circuit parameters, achieving both objectives simultaneously. In this way, the robustness and reliability of the bladed disk can be ensured. Gradient-based optimizations are developed individually for mode delocalization and vibration suppression, and are then integrated into a multi-objective optimization framework.

  2. Quantum cryptography: Security criteria reexamined

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaszlikowski, Dagomir; Liang, Y.C.; Englert, Berthold-Georg

    2004-09-01

    We find that the generally accepted security criteria are flawed for a whole class of protocols for quantum cryptography. This is so because a standard assumption of the security analysis, namely that the so-called square-root measurement is optimal for eavesdropping purposes, is not true in general. There are rather large parameter regimes in which the optimal measurement extracts substantially more information than the square-root measurement.

  3. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known for its strong generalization capability. SVM can solve classification and regression problems with linear or nonlinear kernels. However, SVM has a weakness: it is difficult to determine optimal parameter values. SVM computes the best linear separator in the input feature space according to the training data. To classify data that are not linearly separable, SVM uses the kernel trick to transform the data into a linearly separable form in a higher-dimensional feature space. The kernel trick can use various kernel functions, such as the linear, polynomial, radial basis function (RBF) and sigmoid kernels. Each function has parameters that affect the accuracy of SVM classification. To solve this problem, a genetic algorithm is proposed as the search algorithm for optimal parameter values, thus increasing the best classification accuracy of the SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithm is effective in improving classification accuracy. The genetic algorithm has been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of relying on randomly selected kernel parameters. The best accuracy was upgraded from the baseline kernel results of linear: 85.12%, polynomial: 81.76%, RBF: 77.22% and sigmoid: 78.70%. However, for bigger data sizes this method is not practical because it takes a lot of time.
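
    As an illustration of the idea (not the authors' implementation), a small GA can search the (C, gamma) space of an RBF-SVM in log scale; the dataset below is synthetic and all GA settings are arbitrary assumptions.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # stand-in data; the paper uses the UCI Australian Credit Approval set
        X, y = make_classification(n_samples=300, n_features=14, random_state=0)

        def fitness(genome):
            """CV accuracy of an RBF-SVM; genome = (log10 C, log10 gamma)."""
            C, gamma = 10.0 ** genome[0], 10.0 ** genome[1]
            return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

        rng = np.random.default_rng(0)
        pop = rng.uniform([-2, -4], [3, 1], size=(20, 2))  # initial population
        for gen in range(15):
            scores = np.array([fitness(g) for g in pop])
            parents = pop[np.argsort(scores)[-10:]]         # keep the best half
            kids = []
            for _ in range(10):
                a, b = parents[rng.integers(10, size=2)]
                child = np.where(rng.random(2) < 0.5, a, b)  # uniform crossover
                child += rng.normal(0, 0.3, size=2)          # Gaussian mutation
                kids.append(np.clip(child, [-2, -4], [3, 1]))
            pop = np.vstack([parents, kids])

        best = pop[np.argmax([fitness(g) for g in pop])]
        print("best log10(C), log10(gamma):", best)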

  4. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarepisheh, M; Li, R; Xing, L

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques, namely column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles along the gradient direction. The algorithm continues with a pattern search to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12%, respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: The combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the quality of the resultant treatment plans as compared with conventional VMAT or IMRT treatments.

  5. Optimization of reactive-ion etching (RIE) parameters for fabrication of tantalum pentoxide (Ta2O5) waveguide using Taguchi method

    NASA Astrophysics Data System (ADS)

    Muttalib, M. Firdaus A.; Chen, Ruiqi Y.; Pearce, S. J.; Charlton, Martin D. B.

    2017-11-01

    In this paper, we demonstrate the optimization of reactive-ion etching (RIE) parameters for the fabrication of a tantalum pentoxide (Ta2O5) waveguide with a chromium (Cr) hard mask in a commercial OIPT Plasmalab 80 RIE etcher. A design of experiments (DOE) using the Taguchi method was implemented to find the optimum RF power, CHF3/Ar gas mixture ratio, and chamber pressure for a high etch rate, good selectivity, and a smooth waveguide sidewall. It was found that the optimized etch conditions obtained in this work were RF power = 200 W, gas ratio = 80%, and chamber pressure = 30 mTorr, yielding an etch rate of 21.6 nm/min, a Ta2O5/Cr selectivity ratio of 28, and a smooth waveguide sidewall.

  6. Comparison of empirical strategies to maximize GENEHUNTER lod scores.

    PubMed

    Chen, C H; Finch, S J; Mendell, N R; Gordon, D

    1999-01-01

    We compare four strategies for finding the settings of genetic parameters that maximize the lod scores reported in GENEHUNTER 1.2. The four strategies are iterated complete factorial designs, iterated orthogonal Latin hypercubes, evolutionary operation, and numerical optimization. The genetic parameters that are set are the phenocopy rate, penetrance, and disease allele frequency; both recessive and dominant models are considered. We selected the optimization of a recessive model on the Collaborative Study on the Genetics of Alcoholism (COGA) data of chromosome 1 for complete analysis. Convergence to a setting producing a local maximum required the evaluation of over 100 settings (for a time budget of 800 minutes on a Pentium II 300 MHz PC). Two notable local maxima were detected, suggesting the need for a more extensive search before claiming that a global maximum had been found. The orthogonal Latin hypercube design was the best strategy for finding areas that produced high lod scores with small numbers of evaluations. Numerical optimization starting from a region producing high lod scores was the strategy that found the highest maximum observed.
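
    For illustration, a plain Latin hypercube over the three genetic parameters can be generated as below; the bounds are hypothetical, and the orthogonality refinement used in the paper's design is omitted, so this is only a simplified sketch of the sampling idea.

        import numpy as np

        def latin_hypercube(n_points, bounds, seed=0):
            """Latin hypercube sample: one point per stratum in every dimension.

            bounds: list of (low, high) per parameter, here standing for the
            phenocopy rate, penetrance, and disease allele frequency.
            """
            rng = np.random.default_rng(seed)
            sample = np.empty((n_points, len(bounds)))
            for j, (lo, hi) in enumerate(bounds):
                # stratify [0,1) into n_points cells, jitter within each cell,
                # then shuffle the column so dimensions are paired at random
                strata = (np.arange(n_points) + rng.random(n_points)) / n_points
                sample[:, j] = lo + (hi - lo) * rng.permutation(strata)
            return sample

        grid = latin_hypercube(16, [(0.0, 0.05), (0.5, 1.0), (0.001, 0.1)])
        # each row is one (phenocopy, penetrance, allele frequency) setting
        # at which the lod score would be evaluated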

  7. Microcavity morphology optimization

    NASA Astrophysics Data System (ADS)

    Ferdous, Fahmida; Demchenko, Alena A.; Vyatchanin, Sergey P.; Matsko, Andrey B.; Maleki, Lute

    2014-09-01

    High spectral mode density of conventional optical cavities is detrimental to the generation of broad optical frequency combs and to other linear and nonlinear applications. In this work we optimize the morphology of high-Q whispering gallery (WG) and Fabry-Perot (FP) cavities and find a set of parameters that allows treating them, essentially, as single-mode structures, thus removing limitations associated with a high density of cavity mode spectra. We show that both single-mode WGs and single-mode FP cavities have similar physical properties, in spite of their different loss mechanisms. The morphology optimization does not lead to a reduction of quality factors of modes belonging to the basic family. We study the parameter space numerically and find the region where the highest possible Q factor of the cavity modes can be realized while just having a single bound state in the cavity. The value of the Q factor is comparable with that achieved in conventional cavities. The proposed cavity structures will be beneficial for generation of octave spanning coherent frequency combs and will prevent undesirable effects of parametric instability in laser gravitational wave detectors.

  8. Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Rogers, Adam; Safi-Harb, Samar; Fiege, Jason

    2015-08-01

    The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.

  9. Correction method for stripe nonuniformity.

    PubMed

    Qian, Weixian; Chen, Qian; Gu, Guohua; Guan, Zhiqiang

    2010-04-01

    Stripe nonuniformity is very typical in line infrared focal plane arrays (IR-FPA) and uncooled staring IR-FPA. In this paper, the mechanism of the stripe nonuniformity is analyzed, and the gray-scale co-occurrence matrix theory and optimization theory are studied. Through these efforts, the stripe nonuniformity correction problem is translated into an optimization problem. The goal of the optimization is to find the minimal energy of the image's line-direction gradient. After solving the constrained nonlinear optimization equation, the parameters of the stripe nonuniformity correction are obtained and the correction is achieved. The experiments indicate that this algorithm is effective and efficient.
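
    As a rough illustration of the idea (not the paper's algorithm, which solves a constrained nonlinear program), an offset-only destriping that minimizes the line-gradient energy has a closed-form solution; note that, without the paper's constraints, it also flattens genuine scene gradients along the line direction.

        import numpy as np

        def destripe_offsets(img):
            """Offset-only stripe correction sketch.

            Chooses one additive offset per column so that the horizontal
            gradient energy sum_ij (J[i,j+1]-J[i,j])^2, with
            J = img + offset[col], is minimal; the quadratic optimum makes
            the mean step between adjacent corrected columns zero.
            """
            d = np.mean(np.diff(img, axis=1), axis=0)   # mean column-to-column step
            offsets = np.concatenate([[0.0], -np.cumsum(d)])  # o[j+1] = o[j] - d[j]
            offsets -= offsets.mean()                   # fix the free constant
            return img + offsets[None, :]

        # stripy synthetic test frame: weak scene noise plus per-column bias
        img = np.random.rand(64, 64) * 0.05 + np.random.rand(64)[None, :]
        clean = destripe_offsets(img)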

  10. Information filtering via a scaling-based function.

    PubMed

    Qiu, Tian; Zhang, Zi-Ke; Chen, Guang

    2013-01-01

    Finding a universal description of algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL), independent of the recommendation list length, based on a hybrid algorithm of heat conduction and mass diffusion, by finding the scaling function for the tunable parameter and the object average degree. The optimal value of the tunable parameter can be extracted from the scaling function, which is heterogeneous for individual objects. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably promotes personalized recommendation in three other aspects: solving the accuracy-diversity dilemma, presenting high novelty, and solving the key challenge of the cold start problem.

  11. Optimization of an auto-thermal ammonia synthesis reactor using cyclic coordinate method

    NASA Astrophysics Data System (ADS)

    A-N Nguyen, T.; Nguyen, T.-A.; Vu, T.-D.; Nguyen, K.-T.; K-T Dao, T.; P-H Huynh, K.

    2017-06-01

    The ammonia synthesis system is an important chemical process used in the manufacture of fertilizers, chemicals, explosives, fibers, plastics, and refrigeration. In the literature, many works approaching the modeling, simulation and optimization of an auto-thermal ammonia synthesis reactor can be found. However, they focus only on the optimization of the reactor length while keeping the other parameters constant. In this study, other parameters are also considered in the optimization problem, such as the temperature at which the feed gas enters the catalyst zone and the initial nitrogen proportion. The optimization problem requires the maximization of an objective function that is a multivariable function, subject to a number of equality constraints involving the solution of coupled differential equations as well as an inequality constraint. The cyclic coordinate search was applied to solve the multivariable optimization problem. In each coordinate, the golden section method was applied to find the maximum value. The inequality constraints were treated using a penalty method. The coupled differential equation system was solved using the 4th-order Runge-Kutta method. The results obtained from this study are also compared to results from the literature.
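
    The overall search structure described here, cyclic sweeps over the coordinates with a golden-section line search in each and a quadratic penalty for the inequality constraint, can be sketched as follows; the two-variable toy objective and all bounds are illustrative assumptions, not the reactor model.

        import numpy as np

        GOLDEN = (np.sqrt(5) - 1) / 2  # ~0.618

        def golden_section_max(f, lo, hi, tol=1e-6):
            """Golden-section search for the maximum of a unimodal 1-D function."""
            a, b = lo, hi
            c, d = b - GOLDEN * (b - a), a + GOLDEN * (b - a)
            while b - a > tol:
                if f(c) > f(d):
                    b, d = d, c
                    c = b - GOLDEN * (b - a)
                else:
                    a, c = c, d
                    d = a + GOLDEN * (b - a)
            return 0.5 * (a + b)

        def cyclic_coordinate_max(f, x0, bounds, sweeps=20):
            """Maximize f one coordinate at a time using golden section."""
            x = np.array(x0, dtype=float)
            for _ in range(sweeps):
                for i, (lo, hi) in enumerate(bounds):
                    g = lambda t: f(np.concatenate([x[:i], [t], x[i + 1:]]))
                    x[i] = golden_section_max(g, lo, hi)
            return x, f(x)

        # toy objective with a quadratic penalty standing in for the constraint
        obj = lambda p: (-(p[0] - 1.0) ** 2 - (p[1] - 2.0) ** 2
                         - 100 * max(0.0, p[0] + p[1] - 4.0) ** 2)
        x, fx = cyclic_coordinate_max(obj, [0.0, 0.0], [(0.0, 3.0), (0.0, 3.0)])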

  12. Multiobjective optimization in structural design with uncertain parameters and stochastic processes

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    The application of multiobjective optimization techniques to structural design problems involving uncertain parameters and random processes is studied. The design of a cantilever beam with a tip mass subjected to a stochastic base excitation is considered for illustration. Several of the problem parameters are assumed to be random variables and the structural mass, fatigue damage, and negative of natural frequency of vibration are considered for minimization. The solution of this three-criteria design problem is found by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It is observed that the game theory approach is superior in finding a better optimum solution, assuming the proper balance of the various objective functions. The procedures used in the present investigation are expected to be useful in the design of general dynamic systems involving uncertain parameters, stochastic process, and multiple objectives.

  13. Bifurcation analysis of eight coupled degenerate optical parametric oscillators

    NASA Astrophysics Data System (ADS)

    Ito, Daisuke; Ueta, Tetsushi; Aihara, Kazuyuki

    2018-06-01

    A degenerate optical parametric oscillator (DOPO) network realized as a coherent Ising machine can be used to solve combinatorial optimization problems. Both theoretical and experimental investigations into the performance of DOPO networks have been presented previously. However, a problem remains: the dynamics of the DOPO network itself can lower the search success rate for globally optimal solutions of Ising problems. This paper shows that the problem is caused by pitchfork bifurcations due to the symmetry structure of the coupled DOPOs. Some two-parameter bifurcation diagrams of equilibrium points express the performance deterioration. It is shown that the emergence of non-ground states corresponding to local minima hampers the system from reaching the ground states corresponding to the global minimum. We then describe a parametric strategy for leading the system to the ground state by actively utilizing the bifurcation phenomena. By adjusting the parameters to break particular symmetries, we find appropriate parameter sets that allow the coherent Ising machine to obtain the globally optimal solution alone.

  14. Analysis of Modeling Parameters on Threaded Screws.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigil, Miquela S.; Brake, Matthew Robert; Vangoethem, Douglas

    2015-06-01

    Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to appropriately model these bolted joints. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper explores different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influence of these parameters and will aid in finding the optimal method to model bolted connections.

  15. Modeling and Experiment of Melt Impregnation of Continuous Fiber-reinforced Thermoplastic with Pins

    NASA Astrophysics Data System (ADS)

    Yang, Jian-Jun; Xin, Chun-Ling; Tang, Ke; Zhang, Zhi-Cheng; Yan, Bao-Rui; Ren, Feng; He, Ya-Dong

    2016-05-01

    Melt impregnation is a crucial method for continuous fiber-reinforced thermoplastics. It was developed several years ago for thermosetting plastics, but is now very popular for thermoplastic matrices, which have a much higher viscosity. In this paper, we propose a mathematical model based on Darcy's law that combines processing parameters and material physical parameters. We then use this model to predict the influence of processing parameters on the degree of impregnation of the prepreg, and the predicted trend is consistent with the experimental results. The exhaustive numerical study therefore enables the definition of optimal processing conditions for perfect impregnation. The results are shown to be effective tools for finding the optimal pulling speed, pin number and pressure for a given fluid/fiber pair.

  16. Parameter extraction using global particle swarm optimization approach and the influence of polymer processing temperature on the solar cell parameters

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Singh, A.; Dhar, A.

    2017-08-01

    The accurate estimation of photovoltaic parameters is fundamental to gaining insight into the physical processes occurring inside a photovoltaic device and thereby to optimizing its design, fabrication processes, and quality. A simulative approach to accurately determining the device parameters is crucial for cell array and module simulation when applied in practical on-field applications. In this work, we have developed a global particle swarm optimization (GPSO) approach to estimate the different solar cell parameters, viz., ideality factor (η), short circuit current (Isc), open circuit voltage (Voc), shunt resistance (Rsh), and series resistance (Rs), with a wide search range of over ±100% for each model parameter. After validating the accuracy and global search power of the proposed approach with synthetic and noisy data, we applied the technique to extract the PV parameters of ZnO/PCDTBT based hybrid solar cells (HSCs) prepared under different annealing conditions. Further, we examine the variation of the extracted model parameters to unveil the physical processes occurring when different annealing temperatures are employed during device fabrication, and establish the role of improved charge transport in polymer films from independent FET measurements. The evolution of the surface morphology, optical absorption, and chemical compositional behaviour of PCDTBT co-polymer films as a function of processing temperature has also been captured in the study and correlated with the findings from the PV parameters extracted using the GPSO approach.
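
    The cost function that a swarm optimizer such as GPSO would minimize can be sketched with the generic single-diode model; note that this sketch parameterizes the cell by (Iph, I0) rather than the abstract's (Isc, Voc), and measured currents are reused inside the exponential to keep the implicit model explicit. It is an assumption-laden illustration, not the authors' code.

        import numpy as np

        KB_Q = 8.617e-5  # Boltzmann constant over electron charge, V/K

        def diode_residual(params, V, I, T=300.0):
            """Sum-of-squares misfit of a single-diode solar-cell model.

            params = (eta, Iph, I0, Rs, Rsh); V, I are measured I-V pairs.
            """
            eta, Iph, I0, Rs, Rsh = params
            Vt = KB_Q * T  # thermal voltage
            I_pred = (Iph
                      - I0 * (np.exp((V + I * Rs) / (eta * Vt)) - 1.0)
                      - (V + I * Rs) / Rsh)
            return np.sum((I - I_pred) ** 2)

        # usage inside any swarm/evolutionary loop:
        #   fitness = diode_residual(candidate_params, V_meas, I_meas)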

  17. Route towards cylindrical cloaking at visible frequencies using an optimization algorithm

    NASA Astrophysics Data System (ADS)

    Rottler, Andreas; Krüger, Benjamin; Heitmann, Detlef; Pfannkuche, Daniela; Mendach, Stefan

    2012-12-01

    We derive a model based on the Maxwell-Garnett effective-medium theory that describes a cylindrical cloaking shell composed of metal rods radially aligned in a dielectric host medium. We propose and demonstrate a minimization algorithm that calculates, for given material parameters, the optimal geometrical parameters of the cloaking shell such that its effective optical parameters best fit the permittivity distribution required for cylindrical cloaking. By means of sophisticated full-wave simulations we find that a cylindrical cloak with good performance using silver as the metal can be designed with our algorithm for wavelengths in the red part of the visible spectrum (623 nm < λ < 773 nm). We also present a full-wave simulation of such a cloak at an exemplary wavelength of λ = 729 nm (ℏω = 1.7 eV), which indicates that our model is useful for finding design rules for cloaks with good cloaking performance. Our calculations investigate a structure that is easy to fabricate using standard preparation techniques and therefore pave the way to a realization of guiding light around an object at visible frequencies, thus rendering it invisible.

  18. Multi-Response Optimization of WEDM Process Parameters Using Taguchi Based Desirability Function Analysis

    NASA Astrophysics Data System (ADS)

    Majumder, Himadri; Maity, Kalipada

    2018-03-01

    Shape memory alloys have a unique capability to return to their original shape after physical deformation upon applying heat or a thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter settings during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also obtained using Taguchi's signal-to-noise ratio. A confirmation test was done to validate the optimum machining parameter combination, which affirmed that DFA is a competent approach for selecting optimum input parameters for the ideal response quality in WEDM of Ni-Ti shape memory alloy.
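
    In Derringer-style DFA, each response is mapped to an individual desirability in [0, 1] and the settings are ranked by the geometric mean of those values. The sketch below uses hypothetical response values and bounds purely for illustration.

        import numpy as np

        def d_smaller_is_better(y, y_min, y_max, weight=1.0):
            """Individual desirability for a response to be minimized
            (e.g. kerf width or roughness); 1 at y_min, 0 at y_max."""
            d = (y_max - y) / (y_max - y_min)
            return np.clip(d, 0.0, 1.0) ** weight

        def composite_desirability(ds):
            """Overall desirability = geometric mean of individual values."""
            ds = np.asarray(ds, dtype=float)
            return ds.prod() ** (1.0 / len(ds))

        # one hypothetical run; larger-is-better cutting speed uses 1 - d
        d_speed = 1.0 - d_smaller_is_better(4.2, 2.0, 5.0)
        d_kerf = d_smaller_is_better(0.32, 0.25, 0.40)
        d_rough = d_smaller_is_better(2.1, 1.5, 3.0)
        print(composite_desirability([d_speed, d_kerf, d_rough]))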

  19. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. Adjoint (linearized) Stokes equations, which are characterized by a 4th order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solution of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.
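
    Schematically (assumed notation, consistent with the description above but not necessarily the authors' exact functional), the inverse problem minimizes a regularized data misfit over the parameter field m subject to the nonlinear Stokes equations:

        \min_{m}\; J(m) = \frac{1}{2}\int_{\Gamma_{\mathrm{top}}} \lVert u(m) - u^{\mathrm{obs}} \rVert^{2}\, ds \;+\; \mathcal{R}(m),
        \qquad
        \mathcal{R}_{\mathrm{Tikhonov}}(m) = \frac{\beta}{2}\int_{\Omega} \lvert \nabla m \rvert^{2}\, dx,
        \qquad
        \mathcal{R}_{\mathrm{TV}}(m) = \beta \int_{\Omega} \lvert \nabla m \rvert\, dx,

    where u(m) solves the forward Stokes problem and the choice between the Tikhonov and TV regularizers determines whether smooth or piecewise-smooth parameter fields are favored.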

  20. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    NASA Astrophysics Data System (ADS)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on support vector machine (SVM) and gradient evolution (GE) algorithms. The SVM algorithm has been widely used in classification. However, its results are significantly influenced by its parameters. Therefore, this paper aims to propose an improvement of the SVM algorithm that can find the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to automatically determine the SVM parameters. The GE algorithm takes the role of a global optimizer in finding the best parameters, which are then used by the SVM algorithm. The proposed GE-SVM algorithm is verified using some benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.

  1. Linear theory for filtering nonlinear multiscale systems with model error

    PubMed Central

    Berry, Tyrus; Harlim, John

    2014-01-01

    In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure, simultaneously produce accurate filtering and equilibrium statistical prediction. In contrast, an offline estimation technique based on a linear regression, which fits the parameters to a training dataset without using the filter, yields filter estimates which are worse than the observations or even divergent when the slow variables are not fully observed. This finding does not imply that all offline methods are inherently inferior to the online method for nonlinear estimation problems; it only suggests that an ideal estimation technique should estimate all parameters simultaneously, whether it is online or offline. PMID:25002829

  2. The value of compressed air energy storage in energy and reserve markets

    DOE PAGES

    Drury, Easan; Denholm, Paul; Sioshansi, Ramteen

    2011-06-28

    Storage devices can provide several grid services; however, it is challenging to quantify the value of providing several services and to optimally allocate storage resources to maximize value. We develop a co-optimized Compressed Air Energy Storage (CAES) dispatch model to characterize the value of providing operating reserves in addition to energy arbitrage in several U.S. markets. We use the model to: (1) quantify the added value of providing operating reserves in addition to energy arbitrage; (2) evaluate the dynamic nature of optimally allocating storage resources into energy and reserve markets; and (3) quantify the sensitivity of CAES net revenues to several design and performance parameters. We find that conventional CAES systems could earn an additional 23 ± 10/kW-yr by providing operating reserves, and adiabatic CAES systems could earn an additional 28 ± 13/kW-yr. We find that arbitrage-only revenues are unlikely to support a CAES investment in most market locations, but the addition of reserve revenues could support a conventional CAES investment in several markets. Adiabatic CAES revenues are not likely to support an investment in most regions studied. As a result, modifying CAES design and performance parameters primarily impacts arbitrage revenues, and optimizing CAES design will be nearly independent of dispatch strategy.

  3. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    NASA Astrophysics Data System (ADS)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators, and with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA and PSO optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and additions required to generate successful problem-specific parameter sets.

  4. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine

    PubMed Central

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir

    2017-01-01

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080
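
    A minimal sketch of the simplex stage follows, assuming an RBF-kernel KELM with the closed-form output weights alpha = (K + I/C)^(-1) y (the form given by Huang et al. for kernel ELM); the data are synthetic stand-ins for the calibration measurements, and the CSA global stage, which would normally seed the simplex start point, is omitted.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.spatial.distance import cdist

        def kelm_cv_error(log_params, X, y, n_folds=5):
            """Cross-validated squared error of an RBF-kernel KELM.

            log_params = (log10 C, log10 gamma); output weights have the
            closed form alpha = (K + I/C)^(-1) y.
            """
            C, gamma = 10.0 ** log_params[0], 10.0 ** log_params[1]
            idx = np.arange(len(X)) % n_folds
            err = 0.0
            for f in range(n_folds):
                tr, te = idx != f, idx == f
                K = np.exp(-gamma * cdist(X[tr], X[tr], "sqeuclidean"))
                alpha = np.linalg.solve(K + np.eye(K.shape[0]) / C, y[tr])
                K_te = np.exp(-gamma * cdist(X[te], X[tr], "sqeuclidean"))
                err += np.mean((K_te @ alpha - y[te]) ** 2)
            return err / n_folds

        # toy calibration-like data: output drifts with temperature, pressure
        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(120, 2))      # (temperature, pressure)
        y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + 0.01 * rng.standard_normal(120)

        res = minimize(kelm_cv_error, x0=[0.0, 0.0], args=(X, y),
                       method="Nelder-Mead")
        print("optimal log10(C), log10(gamma):", res.x)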

  5. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.

  6. Optimal Futility Interim Design: A Predictive Probability of Success Approach with Time-to-Event Endpoint.

    PubMed

    Tang, Zhongwen

    2015-01-01

    An analytical way to compute the predictive probability of success (PPOS) together with a credible interval at interim analysis (IA) is developed for big clinical trials with time-to-event endpoints. The method takes account of the fixed data up to IA, the amount of uncertainty in future data, and uncertainty about parameters. Predictive power is a special type of PPOS. The result is confirmed by simulation. An optimal design is proposed by finding the optimal combination of analysis time and futility cutoff based on some PPOS criteria.

  7. Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model

    NASA Astrophysics Data System (ADS)

    Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr

    2017-10-01

    Issues concerning the advanced numerical analysis of concrete building structures in sophisticated computing systems currently require the involvement of nonlinear mechanics tools. The efforts to design safer, more durable and, mainly, more economically efficient concrete structures are supported via the use of advanced nonlinear concrete material models and the geometrically nonlinear approach. The application of nonlinear mechanics tools undoubtedly presents another step towards approximating the real behaviour of concrete building structures within the framework of computer numerical simulations. However, the success of this application depends on a thorough understanding of the behaviour of the concrete material models used and of the meaning of their parameters. The effective application of nonlinear concrete material models within computer simulations often becomes very problematic, because these material models very often contain parameters (material constants) whose values are difficult to obtain. Obtaining the correct values of the material parameters is nevertheless very important to ensure the proper function of the material model used. Today, one possibility that permits a successful solution of this problem is the use of optimization algorithms for optimization-based inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it tries to find parameter values of the material model such that the resulting data obtained from the computer simulation best approximate the experimental data. This paper is focused on the optimization-based inverse identification of the parameters of a concrete cap material model known under the name of the Continuous Surface Cap Model. Within this paper, material parameters of the model are identified on the basis of the interaction between nonlinear computer simulations, gradient-based and nature-inspired optimization algorithms, and experimental data, the latter of which take the form of a load-extension curve obtained from the evaluation of uniaxial tensile test results. The aim of this research was to obtain material model parameters corresponding to quasi-static tensile loading which may be further used for research involving dynamic and high-speed tensile loading. Based on the obtained results, it can be concluded that the set goal has been reached.

  8. Continued research on selected parameters to minimize community annoyance from airplane noise

    NASA Technical Reports Server (NTRS)

    Frair, L.

    1981-01-01

    Results from continued research on selected parameters to minimize community annoyance from airport noise are reported. First, a review of the initial work on this problem is presented. Then the research focus is expanded by considering multiobjective optimization approaches for this problem. A multiobjective optimization algorithm review from the open literature is presented. This is followed by the multiobjective mathematical formulation for the problem of interest. A discussion of the appropriate solution algorithm for the multiobjective formulation is conducted. Alternate formulations and associated solution algorithms are discussed and evaluated for this airport noise problem. Selected solution algorithms that have been implemented are then used to produce computational results for example airports. These computations involved finding the optimal operating scenario for a moderate size airport and a series of sensitivity analyses for a smaller example airport.

  9. Skin-electrode circuit model for use in optimizing energy transfer in volume conduction systems.

    PubMed

    Hackworth, Steven A; Sun, Mingui; Sclabassi, Robert J

    2009-01-01

    The X-Delta model for through-skin volume conduction systems is introduced and analyzed. This new model has advantages over our previous X model in that it explicitly represents current pathways in the skin. A vector network analyzer is used to take measurements on pig skin to obtain data for use in finding the model's impedance parameters. An optimization method for obtaining this more complex model's parameters is described. Results show the model to accurately represent the impedance behavior of the skin system, with errors generally less than one percent. Uses for the model include optimizing energy transfer across the skin in a volume conduction system with appropriate current exposure constraints, and exploring the non-linear behavior of the electrode-skin system at moderate voltages (below ten volts) and frequencies (kilohertz to megahertz).

  10. Bouc-Wen hysteresis model identification using Modified Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Sikder, Urmita

    2015-12-01

    The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, a Genetic Algorithm and a Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying the Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with the measured data.
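
    In its standard form (the specific variant used in the paper may differ), the Bouc-Wen model expresses the restoring force through an auxiliary hysteretic variable z:

        \dot{z} = A\,\dot{x} - \beta\,|\dot{x}|\,|z|^{\,n-1} z - \gamma\,\dot{x}\,|z|^{\,n},
        \qquad
        F = \alpha k x + (1 - \alpha) k z,

    so a typical identification vector is (A, β, γ, n, α, k), chosen by the optimizer to minimize the squared error between the measured and modeled force-displacement data.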

  11. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.

    2016-01-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method where lower and upper bounds produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization. PMID:27330230

  12. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Atkinson, Anthony C

    2015-03-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method where lower and upper bounds produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization.

  13. Self-calibration of robot-sensor system

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1990-01-01

    The process of finding the coordinate transformation between a robot and an external sensor system is addressed. This calibration is equivalent to solving a nonlinear optimization problem for the parameters that characterize the transformation. A two-step procedure is proposed for solving the problem. The first step involves finding a nominal solution that is a good approximation of the final solution. A variational problem is then generated to replace the original problem in the next step. Under the assumption that the variational parameters are small compared to unity, this problem can be solved more readily and with relatively small computational effort.

  14. Optimization of kinetic parameters for the degradation of plasmid DNA in rat plasma

    NASA Astrophysics Data System (ADS)

    Chaudhry, Q. A.

    2014-12-01

    Biotechnology is a rapidly growing area of research in the pharmaceutical sciences, and the pharmacokinetics of plasmid DNA (pDNA) is an important topic within it. It has been observed that gene delivery faces many obstacles in the transport of pDNA towards its target sites. The topoforms of pDNA are termed supercoiled (S-C), open circular (O-C) and linear (L); a kinetic model for them is presented in this paper. The kinetic model gives rise to a system of ordinary differential equations (ODEs), the exact solution of which has been found. The kinetic parameters responsible for the degradation of the supercoiled form and the formation of the open circular and linear topoforms are of great significance not only in vitro but also for the modeling of further processes, and therefore need to be addressed in detail. For this purpose, global optimization techniques were adopted to find the optimal parameter values for the model. The results of the model with the optimal parameters were compared against the measured data and show good agreement.
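
    The abstract does not reproduce the rate equations, but a plausible reading is a first-order chain in which the supercoiled form degrades to open circular, open circular to linear, and linear onward. A minimal fitting sketch under that assumption (rates, time points and data below are synthetic):

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        def rhs(t, y, k1, k2, k3):
            S, O, L = y                     # supercoiled, open circular, linear
            return [-k1 * S, k1 * S - k2 * O, k2 * O - k3 * L]

        def model(k, t, y0):
            sol = solve_ivp(rhs, (t[0], t[-1]), y0, t_eval=t, args=tuple(k))
            return sol.y                    # 3 x len(t) array of fractions

        def residuals(k, t, y0, data):
            return (model(k, t, y0) - data).ravel()

        t = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0])  # minutes, hypothetical
        y0 = [1.0, 0.0, 0.0]                              # pure supercoiled start
        rng = np.random.default_rng(0)
        data = model([0.08, 0.04, 0.01], t, y0) + rng.normal(0, 0.01, (3, t.size))

        fit = least_squares(residuals, x0=[0.1, 0.05, 0.02],
                            args=(t, y0, data), bounds=(0, np.inf))
        print(fit.x)                                      # estimated k1, k2, k3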

  15. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    NASA Astrophysics Data System (ADS)

    Paasche, H.; Tronicke, J.

    2012-04-01

    In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model, so the final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail, determine the optimal model independently of the starting model, and can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature: the algorithm mimics the behavior of a flock of birds searching for food. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set, and a unique determination of the most successful particle currently leading the swarm is not possible; only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm: we consider the different objectives of our optimization problem as competing agents with partially conflicting interests, and analysis of the maximin fitness function allows a robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region that ideally compromises all objectives, i.e. the region of highest curvature.
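
    For intuition, the maximin fitness named above can be computed directly: a particle's fitness is the largest, over all other particles, of the smallest per-objective difference, and negative values flag non-dominated particles. A minimal sketch (the misfit values are random placeholders):

        import numpy as np

        def maximin_fitness(F):
            """F: (n_particles, n_objectives) objective values (minimization).
            Returns the maximin fitness per particle; values < 0 are
            non-dominated."""
            n = F.shape[0]
            fit = np.empty(n)
            for i in range(n):
                diffs = F[i] - np.delete(F, i, axis=0)  # vs. every other particle
                fit[i] = np.max(np.min(diffs, axis=1))
            return fit

        F = np.random.rand(20, 2)     # e.g. misfits for two tomographic data sets
        leader = int(np.argmin(maximin_fitness(F)))
        print("current swarm leader:", leader)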

  16. Prediction of acoustic feature parameters using myoelectric signals.

    PubMed

    Lee, Ki-Seung

    2010-07-01

    It is well-known that a clear relationship exists between human voices and myoelectric signals (MESs) from the area of the speaker's mouth. In this study, we utilized this information to implement a speech synthesis scheme in which MES alone was used to predict the parameters characterizing the vocal-tract transfer function of specific speech signals. Several feature parameters derived from MES were investigated to find the optimal feature for maximization of the mutual information between the acoustic and the MES features. After the optimal feature was determined, an estimation rule for the acoustic parameters was proposed, based on a minimum mean square error (MMSE) criterion. In a preliminary study, 60 isolated words were used for both objective and subjective evaluations. The results showed that the average Euclidean distance between the original and predicted acoustic parameters was reduced by about 30% compared with the average Euclidean distance of the original parameters. The intelligibility of the synthesized speech signals using the predicted features was also evaluated. A word-level identification ratio of 65.5% and a syllable-level identification ratio of 73% were obtained through a listening test.

  17. Optimal control of information epidemics modeled as Maki Thompson rumors

    NASA Astrophysics Data System (ADS)

    Kandhway, Kundan; Kuri, Joy

    2014-12-01

    We model the spread of information in a homogeneously mixed population using the Maki Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations in which the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles, and the variation of the optimal campaigning costs with respect to various model parameters is examined. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy, which respects the same budget constraint but has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
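
    The forward-backward sweep iterates three steps: integrate the state forward with the current control, integrate the adjoint backward from its terminal condition, then update the control from the optimality condition with relaxation. A toy scalar illustration of the machinery (not the rumor model itself; the dynamics, cost and constants are invented for the demo):

        # Minimize J = int_0^T (x^2 + c*u^2) dt  s.t.  x' = -a*x + u,  x(0) = x0.
        import numpy as np

        a, c, T, x0 = 1.0, 0.5, 5.0, 2.0
        N = 1000
        dt = T / N
        u = np.zeros(N + 1)

        for _ in range(200):
            x = np.empty(N + 1); x[0] = x0
            for i in range(N):                      # forward sweep (state)
                x[i + 1] = x[i] + dt * (-a * x[i] + u[i])
            lam = np.zeros(N + 1)                   # adjoint, lam(T) = 0
            for i in range(N, 0, -1):               # backward sweep (adjoint)
                lam[i - 1] = lam[i] - dt * (-2 * x[i] + a * lam[i])
            u_new = -lam / (2 * c)                  # dH/du = 2*c*u + lam = 0
            if np.max(np.abs(u_new - u)) < 1e-8:
                break
            u = 0.5 * u + 0.5 * u_new               # relaxed control update

        print("control at t=0:", u[0])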

  18. Evolutionary Tradeoffs between Economy and Effectiveness in Biological Homeostasis Systems

    PubMed Central

    Szekely, Pablo; Sheftel, Hila; Mayo, Avi; Alon, Uri

    2013-01-01

    Biological regulatory systems face a fundamental tradeoff: they must be effective but at the same time also economical. For example, regulatory systems that are designed to repair damage must be effective in reducing damage, but economical in not making too many repair proteins because making excessive proteins carries a fitness cost to the cell, called protein burden. In order to see how biological systems compromise between the two tasks of effectiveness and economy, we applied an approach from economics and engineering called Pareto optimality. This approach allows calculating the best-compromise systems that optimally combine the two tasks. We used a simple and general model for regulation, known as integral feedback, and showed that best-compromise systems have particular combinations of biochemical parameters that control the response rate and basal level. We find that the optimal systems fall on a curve in parameter space. Due to this feature, even if one is able to measure only a small fraction of the system's parameters, one can infer the rest. We applied this approach to estimate parameters in three biological systems: response to heat shock and response to DNA damage in bacteria, and calcium homeostasis in mammals. PMID:23950698

  19. Evolutionary tradeoffs between economy and effectiveness in biological homeostasis systems.

    PubMed

    Szekely, Pablo; Sheftel, Hila; Mayo, Avi; Alon, Uri

    2013-01-01

    Biological regulatory systems face a fundamental tradeoff: they must be effective but at the same time also economical. For example, regulatory systems that are designed to repair damage must be effective in reducing damage, but economical in not making too many repair proteins because making excessive proteins carries a fitness cost to the cell, called protein burden. In order to see how biological systems compromise between the two tasks of effectiveness and economy, we applied an approach from economics and engineering called Pareto optimality. This approach allows calculating the best-compromise systems that optimally combine the two tasks. We used a simple and general model for regulation, known as integral feedback, and showed that best-compromise systems have particular combinations of biochemical parameters that control the response rate and basal level. We find that the optimal systems fall on a curve in parameter space. Due to this feature, even if one is able to measure only a small fraction of the system's parameters, one can infer the rest. We applied this approach to estimate parameters in three biological systems: response to heat shock and response to DNA damage in bacteria, and calcium homeostasis in mammals.

  20. Cutting and coagulation during intraoral soft tissue surgery using Er: YAG laser.

    PubMed

    Onisor, I; Pecie, R; Chaskelis, I; Krejci, I

    2013-06-01

    The aim was to find the optimal techniques and parameters that enable the Er:YAG laser to be used successfully for small intraoral soft-tissue interventions, with respect to its cutting and coagulation abilities. In in vitro pre-tests, four different Er:YAG laser units and one CO2 unit serving as the control were used for incision and coagulation on porcine lower jaws, and the optimal parameters (energy, frequency, pulse type, pulse duration and distance) were established for each type of intervention and each laser unit. Three different types of intervention using Er:YAG units are presented: crown lengthening, gingivoplasty and maxillary labial frenectomy, performed with the parameters found in the in vitro pre-tests. The Er:YAG laser is able to provide good cutting and coagulation effects on soft tissues, but specific parameters have to be defined for each laser unit in order to obtain the desired effect. Reduced or absent water spray, a defocused light beam, local anaesthesia and the effective use of long pulses are methods for obtaining optimal coagulation and bleeding control.

  1. Impedance learning for robotic contact tasks using natural actor-critic algorithm.

    PubMed

    Kim, Byungchan; Park, Jooyoung; Park, Shinsuk; Kang, Sungchul

    2010-04-01

    Compared with their robotic counterparts, humans excel at various tasks by using their ability to adaptively modulate arm impedance parameters, which allows us to perform contact tasks successfully even in uncertain environments. This paper considers a motor-skill learning strategy for robotic contact tasks based on human motor control theory and machine learning schemes. Our robot learning method employs impedance control based on the equilibrium point control theory, with reinforcement learning used to determine the impedance parameters for contact tasks. A recursive least-squares filter-based episodic natural actor-critic algorithm is used to find the optimal impedance parameters. The effectiveness of the proposed method was tested through dynamic simulations of various contact tasks. The simulation results demonstrated that the proposed method optimizes the performance of contact tasks under uncertain environmental conditions.

  2. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. Musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity, as access to direct experimentation is limited, and knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces; however, no activation was predicted in some large muscles, such as the rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip-joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
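
    The Monte Carlo screening stage can be pictured as: draw random parameter sets within physiological bounds, run a forward simulation, and keep the sets whose predicted activations land near observed 1-g patterns. In the sketch below, simulate() is a toy analytic surrogate standing in for a musculoskeletal solver call, and all targets and bounds are invented:

        import numpy as np

        rng = np.random.default_rng(0)
        observed = {"rectus_femoris": 0.60, "gluteus_maximus": 0.45}  # assumed

        def simulate(max_force, opt_fiber_len):
            """Toy surrogate: activation falls with strength, rises with
            optimal fiber length (stand-in for the real solver)."""
            rf = np.clip(900.0 / max_force * (opt_fiber_len / 0.10), 0.0, 1.0)
            gm = np.clip(700.0 / max_force, 0.0, 1.0)
            return {"rectus_femoris": rf, "gluteus_maximus": gm}

        kept = []
        for _ in range(10_000):
            p = {"max_force": rng.uniform(500, 3000),        # N, assumed range
                 "opt_fiber_len": rng.uniform(0.05, 0.15)}   # m, assumed range
            act = simulate(**p)
            if sum((act[m] - observed[m])**2 for m in observed) < 0.01:
                kept.append(p)
        print(len(kept), "candidate parameter sets retained")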

  3. On-rate based optimization of structure-kinetic relationship--surfing the kinetic map.

    PubMed

    Schoop, Andreas; Dey, Fabian

    2015-10-01

    In the lead discovery process, residence time has become an important parameter for the identification and characterization of the most efficacious compounds in vivo. To enable successful compound optimization by medicinal chemistry toward a desired residence time, an understanding of the structure-kinetic relationship (SKR) is essential. This article reviews various approaches to monitoring SKR and suggests using the on-rate as the key monitoring parameter. The literature is reviewed and examples of compound series with low variability as well as with significant changes in on-rates are highlighted. Furthermore, findings of kinetic on-rate changes are presented and potential underlying rationales are discussed.

  4. Statistical simplex approach to primary and secondary color correction in thick lens assemblies

    NASA Astrophysics Data System (ADS)

    Ament, Shelby D. V.; Pfisterer, Richard

    2017-11-01

    A glass selection optimization algorithm is developed for primary and secondary color correction in thick lens systems. The approach is based on the downhill simplex method and requires manipulation of the surface color equations to obtain a single glass-dependent parameter for each lens element. Linear correlation is used to relate this parameter to all other glass-dependent variables. The algorithm provides a statistical distribution of Abbe numbers for each element in the system. Several examples, from 2-element to 6-element systems, are analyzed to verify the approach. The proposed optimization algorithm is capable of finding glass solutions with high color correction without requiring an exhaustive search of the glass catalog.

  5. Parametric Optimization of Wire Electrical Discharge Machining of Powder Metallurgical Cold Worked Tool Steel using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Sudhakara, Dara; Prasanthi, Guvvala

    2017-04-01

    Wire-cut EDM is an unconventional machining process used to build components of complex shape. The current work deals with the optimization of surface roughness when machining P/M cold-worked tool steel by wire-cut EDM using the Taguchi method. The process parameters considered are pulse on time (ON), pulse off time (OFF), peak current (IP), servo voltage (SV), wire tension (WT) and water pressure (WP). An L27 orthogonal array (OA) is used to design the experiments. ANOVA is employed to identify the parameters that significantly affect surface roughness. The optimum levels for minimum surface roughness are ON = 108 µs, OFF = 63 µs, IP = 11 A, SV = 68 V and WT = 8 g.
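
    In a Taguchi analysis of this kind, each response is converted to a smaller-the-better signal-to-noise ratio, S/N = -10 log10(mean(y^2)), and the optimum level of each factor is the one with the highest mean S/N. A compressed illustration (the nine rows below are invented, not the study's L27 data):

        import numpy as np
        import pandas as pd

        df = pd.DataFrame({
            "ON":  [108, 108, 108, 112, 112, 112, 116, 116, 116],
            "OFF": [51, 57, 63] * 3,
            "Ra":  [2.41, 2.30, 2.18, 2.55, 2.47, 2.35, 2.72, 2.60, 2.49],  # um
        })

        df["SN"] = -10 * np.log10(df["Ra"]**2)       # smaller-the-better
        for factor in ["ON", "OFF"]:
            best = df.groupby(factor)["SN"].mean().idxmax()
            print(factor, "optimum level:", best)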

  6. Prediction of chemical biodegradability using support vector classifier optimized with differential evolution.

    PubMed

    Cao, Qi; Leung, K M

    2014-09-22

    Reliable computer models for the prediction of chemical biodegradability from molecular descriptors and fingerprints are very important for making health and environmental decisions. Coupling of the differential evolution (DE) algorithm with the support vector classifier (SVC) in order to optimize the main parameters of the classifier resulted in an improved classifier called the DE-SVC, which is introduced in this paper for use in chemical biodegradability studies. The DE-SVC was applied to predict the biodegradation of chemicals on the basis of extensive sample data sets and known structural features of molecules. Our optimization experiments showed that DE can efficiently find the proper parameters of the SVC. The resulting classifier possesses strong robustness and reliability compared with grid search, genetic algorithm, and particle swarm optimization methods. The classification experiments conducted here showed that the DE-SVC exhibits better classification performance than models previously used for such studies. It is a more effective and efficient prediction model for chemical biodegradability.
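
    The coupling described here amounts to letting DE search the SVC's hyperparameter space with cross-validated accuracy as the fitness. A minimal sketch using scikit-learn and SciPy (the dataset, bounds and log-scale parameterization are illustrative choices, not the paper's setup):

        import numpy as np
        from scipy.optimize import differential_evolution
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=20, random_state=0)

        def neg_cv_accuracy(theta):
            log_C, log_gamma = theta
            clf = SVC(C=10.0**log_C, gamma=10.0**log_gamma)
            return -cross_val_score(clf, X, y, cv=5).mean()

        bounds = [(-2, 3), (-4, 1)]                # log10(C), log10(gamma)
        res = differential_evolution(neg_cv_accuracy, bounds, seed=0, maxiter=30)
        print("C = %.3g, gamma = %.3g" % (10**res.x[0], 10**res.x[1]))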

  7. Constraint Optimization Problem For The Cutting Of A Cobalt Chrome Refractory Material

    NASA Astrophysics Data System (ADS)

    Lebaal, Nadhir; Schlegel, Daniel; Folea, Milena

    2011-05-01

    This paper shows a complete approach to solving a given problem, from experimentation to the optimization of the different cutting parameters. In response to an industrial problem of slotting FSX 414, a cobalt-based refractory material, we implemented a design of experiments to determine the most influential parameters on tool life, surface roughness and cutting forces. After these trials, an optimization approach was implemented to find the lowest manufacturing cost while respecting the roughness and cutting force limitation constraints. The optimization approach is based on the Response Surface Method (RSM) using the Sequential Quadratic Programming (SQP) algorithm for the constrained problem. To avoid a local optimum and to obtain an accurate solution at low cost, an efficient strategy, which improves the RSM accuracy in the vicinity of the global optimum, is presented. With these models and trials, we could apply and compare our optimization methods in order to obtain the lowest cost for the best quality, i.e. a satisfactory surface roughness and limited cutting forces.

  8. An improved parent-centric mutation with normalized neighborhoods for inducing niching behavior in differential evolution.

    PubMed

    Biswas, Subhodip; Kundu, Souvik; Das, Swagatam

    2014-10-01

    In real life, we often need to find multiple optimal solutions of an optimization problem. Evolutionary multimodal optimization algorithms can be very helpful in such cases: they detect and maintain multiple optimal solutions during the run by incorporating specialized niching operations in their framework. Differential evolution (DE) is a powerful evolutionary algorithm (EA) well known for its ability and efficiency as a single-peak global optimizer for continuous spaces. This article suggests a niching scheme integrated with DE for achieving a stable and efficient niching behavior by combining the newly proposed parent-centric mutation operator with a synchronous crowding replacement rule. The proposed approach is designed with the difficulties associated with problem-dependent niching parameters (such as the niche radius) in mind and does not make use of such control parameters. The mutation operator helps to maintain the population diversity at an optimum level by using well-defined local neighborhoods. Based on a comparative study involving 13 well-known state-of-the-art niching EAs tested on an extensive collection of benchmarks, we observe a consistent statistical superiority of our proposed niching algorithm.

  9. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    The forecasting skill of complex weather and climate models has been improved by tuning, with effective optimization methods, the sensitive parameters that exert the greatest impact on simulated results. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific question deserving study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasts over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from six years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions. Physical interpretations of the optimal parameters, indicating how they improve the precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for predicting summer precipitation in the Greater Beijing Area because they are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from only 127 parameter samples, which shows that the ASMO method is highly efficient for optimizing WRF model parameters.

  10. Optimal Parameter Exploration for Online Change-Point Detection in Activity Monitoring Using Genetic Algorithms

    PubMed Central

    Khan, Naveed; McClean, Sally; Zhang, Shuai; Nugent, Chris

    2016-01-01

    In recent years, smart phones with inbuilt sensors have become popular devices for facilitating activity recognition. The sensors capture a large amount of data, containing meaningful events, in a short period of time. The change points in these data mark transitions between distinct events and can be used in various scenarios, such as identifying changes in a patient's vital signs in the medical domain or requesting activity labels for generating real-world labeled activity datasets. Our work focuses on change-point detection to identify a transition from one activity to another. In this paper, we extend our previous work on the multivariate exponentially weighted moving average (MEWMA) algorithm by using a genetic algorithm (GA) to identify the optimal set of parameters for online change-point detection. The proposed technique maximizes accuracy and F_measure by optimizing the different parameters of the MEWMA, which subsequently identifies the exact location of the change point from an existing activity to a new one. Optimal parameter selection enables the algorithm to detect accurate change points and minimize false alarms. Results have been evaluated on two real accelerometer datasets collected from a set of different activities performed by two users, with accuracy ranging from 99.4% to 99.8% and an F_measure of up to 66.7%. PMID:27792177
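
    The core of such a scheme is the MEWMA statistic itself plus an outer search over its tuning parameters. The sketch below implements the statistic and, in place of the paper's GA, a simple grid search over the smoothing constant and alarm threshold on synthetic data (all settings are illustrative):

        import numpy as np

        def mewma_stat(X, lam, Sigma):
            """X: (t, d) stream; returns the MEWMA T^2-type statistic per step."""
            d = X.shape[1]
            z = np.zeros(d)
            stats = []
            for i, x in enumerate(X, start=1):
                z = lam * x + (1 - lam) * z
                w = (lam / (2 - lam)) * (1 - (1 - lam)**(2 * i))
                stats.append(z @ np.linalg.solve(w * Sigma, z))
            return np.array(stats)

        def f_measure(alarms, true_cp, tol=5):
            alarms = list(alarms)
            hits = [a for a in alarms if abs(a - true_cp) <= tol]
            if not alarms or not hits:
                return 0.0
            precision = len(hits) / len(alarms)
            recall = 1.0                  # one true change point in this toy
            return 2 * precision * recall / (precision + recall)

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(1.5, 1, (100, 3))])
        Sigma = np.cov(X[:50], rowvar=False)         # in-control estimate

        best = max(((lam, h) for lam in np.linspace(0.05, 0.5, 10)
                    for h in np.linspace(5.0, 30.0, 10)),
                   key=lambda p: f_measure(
                       np.nonzero(mewma_stat(X, p[0], Sigma) > p[1])[0], 100))
        print("lambda, threshold:", best)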

  11. A LiDAR data-based camera self-calibration method

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun

    2018-07-01

    To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented. The parameters are estimated using particle swarm optimization (PSO), which searches for the optimal solution of a multivariate cost function. The procedure for camera intrinsic parameter estimation has three parts: extraction and fine matching of interest points in the images; establishment of a cost function based on the Kruppa equations; and PSO optimization using LiDAR data as the initialization input. To improve the precision of the matching pairs, a new method combining the maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs on top of the RANSAC algorithm. The highly precise matching pairs were used to calculate the fundamental matrix, so that the new cost function (deduced from the Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving the four intrinsic parameters was minimized by PSO for the optimal solution. To overcome the problem of the optimization being pushed to a local optimum, LiDAR data were used to determine the scope of the initialization, based on the solution to the P4P problem for the camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors of less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was better than 1.0 cm. The experimental and simulated results demonstrate that the proposed method is highly accurate and robust.

  12. Multivariable optimization of an auto-thermal ammonia synthesis reactor using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Anh-Nga, Nguyen T.; Tuan-Anh, Nguyen; Tien-Dung, Vu; Kim-Trung, Nguyen

    2017-09-01

    The ammonia synthesis system is an important chemical process used in the manufacture of fertilizers, chemicals, explosives, fibers, plastics and refrigerants. In the literature, many works addressing the modeling, simulation and optimization of an auto-thermal ammonia synthesis reactor can be found; however, they focus on optimizing the reactor length while keeping the other parameters constant. In this study, further parameters are included in the optimization problem, such as the temperature of the feed gas entering the catalyst zone. The optimal problem requires the maximization of a multivariable objective function subject to a number of equality constraints involving the solution of coupled differential equations, as well as inequality constraints. The solution of an optimization problem can be found through, among others, deterministic or stochastic approaches. Stochastic methods, such as evolutionary algorithms (EAs), which are based on natural phenomena, can overcome drawbacks such as the requirement for derivatives of the objective function and/or constraints, or inefficiency on non-differentiable or discontinuous problems. The genetic algorithm (GA), a class of EA, is exceptionally simple, robust at numerical optimization and more likely to find a true global optimum. In this study, the genetic algorithm is employed to find the optimum profit of the process. The inequality constraints were treated using the penalty method, and the coupled differential equation system was solved using the fourth-order Runge-Kutta method. The results showed that the presented numerical method can be applied to model the ammonia synthesis reactor. The optimum economic profit obtained in this study is also compared with results from the literature; the comparison suggests that the process should be operated at a higher feed gas temperature in the catalyst zone and with a slightly longer reactor.

  13. Assessing predation risk: optimal behaviour and rules of thumb.

    PubMed

    Welton, Nicky J; McNamara, John M; Houston, Alasdair I

    2003-12-01

    We look at a simple model in which an animal makes behavioural decisions over time in an environment in which all parameters are known to the animal except predation risk. In the model there is a trade-off between gaining information about predation risk and anti-predator behaviour. All predator attacks lead to death for the prey, so that the prey learns about predation risk by virtue of the fact that it is still alive. We show that it is not usually optimal to behave as if the current unbiased estimate of the predation risk is its true value. We consider two different ways to model reproduction; in the first scenario the animal reproduces throughout its life until it dies, and in the second scenario expected reproductive success depends on the level of energy reserves the animal has gained by some point in time. For both of these scenarios we find results on the form of the optimal strategy and give numerical examples which compare optimal behaviour with behaviour under simple rules of thumb. The numerical examples suggest that the value of the optimal strategy over the rules of thumb is greatest when there is little current information about predation risk, learning is not too costly in terms of predation, and it is energetically advantageous to learn about predation. We find that for the model and parameters investigated, a very simple rule of thumb such as 'use the best constant control' performs well.

  14. Annealed Importance Sampling for Neural Mass Models

    PubMed Central

    Penny, Will; Sengupta, Biswa

    2016-01-01

    Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606
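
    As a concrete picture of AIS itself: samples start at the prior, pass through a ladder of tempered distributions, and accumulate importance weights whose average estimates the marginal likelihood used in Bayes factors. A one-dimensional toy sketch, with a random-walk Metropolis kernel standing in for Langevin Monte Carlo:

        import numpy as np

        rng = np.random.default_rng(0)
        log_prior = lambda x: -0.5 * x**2                  # N(0, 1), unnormalized
        log_like  = lambda x: -0.5 * (x - 3.0)**2 / 0.25   # sharp likelihood

        def ais(n_samples=500, n_temps=50, step=0.5):
            betas = np.linspace(0.0, 1.0, n_temps)
            logw = np.zeros(n_samples)
            x = rng.normal(size=n_samples)                 # exact prior draws
            for b0, b1 in zip(betas[:-1], betas[1:]):
                logw += (b1 - b0) * log_like(x)            # weight update
                # one Metropolis step targeting prior * likelihood**b1
                prop = x + step * rng.normal(size=n_samples)
                log_acc = (log_prior(prop) + b1 * log_like(prop)
                           - log_prior(x) - b1 * log_like(x))
                x = np.where(np.log(rng.uniform(size=n_samples)) < log_acc,
                             prop, x)
            return x, logw

        x, logw = ais()
        # log marginal likelihood estimate (log-sum-exp for stability):
        print(logw.max() + np.log(np.mean(np.exp(logw - logw.max()))))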

  15. An Integrated Approach to Parameter Learning in Infinite-Dimensional Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyd, Zachary M.; Wendelberger, Joanne Roth

    The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, navigating parameter space becomes highly non-trivial, especially considering that accurate simulations can be expensive in terms of both time and money. Existing solutions include batch-parallel simulations, high-dimensional derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling all three of these techniques by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way, as well as viewing the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and arrive at the desired parameter set more quickly.

  16. IOTA: integration optimization, triage and analysis tool for the processing of XFEL diffraction images.

    PubMed

    Lyubimov, Artem Y; Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Brewster, Aaron S; Murray, Thomas D; Sauter, Nicholas K; Berger, James M; Weis, William I; Brunger, Axel T

    2016-06-01

    Serial femtosecond crystallography (SFX) uses an X-ray free-electron laser to extract diffraction data from crystals not amenable to conventional X-ray light sources owing to their small size or radiation sensitivity. However, a limitation of SFX is the high variability of the diffraction images that are obtained; as a result, it is often difficult to determine optimal indexing and integration parameters for the individual diffraction images. Presented here is a software package, called IOTA, which uses a grid-search technique to determine optimal spot-finding parameters that can in turn affect the success of indexing and the quality of integration on an image-by-image basis. Integration results can be filtered using a priori information about the Bravais lattice and unit-cell dimensions and analyzed for unit-cell isomorphism, facilitating an improvement in subsequent data-processing steps.

  17. Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel

    2011-09-01

    Tuning the parameters of nonlinear dynamical systems so that they produce good results is a relevant problem. This article describes the development of a gait optimization system that yields a fast but stable quadruped crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GAs). The CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait, and the GA finds parameterizations of the CPGs which attain good gaits in terms of speed, vibration and stability. Moreover, two constraint-handling techniques, based on tournament selection and a repair mechanism, are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, obtained on a simulated Aibo robot, demonstrate that our approach achieves low vibration together with high velocity and a wide stability margin for a quadruped slow crawl gait.

  18. Experimental investigation and optimization of welding process parameters for various steel grades using NN tool and Taguchi method

    NASA Astrophysics Data System (ADS)

    Soni, Sourabh Kumar; Thomas, Benedict

    2018-04-01

    The term "weldability" has been used to describe a wide variety of characteristics when a material is subjected to welding. In our analysis we perform experimental investigation to estimate the tensile strength of welded joint strength and then optimization of welding process parameters by using taguchi method and Artificial Neural Network (ANN) tool in MINITAB and MATLAB software respectively. The study reveals the influence on weldability of steel by varying composition of steel by mechanical characterization. At first we prepare the samples of different grades of steel (EN8, EN 19, EN 24). The samples were welded together by metal inert gas welding process and then tensile testing on Universal testing machine (UTM) was conducted for the same to evaluate the tensile strength of the welded steel specimens. Further comparative study was performed to find the effects of welding parameter on quality of weld strength by employing Taguchi method and Neural Network tool. Finally we concluded that taguchi method and Neural Network Tool is much efficient technique for optimization.

  19. Serial robot for the trajectory optimization and error compensation of TMT mask exchange system

    NASA Astrophysics Data System (ADS)

    Wang, Jianping; Zhang, Feifan; Zhou, Zengxiang; Zhai, Chao

    2015-10-01

    The mask exchange system is the main part of the Multi-Object Broadband Imaging Echellette (MOBIE) on the Thirty Meter Telescope (TMT). Following the conception of the TMT mask exchange system, a pre-design based on the IRB 140 robot is introduced in this paper. The stiffness model of the IRB 140 in SolidWorks was analyzed under different gravity vectors for later error compensation. In order to find the right location and path planning, the robot and mask cassette models were imported into the MOBIE model to simulate different schemes, from which the initial installation position and routing were obtained. Based on these initial parameters, the IRB 140 robot was operated to simulate the path and estimate the mask exchange time. Meanwhile, the MATLAB and ADAMS software packages were used to perform simulation analysis, optimize the route, acquire the kinematics parameters and compare them with the experimental results. The simulations and experimental research presented in the paper provide a theoretical reference for efficiently improving the structure of the mask exchange system, the optimization of the path and the precision of the robot position.

  20. Information theoretic methods for image processing algorithm optimization

    NASA Astrophysics Data System (ADS)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable by manual calibration; an automated approach is thus a must. We discuss an information-theory-based metric for evaluating an algorithm's adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  1. Optimization of Wireless Power Transfer Systems Enhanced by Passive Elements and Metasurfaces

    NASA Astrophysics Data System (ADS)

    Lang, Hans-Dieter; Sarris, Costas D.

    2017-10-01

    This paper presents a rigorous optimization technique for wireless power transfer (WPT) systems enhanced by passive elements, ranging from simple reflectors and intermediate relays all the way to general electromagnetic guiding and focusing structures, such as metasurfaces and metamaterials. At its core is a convex semidefinite relaxation of the otherwise nonconvex optimization problem, whose tightness and optimality can be confirmed by a simple test of its solutions. The resulting method is rigorous, versatile, and general -- it does not rely on any assumptions. As shown in various examples, it is able to efficiently and reliably optimize such WPT systems in order to find their physical limitations on performance and optimal operating parameters, and to inspect their working principles, even for a large number of active transmitters and passive elements.
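
    Generically, a semidefinite relaxation of this kind replaces the rank-one matrix x x^T of port currents by a PSD matrix variable, turning a quadratic power problem into a convex program; if the solution comes out (numerically) rank one, the relaxation is tight and the currents are recovered from its leading eigenvector. A toy sketch with CVXPY (the matrices are random placeholders, not a physical WPT model):

        import cvxpy as cp
        import numpy as np

        n = 6                              # ports: transmitters + passive elements
        rng = np.random.default_rng(0)
        M = rng.normal(size=(n, n))
        A = M + M.T                        # stand-in "received power" form
        P = np.eye(n)                      # stand-in "input power" form

        X = cp.Variable((n, n), PSD=True)  # relaxation of X = x x^T
        prob = cp.Problem(cp.Maximize(cp.trace(A @ X)),
                          [cp.trace(P @ X) <= 1])
        prob.solve()

        w, V = np.linalg.eigh(X.value)     # tightness check: is X rank one?
        print("rank-1 ratio:", w[-2] / w[-1], " optimal value:", prob.value)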

  2. Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.

    2001-01-01

    The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structural element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in computational cost and without significant modifications to the analysis tools.

  3. Direct Multiple Shooting Optimization with Variable Problem Parameters

    NASA Technical Reports Server (NTRS)

    Whitley, Ryan J.; Ocampo, Cesar A.

    2009-01-01

    Taking advantage of a novel approach to the design of the orbital transfer optimization problem and advanced non-linear programming algorithms, several optimal transfer trajectories are found for problems with and without known analytic solutions. This method treats the fixed known gravitational constants as optimization variables in order to reduce the need for an advanced initial guess. Complex periodic orbits are targeted with very simple guesses and the ability to find optimal transfers in spite of these bad guesses is successfully demonstrated. Impulsive transfers are considered for orbits in both the 2-body frame as well as the circular restricted three-body problem (CRTBP). The results with this new approach demonstrate the potential for increasing robustness for all types of orbit transfer problems.

  4. Aqueous enzymatic extraction of Moringa oleifera oil.

    PubMed

    Mat Yusoff, Masni; Gordon, Michael H; Ezeh, Onyinye; Niranjan, Keshavan

    2016-11-15

    This paper reports on the extraction of Moringa oleifera (MO) oil using an aqueous enzymatic extraction (AEE) method. The effect of different process parameters on oil recovery was established using statistical optimization, together with the effect of selected parameters on the formation of oil-in-water cream emulsions. Within the pre-determined ranges, the use of pH 4.5, a moisture/kernel ratio of 8:1 (w/w) and a 300 stroke/min shaking speed at 40°C for a 1 h incubation resulted in the highest oil recovery of approximately 70% (g oil/g solvent-extracted oil). These optimized parameters also produced a very thin emulsion layer, indicating that only a minute amount of emulsion formed. Zero oil recovery with a thick emulsion was observed when the used aqueous phase was re-utilized for another AEE process. The findings suggest that critical selection of the AEE parameters is key to high oil recovery with minimum emulsion formation, thereby lowering the load on the de-emulsification step.

  5. Numerical study of entropy generation and melting heat transfer on MHD generalised non-Newtonian fluid (GNF): Application to optimal energy

    NASA Astrophysics Data System (ADS)

    Iqbal, Z.; Mehmood, Zaffar; Ahmad, Bilal

    2018-05-01

    This paper concerns an application to optimal energy, incorporating thermal equilibrium into an MHD generalised non-Newtonian fluid (GNF) model with melting heat effect. The highly nonlinear system of partial differential equations is simplified to a nonlinear ordinary system using the boundary layer approach and similarity transformations. Numerical solutions for the velocity and temperature profiles are obtained using the shooting method. The contribution of entropy generation is appraised for the thermal and velocity fields. The physical features of the relevant parameters are discussed through graphs and tables. Some noteworthy findings are: the Prandtl number, power-law index and Weissenberg number contribute to lowering the mass boundary layer thickness and the entropy effect and to enlarging the thermal boundary layer thickness, whereas an increasing mass boundary layer effect is due only to the melting heat parameter. Moreover, the thermal boundary layers show the same trend for all parameters, i.e., temperature is enhanced with increasing values of the significant parameters. Similarly, the Hartmann and Weissenberg numbers enhance the Bejan number.

  6. Optimal Artificial Boundary Condition Configurations for Sensitivity-Based Model Updating and Damage Detection

    DTIC Science & Technology

    2010-09-01

    matrix is used in many methods, like Jacobi or Gauss-Seidel, for solving linear systems. Also, no partial pivoting is necessary for a strictly column... problems that arise during the procedure, which, in general, converges to the solving of a linear system. The most common issue with the solution is the... iterative procedure to find an appropriate subset of parameters that produce an optimal solution, commonly known as forward selection. Then, the

  7. Efficient receiver tuning using differential evolution strategies

    NASA Astrophysics Data System (ADS)

    Wheeler, Caleb H.; Toland, Trevor G.

    2016-08-01

    Differential evolution (DE) is a powerful and computationally inexpensive optimization strategy that can be used to search an entire parameter space or to converge quickly on a solution. The Kilopixel Array Pathfinder Project (KAPPa) is a heterodyne receiver system delivering 5 GHz of instantaneous bandwidth in the tuning range of 645-695 GHz. The fully automated KAPPa receiver test system finds optimal receiver tuning using performance feedback and DE. We present an adaptation of DE for use in rapid receiver characterization. The KAPPa DE algorithm is written in Python 2.7 and is fully integrated with the KAPPa instrument control, data processing, and visualization code. KAPPa develops the technologies needed to realize heterodyne focal plane arrays containing 1000 pixels. Finding optimal receiver tuning by investigating large parameter spaces is one of many challenges facing the characterization phase of KAPPa, and a difficult task using by-hand techniques. Characterizing or tuning in an automated fashion, without the need for human intervention, is desirable for future large-scale arrays. While many optimization strategies exist, DE is ideal under time and performance constraints because it can be set to converge to a solution rapidly with minimal computational overhead. We discuss how DE is utilized in the KAPPa system, evaluate its performance, and look toward the future of 1000-pixel array receivers, considering how the KAPPa DE system might be applied.
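
    The DE loop at the heart of such a tuner is compact: mutate three distinct population members, cross over with the current member, and keep the trial only if it measures better. The sketch below shows the classic DE/rand/1/bin scheme on a smooth stand-in objective; in the real system the objective would wrap a hardware measurement, and all names and constants here are invented:

        import numpy as np

        rng = np.random.default_rng(2)

        def neg_performance(v):
            """Hypothetical response peaked at an unknown bias/LO setting."""
            bias, lo_power = v
            return -np.exp(-((bias - 2.1)**2 + (lo_power - 0.7)**2))

        bounds = np.array([[0.0, 4.0], [0.0, 1.5]])
        NP, F, CR, D = 20, 0.7, 0.9, 2
        pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(NP, D))
        cost = np.array([neg_performance(p) for p in pop])

        for gen in range(200):
            for i in range(NP):
                idx = rng.choice([j for j in range(NP) if j != i], 3,
                                 replace=False)
                a, b, c = pop[idx]
                mutant = np.clip(a + F * (b - c), bounds[:, 0], bounds[:, 1])
                cross = rng.uniform(size=D) < CR
                cross[rng.integers(D)] = True     # ensure one parameter crosses
                trial = np.where(cross, mutant, pop[i])
                tc = neg_performance(trial)
                if tc < cost[i]:                  # greedy selection
                    pop[i], cost[i] = trial, tc

        print("best tuning:", pop[np.argmin(cost)])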

  8. Estimation of the Maximum Theoretical Productivity of Fed-Batch Bioreactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bomble, Yannick J; St. John, Peter C; Crowley, Michael F

    2017-10-18

    A key step towards the development of an integrated biorefinery is the screening of economically viable processes, which depends sharply on the yields and productivities that can be achieved by an engineered microorganism. In this study, we extend an earlier method, which used dynamic optimization to find the maximum theoretical productivity of batch cultures, to explicitly include fed-batch bioreactors. In addition to optimizing the intracellular distribution of metabolites between cell growth and product formation, we calculate the optimal control trajectory of feed rate versus time. We further analyze how sensitive the productivity is to substrate uptake and growth parameters.

  9. Designing robust control laws using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Marrison, Chris

    1994-01-01

    The purpose of this research is to create a method of finding practical, robust control laws. The robustness of a controller is judged by Stochastic Robustness metrics and the level of robustness is optimized by searching for design parameters that minimize a robustness cost function.

  10. Information Filtering via a Scaling-Based Function

    PubMed Central

    Qiu, Tian; Zhang, Zi-Ke; Chen, Guang

    2013-01-01

    Finding a universal description of algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL) that is independent of the recommendation list length, built on a hybrid algorithm of heat conduction and mass diffusion, by finding the scaling function relating the tunable parameter to the object average degree. The optimal value of the tunable parameter can be extracted from the scaling function and is heterogeneous across individual objects. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably improves personalized recommendation in three other respects: it addresses the accuracy-diversity dilemma, presents high novelty, and tackles the key challenge of the cold start problem. PMID:23696829

  11. Optimization of HTS superconducting magnetic energy storage magnet volume

    NASA Astrophysics Data System (ADS)

    Korpela, Aki; Lehtonen, Jorma; Mikkonen, Risto

    2003-08-01

    Nonlinear optimization problems in the field of electromagnetics have been successfully solved by means of sequential quadratic programming (SQP) and the finite element method (FEM). For example, the combination of SQP and FEM has been proven to be an efficient tool in the optimization of low temperature superconductor (LTS) superconducting magnetic energy storage (SMES) magnets. The procedure can also be applied to the optimization of HTS magnets. However, due to the strongly anisotropic material and the slanted electric field-current density characteristic of high temperature superconductors (HTS), their optimization is quite different from that of LTS. In this paper the volumes of solenoidal conduction-cooled Bi-2223/Ag SMES magnets are optimized for an operating temperature of 20 K. In addition to the electromagnetic constraints, the stress caused by tape bending is also taken into account. Several optimization runs with different initial geometries were performed in order to find the best possible solution for a given energy requirement. The optimization constraints describe steady-state operation; the presented coil geometries are thus designed for slow ramping rates. Different energy requirements were investigated in order to find the energy dependence of the design parameters of optimized solenoidal HTS coils. According to the results, these dependences can be described with polynomial expressions.
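
    Constrained volume minimization of this general shape can be prototyped with an SQP-type solver before moving to a full FEM-based formulation. The sketch below uses SciPy's SLSQP on a crude long-solenoid energy estimate (the geometry parameterization, current density and energy target are placeholders, not the paper's model):

        import numpy as np
        from scipy.optimize import minimize

        MU0 = 4e-7 * np.pi
        J = 1.0e8          # assumed operating current density, A/m^2
        E_REQ = 1.0e4      # required stored energy, J

        def volume(p):
            r, h, w = p    # inner radius, height, winding build (m)
            return np.pi * ((r + w)**2 - r**2) * h

        def energy(p):
            r, h, w = p    # long-solenoid estimate: E = B^2 / (2*mu0) * V_bore
            B = MU0 * J * w
            return B**2 / (2 * MU0) * np.pi * r**2 * h

        res = minimize(volume, x0=[0.2, 0.4, 0.02], method="SLSQP",
                       bounds=[(0.05, 1.0)] * 3,
                       constraints=[{"type": "ineq",
                                     "fun": lambda p: energy(p) - E_REQ}])
        print(res.x, volume(res.x))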

  12. Finite horizon EOQ model for non-instantaneous deteriorating items with price and advertisement dependent demand and partial backlogging under inflation

    NASA Astrophysics Data System (ADS)

    Palanivel, M.; Uthayakumar, R.

    2015-07-01

    This paper deals with an economic order quantity (EOQ) model for non-instantaneous deteriorating items with price and advertisement dependent demand pattern under the effect of inflation and time value of money over a finite planning horizon. In this model, shortages are allowed and partially backlogged. The backlogging rate is dependent on the waiting time for the next replenishment. This paper aids the retailer in minimising the total inventory cost by finding the optimal interval and the optimal order quantity. An algorithm is designed to find the optimum solution of the proposed model. Numerical examples are given to demonstrate the results. Also, the effect of changes in the different parameters on the optimal total cost is graphically presented and the implications are discussed in detail.
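    For orientation only, the snippet below minimizes the classical EOQ cost curve numerically and checks it against the closed-form optimum Q* = sqrt(2DK/h); it omits the deterioration, inflation and backlogging features of the paper's model, and the cost figures are invented:

      import numpy as np
      from scipy.optimize import minimize_scalar

      D, K, h = 1200.0, 50.0, 2.5   # demand/yr, ordering cost, holding cost

      def total_cost(Q):
          return D * K / Q + h * Q / 2.0   # ordering + holding cost per year

      res = minimize_scalar(total_cost, bounds=(1.0, 5000.0), method="bounded")
      print("numeric Q* =", res.x, "analytic Q* =", np.sqrt(2 * D * K / h))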

  13. Designing synthetic networks in silico: a generalised evolutionary algorithm approach.

    PubMed

    Smith, Robert W; van Sluijs, Bob; Fleck, Christian

    2017-12-02

    Evolution has led to the development of biological networks that are shaped by environmental signals. Elucidating, understanding and then reconstructing important network motifs is one of the principal aims of Systems & Synthetic Biology. Consequently, previous research has focused on finding optimal network structures and reaction rates that respond to pulses or produce stable oscillations. In this work we present a generalised in silico evolutionary algorithm that simultaneously finds network structures and reaction rates (genotypes) that can satisfy multiple defined objectives (phenotypes). The key step in our approach is to translate a schema/binary-based description of biological networks into systems of ordinary differential equations (ODEs). The ODEs can then be solved numerically to provide dynamic information about an evolved network's functionality. Initially we benchmark algorithm performance by finding optimal networks that can recapitulate concentration time-series data and perform parameter optimisation on oscillatory dynamics of the Repressilator. We go on to show the utility of our algorithm by finding new designs for robust synthetic oscillators, and by performing multi-objective optimisation to find a set of oscillators and feed-forward loops that are optimal at balancing different system properties. In sum, our results not only confirm and build on previous observations but also provide new designs of synthetic oscillators for experimental construction. In this work we have presented and tested an evolutionary algorithm that can design a biological network to produce a desired output. Given that previous designs of synthetic networks have been limited to subregions of network- and parameter-space, the use of our evolutionary optimisation algorithm will enable Synthetic Biologists to construct new systems with the potential to display a wider range of complex responses.

  14. Seismic data enhancement and regularization using finite offset Common Diffraction Surface (CDS) stack

    NASA Astrophysics Data System (ADS)

    Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter

    2017-01-01

    The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.

  15. Application of modified Rosenbrock's method for optimization of nutrient media used in microorganism culturing.

    PubMed

    Votruba, J; Pilát, P; Prokop, A

    1975-12-01

    Rosenbrock's procedure has been modified for optimization of nutrient medium composition and has been found to be less tedious than the Box-Wilson method, especially for larger numbers of optimized parameters. Its merits are particularly obvious in multiparameter optimization, where the gradient method, so far the only one of the many available optimization methods employed in microbiology (e.g., refs. 9 and 10), becomes impractical because of the excessive number of experiments required. The suggested method is also more stable during optimization than gradient methods, which are very sensitive to the selection of steps in the direction of the gradient and may thus easily shoot out of the optimized region. It is also anticipated that other direct search methods, particularly simplex design, may be easily adapted for optimization of medium composition. It is obvious that direct search methods may find application in process improvement in the antibiotic and related industries.
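    A simplified Python sketch of such a derivative-free direct search, applied to a mock two-component medium response, is given below. The coordinate-rotation step of the full Rosenbrock procedure is omitted, and the response surface is invented:

      import numpy as np

      def direct_search(f, x0, step=0.25, expand=3.0, contract=0.5,
                        tol=1e-6, max_iter=500):
          # Axis-by-axis trial moves with Rosenbrock-style step adaptation.
          x = np.asarray(x0, dtype=float)
          steps = np.full(x.size, step)
          fx = f(x)
          for _ in range(max_iter):
              if np.all(np.abs(steps) < tol):
                  break
              for i in range(x.size):
                  trial = x.copy()
                  trial[i] += steps[i]
                  ft = f(trial)
                  if ft < fx:              # success: accept, enlarge step
                      x, fx = trial, ft
                      steps[i] *= expand
                  else:                    # failure: reverse and shrink step
                      steps[i] *= -contract
          return x, fx

      # Mock "yield response" of a two-component medium, peak at (5, 2) g/l;
      # the search minimizes the negative response, i.e. maximizes yield.
      response = lambda c: -np.exp(-((c[0] - 5.0)**2 + 4 * (c[1] - 2.0)**2))
      print(direct_search(response, x0=[1.0, 1.0]))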

  16. Response Ant Colony Optimization of End Milling Surface Roughness

    PubMed Central

    Kadirgama, K.; Noor, M. M.; Abd Alla, Ahmed N.

    2010-01-01

    Metal cutting processes are important because increased consumer demand for quality metal-cut products (more precise tolerances and better surface roughness) has driven the metal cutting industry to continuously improve quality control of its processes. This paper presents optimum surface roughness in the end milling of mould aluminium alloys (AA6061-T6) using Response Ant Colony Optimization (RACO). The approach is based on the Response Surface Method (RSM) and Ant Colony Optimization (ACO). The main objectives are to find the optimized parameters and the most dominant variables (cutting speed, feedrate, axial depth and radial depth). The first-order model indicates that the feedrate is the most significant factor affecting surface roughness. PMID:22294914
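    The first-order model referred to here is an ordinary least-squares fit of roughness against the four factors. A minimal sketch on mock milling data follows; the coefficients and factor ranges are invented, whereas the real study derives them from designed experiments:

      import numpy as np

      rng = np.random.default_rng(1)

      # Mock data: cutting speed, feedrate, axial depth, radial depth.
      X = rng.uniform([100, 0.1, 0.5, 1.0], [180, 0.3, 2.0, 4.0], size=(30, 4))
      true_b = np.array([0.2, -0.002, 3.5, 0.05, 0.02])   # assumed coefficients
      Ra = true_b[0] + X @ true_b[1:] + rng.normal(0, 0.02, 30)

      A = np.column_stack([np.ones(len(X)), X])   # design matrix with intercept
      b, *_ = np.linalg.lstsq(A, Ra, rcond=None)

      for name, c in zip(["intercept", "speed", "feedrate",
                          "axial depth", "radial depth"], b):
          print(f"{name:>12s}: {c:+.4f}")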

  17. Optimal control applied to a model for species augmentation.

    PubMed

    Bodine, Erin N; Gross, Louis J; Lenhart, Suzanne

    2008-10-01

    Species augmentation is a method of reducing species loss via augmenting declining or threatened populations with individuals from captive-bred or stable, wild populations. In this paper, we develop a differential equations model and optimal control formulation for a continuous time augmentation of a general declining population. We find a characterization for the optimal control and show numerical results for scenarios of different illustrative parameter sets. The numerical results provide considerably more detail about the exact dynamics of optimal augmentation than can be readily intuited. The work and results presented in this paper are a first step toward building a general theory of population augmentation, which accounts for the complexities inherent in many conservation biology applications.

  18. Optimization of contrast-enhanced spectral mammography depending on clinical indication

    PubMed Central

    Dromain, Clarisse; Canale, Sandra; Saab-Puong, Sylvie; Carton, Ann-Katherine; Muller, Serge; Fallenberg, Eva Maria

    2014-01-01

    The objective is to optimize low-energy (LE) and high-energy (HE) exposure parameters of contrast-enhanced spectral mammography (CESM) examinations in four different clinical applications for which different levels of average glandular dose (AGD) and ratios between LE and total doses are required. The optimization was performed on a Senographe DS with a SenoBright® upgrade. Simulations were performed to find the optima by maximizing the contrast-to-noise ratio (CNR) on the recombined CESM image using different targeted doses and LE image quality. The linearity between iodine concentration and CNR as well as the minimal detectable iodine concentration was assessed. The image quality of the LE image was assessed on the CDMAM contrast-detail phantom. Experiments confirmed the optima found on simulation. The CNR was higher for each clinical indication than for SenoBright®, including the screening indication for which the total AGD was 22% lower. Minimal iodine concentrations detectable in the case of a 3-mm-diameter round tumor were 12.5% lower than those obtained for the same dose in the clinical routine. LE image quality satisfied EUREF acceptable limits for threshold contrast. This newly optimized set of acquisition parameters allows increased contrast detectability compared to parameters currently used without a significant loss in LE image quality. PMID:26158058

  19. Optimization of contrast-enhanced spectral mammography depending on clinical indication.

    PubMed

    Dromain, Clarisse; Canale, Sandra; Saab-Puong, Sylvie; Carton, Ann-Katherine; Muller, Serge; Fallenberg, Eva Maria

    2014-10-01

    The objective is to optimize low-energy (LE) and high-energy (HE) exposure parameters of contrast-enhanced spectral mammography (CESM) examinations in four different clinical applications for which different levels of average glandular dose (AGD) and ratios between LE and total doses are required. The optimization was performed on a Senographe DS with a SenoBright® upgrade. Simulations were performed to find the optima by maximizing the contrast-to-noise ratio (CNR) on the recombined CESM image using different targeted doses and LE image quality. The linearity between iodine concentration and CNR as well as the minimal detectable iodine concentration was assessed. The image quality of the LE image was assessed on the CDMAM contrast-detail phantom. Experiments confirmed the optima found on simulation. The CNR was higher for each clinical indication than for SenoBright®, including the screening indication for which the total AGD was 22% lower. Minimal iodine concentrations detectable in the case of a 3-mm-diameter round tumor were 12.5% lower than those obtained for the same dose in the clinical routine. LE image quality satisfied EUREF acceptable limits for threshold contrast. This newly optimized set of acquisition parameters allows increased contrast detectability compared to parameters currently used without a significant loss in LE image quality.

  20. Parallel tempering for the traveling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percus, Allon; Wang, Richard; Hyman, Jeffrey

    We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximation to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
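    A compact Python sketch in the spirit of this record is given below: several Metropolis replicas run at different temperatures with 2-opt moves, and neighbouring replicas occasionally swap tours. The instance size, temperature ladder and sweep count are arbitrary choices for the example:

      import numpy as np

      rng = np.random.default_rng(2)

      def tour_length(tour, D):
          return D[tour, np.roll(tour, -1)].sum()

      def two_opt_move(tour):
          i, j = sorted(rng.choice(len(tour), 2, replace=False))
          new = tour.copy()
          new[i:j + 1] = new[i:j + 1][::-1]     # reverse a segment
          return new

      def parallel_tempering(D, temps, sweeps=2000):
          n = len(D)
          tours = [rng.permutation(n) for _ in temps]
          lens = [tour_length(t, D) for t in tours]
          for _ in range(sweeps):
              for k, T in enumerate(temps):     # Metropolis step per replica
                  cand = two_opt_move(tours[k])
                  dl = tour_length(cand, D) - lens[k]
                  if dl < 0 or rng.random() < np.exp(-dl / T):
                      tours[k], lens[k] = cand, lens[k] + dl
              for k in range(len(temps) - 1):   # replica-exchange attempts
                  beta_diff = 1.0 / temps[k] - 1.0 / temps[k + 1]
                  dE = lens[k] - lens[k + 1]
                  if dE >= 0 or rng.random() < np.exp(beta_diff * dE):
                      tours[k], tours[k + 1] = tours[k + 1], tours[k]
                      lens[k], lens[k + 1] = lens[k + 1], lens[k]
          best = int(np.argmin(lens))
          return tours[best], lens[best]

      pts = rng.random((40, 2))                 # 40 random cities
      D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
      tour, length = parallel_tempering(D, temps=[0.01, 0.03, 0.1, 0.3])
      print("best length:", length)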

  1. Determination of the optimal mesh parameters for Iguassu centrifuge flow and separation calculations

    NASA Astrophysics Data System (ADS)

    Romanihin, S. M.; Tronin, I. V.

    2016-09-01

    We present the method and results of determining optimal computational mesh parameters for axisymmetric modeling of flow and separation in the Iguassu gas centrifuge. The aim of this work was to determine mesh parameters that provide relatively low computational cost without loss of accuracy. We use a direct search optimization algorithm to calculate the optimal mesh parameters. The obtained parameters were tested by calculating the optimal working regime of the Iguassu GC. The separative power calculated using the optimal mesh parameters differs by less than 0.5% from the result obtained on the detailed mesh. The presented method can be used to determine optimal mesh parameters of the Iguassu GC at different rotor speeds.

  2. The quality estimation of exterior wall’s and window filling’s construction design

    NASA Astrophysics Data System (ADS)

    Saltykov, Ivan; Bovsunovskaya, Maria

    2017-10-01

    The article introduces the term "artificial envelope" in residential construction. The authors propose a complex multifactorial approach to estimating the design quality of external enclosing structures, based on the impact of several parameters: functional, operational, cost, and environmental. A design quality index Qk is introduced as a complex characteristic of the above parameters; the mathematical relation of this index to these parameters is the target function for design quality estimation. As an example, the article presents the search for optimal wall and window designs in small, medium and large residential rooms of economy-class buildings. Graphs of the target function's individual parameters are given for the three room sizes. As a result of the example, window opening dimensions are chosen that make the wall and window constructions properly meet the stated complex requirements. The authors compare the window filling area recommended by building standards with the area obtained by finding the optimal variant of the design quality index. The multifactorial approach to optimal design search described in this article can be applied to various structural elements of residential buildings, taking into account the relevant climatic, social and economic features of the construction area.

  3. 1-D DC Resistivity Modeling and Interpretation in Anisotropic Media Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pekşen, Ertan; Yas, Türker; Kıyak, Alper

    2014-09-01

    We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also compute the response of a horizontally layered anisotropic earth with the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a nature-inspired metaheuristic algorithm. The proposed method finds the model parameters quite successfully on synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity of the model parameter values. For this reason, the results should be checked by a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation at each iteration and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
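    A bare-bones PSO of the kind used here is sketched below in Python, minimizing a stand-in misfit over (rho_h, rho_v, thickness). The quadratic misfit replaces the actual 1-D DC forward model, and the "true" model and swarm settings are invented:

      import numpy as np

      rng = np.random.default_rng(3)

      def misfit(m):
          # Hypothetical stand-in for the resistivity objective function.
          true = np.array([50.0, 200.0, 10.0])   # rho_h, rho_v, thickness
          return np.sum(((m - true) / true) ** 2)

      def pso(cost, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          x = rng.uniform(lo, hi, (n_particles, lo.size))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
          g = pbest[np.argmin(pbest_f)].copy()   # global best
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([cost(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              g = pbest[np.argmin(pbest_f)].copy()
          return g, pbest_f.min()

      model, err = pso(misfit, lo=[1.0, 1.0, 0.5], hi=[1000.0, 1000.0, 100.0])
      print("estimated (rho_h, rho_v, h):", model, "misfit:", err)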

  4. Optimisation of thulium fibre laser parameters with generation of pulses by pump modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obronov, I V; Larin, S V; Sypin, V E

    2015-07-31

    The formation of relaxation pulses of a thulium fibre laser (λ = 1.9 μm) by modulating the power of a pump erbium fibre laser (λ = 1.55 μm) is studied. A theoretical model is developed to find the dependences of pulse duration and peak power on different cavity parameters. The optimal cavity parameters for achieving the minimal pulse duration are determined. The results are confirmed by experimental development of a laser emitting pulses with a duration shorter than 10 ns, a peak power of 1.8 kW and a repetition rate of 50 kHz.

  5. Optimal control strategy for electricity production at an isolated site (Strategie de commande optimale de la production electrique dans un site isole)

    NASA Astrophysics Data System (ADS)

    Barris, Nicolas

    Hydro-Quebec manages more than 20 isolated power grids all over the province. The grids are located in small villages where the electricity demand is rather small. Since those villages are far away from each other and from the main electricity production facilities, energy is produced locally using diesel generators. Electricity production costs at the isolated power grids are very high due to elevated diesel prices and transportation costs. However, the price of electricity is the same for the entire province, with no regard to the production costs of the electricity consumed. These two factors combined result in yearly operating losses for Hydro-Quebec. For any given village, several diesel generators are required to satisfy the demand. When the load increases, it becomes necessary to increase the capacity, either by adding a generator to the production or by switching to a more powerful generator; the same happens when the load decreases. Every decision regarding changes in the production is included in the control strategy, which is based on predetermined parameters. These parameters were specified according to empirical studies and the knowledge base of the engineers managing the isolated power grids, but without any optimization approach. The objective of the presented work is to minimize diesel consumption by optimizing the parameters included in the control strategy. Its impact would be to limit the operating losses generated by the isolated power grids, as well as the CO2-equivalent emissions, without adding new equipment or completely changing the nature of the strategy. To satisfy this objective, the isolated power grid simulator OPERA is used along with the optimization library NOMAD and the data of three villages in northern Quebec. A preliminary optimization run for the first village showed that some modifications to the existing control strategy had to be made to better achieve the minimization objective. The main optimization processes consist of three different approaches: the optimization of one set of parameters for all the villages, the optimization of one set of parameters per village, and the optimization of one set of parameters per diesel generator configuration per village. In the first scenario, the optimization of one set of parameters for all the villages leads to compromises for all three villages without allowing the full potential reduction for any of them; it is thus shown that applying one set of parameters to all the villages is not suitable for finding an optimal solution. In the second scenario, the optimization of one set of parameters per village improves on the previous results. At this point, it is shown to be crucial to remove from production the less efficient configurations when they sit next to more efficient ones. In the third scenario, the optimization of one set of parameters per configuration per village requires a very large number of function evaluations but does not yield any satisfying solution. In order to improve the performance of the optimization, the problem structure was exploited. Two approaches are considered: optimizing one set of parameters at a time, and optimizing the different rules included in the control strategy one at a time. In both cases the results are similar but the calculation costs differ, the second method being much more cost efficient.
The optimal values of the final rules' parameters can be directly linked to the transition points that favour efficient operation of the isolated power grids. Indeed, these transition points are defined in such a way that the high-efficiency zone of every configuration is used. It therefore seems possible to identify these optimal transition points directly on the graphs and to define the parameters in the control strategy without even having to run any optimization process. The diesel consumption reduction for all three villages is about 1.9%. Considering elevated diesel costs and the existence of about 20 other isolated power grids, the use of the developed methods together with a calibration of OPERA would allow a substantial reduction of Hydro-Quebec's annual deficit. Also, since one of the developed methods is very cost effective and produces equivalent results, it could be used in other processes; for example, when buying new equipment for a grid, its full potential under an optimized control strategy could be assessed, improving the net present value.

  6. Statistical mechanics of budget-constrained auctions

    NASA Astrophysics Data System (ADS)

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-07-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.

  7. Dynamic Portfolio Strategy Using Clustering Approach

    PubMed Central

    Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian

    2017-01-01

    The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market. PMID:28129333

  8. Dynamic Portfolio Strategy Using Clustering Approach.

    PubMed

    Ren, Fei; Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian

    2017-01-01

    The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market.

  9. Rational Design of Glucose-Responsive Insulin Using Pharmacokinetic Modeling.

    PubMed

    Bakh, Naveed A; Bisker, Gili; Lee, Michael A; Gong, Xun; Strano, Michael S

    2017-11-01

    A glucose responsive insulin (GRI) is a therapeutic that modulates its potency, concentration, or dosing in relation to a patient's dynamic glucose concentration, thereby approximating aspects of a normally functioning pancreas. Current GRI design lacks a theoretical basis on which to base fundamental design parameters such as glucose reactivity, dissociation constant or potency, and in vivo efficacy. In this work, an approach to mathematically model the relevant parameter space for effective GRIs is introduced, and design rules for linking GRI performance to therapeutic benefit are developed. Well-developed pharmacokinetic models of human glucose and insulin metabolism, coupled to a kinetic model representation of a freely circulating GRI, are used to determine the desired kinetic parameters and dosing for optimal glycemic control. The model examines a subcutaneous dose of GRI with kinetic parameters in an optimal range that results in successful glycemic control within prescribed constraints over a 24 h period. Additionally, it is demonstrated that the modeling approach can find GRI parameters that enable stable glucose levels that persist through a skipped meal. The results provide a framework for exploring the parameter space of GRIs, potentially without extensive, iterative in vivo animal testing. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. A novel surrogate-based approach for optimal design of electromagnetic-based circuits

    NASA Astrophysics Data System (ADS)

    Hassan, Abdel-Karim S. O.; Mohamed, Ahmed S. A.; Rabie, Azza A.; Etman, Ahmed S.

    2016-02-01

    A new geometric design centring approach for optimal design of central processing unit-intensive electromagnetic (EM)-based circuits is introduced. The approach uses norms related to the probability distribution of the circuit parameters to find distances from a point to the feasible region boundaries by solving nonlinear optimization problems. Based on these normed distances, the design centring problem is formulated as a max-min optimization problem. A convergent iterative boundary search technique is exploited to find the normed distances. To alleviate the computation cost associated with the EM-based circuits design cycle, space-mapping (SM) surrogates are used to create a sequence of iteratively updated feasible region approximations. In each SM feasible region approximation, the centring process using normed distances is implemented, leading to a better centre point. The process is repeated until a final design centre is attained. Practical examples are given to show the effectiveness of the new design centring method for EM-based circuits.

  11. Evolutionary algorithm optimization of biological learning parameters in a biomimetic neuroprosthesis

    PubMed Central

    Dura-Bernal, S.; Neymotin, S. A.; Kerr, C. C.; Sivagnanam, S.; Majumdar, A.; Francis, J. T.; Lytton, W. W.

    2017-01-01

    Biomimetic simulation permits neuroscientists to better understand the complex neuronal dynamics of the brain. Embedding a biomimetic simulation in a closed-loop neuroprosthesis, which can read and write signals from the brain, will permit applications for amelioration of motor, psychiatric, and memory-related brain disorders. Biomimetic neuroprostheses require real-time adaptation to changes in the external environment, thus constituting an example of a dynamic data-driven application system. As model fidelity increases, so does the number of parameters and the complexity of finding appropriate parameter configurations. Instead of adapting synaptic weights via machine learning, we employed major biological learning methods: spike-timing dependent plasticity and reinforcement learning. We optimized the learning metaparameters using evolutionary algorithms, which were implemented in parallel and which used an island model approach to obtain sufficient speed. We employed these methods to train a cortical spiking model to utilize macaque brain activity, indicating a selected target, to drive a virtual musculoskeletal arm with realistic anatomical and biomechanical properties to reach to that target. The optimized system was able to reproduce macaque data from a comparable experimental motor task. These techniques can be used to efficiently tune the parameters of multiscale systems, linking realistic neuronal dynamics to behavior, and thus providing a useful tool for neuroscience and neuroprosthetics. PMID:29200477

  12. An Ant Colony Optimization algorithm for solving the fixed destination multi-depot multiple traveling salesman problem with non-random parameters

    NASA Astrophysics Data System (ADS)

    Ramadhani, T.; Hertono, G. F.; Handari, B. D.

    2017-07-01

    The Multiple Traveling Salesman Problem (MTSP) is an extension of the Traveling Salesman Problem (TSP) in which the shortest routes of m salesmen, all of whom start and finish in a single city (depot), are determined. If there is more than one depot and each salesman starts from and returns to the same depot, the problem is called the Fixed Destination Multi-depot Multiple Traveling Salesman Problem (MMTSP). In this paper, the MMTSP is solved using the Ant Colony Optimization (ACO) algorithm. ACO is a metaheuristic optimization algorithm derived from the behavior of ants in finding the shortest route(s) from the anthill to a source of nourishment. In solving the MMTSP, the algorithm is evaluated with respect to different choices of depot cities and three non-random MMTSP parameters: m, K, and L, which represent the number of salesmen, the fewest cities that must be visited by a salesman, and the most cities that can be visited by a salesman, respectively. The implementation is tested on four datasets from TSPLIB. The results show that the choice of depot cities and the three MMTSP parameters, of which m is the most important, affect the solution.
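    The pheromone/visibility mechanics the record relies on can be sketched in Python for a single salesman as below; the multi-depot bookkeeping and the K/L visit bounds of the MMTSP are deliberately omitted, and all settings are illustrative:

      import numpy as np

      rng = np.random.default_rng(4)

      def aco_tsp(D, n_ants=20, iters=100, alpha=1.0, beta=2.0, rho=0.1, q=1.0):
          n = len(D)
          eta = 1.0 / (D + np.eye(n))      # visibility; eye avoids divide-by-zero
          tau = np.ones((n, n))            # pheromone trails
          best_tour, best_len = None, np.inf
          for _ in range(iters):
              tours = []
              for _ant in range(n_ants):
                  tour = [int(rng.integers(n))]
                  unvisited = set(range(n)) - {tour[0]}
                  while unvisited:
                      i = tour[-1]
                      cand = np.array(sorted(unvisited))
                      p = tau[i, cand] ** alpha * eta[i, cand] ** beta
                      nxt = int(rng.choice(cand, p=p / p.sum()))
                      tour.append(nxt)
                      unvisited.remove(nxt)
                  tours.append(tour)
              tau *= 1.0 - rho             # evaporation
              for tour in tours:
                  length = sum(D[tour[k], tour[(k + 1) % n]] for k in range(n))
                  for k in range(n):       # deposit proportional to quality
                      tau[tour[k], tour[(k + 1) % n]] += q / length
                  if length < best_len:
                      best_tour, best_len = tour, length
          return best_tour, best_len

      pts = rng.random((25, 2))
      D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
      print("best length:", aco_tsp(D)[1])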

  13. Optimization the mechanical properties of coir-luffa cylindrica filled hybrid composites by using Taguchi method

    NASA Astrophysics Data System (ADS)

    Krishnudu, D. Mohana; Sreeramulu, D.; Reddy, P. Venkateshwar

    2018-04-01

    In the current study the mechanical properties of particle-filled hybrid composites have been investigated. The mechanical properties of the hybrid composite mainly depend on the proportions of coir weight, Luffa weight and filler weight. Response surface methodology (RSM) along with the Taguchi method has been applied to find the optimized parameters of the hybrid composites. The study shows that the tensile strength of the composite depends mainly on the coir content rather than on the other two constituents.

  14. Identification of pre-impact conditions of a cyclist involved in a vehicle-bicycle accident using an optimized MADYMO reconstruction combined with motion capture.

    PubMed

    Sun, Jie; Li, Zhengdong; Pan, Shaoyou; Feng, Hao; Shao, Yu; Liu, Ningguo; Huang, Ping; Zou, Donghua; Chen, Yijiu

    2018-05-01

    The aim of the present study was to develop an improved method, using MADYMO multi-body simulation software combined with an optimization method and three-dimensional (3D) motion capture, for identifying the pre-impact conditions of a cyclist (walking or cycling) involved in a vehicle-bicycle accident. First, a 3D motion capture system was used to analyze the coupled motions of a volunteer while walking and cycling. The motion capture results were used to define the posture of the human model during walking and cycling simulations. Then, cyclist, bicycle and vehicle models were developed. Pre-impact parameters of the models were treated as unknown design variables. Finally, a multi-objective genetic algorithm, the nondominated sorting genetic algorithm II, was used to find optimal solutions. The objective functions of the walking parameters were significantly lower than those of the cycling parameters; thus, the cyclist was more likely to have been walking with the bicycle than riding it. In the most closely matched result found, all observed contact points matched and the injury parameters correlated well with the real injuries sustained by the cyclist. Based on the real accident reconstruction, the present study indicates that MADYMO multi-body simulation software, combined with an optimization method and 3D motion capture, can be used to identify the pre-impact conditions of a cyclist involved in a vehicle-bicycle accident. Copyright © 2018. Published by Elsevier Ltd.

  15. Performance comparison of extracellular spike sorting algorithms for single-channel recordings.

    PubMed

    Wild, Jiri; Prekopcsak, Zoltan; Sieger, Tomas; Novak, Daniel; Jech, Robert

    2012-01-30

    Proper classification of action potentials from extracellular recordings is essential for making an accurate study of neuronal behavior. Many spike sorting algorithms have been presented in the technical literature. However, no comparative analysis has hitherto been performed. In our study, three widely-used publicly-available spike sorting algorithms (WaveClus, KlustaKwik, OSort) were compared with regard to their parameter settings. The algorithms were evaluated using 112 artificial signals (publicly available online) with 2-9 different neurons and varying noise levels between 0.00 and 0.60. An optimization technique based on Adjusted Mutual Information was employed to find near-optimal parameter settings for a given artificial signal and algorithm. All three algorithms performed significantly better (p<0.01) with optimized parameters than with the default ones. WaveClus was the most accurate spike sorting algorithm, receiving the best evaluation score for 60% of all signals. OSort operated at almost five times the speed of the other algorithms. In terms of accuracy, OSort performed significantly less well (p<0.01) than WaveClus for signals with a noise level in the range 0.15-0.30. KlustaKwik achieved similar scores to WaveClus for signals with low noise level 0.00-0.15 and was worse otherwise. In conclusion, none of the three compared algorithms was optimal in general. The accuracy of the algorithms depended on proper choice of the algorithm parameters and also on specific properties of the examined signal. Copyright © 2011 Elsevier B.V. All rights reserved.
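    The tuning idea generalizes readily: sweep a sorter parameter and keep the setting with the highest Adjusted Mutual Information against ground-truth labels. The Python sketch below does this on synthetic "spike feature" blobs, with KMeans standing in for a spike sorter and the cluster count as its single parameter:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs
      from sklearn.metrics import adjusted_mutual_info_score

      # Artificial data with known unit labels (a stand-in for the
      # publicly available artificial signals used in the study).
      X, true_labels = make_blobs(n_samples=600, centers=5,
                                  cluster_std=1.2, random_state=0)

      scores = {}
      for k in range(2, 10):               # sweep the sorter parameter
          pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
          scores[k] = adjusted_mutual_info_score(true_labels, pred)

      best_k = max(scores, key=scores.get)
      print({k: round(s, 3) for k, s in scores.items()})
      print("near-optimal parameter: k =", best_k)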

  16. Active control of combustion instabilities

    NASA Astrophysics Data System (ADS)

    Al-Masoud, Nidal A.

    A theoretical analysis of the active control of combustion thermo-acoustic instabilities is developed in this dissertation. The theoretical combustion model is based on the dynamics of a two-phase flow in a liquid-fueled propulsion system. The formulation is based on a generalized wave equation with pressure as the dependent variable, and accommodates all influences of combustion, mean flow, unsteady motions and control inputs. The governing partial differential equations are converted to an equivalent set of ordinary differential equations using Galerkin's method, by expressing the unsteady pressure and velocity fields as functions of the normal mode shapes of the chamber. This procedure yields a representation of the unsteady flow field as a system of coupled nonlinear oscillators that is used as a basis for controller design. Major research attention is focused on the control of longitudinal oscillations, with both linear and nonlinear processes being considered. Starting with a linear model using point actuators, the optimal locations of actuators and sensors are developed. The approach relies on quantitative measures of the degree of controllability and component cost, which are arrived at by considering the energies of the system's inputs and outputs. The optimality criteria for sensor and actuator locations provide a balance between the importance of the lower-order (controlled) modes and the higher-order (residual) modes. To address the issue of uncertainties in the system's parameters, a controller based on minimax principles is used. The minimax approach corresponds to finding the best controller for the worst parameter deviation; in other words, controller parameters are chosen to minimize, and parameter deviations to maximize, a quadratic performance metric. Using the minimax-based controller, a remarkable improvement in the control system's ability to handle parameter uncertainties is achieved compared to regular control schemes such as LQR and LQG. Since the observed instabilities are harmonic, the concept of "harmonic input" is successfully implemented using a parametric controller to eliminate the thermo-acoustic instability. This control scheme relies on the determination of a phase shift that maximizes the energy dissipation and a controller gain that assures stability and minimizes a pre-specified performance index. The closed-loop control law design is based on finding an optimal phase angle such that the heat release produced by secondary oscillatory fuel injection is out of phase with the mode's pressure oscillations, thus maximizing energy dissipation, and on finding the limits on the controller gain that ensure system stability. The optimal gains are determined using the ITA, ISE and ITAE performance indices. Simulations show successful implementation of the proposed technique.

  17. Optimal design of an alignment-free two-DOF rehabilitation robot for the shoulder complex.

    PubMed

    Galinski, Daniel; Sapin, Julien; Dehez, Bruno

    2013-06-01

    This paper presents the optimal design of an alignment-free exoskeleton for the rehabilitation of the shoulder complex. This robot structure is constituted of two actuated joints and is linked to the arm through passive degrees of freedom (DOFs) to drive the flexion-extension and abduction-adduction movements of the upper arm. The optimal design of this structure is performed through two steps. The first step is a multi-objective optimization process aiming to find the best parameters characterizing the robot and its position relative to the patient. The second step is a comparison process aiming to select the best solution from the optimization results on the basis of several criteria related to practical considerations. The optimal design process leads to a solution outperforming an existing solution on aspects as kinematics or ergonomics while being more simple.

  18. Computer Program for Analysis, Design and Optimization of Propulsion, Dynamics, and Kinematics of Multistage Rockets

    NASA Astrophysics Data System (ADS)

    Lali, Mehdi

    2009-03-01

    A comprehensive computer program is designed in MATLAB to analyze, design and optimize the propulsion, dynamics, thermodynamics, and kinematics of any serial multi-staging rocket for a set of given data. The program is quite user-friendly. It comprises two main sections: "analysis and design" and "optimization." Each section has a GUI (Graphical User Interface) in which the rocket's data are entered by the user and from which the program is run. The first section analyzes the performance of a rocket previously devised by the user. Numerous plots and subplots are provided to display the performance of the rocket. The second section of the program finds the "optimum trajectory" via billions of iterations and computations, which are done through sophisticated algorithms using numerical methods and incremental integrations. Innovative techniques are applied to calculate the optimal parameters for the engine and to design the "optimal pitch program." This computer program is stand-alone in that it calculates almost every design parameter regarding rocket propulsion and dynamics. It is meant to be used for actual launch operations as well as for educational and research purposes.

  19. Identification of groundwater flow parameters using reciprocal data from hydraulic interference tests

    NASA Astrophysics Data System (ADS)

    Marinoni, Marianna; Delay, Frederick; Ackerer, Philippe; Riva, Monica; Guadagnini, Alberto

    2016-08-01

    We investigate the effect of considering reciprocal drawdown curves for the characterization of hydraulic properties of aquifer systems through inverse modeling based on interference well testing. Reciprocity implies that drawdown observed in a well B when pumping takes place from well A should strictly coincide with the drawdown observed in A when pumping in B with the same flow rate as in A. In this context, a critical point related to applications of hydraulic tomography is the assessment of the number of available independent drawdown data and their impact on the solution of the inverse problem. The issue arises when inverse modeling relies upon mathematical formulations of the classical single-continuum approach to flow in porous media grounded on Darcy's law. In these cases, introducing reciprocal drawdown curves in the database of an inverse problem is, to a certain extent, equivalent to duplicating some information. We present a theoretical analysis of the way a least-squares objective function and a Levenberg-Marquardt minimization algorithm are affected by the introduction of reciprocal information in the inverse problem. We also investigate the way these reciprocal data, possibly corrupted by measurement errors, influence model parameter identification in terms of: (a) the convergence of the inverse model, (b) the optimal values of parameter estimates, and (c) the associated estimation uncertainty. Our theoretical findings are exemplified through a suite of computational examples focused on block-heterogeneous systems with increasing levels of complexity. We find that the introduction of noisy reciprocal information in the objective function of the inverse problem has a very limited influence on the optimal parameter estimates. Convergence of the inverse problem improves when adding diverse (nonreciprocal) drawdown series, but does not improve when reciprocal information is added to condition the flow model. The uncertainty on optimal parameter estimates is influenced by the strength of measurement errors and is not significantly diminished or increased by adding noisy reciprocal information.
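    A toy analogue of the central caution here: duplicating noisy observations (standing in for adding reciprocal drawdown curves) barely moves least-squares estimates while shrinking the formal standard errors. The linear model below is invented purely for illustration:

      import numpy as np

      rng = np.random.default_rng(5)

      x = np.linspace(0, 10, 15)
      y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)

      def fit(xd, yd):
          # Ordinary least squares with formal parameter standard errors.
          A = np.column_stack([xd, np.ones_like(xd)])
          coef, res, *_ = np.linalg.lstsq(A, yd, rcond=None)
          sigma2 = res[0] / (len(yd) - 2)
          cov = sigma2 * np.linalg.inv(A.T @ A)
          return coef, np.sqrt(np.diag(cov))

      y_dup = y + rng.normal(0, 0.5, x.size)   # "reciprocal" copy, fresh noise
      for label, (xd, yd) in {
          "original      ": (x, y),
          "with duplicate": (np.concatenate([x, x]), np.concatenate([y, y_dup])),
      }.items():
          coef, se = fit(xd, yd)
          print(label, "a,b =", np.round(coef, 3), "std err =", np.round(se, 3))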

  20. CR softcopy display presets based on optimum visualization of specific findings

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Gould, Robert G.; Webb, W. R.

    1999-07-01

    The purpose of this research is to assess the utility of providing presets for computed radiography (CR) softcopy display based not on window/level settings, but on image processing optimized for the visualization of specific findings, pathologies, etc. Clinical chest images are acquired using an Agfa ADC 70 CR scanner and transferred over the PACS network to an image processing station capable of multiscale contrast equalization. The optimal image processing settings per finding are developed in conjunction with a thoracic radiologist by manipulating the parameters of the multiscale contrast amplification algorithm. Softcopy display of images processed with finding-specific settings is compared with the standard default presentation for fifty cases of each category. The comparison is scored on a five-point scale, with positive one and two denoting that the standard presentation is preferred over the finding-specific presets, negative one and two denoting that the finding-specific preset is preferred over the standard presentation, and zero denoting no difference. Presets have been developed for pneumothorax, and clinical cases are currently being collected in preparation for formal clinical trials. Subjective assessments indicate a preference for the optimized-preset presentation of images over the standard default, particularly by inexperienced radiology residents and referring clinicians.

  1. Sb2Te3 and Its Superlattices: Optimization by Statistical Design.

    PubMed

    Behera, Jitendra K; Zhou, Xilin; Ranjan, Alok; Simpson, Robert E

    2018-05-02

    The objective of this work is to demonstrate the usefulness of fractional factorial design for optimizing the crystal quality of chalcogenide van der Waals (vdW) crystals. We statistically analyze the growth parameters of highly c-axis-oriented Sb2Te3 crystals and Sb2Te3-GeTe phase change vdW heterostructured superlattices. The statistical significance of the growth parameters of temperature, pressure, power, buffer materials, and buffer layer thickness was found by fractional factorial design and response surface analysis. Temperature, pressure, power, and their second-order interactions are the major factors that significantly influence the quality of the crystals. Additionally, using tungsten rather than molybdenum as a buffer layer significantly enhances the crystal quality. Fractional factorial design minimizes the number of experiments that are necessary to find the optimal growth conditions, resulting in an order of magnitude improvement in the crystal quality. We highlight that statistical design of experiment methods, which are more commonly used in product design, should be considered more broadly by those designing and optimizing materials.
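    The design idea can be sketched in a few lines of Python: a half-fraction 2^(4-1) screening design built from the defining relation I = ABCD, with main effects estimated from a mock response. The factor names and the response model below are illustrative, not the paper's actual settings:

      import itertools
      import numpy as np

      factors = ["temperature", "pressure", "power", "buffer_thickness"]
      full = list(itertools.product([-1, +1], repeat=len(factors)))
      half = [run for run in full if np.prod(run) == +1]   # keep 8 of 16 runs

      def mock_crystal_quality(run):
          t, p, w, b = run          # assumed response with a t*p interaction
          return 5.0 + 1.2 * t + 0.8 * p + 0.5 * w + 0.6 * t * p + 0.1 * b

      y = np.array([mock_crystal_quality(r) for r in half])
      X = np.array(half, dtype=float)
      effects = 2.0 * X.T @ y / len(half)   # main effects of a 2-level design
      for name, e in zip(factors, effects):
          print(f"{name:>17s}: {e:+.2f}")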

  2. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    NASA Astrophysics Data System (ADS)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in mathematics, and many problems in engineering and science can be formulated as optimization problems. Such problems may have many local optima; the optimization problem with many local optima, known as a multimodal optimization problem, is that of finding the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), the Artificial Bee Colony (ABC) algorithm, etc. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it has the disadvantage of premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and compares well with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the two to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a point; in the second step, that point is used as the initial point of the BFGS algorithm. The results show that the hybrid method overcomes the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
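    A Python sketch of the proposed two-step hybrid under stated simplifications: a minimal artificial bee colony provides the global exploration, and SciPy's BFGS polishes the returned point. The Rastrigin test function and all colony settings are choices made for this example:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(6)

      def rastrigin(x):                    # classic multimodal test function
          return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

      def abc_search(f, lo, hi, n_food=20, iters=200, limit=20):
          # Minimal ABC: employed + onlooker + scout phases.
          dim = lo.size
          food = rng.uniform(lo, hi, (n_food, dim))
          fit = np.array([f(p) for p in food])
          stale = np.zeros(n_food, dtype=int)
          for _ in range(iters):
              for phase in ("employed", "onlooker"):
                  if phase == "employed":
                      order = np.arange(n_food)
                  else:                    # onlookers favour better sources
                      w = np.exp(-(fit - fit.min()))
                      order = rng.choice(n_food, n_food, p=w / w.sum())
                  for i in order:
                      k = (i + 1 + rng.integers(n_food - 1)) % n_food
                      j = rng.integers(dim)
                      cand = food[i].copy()
                      cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
                      cand = np.clip(cand, lo, hi)
                      fc = f(cand)
                      if fc < fit[i]:
                          food[i], fit[i], stale[i] = cand, fc, 0
                      else:
                          stale[i] += 1
              for i in np.where(stale > limit)[0]:   # scouts restart sources
                  food[i] = rng.uniform(lo, hi)
                  fit[i] = f(food[i])
                  stale[i] = 0
          return food[np.argmin(fit)]

      lo, hi = np.full(2, -5.12), np.full(2, 5.12)
      x_abc = abc_search(rastrigin, lo, hi)            # step 1: global search
      res = minimize(rastrigin, x_abc, method="BFGS")  # step 2: local polish
      print("ABC point:", x_abc, "-> polished:", res.x, "f =", res.fun)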

  3. Optimization of SDS exposure on preservation of ECM characteristics in whole organ decellularization of rat kidneys.

    PubMed

    He, M; Callanan, A; Lagaras, K; Steele, J A M; Stevens, M M

    2017-08-01

    Renal transplantation is well established as the optimal form of renal replacement therapy but is restricted by the limited pool of organs available for transplantation. The whole organ decellularisation approach is leading the way for a regenerative medicine solution towards bioengineered organ replacements. However, systematic preoptimization of both decellularization and recellularization parameters is essential prior to any potential clinical application and should be the next stage in the evolution of whole organ decellularization as a potential strategy for bioengineered organ replacements. Here we have systematically assessed two fundamental parameters (concentration and duration of perfusion) with regard to the effects of differing exposure to the most commonly used single decellularizing agent (sodium dodecyl sulphate, SDS) in the perfusion decellularization process for whole rat kidney ECM bioscaffolds; the findings show improved preservation of both structural and functional components of the whole kidney ECM bioscaffold. Whole kidney bioscaffolds based on our enhanced protocol were successfully recellularized with rat primary renal cells and mesenchymal stromal cells. These findings should be widely applicable to decellularized whole organ bioscaffolds and their optimization in the development of regenerated organ replacements for transplantation. © 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 105B: 1352-1360, 2017.

  4. Quantum metrology of spatial deformation using arrays of classical and quantum light emitters

    NASA Astrophysics Data System (ADS)

    Sidhu, Jasminder S.; Kok, Pieter

    2017-06-01

    We introduce spatial deformations to an array of light sources and study how the estimation precision of the interspacing distance d changes with the sources of light used. The quantum Fisher information (QFI) is used as the figure of merit in this work to quantify the amount of information we have on the estimation parameter. We derive the generator of translations Ĝ in d due to an arbitrary homogeneous deformation applied to the array. We show how the variance of the generator can be used to easily consider how different deformations and light sources affect the estimation precision. The single-parameter estimation problem is applied to the array, and we report the optimal state that maximizes the QFI for d. Contrary to what may have been expected, classical states with higher average mode occupancies perform better in estimating d than single photon emitters (SPEs). The optimal entangled state is constructed from the eigenvectors of the generator and found to outperform all these states. We also find the existence of multiple optimal estimators for the measurement of d. Our results find applications in evaluating stresses and strains, fracture prevention in materials expressing great sensitivities to deformations, and selecting frequency distinguished quantum sources from an array of reference sources.

  5. Optimization of the Number and Location of Tsunami Stations in a Tsunami Warning System

    NASA Astrophysics Data System (ADS)

    An, C.; Liu, P. L. F.; Pritchard, M. E.

    2014-12-01

    Optimizing the number and location of tsunami stations in designing a tsunami warning system is an important and practical problem. It is always desirable to maximize the capability of the data obtained from the stations to constrain the earthquake source parameters, and at the same time to minimize the number of stations. During the 2011 Tohoku tsunami event, 28 coastal gauges and DART buoys in the near field recorded tsunami waves, providing an opportunity to assess the effectiveness of those stations in identifying the earthquake source parameters. Assuming a single-plane fault geometry, inversions of tsunami data from combinations of various numbers (1-28) of stations and locations are conducted, and their effectiveness is evaluated according to the residues of the inverse method. Results show that the optimized locations of stations depend on the number of stations used. If the stations are optimally located, 2-4 stations are sufficient to constrain the source parameters. Regarding the optimized location, stations must be spread uniformly in all directions, which is not surprising. It is also found that stations within the source region generally give a worse constraint on the earthquake source than stations farther from the source, due to the exaggeration of model error in matching large-amplitude waves at near-source stations. Quantitative discussions of these findings will be given in the presentation. Applying a similar analysis to the Manila Trench based on artificial scenarios of earthquakes and tsunamis, the optimal locations of tsunami stations are obtained, which provides guidance for deploying a tsunami warning system in this region.

  6. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network.

    PubMed

    Goto, Hayato

    2016-02-22

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.

  7. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network

    NASA Astrophysics Data System (ADS)

    Goto, Hayato

    2016-02-01

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.

  8. Optimal Decision Making in a Class of Uncertain Systems Based on Uncertain Variables

    NASA Astrophysics Data System (ADS)

    Bubnicki, Z.

    2006-06-01

    The paper is concerned with a class of uncertain systems described by relational knowledge representations with unknown parameters which are assumed to be values of uncertain variables characterized by a user in the form of certainty distributions. The first part presents the basic optimization problem consisting in finding the decision maximizing the certainty index that the requirement given by a user is satisfied. The main part is devoted to the description of the optimization problem with the given certainty threshold. It is shown how the approach presented in the paper may be applied to some problems for anticipatory systems.

  9. PSO Algorithm for an Optimal Power Controller in a Microgrid

    NASA Astrophysics Data System (ADS)

    Al-Saedi, W.; Lachowicz, S.; Habibi, D.; Bass, O.

    2017-07-01

    This paper presents the Particle Swarm Optimization (PSO) algorithm to improve the quality of the power supply in a microgrid. The algorithm is proposed as a real-time self-tuning method used in a power controller for an inverter-based Distributed Generation (DG) unit. In such a system, the voltage and frequency are the main control objectives, particularly when the microgrid is islanded or during load changes. In this work, the PSO algorithm is implemented to find the optimal controller parameters that satisfy the control objectives. The results show the high performance of the applied PSO algorithm in regulating the microgrid voltage and frequency.
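
    As a rough illustration of how a PSO loop tunes controller gains, the sketch below optimizes two hypothetical gains against a synthetic quadratic cost; the cost function, search bounds, and PSO constants are illustrative assumptions, not values from the paper.

        # Minimal PSO sketch for tuning two controller gains. The cost is a
        # made-up quadratic surrogate; a real run would simulate the inverter
        # and integrate the voltage/frequency error.
        import random

        def cost(kp, ki):
            return (kp - 1.2)**2 + (ki - 0.4)**2  # hypothetical optimum (1.2, 0.4)

        n, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
        pos = [[random.uniform(0, 5), random.uniform(0, 5)] for _ in range(n)]
        vel = [[0.0, 0.0] for _ in range(n)]
        pbest = [p[:] for p in pos]
        gbest = min(pbest, key=lambda p: cost(*p))

        for _ in range(iters):
            for i in range(n):
                for d in range(2):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if cost(*pos[i]) < cost(*pbest[i]):
                    pbest[i] = pos[i][:]
            gbest = min(pbest, key=lambda p: cost(*p))

        print("tuned gains:", gbest)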

  10. Effect of Process Parameters on Catalytic Incineration of Solvent Emissions

    PubMed Central

    Ojala, Satu; Lassi, Ulla; Perämäki, Paavo; Keiski, Riitta L.

    2008-01-01

    Catalytic oxidation is a feasible and affordable technology for solvent emission abatement. However, finding optimal operating conditions is important, since they are strongly dependent on the application area of VOC incineration. This paper presents the results of laboratory experiments concerning the four most central parameters, that is, the effects of concentration, gas hourly space velocity (GHSV), temperature, and moisture on the oxidation of n-butyl acetate. Both fresh and industrially aged commercial Pt/Al2O3 catalysts were tested to determine the optimal process conditions and the significance order and level of the selected parameters. The effects of these parameters were evaluated by computer-aided statistical experimental design. According to the results, GHSV was the most dominant parameter in the oxidation of n-butyl acetate. Decreasing GHSV and increasing temperature increased the conversion of n-butyl acetate. The interaction effect of GHSV and temperature was more significant than the effect of concentration. Both of these affected the reaction by increasing the conversion of n-butyl acetate. Moisture had only a minor decreasing effect on the conversion, but it also slightly decreased the formation of by-products. Ageing did not change the significance order of the above-mentioned parameters; however, the effects of individual parameters increased slightly as a function of ageing. PMID:18584032

  11. Thermal and athermal three-dimensional swarms of self-propelled particles

    NASA Astrophysics Data System (ADS)

    Nguyen, Nguyen H. P.; Jankowski, Eric; Glotzer, Sharon C.

    2012-07-01

    Swarms of self-propelled particles exhibit complex behavior that can arise from simple models, with large changes in swarm behavior resulting from small changes in model parameters. We investigate the steady-state swarms formed by self-propelled Morse particles in three dimensions using molecular dynamics simulations optimized for graphics processing units. We find that a variety of swarms of different overall shape assemble spontaneously and that, for certain Morse potential parameters, at most two competing structures are observed. We report a rich “phase diagram” of athermal swarm structures observed across a broad range of interaction parameters. Unlike the structures formed in equilibrium self-assembly, we find that the probability of forming a self-propelled swarm can be biased by the choice of initial conditions. We investigate how thermal noise influences swarm formation and demonstrate ways it can be exploited to reconfigure one swarm into another. Our findings validate and extend previous observations of self-propelled Morse swarms and highlight open questions for predictive theories of nonequilibrium self-assembly.

  12. A study on the performance comparison of metaheuristic algorithms on the learning of neural networks

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2017-08-01

    The learning or training process of neural networks entails the task of finding the optimal set of parameters, which includes translation vectors, the dilation parameter, synaptic weights, and bias terms. Apart from the traditional gradient descent-based methods, metaheuristic methods can also be used for this learning purpose. Since the inception of the genetic algorithm half a century ago, the last decade has witnessed the explosion of a variety of novel metaheuristic algorithms, such as the harmony search algorithm, bat algorithm, and whale optimization algorithm. Despite the proof of the no free lunch theorem in the discipline of optimization, a survey of the machine learning literature gives contrasting results. Some researchers report that certain metaheuristic algorithms are superior to others, whereas others argue that different metaheuristic algorithms give comparable performance. As such, this paper aims to investigate whether a certain metaheuristic algorithm will outperform the other algorithms. In this work, three metaheuristic algorithms, namely genetic algorithms, particle swarm optimization, and the harmony search algorithm, are considered. The algorithms are incorporated into the learning of neural networks, and their classification results on the benchmark UCI machine learning data sets are compared. It is found that all three metaheuristic algorithms give comparable performance, as captured in the average overall classification accuracy. The results corroborate the findings reported in the works of previous researchers. Several recommendations are given, which include the need for statistical analysis to verify the results and further theoretical work to support the obtained empirical results.

  13. Classical Optimal Control for Energy Minimization Based On Diffeomorphic Modulation under Observable-Response-Preserving Homotopy.

    PubMed

    Soley, Micheline B; Markmann, Andreas; Batista, Victor S

    2018-06-12

    We introduce the so-called "Classical Optimal Control Optimization" (COCO) method for global energy minimization based on the implementation of the diffeomorphic modulation under observable-response-preserving homotopy (DMORPH) gradient algorithm. A probe particle with time-dependent mass m(t;β) and dipole μ(r,t;β) is evolved classically on the potential energy surface V(r) coupled to an electric field E(t;β), as described by the time-dependent density of states represented on a grid, or otherwise as a linear combination of Gaussians generated by the k-means clustering algorithm. Control parameters β defining m(t;β), μ(r,t;β), and E(t;β) are optimized by following the gradients of the energy with respect to β, adapting them to steer the particle toward the global minimum energy configuration. We find that the resulting COCO algorithm is capable of resolving near-degenerate states separated by large energy barriers and successfully locates the global minima of golf potentials on flat and rugged surfaces, previously explored for testing quantum annealing methodologies and the quantum optimal control optimization (QuOCO) method. Preliminary results show successful energy minimization of multidimensional Lennard-Jones clusters. Beyond the analysis of energy minimization in the specific model systems investigated, we anticipate COCO should be valuable for solving minimization problems in general, including optimization of parameters in applications to machine learning and molecular structure determination.

  14. Control and optimization system

    DOEpatents

    Xinsheng, Lou

    2013-02-12

    A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  15. Fuzzy Sets in Dynamic Adaptation of Parameters of a Bee Colony Optimization for Controlling the Trajectory of an Autonomous Mobile Robot

    PubMed Central

    Amador-Angulo, Leticia; Mendoza, Olivia; Castro, Juan R.; Rodríguez-Díaz, Antonio; Melin, Patricia; Castillo, Oscar

    2016-01-01

    A hybrid approach composed of different types of fuzzy systems, such as the Type-1 Fuzzy Logic System (T1FLS), Interval Type-2 Fuzzy Logic System (IT2FLS) and Generalized Type-2 Fuzzy Logic System (GT2FLS), for the dynamic adaptation of the alpha and beta parameters of a Bee Colony Optimization (BCO) algorithm is presented. The objective of the work is to focus on the BCO technique to find the optimal distribution of the membership functions in the design of fuzzy controllers. We use BCO specifically for tuning membership functions of the fuzzy controller for trajectory stability in an autonomous mobile robot. We add two types of perturbations in the model for the Generalized Type-2 Fuzzy Logic System to better analyze its behavior under uncertainty, and this shows better results when compared to the original BCO. We implemented various performance indices: ITAE, IAE, ISE, ITSE, RMSE and MSE, to measure the performance of the controller. The experimental results show better performance using GT2FLS than IT2FLS and T1FLS in the dynamic adaptation of the parameters for the BCO algorithm. PMID:27618062

  16. Material and shape optimization for multi-layered vocal fold models using transient loadings.

    PubMed

    Schmidt, Bastian; Leugering, Günter; Stingl, Michael; Hüttner, Björn; Agaimy, Abbas; Döllinger, Michael

    2013-08-01

    Commonly applied models to study vocal fold vibrations in combination with air flow distributions are self-sustained physical models of the larynx consisting of artificial silicone vocal folds. Choosing appropriate mechanical parameters and layer geometries for these vocal fold models while considering simplifications due to manufacturing restrictions is difficult but crucial for achieving realistic behavior. In earlier work by Schmidt et al. [J. Acoust. Soc. Am. 129, 2168-2180 (2011)], the authors presented an approach in which material parameters of a static numerical vocal fold model were optimized to achieve agreement of the displacement field with data retrieved from hemilarynx experiments. This method is now generalized to a fully transient setting. Moreover, in addition to the material parameters, the extended approach is capable of finding optimized layer geometries. Depending on the chosen material restrictions, significant modifications of the reference geometry are predicted. The additional flexibility in the design space leads to a significantly more realistic deformation behavior. At the same time, the predicted biomechanical and geometrical results are still feasible for manufacturing physical vocal fold models consisting of several silicone layers. As a consequence, the proposed combined experimental and numerical method is suited to guide the construction of physical vocal fold models.

  17. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade that initializes or fixes parameter values in a later optimization step using simpler models from an earlier optimization step further improved run time, fit, accuracy and precision compared to a single-step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
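
    A minimal sketch of the cascaded initialization idea with the gradient-free Powell method, assuming SciPy and a synthetic bi-exponential decay as a stand-in for a dMRI microstructure model; the signal model, b-values, and noise level are invented for illustration.

        # Two-step Powell fit: a simple mono-exponential model is fitted first,
        # and its estimate initializes the more complex bi-exponential fit.
        import numpy as np
        from scipy.optimize import minimize

        b = np.linspace(0, 3000, 30)                      # synthetic b-values
        truth = 0.6 * np.exp(-b * 1e-3) + 0.4 * np.exp(-b * 3e-4)
        data = truth + np.random.normal(0, 0.01, b.size)  # noisy measurements

        # Step 1: simple model with a single diffusivity.
        simple = minimize(lambda p: np.sum((np.exp(-b * p[0]) - data)**2),
                          x0=[1e-3], method="Powell")
        d0 = np.atleast_1d(simple.x)[0]

        # Step 2: complex model initialized from the simple fit, not at random.
        def sse(p):
            f, d1, d2 = p
            model = f * np.exp(-b * d1) + (1 - f) * np.exp(-b * d2)
            return np.sum((model - data)**2)

        fit = minimize(sse, x0=[0.5, d0, d0 / 3], method="Powell")
        print("fraction and diffusivities:", fit.x)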

  18. Optimization of plasma amplifiers

    DOE PAGES

    Sadler, James D.; Trines, Raoul M. G. M.; Tabak, Max; ...

    2017-05-24

    Here, plasma amplifiers offer a route to side-step limitations on chirped pulse amplification and generate laser pulses at the power frontier. They compress long pulses by transferring energy to a shorter pulse via the Raman or Brillouin instabilities. We present an extensive kinetic numerical study of the three-dimensional parameter space for the Raman case. Further particle-in-cell simulations find the optimal seed pulse parameters for experimentally relevant constraints. The high-efficiency self-similar behavior is observed only for seeds shorter than the linear Raman growth time. A test case similar to an upcoming experiment at the Laboratory for Laser Energetics is found to maintain good transverse coherence and high-energy efficiency. Effective compression of a 10 kJ, nanosecond-long driver pulse is also demonstrated in a 15-cm-long amplifier.

  19. Optimization of plasma amplifiers

    NASA Astrophysics Data System (ADS)

    Sadler, James D.; Trines, Raoul M. G. M.; Tabak, Max; Haberberger, Dan; Froula, Dustin H.; Davies, Andrew S.; Bucht, Sara; Silva, Luís O.; Alves, E. Paulo; Fiúza, Frederico; Ceurvorst, Luke; Ratan, Naren; Kasim, Muhammad F.; Bingham, Robert; Norreys, Peter A.

    2017-05-01

    Plasma amplifiers offer a route to side-step limitations on chirped pulse amplification and generate laser pulses at the power frontier. They compress long pulses by transferring energy to a shorter pulse via the Raman or Brillouin instabilities. We present an extensive kinetic numerical study of the three-dimensional parameter space for the Raman case. Further particle-in-cell simulations find the optimal seed pulse parameters for experimentally relevant constraints. The high-efficiency self-similar behavior is observed only for seeds shorter than the linear Raman growth time. A test case similar to an upcoming experiment at the Laboratory for Laser Energetics is found to maintain good transverse coherence and high-energy efficiency. Effective compression of a 10 kJ, nanosecond-long driver pulse is also demonstrated in a 15-cm-long amplifier.

  20. Optimization of the Design of Pre-Signal System Using Improved Cellular Automaton

    PubMed Central

    Li, Yan; Li, Ke; Tao, Siran; Chen, Kuanmin

    2014-01-01

    The pre-signal system can improve the efficiency of an intersection approach under rational design. One of the main obstacles in optimizing the design of a pre-signal system is that driving behaviors in the sorting area cannot be well evaluated. The NaSch model was modified by considering the slowdown probability, turning-deceleration rules, and lane-changing rules. It was calibrated with field-observed data to explore the interactions among design parameters. The simulation results of the proposed model indicate that the length of the sorting area, traffic demand, signal timing, and lane allocation are the most important influencing factors. Recommendations for these design parameters are demonstrated. The findings of this paper can serve as a foundation for the design of pre-signal systems and show promising improvement in traffic mobility. PMID:25435871

  1. Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing

    2017-12-14

    Positive semi-definiteness is a critical property in kernel methods for the Support Vector Machine (SVM), by which efficient solutions can be guaranteed through convex quadratic programming. However, many similarity functions in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix on indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance compared to the corresponding indefinite kernel methods on a number of real-world data sets. Under Bregman matrix divergence theory, we can find a suggested optimal λ for the projection method using unconstrained optimization in kernel learning. In this paper we focus on optimal λ determination, in pursuit of a precise determination method within an unconstrained optimization framework. We developed a perturbed von-Neumann divergence to measure kernel relationships. We compared optimal λ determination with the Logdet divergence and the perturbed von-Neumann divergence, aiming at finding a better λ for the projection method. Results on a number of real-world data sets show that the projection method with the optimal λ from the Logdet divergence demonstrates near-optimal performance, and the perturbed von-Neumann divergence can help determine a relatively better projection method. The projection method is easy to use for dealing with indefinite kernels, and the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way in kernel SVMs for varied objectives.
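
    To make the spectrum-method family concrete, the sketch below shows common eigenvalue repairs for an indefinite similarity matrix; the lam-scaled variant is only a loose illustration of a tunable spectrum transform, not the paper's exact projection construction.

        # Spectrum repairs for an indefinite kernel: "flip" takes absolute
        # eigenvalues, "clip" (denoise) zeroes the negative ones, and the
        # parametrized variant scales the negative part by lam.
        import numpy as np

        def fix_spectrum(K, mode="clip", lam=0.0):
            w, V = np.linalg.eigh((K + K.T) / 2)   # symmetrize, then decompose
            if mode == "flip":
                w = np.abs(w)
            elif mode == "clip":
                w = np.maximum(w, 0.0)
            else:                                  # lam-scaled negative part
                w = np.where(w < 0, lam * w, w)
            return (V * w) @ V.T                   # V diag(w) V^T

        K = np.array([[1.0, 0.9], [0.9, -0.2]])    # indefinite toy similarity
        print(np.linalg.eigvalsh(fix_spectrum(K, "clip")))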

  2. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    NASA Astrophysics Data System (ADS)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for the desired performance function. Nonetheless, the industry still uses traditional techniques to obtain those values. Lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, a simple and easy-to-implement Optimal Cutting Parameters Selection System is introduced to help the manufacturer easily understand and determine the best optimal parameters for their turning operation. This new system consists of two stages: modelling and optimization. In the modelling of input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, the Particle Swarm Optimization is used again to get the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique. This novel optimization technique can give accurate results besides being fast.
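
    The Extreme Learning Machine step can be sketched in a few lines: hidden-layer weights are drawn at random and only the output weights are solved, in closed form, by least squares. The cutting-parameter data below are synthetic placeholders rather than measured turning results.

        # Minimal Extreme Learning Machine sketch: random hidden layer,
        # analytic output weights via least squares.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, (50, 3))   # speed, feed, depth (normalized)
        y = X @ np.array([1.0, 2.0, 0.5]) + 0.1 * rng.standard_normal(50)

        H = np.tanh(X @ rng.standard_normal((3, 20)) + rng.standard_normal(20))
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form solve

        y_hat = H @ beta
        print("train RMSE:", np.sqrt(np.mean((y_hat - y)**2)))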

  3. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in the time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on the admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short-period dynamics of the F-16 is used for illustration.
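
    The estimation rule (choose parameters that maximize the smallest compliance margin) is a max-min problem; the sketch below illustrates it with two toy margin functions standing in for the validation requirements, not the F-16 example.

        # Max-min estimation sketch: maximize the worst-case margin by
        # minimizing its negative with a derivative-free method.
        import numpy as np
        from scipy.optimize import minimize

        def margins(p):
            a, b = p
            return np.array([1.0 - abs(a - 2.0),    # toy requirement margins
                             1.0 - abs(b + 1.0)])

        res = minimize(lambda p: -np.min(margins(p)), x0=[0.0, 0.0],
                       method="Nelder-Mead")
        print("estimate:", res.x, "worst margin:", np.min(margins(res.x)))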

  4. GVIPS Models and Software

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Gendy, Atef; Saleeb, Atef F.; Mark, John; Wilt, Thomas E.

    2007-01-01

    Two reports discuss, respectively, (1) the generalized viscoplasticity with potential structure (GVIPS) class of mathematical models and (2) the Constitutive Material Parameter Estimator (COMPARE) computer program. GVIPS models are constructed within a thermodynamics- and potential-based theoretical framework, wherein one uses internal state variables and derives constitutive equations for both the reversible (elastic) and the irreversible (viscoplastic) behaviors of materials. Because of the underlying potential structure, GVIPS models not only capture a variety of material behaviors but also are very computationally efficient. COMPARE comprises (1) an analysis core and (2) a C++-language subprogram that implements a Windows-based graphical user interface (GUI) for controlling the core. The GUI relieves the user of the sometimes tedious task of preparing data for the analysis core, freeing the user to concentrate on the task of fitting experimental data and ultimately obtaining a set of material parameters. The analysis core consists of three modules: one for GVIPS material models, an analysis module containing a specialized finite-element solution algorithm, and an optimization module. COMPARE solves the problem of finding GVIPS material parameters in the manner of a design-optimization problem in which the parameters are the design variables.

  5. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning

    PubMed Central

    Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure-and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot’s configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy. PMID:26951790

  6. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning.

    PubMed

    Baykal, Cenk; Torres, Luis G; Alterovitz, Ron

    2015-09-28

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot's behavior and reachable workspace. Optimizing a robot's design by appropriately selecting tube parameters can improve the robot's effectiveness on a procedure-and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot's configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy.

  7. User's manual for the BNW-I optimization code for dry-cooled power plants. [AMCIRC]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, D.J.; Daniel, D.J.; De Mier, W.V.

    1977-01-01

    This appendix provides a listing, called Program AMCIRC, of the BNW-1 optimization code for determining, for a particular size power plant, the optimum dry cooling tower design using ammonia flow in the heat exchanger tubes. The optimum design is determined by repeating the design of the cooling system over a range of design conditions in order to find the cooling system with the smallest incremental cost. This is accomplished by varying five parameters of the plant and cooling system over ranges of values. These parameters are varied systematically according to techniques that perform pattern and gradient searches. The dry cooling system optimized by program AMCIRC is composed of a condenser/reboiler (condensation of steam and boiling of ammonia), piping system (transports ammonia vapor out and ammonia liquid from the dry cooling towers), and circular tower system (vertical one-pass heat exchangers situated in circular configurations with cocurrent ammonia flow in the tubes of the heat exchanger). (LCL)

  8. Nuclear Electric Vehicle Optimization Toolset (NEVOT)

    NASA Technical Reports Server (NTRS)

    Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Kos, Larry D.; Qualls, A. Lou; Greene, Sherrell

    2004-01-01

    The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major nuclear electric propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a genetic algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be considered through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.
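
    A toy sketch of the fitness-weighted survival mechanism described above, assuming tournament selection and Gaussian mutation; the scalar design variable and fitness function are placeholders, not the NEVOT subsystem models.

        # Minimal GA loop: fitter individuals are more likely to survive
        # into the next generation; mutation keeps exploring the space.
        import random

        def fitness(x):
            return -(x - 3.0)**2          # toy objective, maximized at x = 3

        pop = [random.uniform(-10, 10) for _ in range(30)]
        for _ in range(100):
            # tournament selection (fitness-biased survival) plus mutation
            pop = [max(random.sample(pop, 2), key=fitness)
                   + random.gauss(0, 0.3) for _ in range(len(pop))]

        print("best design variable:", max(pop, key=fitness))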

  9. Development of cubic Bezier curve and curve-plane intersection method for parametric submarine hull form design to optimize hull resistance using CFD

    NASA Astrophysics Data System (ADS)

    Chrismianto, Deddy; Zakki, Ahmad Fauzan; Arswendo, Berlian; Kim, Dong Joon

    2015-12-01

    Optimization analysis and computational fluid dynamics (CFD) have been applied simultaneously, in which a parametric model plays an important role in finding the optimal solution. However, it is difficult to create a parametric model for a complex shape with irregular curves, such as a submarine hull form. In this study, the cubic Bezier curve and curve-plane intersection method are used to generate a solid model of a parametric submarine hull form taking three input parameters into account: nose radius, tail radius, and length-height hull ratio (L/H). Application program interface (API) scripting is also used to write code in the ANSYS design modeler. The results show that the submarine shape can be generated with some variation of the input parameters. An example is given that shows how the proposed method can be applied successfully to a hull resistance optimization case. The parametric design of the middle submarine type was chosen to be modified. First, the original submarine model was analyzed, in advance, using CFD. Then, using the response surface graph, some candidate optimal designs with a minimum hull resistance coefficient were obtained. Further, the optimization method in goal-driven optimization (GDO) was implemented to find the submarine hull form with the minimum hull resistance coefficient (Ct). The minimum Ct was obtained. The calculated difference in Ct values between the initial submarine and the optimum submarine is around 0.26%, with the Ct of the initial submarine and the optimum submarine being 0.00150826 and 0.00150429, respectively. The results show that the optimum submarine hull form has a higher nose radius (rn) and higher L/H than those of the initial submarine shape, while the radius of the tail (rt) is smaller than that of the initial shape.
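
    For reference, a cubic Bezier curve is evaluated as B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3; the sketch below implements this building block with illustrative control points (the nose-radius coupling shown is an assumption, not the paper's parametrization).

        # Cubic Bezier evaluation, the building block of the parametric
        # hull profile; control points here are illustrative only.
        import numpy as np

        def cubic_bezier(p0, p1, p2, p3, t):
            t = np.asarray(t)[:, None]
            return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
                    + 3 * (1 - t) * t**2 * p2 + t**3 * p3)

        # Example: a nose profile loosely controlled by a nose radius rn.
        rn = 0.8
        pts = cubic_bezier(np.array([0.0, 0.0]), np.array([0.0, rn]),
                           np.array([0.5, 1.0]), np.array([1.0, 1.0]),
                           np.linspace(0, 1, 11))
        print(pts)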

  10. How do ensembles occupy space?

    NASA Astrophysics Data System (ADS)

    Daffertshofer, A.

    2008-04-01

    To find an answer to the title question, an attractiveness function between agents and locations is introduced, yielding a phenomenological but generic model for the search for optimal distributions of agents over space. Agents can be seen as, e.g., members of biological populations like colonies of bacteria, swarms, and so on. The global attractiveness between agents and locations is maximized, causing (self-propelled) `motion' of agents and, eventually, distinct distributions of agents over space. By the same token, spontaneous changes or `decisions' are realized via competitions between agents as well as between locations. Hence, the model's solutions can be considered a sequence of decisions of agents during their search for a proper location. Depending on initial conditions, both optimal and suboptimal configurations can be reached. For the latter, early decision-making is important for avoiding possible conflicts: if the proper moment is missed, then only a few agents can find an optimal solution. Indeed, there is a delicate interplay between the values of the attractiveness function and the constraints, as can be expressed by distinct terms of a potential function containing different Lagrange parameters. The model should be viewed as a top-down approach, as it describes the dynamics of order parameters, i.e. macroscopic variables that reflect affiliations between agents and locations. The dynamics, however, is modified via so-called cost functions that are interpreted in terms of affinity levels. This interpretation can be seen as an original step towards an understanding of the dynamics at the underlying microscopic level. When focusing on the agent, one may say that the dynamics of an order parameter shows the evolution of an agent's intrinsic `map' for solving the problem of space occupation. Importantly, the dynamics does not necessarily distinguish between evolving (or moving) agents and evolving (or moving) locations, though agents are more likely to be actors than the locations. Put differently, an order parameter describes an internal map which is linked to the expectation of an agent to find a certain location. Owing to the dynamical representation, we can therefore follow the change of these maps over time, leading from uncertainty to certainty.

  11. Sparse gammatone signal model optimized for English speech does not match the human auditory filters.

    PubMed

    Strahl, Stefan; Mertins, Alfred

    2008-07-18

    Evidence that neurosensory systems use sparse signal representations, as well as improved performance of signal processing algorithms using sparse signal models, has raised interest in sparse signal coding in recent years. For natural audio signals like speech and environmental sounds, gammatone atoms have been derived as expansion functions that generate a nearly optimal sparse signal model (Smith, E., Lewicki, M., 2006. Efficient auditory coding. Nature 439, 978-982). Furthermore, gammatone functions are established models for the human auditory filters. Thus far, a practical application of a sparse gammatone signal model has been prevented by the fact that deriving the sparsest representation is, in general, computationally intractable. In this paper, we applied an accelerated version of the matching pursuit algorithm for gammatone dictionaries, allowing real-time and large-data-set applications. We show that a sparse signal model in general has advantages in audio coding and that a sparse gammatone signal model encodes speech more efficiently, in terms of sparseness, than a sparse modified discrete cosine transform (MDCT) signal model. We also show that the optimal gammatone parameters derived for English speech do not match the human auditory filters, suggesting that signal processing applications should derive the parameters individually for each applied signal class instead of using psychometrically derived parameters. For brain research, it means that care should be taken when directly transferring findings of optimality from technical to biological systems.

  12. Optimization of tribological performance of SiC embedded composite coating via Taguchi analysis approach

    NASA Astrophysics Data System (ADS)

    Maleque, M. A.; Bello, K. A.; Adebisi, A. A.; Akma, N.

    2017-03-01

    The tungsten inert gas (TIG) torch is one of the most recently used heat sources for surface modification of engineering parts, giving results similar to the more expensive high-power laser technique. In this study, a ceramic-based embedded composite coating has been produced from precoated silicon carbide (SiC) powders on an AISI 4340 low alloy steel substrate using the TIG welding torch process. A design of experiments based on the Taguchi approach has been adopted to optimize the TIG cladding process parameters. The L9 orthogonal array and the signal-to-noise ratio were used to study the effect of TIG welding parameters such as arc current, travelling speed, welding voltage and argon flow rate on the tribological response behaviour (wear rate, surface roughness and wear track width). The objective of the study was to identify the optimal design parameters that significantly minimize each of the surface quality characteristics. The analysis of the experimental results revealed that the argon flow rate was the most influential factor contributing to the minimum wear and surface roughness of the modified coating surface. On the other hand, the key factor in reducing the wear scar is the welding voltage. Finally, the convenient and economical Taguchi approach used in this study was effective in finding the optimal factor settings for obtaining minimum wear rate, wear scar and surface roughness responses on TIG-coated surfaces.
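
    For the smaller-the-better responses used here (wear rate, roughness, track width), the Taguchi signal-to-noise ratio is SN = -10 log10(mean(y^2)); the sketch below computes it for one run of an L9 array with made-up replicate values.

        # Smaller-the-better Taguchi signal-to-noise ratio.
        import numpy as np

        def sn_smaller_better(y):
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(y**2))

        # Replicated wear-rate measurements for one L9 run (illustrative):
        print(sn_smaller_better([0.12, 0.15, 0.11]))  # higher SN is better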

  13. Model-Based Collaborative Filtering Analysis of Student Response Data: Machine-Learning Item Response Theory

    ERIC Educational Resources Information Center

    Bergner, Yoav; Droschler, Stefan; Kortemeyer, Gerd; Rayyan, Saif; Seaton, Daniel; Pritchard, David E.

    2012-01-01

    We apply collaborative filtering (CF) to dichotomously scored student response data (right, wrong, or no interaction), finding optimal parameters for each student and item based on cross-validated prediction accuracy. The approach is naturally suited to comparing different models, both unidimensional and multidimensional in ability, including a…

  14. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem. In the first stage, the infiltration parameters are obtained, and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept, and has potential for field application.
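
    The core conversion can be sketched as follows: the constrained least-squares problem becomes an unconstrained objective with a penalty term on constraint violation. The toy matrix, data, and penalty weight below are illustrative, and the paper's GA solver and reduction-factor schedule are not reproduced.

        # Penalty-parameter conversion: nonnegativity of the UH ordinates is
        # enforced by penalizing negative values in the objective.
        import numpy as np
        from scipy.optimize import minimize

        A = np.tril(np.ones((6, 4)))             # toy convolution-like matrix
        q = A @ np.array([0.1, 0.5, 0.3, 0.1])   # synthetic "observed" runoff

        def penalized(u, mu=1e4):
            residual = np.sum((A @ u - q)**2)
            violation = np.sum(np.minimum(u, 0.0)**2)  # negative ordinates
            return residual + mu * violation

        u0 = np.full(4, 0.25)
        res = minimize(penalized, u0, method="Nelder-Mead")
        print("UH ordinates:", np.round(res.x, 3))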

  15. Factors influencing preclinical in vivo evaluation of mumps vaccine strain immunogenicity

    PubMed Central

    Halassy, B; Kurtović, T; Brgles, M; Lang Balija, M; Forčić, D

    2015-01-01

    Immunogenicity testing in animals is a necessary preclinical assay for demonstration of vaccine efficacy, the results of which are often the basis for the decision whether to proceed with or withdraw the further development of a novel vaccine candidate. However, in vivo assays are rarely, if at all, optimized and validated. Here we clearly demonstrate the importance of in vivo assay (mumps virus immunogenicity testing in guinea pigs) optimization for gaining reliable results, and the suitability of fractional factorial design of experiments (DoE) for such a purpose. By the use of a resolution IV DoE (a 2_IV^(4-1) design), we clearly revealed that the parameters significantly increasing assay sensitivity were the interval between animal immunizations, followed by the body weight of the experimental animals. The quantity (0 versus 2%) of the stabilizer (fetal bovine serum, FBS) in the sample was shown to be a non-influencing parameter in the DoE setup. However, a separate experiment investigating only the FBS influence, performed with the other parameters optimally set, showed that FBS also influences the results of the immunogenicity assay. This finding indicated that (a) factors with a strong influence on the measured outcome can hide the effects of parameters with modest/low influence and (b) the matrix of mumps virus samples to be compared for immunogenicity must be identical for reliable virus immunogenicity comparison. Finally, the three mumps vaccine strains widely used for decades in licensed vaccines were for the first time compared in an animal model, and the results obtained were in line with their reported immunogenicity in the human population, supporting the predictive power of the optimized in vivo assay. PMID:26376015

  16. Factors influencing preclinical in vivo evaluation of mumps vaccine strain immunogenicity.

    PubMed

    Halassy, B; Kurtović, T; Brgles, M; Lang Balija, M; Forčić, D

    2015-01-01

    Immunogenicity testing in animals is a necessary preclinical assay for demonstration of vaccine efficacy, the results of which are often the basis for the decision whether to proceed with or withdraw the further development of a novel vaccine candidate. However, in vivo assays are rarely, if at all, optimized and validated. Here we clearly demonstrate the importance of in vivo assay (mumps virus immunogenicity testing in guinea pigs) optimization for gaining reliable results, and the suitability of fractional factorial design of experiments (DoE) for such a purpose. By the use of a resolution IV DoE (a 2_IV^(4-1) design), we clearly revealed that the parameters significantly increasing assay sensitivity were the interval between animal immunizations, followed by the body weight of the experimental animals. The quantity (0 versus 2%) of the stabilizer (fetal bovine serum, FBS) in the sample was shown to be a non-influencing parameter in the DoE setup. However, a separate experiment investigating only the FBS influence, performed with the other parameters optimally set, showed that FBS also influences the results of the immunogenicity assay. This finding indicated that (a) factors with a strong influence on the measured outcome can hide the effects of parameters with modest/low influence and (b) the matrix of mumps virus samples to be compared for immunogenicity must be identical for reliable virus immunogenicity comparison. Finally, the three mumps vaccine strains widely used for decades in licensed vaccines were for the first time compared in an animal model, and the results obtained were in line with their reported immunogenicity in the human population, supporting the predictive power of the optimized in vivo assay.

  17. Optimized atom position and coefficient coding for matching pursuit-based image compression.

    PubMed

    Shoa, Alireza; Shirani, Shahram

    2009-12-01

    In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm achieves better rate-distortion performance than JPEG 2000 at low bit rates.
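
    For context, a bare-bones matching pursuit loop is sketched below: at each step, the atom most correlated with the residual is selected, producing the (position, coefficient) pairs whose joint encoding the paper optimizes. The dictionary and signal are synthetic.

        # Matching pursuit over a random unit-norm dictionary.
        import numpy as np

        rng = np.random.default_rng(1)
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
        x = rng.standard_normal(64)

        residual, atoms = x.copy(), []
        for _ in range(10):
            corr = D.T @ residual
            k = int(np.argmax(np.abs(corr)))    # atom position
            c = corr[k]                         # atom coefficient
            atoms.append((k, c))
            residual -= c * D[:, k]

        print(atoms[:3], "residual norm:", np.linalg.norm(residual))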

  18. Profiling optimization for big data transfer over dedicated channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, D.; Wu, Qishi; Rao, Nageswara S

    The transfer of big data is increasingly supported by dedicated channels in high-performance networks, where transport protocols play an important role in maximizing application-level throughput and link utilization. The performance of transport protocols largely depends on their control parameter settings, but it is prohibitively time consuming to conduct an exhaustive search in a large parameter space to find the best set of parameter values. We propose FastProf, a stochastic approximation-based transport profiler, to quickly determine the optimal operational zone of a given data transfer protocol/method over dedicated channels. We implement and test the proposed method using both emulations based on real-life performance measurements and experiments over physical connections with short (2 ms) and long (380 ms) delays. Both the emulation and experimental results show that FastProf significantly reduces the profiling overhead while achieving a level of end-to-end throughput performance comparable to the exhaustive search-based approach.
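
    The general flavor of a stochastic-approximation search (not necessarily FastProf's exact algorithm) can be sketched as follows: perturb a protocol parameter, measure throughput on both sides, and step along the estimated gradient instead of sweeping the whole space; the throughput response below is a made-up stand-in for real transfer measurements.

        # Fixed-gain stochastic approximation on one protocol parameter.
        import random

        def throughput(window):                  # noisy unimodal response
            return -(window - 64)**2 + random.gauss(0, 50)

        w, a, c = 10.0, 0.02, 2.0
        for _ in range(200):
            # two-sided finite-difference gradient estimate from noisy probes
            g = (throughput(w + c) - throughput(w - c)) / (2 * c)
            w += a * g                           # ascend toward higher throughput
        print("estimated optimal window:", round(w, 1))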

  19. Optimization of intermolecular potential parameters for the CO2/H2O mixture.

    PubMed

    Orozco, Gustavo A; Economou, Ioannis G; Panagiotopoulos, Athanassios Z

    2014-10-02

    Monte Carlo simulations in the Gibbs ensemble were used to obtain optimized intermolecular potential parameters to describe the phase behavior of the CO2/H2O mixture over a range of temperatures and pressures relevant for carbon capture and sequestration processes. Commonly used fixed-point-charge force fields that include Lennard-Jones 12-6 (LJ) or exponential-6 (Exp-6) terms were used to describe CO2 and H2O intermolecular interactions. For force fields based on the LJ functional form, changes in the unlike interactions produced larger variations in the H2O-rich phase than in the CO2-rich phase. A major finding of the present study is that for these potentials, no combination of unlike interaction parameters is able to adequately represent the properties of both phases. Changes to the partial charges of H2O were found to produce significant variations in both phases and are able to fit the experimental data in both phases, at the cost of inaccuracies in the pure H2O properties. By contrast, for the Exp-6 case, optimization of a single parameter, the oxygen-oxygen unlike-pair interaction, was found sufficient to give accurate predictions of the solubilities in both phases while preserving accuracy in the pure component properties. These models are thus recommended for future molecular simulation studies of CO2/H2O mixtures.
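
    For reference, unlike-pair LJ parameters are commonly built with the Lorentz-Berthelot combining rules, sigma_ij = (sigma_i + sigma_j)/2 and eps_ij = sqrt(eps_i * eps_j), with a binary interaction parameter scaling the unlike epsilon; the sketch below uses placeholder values, not the fitted ones from this work.

        # Lorentz-Berthelot combining rules with an adjustable binary
        # interaction parameter k_ij on the unlike epsilon.
        import math

        def combine(sigma_i, sigma_j, eps_i, eps_j, k_ij=0.0):
            sigma_ij = 0.5 * (sigma_i + sigma_j)              # arithmetic mean
            eps_ij = (1.0 - k_ij) * math.sqrt(eps_i * eps_j)  # scaled geometric mean
            return sigma_ij, eps_ij

        # CO2 oxygen vs H2O oxygen (illustrative placeholder values):
        print(combine(3.05, 3.16, 0.66, 0.65, k_ij=0.05))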

  20. FitSKIRT: genetic algorithms to automatically fit dusty galaxies with a Monte Carlo radiative transfer code

    NASA Astrophysics Data System (ADS)

    De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.

    2013-02-01

    We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage of better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need for human intervention and bias. The high level of automation makes it an ideal tool to use on larger sets of observed data.

  1. Simultaneous versus sequential optimal experiment design for the identification of multi-parameter microbial growth kinetics as a function of temperature.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2010-05-21

    Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  2. Spectral embedding finds meaningful (relevant) structure in image and microarray data

    PubMed Central

    Higgs, Brandon W; Weller, Jennifer; Solka, Jeffrey L

    2006-01-01

    Background: Accurate methods for extraction of meaningful patterns in high dimensional data have become increasingly important with the recent generation of data types containing measurements across thousands of variables. Principal components analysis (PCA) is a linear dimensionality reduction (DR) method that is unsupervised in that it relies only on the data; projections are calculated in Euclidean or a similar linear space and do not use tuning parameters for optimizing the fit to the data. However, relationships within sets of nonlinear data types, such as biological networks or images, are frequently mis-rendered into a low dimensional space by linear methods. Nonlinear methods, in contrast, attempt to model important aspects of the underlying data structure, often requiring parameter(s) fitting to the data type of interest. In many cases, the optimal parameter values vary when different classification algorithms are applied on the same rendered subspace, making the results of such methods highly dependent upon the type of classifier implemented. Results: We present the results of applying the spectral method of Lafon, a nonlinear DR method based on the weighted graph Laplacian, that minimizes the requirements for such parameter optimization for two biological data types. We demonstrate that it is successful in determining implicit ordering of brain slice image data and in classifying separate species in microarray data, as compared to two conventional linear methods and three nonlinear methods (one of which is an alternative spectral method). This spectral implementation is shown to provide more meaningful information, by preserving important relationships, than the methods of DR presented for comparison. Tuning parameter fitting is simple and is a general, rather than data type or experiment specific, approach for the two datasets analyzed here. Tuning parameter optimization is minimized in the DR step to each subsequent classification method, enabling the possibility of valid cross-experiment comparisons. Conclusion: Results from the spectral method presented here exhibit the desirable properties of preserving meaningful nonlinear relationships in lower dimensional space and requiring minimal parameter fitting, providing a useful algorithm for purposes of visualization and classification across diverse datasets, a common challenge in systems biology. PMID:16483359

  3. Multi-parameter optimization of piezoelectric actuators for multi-mode active vibration control of cylindrical shells

    NASA Astrophysics Data System (ADS)

    Hu, K. M.; Li, Hua

    2018-07-01

    A novel technique for the multi-parameter optimization of distributed piezoelectric actuators is presented in this paper. The proposed method is designed to improve the performance of multi-mode vibration control in cylindrical shells. The optimization parameters of the actuator patch configuration include position, size, and tilt angle. The modal control force of tilted orthotropic piezoelectric actuators is derived and the multi-parameter cylindrical shell optimization model is established. The linear quadratic energy index is employed as the optimization criterion. A geometric constraint is proposed to prevent overlap between tilted actuators, which is plugged into a genetic algorithm to search for the optimal configuration parameters. A simply supported closed cylindrical shell with two actuators serves as a case study. The vibration control efficiencies of various parameter sets are evaluated via frequency response and transient response simulations. The results show that the linear quadratic energy indexes of position and size optimization decreased by 14.0% compared to position optimization alone; those of position and tilt angle optimization decreased by 16.8%; and those of position, size, and tilt angle optimization decreased by 25.9%. This indicates that adding configuration optimization parameters is an efficient approach to improving the vibration control performance of piezoelectric actuators on shells.

  4. Optimal Estimation of Clock Values and Trends from Finite Data

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    2005-01-01

    We show how to solve two problems of optimal linear estimation from a finite set of phase data. Clock noise is modeled as a stochastic process with stationary dth increments. The covariance properties of such a process are contained in the generalized autocovariance function (GACV). We set up two principles for optimal estimation: with the help of the GACV, these principles lead to a set of linear equations for the regression coefficients and some auxiliary parameters. The mean square errors of the estimators are easily calculated. The method can be used to check the results of other methods and to find good suboptimal estimators based on a small subset of the available data.

  5. Sensitivity analysis of multi-objective optimization of CPG parameters for quadruped robot locomotion

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina P.; Costa, Lino

    2012-09-01

    In this paper, a study based on sensitivity analysis is performed for a gait multi-objective optimization system that combines bio-inspired Central Pattern Generators (CPGs) and a multi-objective evolutionary algorithm based on NSGA-II. In this system, CPGs are modeled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. In order to optimize the walking gait, a multi-objective problem with three conflicting objectives is formulated: maximization of velocity, wide stability margin, and behavioral diversity. The experimental results highlight the effectiveness of this multi-objective approach and the importance of the objectives in finding different walking gait solutions for the quadruped robot.
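
    Since such a system returns a set of trade-off gaits rather than a single optimum, a Pareto filter is at the heart of any NSGA-II-style method. The sketch below extracts the non-dominated set for three maximized objectives; it is a generic illustration with random stand-in objective values, not the authors' implementation.

```python
# Extract the Pareto (non-dominated) set for three maximized objectives.
import numpy as np

def pareto_front(F):
    """Return a boolean mask of non-dominated rows of F (all objectives maximized)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # j dominates i if j is >= on all objectives and > on at least one.
        dominates_i = np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1)
        if np.any(dominates_i):
            mask[i] = False
    return mask

rng = np.random.default_rng(1)
F = rng.random((200, 3))    # velocity, stability margin, behavioral diversity
front = F[pareto_front(F)]
print(len(front), "non-dominated gait candidates")
```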

  6. Adaptive Particle Swarm Optimizer with Varying Acceleration Coefficients for Finding the Most Stable Conformer of Small Molecules.

    PubMed

    Agrawal, Shikha; Silakari, Sanjay; Agrawal, Jitendra

    2015-11-01

    A novel parameter automation strategy for Particle Swarm Optimization, called APSO (Adaptive PSO), is proposed. The algorithm is designed to efficiently control the local search and convergence to the global optimum solution. Parameter c1 controls the impact of the cognitive component on the particle trajectory and c2 controls the impact of the social component. Instead of fixing the values of c1 and c2, this paper updates these acceleration coefficients by considering the time variation of the evaluation function along with a varying inertia weight factor in PSO. Here the maximum and minimum values of the evaluation function are used to gradually decrease and increase the values of c1 and c2, respectively. Molecular energy minimization is one of the most challenging unsolved problems and it can be formulated as a global optimization problem. The aim of the present paper is to investigate the effect of the newly developed APSO on the highly complex molecular potential energy function and to check the efficiency of the proposed algorithm in finding the global minimum of the function under consideration. The proposed algorithm APSO is therefore applied in two cases: first, for the minimization of the potential energy of small molecules with up to 100 degrees of freedom, and finally for finding the global minimum energy conformation of the 1,2,3-trichloro-1-fluoropropane molecule based on a realistic potential energy function. The computational results of all the cases show that the proposed method performs significantly better than the other algorithms. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
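
    The sketch below shows a PSO loop with time-varying inertia and acceleration coefficients, the general mechanism the paper builds on; the linear schedules and the Rastrigin test function are illustrative simplifications of APSO's evaluation-function-driven update.

```python
# PSO sketch with linearly varying inertia weight and acceleration
# coefficients; a simplification of APSO's adaptive schedule.
import numpy as np

def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        frac = t / iters
        w = 0.9 - 0.5 * frac          # inertia weight: 0.9 -> 0.4
        c1 = 2.5 - 2.0 * frac         # cognitive coefficient decreases
        c2 = 0.5 + 2.0 * frac         # social coefficient increases
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Toy "potential energy" surrogate: the Rastrigin function.
rastrigin = lambda z: 10 * len(z) + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
best, energy = pso(rastrigin, dim=10)
print(energy)
```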

  7. An improved hierarchical A* algorithm in the optimization of parking lots

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Wu, Junjuan; Wang, Ying

    2017-08-01

    In parking-lot path optimization, the traditional evaluation index treats the shortest distance as the best criterion and does not consider actual road conditions. Introducing a more practical evaluation index can not only simplify the hardware design of the guidance system but also save software overhead. First, we establish a mathematical model of the parking-lot network graph (RPCDV) and divide all nodes in the network into two layers, each constructed using a different evaluation function based on the improved hierarchical A* algorithm, which improves the search efficiency and precision of the time-optimal path. The final results show that, for sections with different attribute parameters, the algorithm always finds the time-optimal path faster.
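
    For reference, a minimal A* search over a grid whose edge weights represent travel times rather than distances captures the time-based evaluation index discussed above; the two-layer hierarchical decomposition and the RPCDV model are omitted, and the cost grid is made up.

```python
# Minimal A* sketch where edge weights are travel times, not distances.
import heapq

def a_star(cost, start, goal):
    """cost[r][c] = time to enter cell (r, c); returns minimal total time."""
    rows, cols = len(cost), len(cost[0])
    # Manhattan distance is admissible here because the minimum cell cost is 1.
    h = lambda r, c: abs(r - goal[0]) + abs(c - goal[1])
    open_heap = [(h(*start), 0, start)]
    best = {start: 0}
    while open_heap:
        _, g, (r, c) = heapq.heappop(open_heap)
        if (r, c) == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cost[nr][nc]
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h(nr, nc), ng, (nr, nc)))
    return float("inf")

grid = [[1, 1, 9, 1],
        [1, 9, 1, 1],
        [1, 1, 1, 9],
        [9, 9, 1, 1]]
print(a_star(grid, (0, 0), (3, 3)))  # fastest route through the lot
```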

  8. Review on applications of artificial intelligence methods for dam and reservoir-hydro-environment models.

    PubMed

    Allawi, Mohammed Falah; Jaafar, Othman; Mohamad Hamzah, Firdaus; Abdullah, Sharifah Mastura Syed; El-Shafie, Ahmed

    2018-05-01

    Efficacious operation of dam and reservoir systems can not only provide a defense policy against natural hazards but also identify rules to meet water demand. Successful operation of dam and reservoir systems to ensure optimal use of water resources could be unattainable without accurate and reliable simulation models. Given the highly stochastic nature of hydrologic parameters, developing accurate predictive models that efficiently mimic such complex patterns is a growing domain of research. During the last two decades, artificial intelligence (AI) techniques have been significantly utilized for attaining robust modeling of different stochastic hydrological parameters. AI techniques have also shown considerable progress in finding optimal rules for reservoir operation. This review explores the history of applying AI to reservoir inflow forecasting and to prediction of evaporation from a reservoir, the major components of reservoir simulation. In addition, a critical assessment of the advantages and disadvantages of AI simulation methods integrated with optimization methods is reported. Future research on the potential of utilizing new innovative methods based on AI techniques for reservoir simulation and optimization models is also discussed. Finally, a new mathematical procedure for realistic evaluation of overall optimization model performance (reliability, resilience, and vulnerability indices) is recommended.

  9. Tuning the heat transfer medium and operating conditions in magnetic refrigeration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghahremani, Mohammadreza, E-mail: mghahrem@shepherd.edu; Dept. of Electrical and Computer Engineering, The George Washington University, Washington DC 20052; Aslani, Amir

    A new experimental test bed has been designed, built, and tested to evaluate the effect of the system’s parameters on a reciprocating Active Magnetic Regenerator (AMR) near room temperature. Bulk gadolinium was used as the refrigerant, silicon oil as the heat transfer medium, and a magnetic field of 1.3 T was cycled. This study focuses on the methodology of single-stage AMR operating conditions to obtain a high temperature span near room temperature. Herein, the main objective is not to report the absolute maximum attainable temperature span seen in an AMR system, but rather to find the system’s optimal operating conditions to reach that maximum span. The results of this research show that there is an optimal operating frequency, heat transfer fluid flow rate, flow duration, and displaced volume ratio in any AMR system. By optimizing these parameters in our AMR apparatus the temperature span between the hot and cold ends increased by 24%. The optimized values are system dependent and need to be determined and measured for any AMR system by following the procedures introduced in this research. It is expected that such optimization will permit the design of a more efficient magnetic refrigeration system.

  10. Are quantitative sensitivity analysis methods always reliable?

    NASA Astrophysics Data System (ADS)

    Huang, X.

    2016-12-01

    Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity Analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of the evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the high-dimensional parametric space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the first group is retained while the other is eliminated from the scientific study. However, these approaches ignore the disappearance of the interactive effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. Therefore, the wrong sensitive parameters might be identified by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters are left. We use CLM-CASA, a global terrestrial model, as an example to verify our findings, with sample sizes ranging from 7000 to 280000. The results show that DGSAM is able to identify more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters filtered by DGSAM achieved a substantial improvement of 10% over Sobol'. Furthermore, the computational cost for calibration was reduced to 1/6 of the original. In the future, it will be necessary to explore alternative SA methods emphasizing parameter interactions.
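
    A toy version of the iterative-elimination idea is sketched below: repeatedly freeze each remaining parameter and drop the one whose removal changes the output variance least. The variance-loss proxy and the four-parameter model are illustrative assumptions; DGSAM's actual sensitivity estimator is more sophisticated.

```python
# Crude sketch of iterative parameter elimination: drop the parameter
# whose freezing reduces the output variance least.
import numpy as np

def model(X):  # toy model with an interaction; x0 and x1 matter most
    return X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + 0.01 * X[:, 3]

rng = np.random.default_rng(0)
active = list(range(4))
while len(active) > 2:
    X = rng.random((5000, 4))
    base_var = model(X).var()
    losses = []
    for p in active:
        Xf = X.copy()
        Xf[:, p] = 0.5                       # freeze parameter p at its midpoint
        losses.append(base_var - model(Xf).var())
    least = active[int(np.argmin(losses))]   # least influential parameter
    active.remove(least)
    print("removed parameter", least)
print("retained:", active)                    # expect [0, 1]
```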

  11. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE PAGES

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  12. Stochastic optimization of GeantV code by use of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  13. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
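
    As a generic illustration of evolution-strategy tuning for such black-box problems, the sketch below runs a (1+1) evolution strategy with a success-based step-size rule on a synthetic throughput function; the parameter names and the fitness are hypothetical stand-ins for an actual simulation run.

```python
# Minimal (1+1) evolution strategy with a 1/5th-rule style step adaptation;
# `throughput` is a cheap synthetic fitness, not a real simulation.
import numpy as np

def throughput(p):           # hypothetical: higher is better, optimum at p_star
    p_star = np.array([8.0, 0.3, 64.0])
    return -np.sum(((p - p_star) / p_star) ** 2)

rng = np.random.default_rng(0)
x = np.array([4.0, 0.5, 32.0])    # e.g. threads, vector fill, basket size
fx, sigma = throughput(x), 0.3
for _ in range(300):
    y = x * (1 + sigma * rng.standard_normal(3))   # multiplicative mutation
    fy = throughput(y)
    if fy > fx:                    # accept improvements only
        x, fx = y, fy
        sigma = min(sigma * 1.5, 0.5)   # widen the step on success
    else:
        sigma *= 0.98                   # shrink the step on failure
print(x, fx)
```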

  14. Interference correction by extracting the information of interference dominant regions: Application to near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen

    2014-08-01

    Interference such as baseline drift and light scattering can degrade model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually such interference can be represented by an additive and a multiplicative factor. In order to eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra often mix physical light-scattering effects with chemical light-absorbance effects, making parameter estimation difficult. Herein, a novel algorithm is proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are both low, that is, an interference dominant region (IDR). Based on the definition of the IDR, a two-step method is proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction is applied to the full spectral range using the previously obtained parameters for the calibration set and the test set, respectively. The method can be applied to multi-target systems, with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement compared with full-spectrum estimation methods and was comparable with other state-of-the-art methods.
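
    The additive/multiplicative correction model can be made concrete with a short sketch: estimate an offset and gain for each spectrum by regressing it on a reference within a presumed IDR, then correct the full spectral range. The IDR bounds and the simulated spectra are illustrative; the paper's two-step optimal-IDR search is not reproduced.

```python
# Estimate additive offset (a) and multiplicative gain (b) in an assumed
# interference-dominant region, then correct the whole spectrum.
import numpy as np

def idr_correct(spectra, reference, idr):
    lo, hi = idr
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        # Least-squares fit s[idr] ~ a + b * reference[idr].
        b, a = np.polyfit(reference[lo:hi], s[lo:hi], 1)
        corrected[i] = (s - a) / b           # undo gain and offset everywhere
    return corrected

rng = np.random.default_rng(0)
ref = np.sin(np.linspace(0, 3, 500)) + 2.0
gains = rng.uniform(0.8, 1.2, 10)[:, None]
offsets = rng.uniform(-0.1, 0.1, 10)[:, None]
spectra = gains * ref + offsets              # simulated scatter effects
fixed = idr_correct(spectra, ref, idr=(400, 500))
print(np.abs(fixed - ref).max())             # ~0 after correction
```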

  15. A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-dimensional Bayesian Nonlinear Inverse Problems with Application to Porous Medium Flow

    NASA Astrophysics Data System (ADS)

    Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.

    2015-12-01

    We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
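
    The randomized trace estimation step can be illustrated in a few lines with Hutchinson's estimator; here the covariance is an explicit matrix for demonstration, whereas in the OED setting above only the action C @ z, computed via PDE solves, would be available.

```python
# Hutchinson randomized trace estimation: tr(C) ~ mean over probes z of z^T C z.
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 100
A = rng.standard_normal((n, n))
C = A @ A.T / n                             # symmetric positive definite "covariance"

Z = rng.choice([-1.0, 1.0], size=(n, k))    # Rademacher probe vectors
trace_est = np.mean(np.einsum("ik,ik->k", Z, C @ Z))
print(trace_est, np.trace(C))               # agree within sampling error
```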

  16. Choosing the appropriate forecasting model for predictive parameter control.

    PubMed

    Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars

    2014-01-01

    All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. Each considered prediction method makes assumptions that the time series data has to conform to for the method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters, with the exception of population size, conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
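
    A minimal sketch of the forecasting step, assuming the simple linear-regression predictor favored by the study's results; the performance history values are made up.

```python
# Fit a linear trend to a parameter value's recent performance and
# project it one step ahead; selection probabilities would be derived
# from such projections.
import numpy as np

def forecast_next(performance_history):
    t = np.arange(len(performance_history))
    slope, intercept = np.polyfit(t, performance_history, 1)
    return slope * len(performance_history) + intercept

history = [0.42, 0.45, 0.43, 0.48, 0.50, 0.52]   # measured utility per iteration
print(forecast_next(history))                     # projected utility next iteration
```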

  17. Optimizing physical energy functions for protein folding.

    PubMed

    Fujitsuka, Yoshimi; Takada, Shoji; Luthey-Schulten, Zaida A; Wolynes, Peter G

    2004-01-01

    We optimize a physical energy function for proteins with the use of the available structural database and perform three benchmark tests of the performance: (1) recognition of native structures in the background of predefined decoy sets of Levitt, (2) de novo structure prediction using fragment assembly sampling, and (3) molecular dynamics simulations. The energy parameter optimization is based on the energy landscape theory and uses a Monte Carlo search to find a set of parameters that seeks the largest ratio deltaE(s)/DeltaE for all proteins in a training set simultaneously. Here, deltaE(s) is the stability gap between the native and the average in the denatured states and DeltaE is the energy fluctuation among these states. Some of the energy parameters optimized are found to show significant correlation with experimentally observed quantities: (1) In the recognition test, the optimized function assigns the lowest energy to either the native or a near-native structure among many decoy structures for all the proteins studied. (2) Structure prediction with the fragment assembly sampling gives structure models with root mean square deviation less than 6 A in one of the top five cluster centers for five of six proteins studied. (3) Structure prediction using molecular dynamics simulation gives poorer performance, implying the importance of having a more precise description of local structures. The physical energy function solely inferred from a structural database neither utilizes sequence information from the family of the target nor the outcome of the secondary structure prediction but can produce the correct native fold for many small proteins. Copyright 2003 Wiley-Liss, Inc.

  18. Optimization of a centrifugal compressor impeller using CFD: the choice of simulation model parameters

    NASA Astrophysics Data System (ADS)

    Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.

    2017-08-01

    Nowadays optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define a simulation model correctly and rationally. The article deals with the choice of grid and computational domain parameters for optimization of centrifugal compressor impellers using computational fluid dynamics. Searching for and applying optimal parameters of the grid model, the computational domain, and the solver settings allows engineers to carry out high-accuracy modelling and to use computational capability effectively. The presented research was conducted using the Numeca Fine/Turbo package with Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure one at ψT=0.71 and a low-pressure one at ψT=0.43. The following parameters of the computational model were considered: the location of inlet and outlet boundaries, type of mesh topology, size of mesh, and the mesh parameter y+. Results of the investigation demonstrate that the choice of optimal parameters leads to a significant reduction of the computational time. Optimal parameters, in comparison with non-optimal but visually similar parameters, can reduce the calculation time by up to a factor of 4. Besides, it is established that some parameters have a major impact on the result of modelling.

  19. Red Cell Properties after Different Modes of Blood Transportation

    PubMed Central

    Makhro, Asya; Huisjes, Rick; Verhagen, Liesbeth P.; Mañú-Pereira, María del Mar; Llaudet-Planas, Esther; Petkova-Kirova, Polina; Wang, Jue; Eichler, Hermann; Bogdanova, Anna; van Wijk, Richard; Vives-Corrons, Joan-Lluís; Kaestner, Lars

    2016-01-01

    Transportation of blood samples is unavoidable for assessment of specific parameters in blood of patients with rare anemias, blood doping testing, or for research purposes. Despite the awareness that shipment may substantially alter multiple parameters, no study of that extent has been performed to assess these changes and optimize shipment conditions to reduce transportation-related artifacts. Here we investigate the changes in multiple parameters in blood of healthy donors over 72 h of simulated shipment conditions. Three different anticoagulants (K3EDTA, Sodium Heparin, and citrate-based CPDA) for two temperatures (4°C and room temperature) were tested to define the optimal transportation conditions. Parameters measured cover common cytology and biochemistry parameters (complete blood count, hematocrit, morphological examination), red blood cell (RBC) volume, ion content and density, membrane properties and stability (hemolysis, osmotic fragility, membrane heat stability, patch-clamp investigations, and formation of micro vesicles), Ca2+ handling, RBC metabolism, activity of numerous enzymes, and O2 transport capacity. Our findings indicate that individual sets of parameters may require different shipment settings (anticoagulants, temperature). Most of the parameters, except for ion (Na+, K+, Ca2+) handling and, possibly, reticulocyte counts, tend to favor transportation at 4°C. Whereas plasma and intraerythrocytic Ca2+ cannot be accurately measured in the presence of chelators such as citrate and EDTA, the majority of Ca2+-dependent parameters are stabilized in CPDA samples. Even in blood samples from healthy donors transported using an optimized shipment protocol, the majority of parameters were stable within 24 h, a condition that may not hold for the samples of patients with rare anemias. This implies shipping times should be kept as short as possible, using fast courier services to the closest expert laboratory within reach. Mobile laboratories or travel of the patients to the specialized laboratories may be the only option for some groups of patients with highly unstable RBCs. PMID:27471472

  20. A Reduced Order Model of Force Displacement Curves for the Failure of Mechanical Bolts in Tension.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Keegan J.; Sandia National Lab.; Brake, Matthew Robert

    2015-12-01

    Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to appropriately model these bolted joints. The complexity of the screw geometry causes issues when generating a mesh of the model. This report will explore different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influences of these parameters and will aid in finding the optimal method to model bolted connections.

  1. Parameters optimization for the energy management system of hybrid electric vehicle

    NASA Astrophysics Data System (ADS)

    Tseng, Chyuan-Yow; Hung, Yi-Hsuan; Tsai, Chien-Hsiung; Huang, Yu-Jen

    2007-12-01

    Hybrid electric vehicles (HEVs) have been widely studied recently due to their high potential for reducing fuel consumption, exhaust emissions, and noise. Because it comprises two power sources, an HEV requires an energy management system (EMS) to distribute the power sources optimally under various driving conditions. The ITRI in Taiwan has developed an HEV consisting of a 2.2 L internal combustion engine (ICE), an 18 kW motor/generator (M/G), a 288 V battery pack, and a continuously variable transmission (CVT). The task of the present study is to design an energy management strategy for the EMS of this HEV. Due to the nonlinear nature of the system and the fact that its model is unknown, a simplex-method-based energy management strategy is proposed for the HEV system. The simplex method is an optimization strategy generally used to find optimal parameters for un-modeled systems. The way to apply the simplex method to the design of the EMS is presented. The feasibility of the proposed method was verified by performing numerical simulations on the FTP75 drive cycle.
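
    A sketch of simplex-based tuning in this spirit, using SciPy's Nelder-Mead implementation on a hypothetical two-parameter EMS objective; the parameter names and the quadratic surrogate for fuel consumption are assumptions, standing in for an actual drive-cycle simulation.

```python
# Derivative-free simplex search (Nelder-Mead) over EMS parameters.
import numpy as np
from scipy.optimize import minimize

def fuel_consumption(params):
    # Hypothetical EMS parameters: torque-split threshold and SOC target.
    split, soc = params
    return (split - 0.35) ** 2 + 2.0 * (soc - 0.6) ** 2 + 5.0

res = minimize(fuel_consumption, x0=[0.5, 0.5], method="Nelder-Mead")
print(res.x, res.fun)   # near [0.35, 0.6], the simulated optimum
```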

  2. Study on key technologies of optimization of big data for thermal power plant performance

    NASA Astrophysics Data System (ADS)

    Mao, Mingyang; Xiao, Hong

    2018-06-01

    Thermal power generation accounts for 70% of China's electricity generation, and its pollutants account for 40% of the corresponding national emissions. Optimizing thermal power efficiency requires monitoring and understanding the whole process of coal combustion and pollutant migration, while power system performance data show an explosive growth trend. The purpose of this work is to study the integration of numerical simulation with big data technology and to develop a thermal power plant efficiency data optimization platform and a nitrogen oxide emission reduction system, providing reliable technical support for improving plant efficiency, saving energy, and reducing emissions. The method rests on big data technologies represented by multi-source heterogeneous data integration, distributed storage of large data, and high-performance real-time and offline computing, which can greatly enhance the energy-consumption analysis capability of thermal power plants and the level of intelligent decision-making. Data mining algorithms are then used to establish a mathematical model of boiler combustion and to mine power plant boiler efficiency data, combined with numerical simulation to find the laws of boiler combustion and pollutant generation and the influence of combustion parameters on them. The result is optimized boiler combustion parameters, which achieve energy savings.

  3. Application of grey-fuzzy approach in parametric optimization of EDM process in machining of MDN 300 steel

    NASA Astrophysics Data System (ADS)

    Protim Das, Partha; Gupta, P.; Das, S.; Pradhan, B. B.; Chakraborty, S.

    2018-01-01

    Maraging steel (MDN 300) finds application in many industries; it exhibits high hardness and is a very difficult material to machine. Electro-discharge machining (EDM) is an extensively popular machining process which can be used in machining such materials. Optimization of response parameters is essential for effective machining of these materials. Past researchers have already used the Taguchi method for obtaining the optimal responses of the EDM process for this material, with responses such as material removal rate (MRR), tool wear rate (TWR), relative wear ratio (RWR), and surface roughness (SR), considering discharge current, pulse on time, pulse off time, arc gap, and duty cycle as process parameters. In this paper, grey relational analysis (GRA) with fuzzy logic is applied to this multi-objective optimization problem to check the responses obtained by implementing the derived parametric setting. It was found that the parametric setting derived by the proposed method results in a better response than those reported by past researchers. The obtained results are also verified using the technique for order of preference by similarity to ideal solution (TOPSIS). The predicted result also shows a significant improvement in comparison to the results of past researchers.
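
    A compact sketch of the grey relational analysis step, computing grey relational grades from multi-response data; the response table below is made up, and the fuzzy-logic layer of the grey-fuzzy approach is omitted.

```python
# Grey relational analysis: normalize responses, measure deviation from
# the ideal sequence, and average the grey relational coefficients.
import numpy as np

def grey_relational_grade(R, larger_better, zeta=0.5):
    R = np.asarray(R, dtype=float)
    norm = np.empty_like(R)
    for j in range(R.shape[1]):
        col = R[:, j]
        if larger_better[j]:                 # larger-the-better normalization
            norm[:, j] = (col - col.min()) / (col.max() - col.min())
        else:                                # smaller-the-better normalization
            norm[:, j] = (col.max() - col) / (col.max() - col.min())
    delta = 1.0 - norm                       # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)                # grade per experimental run

# Rows: runs; columns: MRR (maximize), TWR and SR (minimize).
R = [[12.1, 0.8, 3.2],
     [15.4, 1.1, 2.9],
     [10.2, 0.6, 3.8],
     [14.0, 0.9, 2.5]]
grades = grey_relational_grade(R, larger_better=[True, False, False])
print(grades.argmax(), grades)               # index of the best parametric setting
```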

  4. Combining optimization methods with response spectra curve-fitting toward improved damping ratio estimation

    NASA Astrophysics Data System (ADS)

    Brewick, Patrick T.; Smyth, Andrew W.

    2016-12-01

    The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.
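
    The side-spectrum fitting idea can be sketched by fitting a single-mode PSD model to only the left half of a resonance peak; a plain least-squares fit stands in for the paper's pattern-search-plus-clustering optimization, and the signal is synthetic.

```python
# Fit one side of a modal PSD peak to recover frequency and damping ratio.
import numpy as np
from scipy.optimize import curve_fit

def sdof_psd(f, fn, zeta, scale):
    return scale / ((fn**2 - f**2) ** 2 + (2 * zeta * fn * f) ** 2)

fn_true, zeta_true = 2.0, 0.02
f = np.linspace(0.5, 3.5, 600)
psd = sdof_psd(f, fn_true, zeta_true, 1.0)
rng = np.random.default_rng(0)
psd_noisy = psd * (1 + 0.05 * rng.standard_normal(f.size))

left = f <= fn_true                 # fit the left-side spectrum only
popt, _ = curve_fit(sdof_psd, f[left], psd_noisy[left], p0=[1.8, 0.05, 0.5])
print(popt[:2])                     # recovered (fn, zeta)
```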

  5. Diagnostic Algorithm to Reflect Regressive Changes of Human Papilloma Virus in Tissue Biopsies

    PubMed Central

    Lhee, Min Jin; Cha, Youn Jin; Bae, Jong Man; Kim, Young Tae

    2014-01-01

    Purpose: Landmark indicators have yet to be developed to detect the regression of cervical intraepithelial neoplasia (CIN). We propose that quantitative viral load and indicative histological criteria can be used to differentiate between atypical squamous cells of undetermined significance (ASCUS) and a CIN of grade 1. Materials and Methods: We collected 115 tissue biopsies from women who tested positive for the human papilloma virus (HPV). Nine morphological parameters including nuclear size, perinuclear halo, hyperchromasia, typical koilocyte (TK), abortive koilocyte (AK), bi-/multi-nucleation, keratohyaline granules, inflammation, and dyskeratosis were examined for each case. Correlation analyses, cumulative logistic regression, and binary logistic regression were used to determine optimal cut-off values of HPV copy numbers. The parameters TK, perinuclear halo, multi-nucleation, and nuclear size were significantly and quantitatively correlated with HPV copy number. Results: An HPV loading number of 58.9 and an AK number of 20 were optimal to discriminate between negative and subtle findings in biopsies. An HPV loading number of 271.49 and an AK of 20 were optimal for discriminating between equivocal changes and obvious koilocytosis. Conclusion: We propose that a squamous epithelial lesion with AK of >20 and a quantitative HPV copy number between 58.9 and 271.49 represents a new spectrum of subtle pathological findings, characterized by AK in ASCUS. This can be described as a distinct entity and called "regressing koilocytosis". PMID:24532500

  6. Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey Dewayne

    2004-01-01

    The optimization tool described herein addresses and emphasizes the use of computer tools to model a system and focuses on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system, but more particularly the development of the optimized system using new techniques. This methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo Simulations and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions are summarized here for the model results. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the design case for minimization of Design, Development, Test and Evaluation cost when compared to the weights determined by the minimization of Gross Liftoff Weight case. In turn, the Design, Development, Test and Evaluation cost was 53% higher for optimized Gross Liftoff Weight case when compared to the cost determined by case for minimization of Design, Development, Test and Evaluation cost. Therefore, a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Secondly, the tool outputs define the sensitivity of propulsion parameters, technology and cost factors and how these parameters differ when cost and weight are optimized separately. A key finding was that for a Space Shuttle Main Engine thrust level the oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight rather than at 5.2 for the maximum specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume and tank weight. Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch as compared to 3162 for the Design, Development, Test and Evaluation cost optimization case. This chamber pressure range is close to 3000 pounds per square inch for the Space Shuttle Main Engine.

  7. Throughput and latency programmable optical transceiver by using DSP and FEC control.

    PubMed

    Tanimura, Takahito; Hoshida, Takeshi; Kato, Tomoyuki; Watanabe, Shigeki; Suzuki, Makoto; Morikawa, Hiroyuki

    2017-05-15

    We propose and experimentally demonstrate a proof-of-concept of a programmable optical transceiver that enables simultaneous optimization of multiple programmable parameters (modulation format, symbol rate, power allocation, and FEC) to satisfy throughput, signal quality, and latency requirements. The proposed optical transceiver also accommodates multiple sub-channels that can transport different optical signals with different requirements. The many degrees of freedom among the parameters often make it difficult to find the optimum combination, due to an explosion of the number of combinations. The proposed optical transceiver reduces the number of combinations and finds feasible sets of programmable parameters by using constraints on the parameters combined with a precise analytical model. For precise BER prediction with a specified set of parameters, we model the sub-channel BER as a function of OSNR, modulation format, symbol rate, and power difference between sub-channels. Next, we formulate simple constraints on the parameters and combine them with the analytical model to seek feasible sets of programmable parameters. Finally, we experimentally demonstrate the end-to-end operation of the proposed optical transceiver in an offline manner, including low-density parity-check (LDPC) FEC encoding and decoding, under a specific use case with a latency-sensitive application and 40-km transmission.

  8. Improvements on a non-invasive, parameter-free approach to inverse form finding

    NASA Astrophysics Data System (ADS)

    Landkammer, P.; Caspari, M.; Steinmann, P.

    2017-08-01

    Our objective is to determine the optimal undeformed workpiece geometry (material configuration) within forming processes when the prescribed deformed geometry (spatial configuration) is given. For solving the resulting shape optimization problem—also denoted as inverse form finding—we use a novel parameter-free approach, which relocates in each iteration the material nodal positions as design variables. The spatial nodal positions computed by an elasto-plastic finite element (FE) forming simulation are compared with their prescribed values. The objective function expresses a least-squares summation of the differences between the computed and the prescribed nodal positions. Here, a recently developed shape optimization approach (Landkammer and Steinmann in Comput Mech 57(2):169-191, 2016) is investigated with a view to enhance its stability and efficiency. Motivated by nonlinear optimization theory a detailed justification of the algorithm is given. Furthermore, a classification according to shape changing design, fixed and controlled nodal coordinates is introduced. Two examples with large elasto-plastic strains demonstrate that using a superconvergent patch recovery technique instead of a least-squares (L2)-smoothing improves the efficiency. Updating the interior discretization nodes by solving a fictitious elastic problem also reduces the number of required FE iterations and avoids severe mesh distortions. Furthermore, the impact of the inclusion of the second deformation gradient in the Hessian of the Quasi-Newton approach is analyzed. Inverse form finding is a crucial issue in metal forming applications. As a special feature, the approach is designed to be coupled in a non-invasive fashion to arbitrary FE software.

  9. Improvements on a non-invasive, parameter-free approach to inverse form finding

    NASA Astrophysics Data System (ADS)

    Landkammer, P.; Caspari, M.; Steinmann, P.

    2018-04-01

    Our objective is to determine the optimal undeformed workpiece geometry (material configuration) within forming processes when the prescribed deformed geometry (spatial configuration) is given. For solving the resulting shape optimization problem—also denoted as inverse form finding—we use a novel parameter-free approach, which relocates in each iteration the material nodal positions as design variables. The spatial nodal positions computed by an elasto-plastic finite element (FE) forming simulation are compared with their prescribed values. The objective function expresses a least-squares summation of the differences between the computed and the prescribed nodal positions. Here, a recently developed shape optimization approach (Landkammer and Steinmann in Comput Mech 57(2):169-191, 2016) is investigated with a view to enhance its stability and efficiency. Motivated by nonlinear optimization theory a detailed justification of the algorithm is given. Furthermore, a classification according to shape changing design, fixed and controlled nodal coordinates is introduced. Two examples with large elasto-plastic strains demonstrate that using a superconvergent patch recovery technique instead of a least-squares (L2)-smoothing improves the efficiency. Updating the interior discretization nodes by solving a fictitious elastic problem also reduces the number of required FE iterations and avoids severe mesh distortions. Furthermore, the impact of the inclusion of the second deformation gradient in the Hessian of the Quasi-Newton approach is analyzed. Inverse form finding is a crucial issue in metal forming applications. As a special feature, the approach is designed to be coupled in a non-invasive fashion to arbitrary FE software.

  10. Numerical optimization methods for controlled systems with parameters

    NASA Astrophysics Data System (ADS)

    Tyatyushkin, A. I.

    2017-10-01

    First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and parameters involved on the righthand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.

  11. Acoustic and elastic waveform inversion best practices

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We identify best practices instead using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Besides these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lame parameters performing well with amplitude-based objective functions. Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because rotation angle parameters describing fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing experimental results are given.

  12. All-Optical Implementation of the Ant Colony Optimization Algorithm

    PubMed Central

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-01-01

    We report all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize the pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of a graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems. PMID:27222098
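
    A software analogue of the pheromone dynamics is easy to sketch: ants walk a small weighted graph, deposit pheromone inversely proportional to tour length, and evaporation erodes unused edges. The graph and parameters below are illustrative, not a model of the optical network.

```python
# Minimal ant-colony sketch for shortest path on a small weighted graph.
import numpy as np

edges = {(0, 1): 1.0, (1, 3): 1.0, (0, 2): 1.5, (2, 3): 0.4}
nbrs = {0: [1, 2], 1: [3], 2: [3]}
tau = {e: 1.0 for e in edges}               # pheromone on each edge
rng = np.random.default_rng(0)

for _ in range(200):                        # colony iterations
    node, path, length = 0, [], 0.0
    while node != 3:                        # one ant walks from node 0 to node 3
        cand = nbrs[node]
        w = np.array([tau[(node, n)] / edges[(node, n)] for n in cand])
        nxt = rng.choice(cand, p=w / w.sum())
        path.append((node, nxt)); length += edges[(node, nxt)]
        node = nxt
    for e in tau:
        tau[e] *= 0.95                      # evaporation
    for e in path:
        tau[e] += 1.0 / length              # reinforce the traversed route

print(max(tau, key=tau.get))                # pheromone concentrates on 0->2->3
```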

  13. Mixed H2/H-infinity control with output feedback compensators using parameter optimization

    NASA Technical Reports Server (NTRS)

    Schoemig, Ewald; Ly, Uy-Loi

    1992-01-01

    Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H2/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H2/H-infinity control problem. The approach developed in this research is based on a finite-time cost functional that depicts an H-infinity-bound control problem in an H2-optimization setting. The goal is to define a time-domain cost function that optimizes the H2-norm of a system with an H-infinity constraint function.

  14. Mixed H2/H-infinity control with an output-feedback compensator using parameter optimization

    NASA Technical Reports Server (NTRS)

    Schoemig, Ewald; Ly, Uy-Loi

    1992-01-01

    Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H2/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H2/H-infinity control problem. The approach developed in this research is based on a finite-time cost functional that depicts an H-infinity-bound control problem in an H2-optimization setting. The goal is to define a time-domain cost function that optimizes the H2-norm of a system with an H-infinity constraint function.

  15. Optimizing the motion of a folding molecular motor in soft matter.

    PubMed

    Rajonson, Gabriel; Ciobotarescu, Simona; Teboul, Victor

    2018-04-18

    We use molecular dynamics simulations to investigate the displacement of a periodically folding molecular motor in a viscous environment. Our aim is to find significant parameters to optimize the displacement of the motor. We find that the choice of a massy host or of small host molecules significantly increases the motor displacements. In the same environment, the motor moves with hopping, solid-like motions while the host moves with diffusive, liquid-like motions, a result that originates from the motor's larger size. Due to the hopping motions, there are thresholds on the force necessary for the motor to reach stable positions in the medium. These force thresholds result in a threshold on the size of the motor needed to induce a significant displacement, which is followed by plateaus in the motor displacement.

  16. Intracellular response to process optimization and impact on productivity and product aggregates for a high-titer CHO cell process.

    PubMed

    Handlogten, Michael W; Lee-O'Brien, Allison; Roy, Gargi; Levitskaya, Sophia V; Venkat, Raghavan; Singh, Shailendra; Ahuja, Sanjeev

    2018-01-01

    A key goal in process development for antibodies is to increase productivity while maintaining or improving product quality. During process development of an antibody, titers were increased from 4 to 10 g/L while simultaneously decreasing aggregates. Process development involved optimization of media and feed formulations, feed strategy, and process parameters including pH and temperature. To better understand how CHO cells respond to process changes, the changes were implemented in a stepwise manner. The first change was an optimization of the feed formulation, the second was an optimization of the medium, and the third was an optimization of process parameters. Multiple process outputs were evaluated including cell growth, osmolality, lactate production, ammonium concentration, antibody production, and aggregate levels. Additionally, detailed assessment of oxygen uptake, nutrient and amino acid consumption, extracellular and intracellular redox environment, oxidative stress, activation of the unfolded protein response (UPR) pathway, protein disulfide isomerase (PDI) expression, and heavy and light chain mRNA expression provided an in-depth understanding of the cellular response to process changes. The results demonstrate that mRNA expression and UPR activation were unaffected by process changes, and that increased PDI expression and optimized nutrient supplementation are required for higher productivity processes. Furthermore, our findings demonstrate the role of extra- and intracellular redox environment on productivity and antibody aggregation. Processes using the optimized medium, with increased concentrations of redox modifying agents, had the highest overall specific productivity, reduced aggregate levels, and helped cells better withstand the high levels of oxidative stress associated with increased productivity. Specific productivities of different processes positively correlated to average intracellular values of total glutathione. Additionally, processes with the optimized media maintained an oxidizing intracellular environment, important for correct disulfide bond pairing, which likely contributed to reduced aggregate formation. These findings shed important understanding into how cells respond to process changes and can be useful to guide future development efforts to enhance productivity and improve product quality. © 2017 Wiley Periodicals, Inc.

  17. Optimized Hyper Beamforming of Linear Antenna Arrays Using Collective Animal Behaviour

    PubMed Central

    Ram, Gopi; Mandal, Durbadal; Kar, Rajib; Ghoshal, Sakti Prasad

    2013-01-01

    A novel optimization technique developed by mimicking collective animal behaviour (CAB) is applied to the optimal design of hyper beamforming of linear antenna arrays. Hyper beamforming is based on the sum and difference beam patterns of the array, each raised to the power of a hyperbeam exponent parameter. The optimized hyperbeam is achieved by optimization of current excitation weights and uniform interelement spacing. As compared to conventional hyper beamforming of a linear antenna array, real-coded genetic algorithm (RGA), particle swarm optimization (PSO), and differential evolution (DE) applied to the hyperbeam of the same array can achieve reduction in sidelobe level (SLL) and the same or less first null beam width (FNBW), keeping the same value of the hyperbeam exponent. Further reductions of sidelobe level (SLL) and first null beam width (FNBW) have been achieved by the proposed collective animal behaviour (CAB) algorithm. Unlike RGA, PSO, and DE, CAB finds a near-global optimal solution in the present problem. The above comparative optimization is illustrated through 10-, 14-, and 20-element linear antenna arrays to establish the optimization efficacy of CAB. PMID:23970843

  18. Machining Parameters Optimization using Hybrid Firefly Algorithm and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Farahlina Johari, Nur; Zain, Azlan Mohd; Haszlinna Mustaffa, Noorfa; Udin, Amirmudin

    2017-09-01

    Firefly Algorithm (FA) is a metaheuristic algorithm inspired by the flashing behavior of fireflies and the phenomenon of bioluminescent communication; in this research, the algorithm is used to optimize the machining parameters (feed rate, depth of cut, and spindle speed). The algorithm is hybridized with Particle Swarm Optimization (PSO) to discover better solutions when exploring the search space. The objective function from previous research is used to optimize the machining parameters in the turning operation. The optimal machining cutting parameters estimated by FA that lead to a minimum surface roughness are validated using an ANOVA test.
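
    A minimal sketch of the plain firefly move over the three machining parameters follows (without the PSO hybridization step). The roughness() objective is a hypothetical stand-in for the regression model taken from previous research, and the bounds and FA control parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in objective: any surface-roughness model R(feed, depth, speed) works here.
        def roughness(x):
            f, d, v = x
            return 1.0 + 4.0 * f**2 + 0.5 * d - 0.001 * v + 0.2 * f * d

        lo = np.array([0.05, 0.2, 500.0])     # feed (mm/rev), depth (mm), speed (rpm)
        hi = np.array([0.50, 2.0, 3000.0])
        pop = lo + rng.random((25, 3)) * (hi - lo)

        beta0, gamma, alpha = 1.0, 1.0, 0.05  # typical FA control parameters
        for _ in range(200):
            cost = np.array([roughness(x) for x in pop])
            for i in range(len(pop)):
                for j in range(len(pop)):
                    if cost[j] < cost[i]:     # move firefly i toward brighter firefly j
                        r2 = np.sum(((pop[i] - pop[j]) / (hi - lo)) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                        step = alpha * (rng.random(3) - 0.5) * (hi - lo)
                        pop[i] = np.clip(pop[i] + beta * (pop[j] - pop[i]) + step, lo, hi)
        best = pop[np.argmin([roughness(x) for x in pop])]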

  19. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
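
    The regularized minimum norm inverse at the center of this question can be sketched as follows (a toy illustration, not the study's pipeline). The inverse operator is W = L^T (L L^T + lambda^2 C)^(-1), with the noise covariance C taken here as a scaled identity; the lead field and data below are random stand-ins.

        import numpy as np

        rng = np.random.default_rng(1)
        n_sensors, n_sources = 64, 500
        L = rng.standard_normal((n_sensors, n_sources))   # lead field (hypothetical)

        def mne_inverse(L, lam):
            # Tikhonov-regularized minimum norm: W = L^T (L L^T + lambda^2 C)^(-1),
            # with C a scaled identity so lam is relative to the gram matrix scale.
            gram = L @ L.T
            C = np.trace(gram) / len(gram) * np.eye(len(gram))
            return L.T @ np.linalg.inv(gram + lam**2 * C)

        b = rng.standard_normal((n_sensors, 200))         # MEG data (hypothetical)
        s_power = mne_inverse(L, lam=0.1) @ b             # more regularization for power maps
        s_coh = mne_inverse(L, lam=0.001) @ b             # ~2 orders less for coupling, per the study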

  20. Self-tuning bistable parametric feedback oscillator: Near-optimal amplitude maximization without model information

    NASA Astrophysics Data System (ADS)

    Braun, David J.; Sutas, Andrius; Vijayakumar, Sethu

    2017-01-01

    Theory predicts that parametrically excited oscillators, tuned to operate under resonant conditions, are capable of large-amplitude oscillation useful in diverse applications, such as signal amplification, communication, and analog computation. However, due to amplitude saturation caused by nonlinearity, lack of robustness to model uncertainty, and limited sensitivity to parameter modulation, these oscillators require fine-tuning and strong modulation to generate robust large-amplitude oscillation. Here we present a principle of self-tuning parametric feedback excitation that alleviates the above-mentioned limitations. This is achieved using a minimalistic control implementation that performs (i) self-tuning (slow parameter adaptation) and (ii) feedback pumping (fast parameter modulation), without sophisticated signal processing of past observations. The proposed approach provides near-optimal amplitude maximization without requiring model-based control computation, previously perceived as inevitable for implementing optimal control principles in practical applications. Experimental implementation of the theory shows that the oscillator tunes itself near the onset of dynamic bifurcation to achieve extreme sensitivity to small resonant parametric perturbations. As a result, it achieves large-amplitude oscillations by capitalizing on the effect of nonlinearity, despite substantial model uncertainties and strong unforeseen external perturbations. We envision that the present finding will provide an effective and robust approach to parametric excitation in real-world applications.

  1. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179

  2. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network

    PubMed Central

    Goto, Hayato

    2016-01-01

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence. PMID:26899997

  3. Motion prediction of a non-cooperative space target

    NASA Astrophysics Data System (ADS)

    Zhou, Bang-Zhao; Cai, Guo-Ping; Liu, Yun-Meng; Liu, Pan

    2018-01-01

    Capturing a non-cooperative space target is a tremendously challenging research topic. Effective acquisition of the target's motion information is a prerequisite for capture. In this paper, motion prediction of a free-floating non-cooperative target in space is studied and a motion prediction algorithm is proposed. To predict the motion of the free-floating non-cooperative target, its dynamic parameters, such as inertia, angular momentum, and kinetic energy, must first be identified (estimated); the predicted motion can then be obtained by substituting these identified parameters into the target's Euler equations. Accurate prediction needs precise identification. This paper presents an effective method to identify these dynamic parameters of a free-floating non-cooperative target. The method consists of two steps: (1) a rough estimate of the parameters is computed from motion observation data of the target, and (2) the best estimate of the parameters is found by an optimization method. In the optimization problem, the objective function is based on the difference between the observed and the predicted motion, and the interior-point method (IPM) is chosen as the optimization algorithm; it starts at the rough estimate obtained in the first step and finds a global minimum of the objective function with the guidance of the objective function's gradient. The IPM search for the global minimum is therefore fast, and an accurate identification can be obtained in time. The numerical results show that the proposed motion prediction algorithm is able to predict the motion of the target.
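
    The two-step scheme (rough estimate, then gradient-guided refinement) can be illustrated on torque-free Euler equations, where only the inertia ratios are observable. The sketch below uses SciPy's bounded quasi-Newton minimizer as a stand-in for the interior-point method, and all values are hypothetical.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)

        def euler_rhs(t, w, J):
            # Torque-free Euler equations for principal inertias J: J*w_dot = -(w x J*w)
            return -np.cross(w, J * w) / J

        def simulate(J, w0, t):
            sol = solve_ivp(euler_rhs, (t[0], t[-1]), w0, t_eval=t, args=(J,), rtol=1e-8)
            return sol.y.T

        t = np.linspace(0.0, 10.0, 200)
        w0 = np.array([0.3, 1.0, 0.2])
        J_true = np.array([1.0, 2.0, 3.0])              # hypothetical ground truth
        obs = simulate(J_true, w0, t) + 1e-3 * rng.standard_normal((200, 3))

        def objective(p):
            # Only inertia ratios are observable in torque-free motion, so J1 := 1.
            return np.sum((simulate(np.array([1.0, p[0], p[1]]), w0, t) - obs) ** 2)

        rough = np.array([1.5, 2.5])                    # step 1: rough estimate
        res = minimize(objective, rough, bounds=[(0.2, 10.0)] * 2)   # step 2: refinement
        print(res.x)                                    # ~ (2.0, 3.0)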

  4. Data analysis of gravitational-wave signals from spinning neutron stars. III. Detection statistics and computational requirements

    NASA Astrophysics Data System (ADS)

    Jaranowski, Piotr; Królak, Andrzej

    2000-03-01

    We develop the analytic and numerical tools for data analysis of the continuous gravitational-wave signals from spinning neutron stars for ground-based laser interferometric detectors. The statistical data analysis method that we investigate is maximum likelihood detection, which for the case of Gaussian noise reduces to matched filtering. We study in detail the statistical properties of the optimum functional that needs to be calculated in order to detect the gravitational-wave signal and estimate its parameters. We find it particularly useful to divide the parameter space into elementary cells such that the values of the optimal functional are statistically independent in different cells. We derive formulas for false alarm and detection probabilities both for the optimal and the suboptimal filters. We assess the computational requirements needed to do the signal search. We compare a number of criteria to build sufficiently accurate templates for our data analysis scheme. We verify the validity of our concepts and formulas by means of Monte Carlo simulations. We present algorithms by which one can estimate the parameters of the continuous signals accurately. We find, confirming earlier work of other authors, that given 100 Gflops of computational power, an all-sky search for an observation time of 7 days and a directed search for an observation time of 120 days are possible, whereas an all-sky search for 120 days of observation time is computationally prohibitive.
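
    For white Gaussian noise, the matched filter reduces to an inner product with a unit-norm template, with the detection threshold set by the desired false alarm probability. The sketch below illustrates only this elementary statistic (a slowly drifting tone stands in for a continuous gravitational-wave template), not the authors' full search over parameter-space cells.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        n = 4096
        # Slowly drifting tone as a stand-in continuous-wave template
        template = np.sin(2 * np.pi * 0.05 * np.arange(n) * (1 + 1e-5 * np.arange(n)))
        template /= np.linalg.norm(template)

        data = 0.5 * template + rng.standard_normal(n)   # signal buried in unit white noise

        # Matched-filter statistic for white noise: correlate with the unit-norm
        # template; under noise only, the statistic is N(0, 1).
        stat = template @ data
        false_alarm = 1e-3
        threshold = norm.isf(false_alarm)
        detected = stat > threshold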

  5. Technical Parameters Modeling of a Gas Probe Foaming Using an Active Experimental Type Research

    NASA Astrophysics Data System (ADS)

    Tîtu, A. M.; Sandu, A. V.; Pop, A. B.; Ceocea, C.; Tîtu, S.

    2018-06-01

    The present paper addresses a current and complex topic: solving a technical problem regarding the modeling and subsequent optimization of technical parameters related to the natural gas extraction process. The subject of the study is to optimize gas well foaming using experimental research methods and data processing, by regular well intervention with different foaming agents. This procedure reduces the hydrostatic pressure through the foam formed from the deposit water and the foaming agent, which can then be removed from the surface by the produced gas flow. The well production data were analyzed, and the candidate well for the research emerged. This complex study was carried out on field operations, where it was found that, due to severe gas field depletion, the flow of the wells decreases and their loading with deposit water begins. Regular foaming of the wells was required to optimize the daily production flow and dispose of the water accumulated in the wellbore. To analyze the natural gas production process, the factorial experiment and other methods were used; this choice was made because the method can offer very good research results with a small number of experimental data. Finally, through this study, the extraction process problems were identified by analyzing and optimizing the technical parameters, which led to a quality improvement of the extraction process.

  6. Optimization of Coolant Technique Conditions for Machining A319 Aluminium Alloy Using Response Surface Method (RSM)

    NASA Astrophysics Data System (ADS)

    Zainal Ariffin, S.; Razlan, A.; Ali, M. Mohd; Efendee, A. M.; Rahman, M. M.

    2018-03-01

    Background/Objectives: The paper discusses the optimum cutting parameters with coolant technique conditions (1.0 mm nozzle orifice, wet and dry) to optimize surface roughness, temperature, and tool wear in the machining process based on the selected setting parameters. The selected cutting parameters for this study were the cutting speed, feed rate, depth of cut, and coolant technique condition. Methods/Statistical Analysis: Experiments were conducted and investigated based on Design of Experiment (DOE) with the Response Surface Method. The research on the aggressive machining of aluminum alloy (A319) for automotive applications is an effort to understand the machining concept, which is widely used in a variety of manufacturing industries, especially the automotive industry. Findings: The results show how the dominant failure modes, namely surface roughness, temperature, and tool wear, develop during machining when using the 1.0 mm nozzle orifice, and how this coolant technique can help minimize built-up edge on the A319. The exploration of surface roughness, productivity, and the optimization of cutting speed in the technical and commercial aspects of manufacturing A319 automotive components is discussed as further work. Applications/Improvements: The research results are also beneficial for minimizing the costs incurred and improving the productivity of manufacturing firms. According to the mathematical model and equations generated by CCD-based RSM, experiments were performed, and a coolant condition technique using the nozzle size that reduces tool wear, surface roughness, and temperature was obtained. The results have been analyzed and optimization has been carried out for selecting the cutting parameters, showing that the effectiveness and efficiency of the system can be identified, which helps to solve potential problems.

  7. Breathing dynamics based parameter sensitivity analysis of hetero-polymeric DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talukder, Srijeeta; Sen, Shrabani; Chaudhury, Pinaki, E-mail: pinakc@rediffmail.com

    We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence-dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the 14 model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen bond interaction ε_hb(AT) for an AT base pair and the ring factor ξ turn out to be the most sensitive parameters. In addition, the stacking interaction ε_st(TA-TA) for a TA-TA nearest-neighbor pair of base pairs is found to be the most sensitive one among all stacking interactions. Moreover, we also establish that the nature of a stacking interaction has a deciding effect on the DNA breathing dynamics, not the number of times a particular stacking interaction appears in a sequence. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to the case where the rate constants are measured using the conventional unbiased way of optimization.

  8. Multi-objective optimization of combustion, performance and emission parameters in a jatropha biodiesel engine using Non-dominated sorting genetic algorithm-II

    NASA Astrophysics Data System (ADS)

    Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar

    2014-03-01

    The present work studies and identifies the different variables that affect the output parameters involved in a single cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on central composite design (CCD) is used to design the experiments. Mathematical models are developed for combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE), and emission parameters (CO, NOx, unburnt HC, and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of combustion (BSFC, Pmax), performance (BTE), and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC, and smoke, a multi-objective optimization problem is formulated. Non-dominated sorting genetic algorithm-II (NSGA-II) is used to predict the Pareto-optimal sets of solutions. Experiments are performed at suitable optimal solutions for predicting the combustion, performance, and emission parameters to check the adequacy of the proposed model. The Pareto-optimal sets of solutions can be used as guidelines for end users to select an optimal combination of engine output and emission parameters depending on their own requirements.
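
    The core of NSGA-II is the sorting of candidate solutions into successive non-dominated fronts; a minimal sketch follows. Objectives to be maximized, such as BTE, are negated so that everything is minimized, and the objective values below are random stand-ins.

        import numpy as np

        def dominates(a, b):
            # a dominates b if it is no worse in all objectives and better in at least one
            return np.all(a <= b) and np.any(a < b)

        def non_dominated_fronts(F):
            # Sort objective vectors (rows of F, all to be minimized) into Pareto fronts.
            remaining = set(range(len(F)))
            fronts = []
            while remaining:
                front = [i for i in remaining
                         if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
                fronts.append(front)
                remaining -= set(front)
            return fronts

        # Objectives per engine setting: [BSFC, Pmax, -BTE, CO, NOx, HC, smoke]
        F = np.random.default_rng(4).random((30, 7))
        print(non_dominated_fronts(F)[0])   # indices of the Pareto-optimal set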

  9. Calibration of DEM parameters on shear test experiments using Kriging method

    NASA Astrophysics Data System (ADS)

    Bednarek, Xavier; Martin, Sylvain; Ndiaye, Abibatou; Peres, Véronique; Bonnefoy, Olivier

    2017-06-01

    Calibration of powder mixing simulations using the Discrete Element Method (DEM) is still an issue. Achieving good agreement with experimental results is difficult because time-efficient use of DEM involves strong assumptions. This work presents a methodology to calibrate DEM parameters using the Efficient Global Optimization (EGO) algorithm, based on the Kriging interpolation method. Classical shear test experiments are used as calibration experiments. The calibration is made on two parameters: the Young modulus and the friction coefficient. The determination of the minimal number of grains that has to be used is a critical step: simulating too few grains would not represent the realistic behavior of the powder, while using a huge number of grains would be strongly time consuming. The optimization goal is the minimization of the objective function, which is the distance between simulated and measured behaviors. The EGO algorithm maximizes the Expected Improvement criterion to find the next point that has to be simulated. This stochastic criterion works with the two quantities provided by the Kriging method, the prediction of the objective function and the estimate of its error, and is thus able to quantify the improvement in the minimization that new simulations at specified DEM parameters would yield.
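
    The Expected Improvement criterion has the closed form EI(x) = (f_min - mu(x)) Phi(z) + sigma(x) phi(z) with z = (f_min - mu(x)) / sigma(x), where mu and sigma are the Kriging prediction and its error estimate. A compact EGO loop is sketched below, with scikit-learn's Gaussian process as a stand-in for the authors' Kriging implementation and a hypothetical shear_mismatch() objective in place of the DEM simulations.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(5)

        def shear_mismatch(x):
            # Stand-in objective: distance between simulated and measured shear
            # behavior as a function of (log10 Young modulus, friction coefficient).
            logE, mu = x
            return (logE - 9.3) ** 2 + 4.0 * (mu - 0.45) ** 2

        X = rng.random((8, 2)) * [[2.0, 1.0]] + [[8.5, 0.0]]   # log10(E) in [8.5, 10.5], mu in [0, 1]
        y = np.array([shear_mismatch(x) for x in X])

        for _ in range(20):                                    # EGO loop
            gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
            cand = rng.random((2000, 2)) * [[2.0, 1.0]] + [[8.5, 0.0]]
            mu_hat, sigma = gp.predict(cand, return_std=True)
            z = (y.min() - mu_hat) / np.maximum(sigma, 1e-12)
            ei = (y.min() - mu_hat) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
            x_next = cand[np.argmax(ei)]                       # next "DEM run" to simulate
            X = np.vstack([X, x_next])
            y = np.append(y, shear_mismatch(x_next))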

  10. Optimization of Acid Orange 7 Degradation in Heterogeneous Fenton-like Reaction Using Fe3-xCoxO4 Catalyst

    NASA Astrophysics Data System (ADS)

    Ibrahim, M. Z.; Alrozi, R.; Zubir, N. A.; Bashah, N. A.; Ali, S. A. Md; Ibrahim, N.

    2018-05-01

    Oxidation processes such as heterogeneous Fenton and Fenton-like reactions are considered effective and efficient methods for the treatment of dye degradation. In this study, the degradation of Acid Orange 7 (AO7) was investigated using Fe3-xCoxO4 as a heterogeneous Fenton-like catalyst. Response surface methodology (RSM) was used to optimize the operational parameter conditions and the interactions of two or more parameters. The parameters studied were catalyst dosage (X1), pH (X2), and H2O2 concentration (X3) with respect to AO7 degradation. Based on analysis of variance (ANOVA), the derived quadratic polynomial model was significant, whereby the predicted values matched the experimental values with a regression coefficient of R2 = 0.9399. The optimum condition for AO7 degradation was obtained at a catalyst dosage of 0.84 g/L, pH of 3, and H2O2 concentration of 46.70 mM, which resulted in 86.30% removal of AO7 dye. These findings present new insights into the influence of operational parameters in the heterogeneous Fenton-like oxidation of AO7 using the Fe3-xCoxO4 catalyst.

  11. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties

    PubMed Central

    Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

    Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties in the field of SLM of metals due to tungsten's intrinsic properties. The key factors of SLM of high-density tungsten, including powder characteristics, layer thickness, and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% of theoretical density) were produced using SLM with the optimized processing parameters. A high-density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers and parallel to the maximum temperature gradient, which can ensure good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding offers new potential applications of refractory metals in additive manufacturing. PMID:29707073

  12. Carbon dioxide emission prediction using support vector machine

    NASA Astrophysics Data System (ADS)

    Saleh, Chairul; Rachman Dzakiyullah, Nur; Bayu Nugroho, Jonathan

    2016-02-01

    In this paper, an SVM model is proposed to predict carbon dioxide (CO2) emissions. Energy consumption, in the form of electrical energy and burned coal, directly affects CO2 emissions and was used as the input variable to build the model. Our objective is to monitor CO2 emissions based on the electrical energy and coal used in the production process. Data on electrical energy and coal use were obtained from an alcohol industry plant for training and testing the models, divided by a cross-validation technique into 90% training data and 10% testing data. The optimal parameters of the SVM model were found by a trial-and-error approach, adjusting the C and epsilon parameters. The results show that the SVM model has optimal parameters at C = 0.1 and epsilon = 0. The error of the model, measured by the Root Mean Square Error (RMSE), is 0.004; a smaller error represents a more accurate prediction. In practice, this paper contributes to helping executive managers make effective decisions for business operations by monitoring CO2 emissions.
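
    The trial-and-error tuning of C and epsilon can be automated with a grid search over the same two parameters. The sketch below uses synthetic data in place of the plant measurements, and the parameter grid is illustrative.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import GridSearchCV, train_test_split

        rng = np.random.default_rng(6)
        X = rng.random((120, 2))        # [electrical energy, coal burned] (synthetic)
        y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.01 * rng.standard_normal(120)  # CO2 proxy

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

        grid = GridSearchCV(SVR(kernel="rbf"),
                            {"C": [0.01, 0.1, 1, 10], "epsilon": [0.0, 0.01, 0.1]},
                            scoring="neg_root_mean_squared_error", cv=9)
        grid.fit(X_tr, y_tr)
        rmse = np.sqrt(np.mean((grid.predict(X_te) - y_te) ** 2))
        print(grid.best_params_, rmse)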

  13. Modeling for the optimal biodegradation of toxic wastewater in a discontinuous reactor.

    PubMed

    Betancur, Manuel J; Moreno-Andrade, Iván; Moreno, Jaime A; Buitrón, Germán; Dochain, Denis

    2008-06-01

    The degradation of toxic compounds in Sequencing Batch Reactors (SBRs) poses inhibition problems. Time Optimal Control (TOC) methods may be used to avoid such inhibition, thus exploiting the maximum capabilities of this class of reactors. Biomass and substrate online measurements, however, are usually unavailable for wastewater applications, so TOC must use only related variables such as dissolved oxygen and volume. Although the standard mathematical model describing the reaction phase of SBRs is good enough to explain its general behavior in uncontrolled batch mode, more detail is needed to model its dynamics when the reactor operates near the maximum degradation rate zone, as when TOC is used. In this paper two improvements to the model are suggested: to include the sensor delay effects and to modify the classical Haldane curve in a piecewise manner. These modifications offer a good solution for a reasonable complexity tradeoff. Additionally, a new way to look at the Haldane K-parameters (μ0, KI, KS) is described, the S-parameters (μ*, S*, Sm). These parameters do have a clear physical meaning and, unlike the K-parameters, allow for the statistical treatment to find a single model to fit data from multiple experiments.
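
    Under the standard Haldane law mu(S) = mu0*S / (KS + S + S^2/KI), the growth rate peaks at S* = sqrt(KS*KI), which gives two of the S-parameters directly; the conversion is sketched below with hypothetical K-parameter values (Sm, the third S-parameter, is defined in the paper and not reproduced here).

        import numpy as np

        def mu_haldane(S, mu0, KI, KS):
            # Standard Haldane law with substrate inhibition
            return mu0 * S / (KS + S + S**2 / KI)

        def s_parameters(mu0, KI, KS):
            # The growth rate peaks at S* = sqrt(KS*KI); mu* is the peak value.
            S_star = np.sqrt(KS * KI)
            mu_star = mu_haldane(S_star, mu0, KI, KS)
            return mu_star, S_star

        mu0, KI, KS = 0.5, 80.0, 10.0      # hypothetical K-parameters
        print(s_parameters(mu0, KI, KS))   # physically meaningful peak rate and location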

  14. The effect of axial ion parameters on the properties of glow discharge polymer in T2B/H2 plasma

    NASA Astrophysics Data System (ADS)

    Ai, Xing; He, Xiao-Shan; Huang, Jing-Lin; He, Zhi-Bing; Du, Kai; Chen, Guo

    2018-03-01

    Glow discharge polymer (GDP) films were fabricated using plasma-enhanced chemical vapor deposition. The main purpose of this work was to explore the correlations of plasma parameters with the surface morphology and chemical structure of GDP films. The intensities of the main positive ions and the ion energy, as functions of axial distance in the T2B/H2 plasma, were diagnosed using energy-resolved mass spectrometry. The surface morphology and chemical structure were characterized as functions of axial distance using a scanning electron microscope and Fourier transform infrared spectroscopy, respectively. As the axial distance increases, the intensities of both the positive ions and the high-energy ions decrease, and dissociation weakens while polymerization is enhanced. This leads to a weakening of the cross-linking structure of GDP films and the formation of dome defects on the films. Additionally, high-energy ions could introduce a strong etching effect that forms etching pits. Therefore, an axial distance of about 20 mm was found to be the optimal plasma parameter for preparing defect-free GDP films. These results could help one find the optimal plasma parameters for GDP film deposition.

  15. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties.

    PubMed

    Tan, Chaolin; Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

    Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties in the field of SLM of metals due to tungsten's intrinsic properties. The key factors of SLM of high-density tungsten, including powder characteristics, layer thickness, and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% of theoretical density) were produced using SLM with the optimized processing parameters. A high-density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers and parallel to the maximum temperature gradient, which can ensure good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding offers new potential applications of refractory metals in additive manufacturing.

  16. Optimal Designs for the Rasch Model

    ERIC Educational Resources Information Center

    Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer

    2012-01-01

    In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…

  17. Optimization in modeling the ribs-bounded contour from computer tomography scan

    NASA Astrophysics Data System (ADS)

    Bilinskas, M. J.; Dzemyda, G.

    2016-10-01

    In this paper a method for analyzing transversal plane images from computer tomography scans is presented. A mathematical model that describes the ribs-bounded contour was created, and the approximation problem is solved by finding the optimal parameters of the model in the least-squares sense. Such a model would be useful for registering images independently of the patient's position on the bed and of the radio-contrast agent injection. We consider the slices where ribs are visible, because many important internal organs are located there: the liver, heart, stomach, pancreas, lungs, etc.

  18. Forecasting peaks of seasonal influenza epidemics.

    PubMed

    Nsoesie, Elaine; Mararthe, Madhav; Brownstein, John

    2013-06-21

    We present a framework for near real-time forecast of influenza epidemics using a simulation optimization approach. The method combines an individual-based model and a simple root finding optimization method for parameter estimation and forecasting. In this study, retrospective forecasts were generated for seasonal influenza epidemics using web-based estimates of influenza activity from Google Flu Trends for 2004-2005, 2007-2008 and 2012-2013 flu seasons. In some cases, the peak could be forecasted 5-6 weeks ahead. This study adds to existing resources for influenza forecasting and the proposed method can be used in conjunction with other approaches in an ensemble framework.
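
    The root-finding idea can be sketched with a toy compartmental model standing in for the individual-based model: a transmissibility parameter is solved for so that the simulated activity matches the latest web-based estimate, and the calibrated model is then run forward to locate the peak. All values below are illustrative.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        def sir(t, y, beta, gamma=1.0 / 3.0):
            # Toy SIR stand-in for the individual-based model (time in weeks)
            s, i, r = y
            return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

        def infected_at(beta, week):
            sol = solve_ivp(sir, (0.0, week), [0.999, 0.001, 0.0],
                            args=(beta,), t_eval=[week])
            return sol.y[1, -1]

        observed, week = 0.05, 4.0   # latest activity estimate (illustrative)
        # Root finding: choose beta so the model reproduces the current observation.
        beta = brentq(lambda b: infected_at(b, week) - observed, 0.4, 3.0)
        # Run the calibrated model forward to forecast the epidemic peak.
        sol = solve_ivp(sir, (0.0, 40.0), [0.999, 0.001, 0.0], args=(beta,),
                        t_eval=np.linspace(0.0, 40.0, 401))
        peak_week = sol.t[np.argmax(sol.y[1])]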

  19. Automatic Phase Picker for Local and Teleseismic Events Using Wavelet Transform and Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Gaillot, P.; Bardaine, T.; Lyon-Caen, H.

    2004-12-01

    In recent years, various automatic phase pickers based on the wavelet transform have been developed. The main motivation for using wavelet transforms is that they are excellent at finding the characteristics of transient signals, they have good time resolution at all periods, and they are easy to program for fast execution. The time-scale properties and flexibility of wavelets thus allow detection of P and S phases in a broad frequency range, making their utilization possible in various contexts. However, the direct application of an automatic picking program in a different context/network than the one for which it was initially developed quickly becomes tedious. In fact, independently of the strategy involved in automatic picking algorithms (window average, autoregressive, beamforming, optimization filtering, neural network), all developed algorithms use different parameters that depend on the objective of the seismological study, the region, and the seismological network. Classically, these parameters are defined manually by trial and error or calibrated in a learning stage. In order to facilitate this laborious process, we have developed an automated method that provides optimal parameters for the picking programs. The set of parameters can be explored using simulated annealing, a generic name for a family of optimization algorithms based on the principle of stochastic relaxation. The optimization process amounts to systematically modifying an initial realization so as to decrease the value of the objective function, bringing the realization acceptably close to the target statistics. Different formulations of the optimization problem (objective function) are discussed using (1) world seismicity data recorded by the French national seismic monitoring network (ReNass), (2) regional seismicity data recorded in the framework of the Corinth Rift Laboratory (CRL) experiment, (3) induced seismicity data from the gas field of Lacq (Western Pyrenees), and (4) micro-seismicity data from glacier monitoring. The developed method is discussed and tested using our wavelet version of the standard STA-LTA algorithm.
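
    A minimal simulated annealing loop for picker parameter tuning is sketched below; the pick_error() objective, a plain STA/LTA-style characteristic function scored against manual picks on synthetic traces, is a hypothetical stand-in for the wavelet-based picker and the objective functions discussed above.

        import numpy as np

        rng = np.random.default_rng(7)

        def pick_error(params, traces, manual_picks):
            # Stand-in objective: mean absolute difference between automatic and
            # manual picks, using a plain STA/LTA characteristic function.
            sta, lta, thr = int(params[0]), int(params[1]), params[2]
            errs = []
            for tr, t_ref in zip(traces, manual_picks):
                power = tr ** 2
                cf = (np.convolve(power, np.ones(sta) / sta, "same")
                      / (np.convolve(power, np.ones(lta) / lta, "same") + 1e-9))
                above = np.flatnonzero(cf > thr)
                t_auto = above[0] if len(above) else len(tr)
                errs.append(abs(t_auto - t_ref))
            return float(np.mean(errs))

        # Synthetic calibration set: noise followed by a stronger arrival at t0.
        traces, picks = [], []
        for _ in range(20):
            t0 = int(rng.integers(300, 700))
            tr = rng.standard_normal(1000)
            tr[t0:] += 3.0 * rng.standard_normal(1000 - t0)
            traces.append(tr)
            picks.append(t0)

        x = np.array([20.0, 200.0, 2.0])      # initial (STA, LTA, threshold)
        f_x = pick_error(x, traces, picks)
        best, f_best, T = x.copy(), f_x, 1.0
        for _ in range(400):
            cand = np.clip(x + rng.normal(0.0, [2.0, 20.0, 0.1]),
                           [5, 50, 1.1], [100, 500, 10])
            f_cand = pick_error(cand, traces, picks)
            if f_cand < f_x or rng.random() < np.exp(-(f_cand - f_x) / T):
                x, f_x = cand, f_cand         # Metropolis acceptance rule
                if f_x < f_best:
                    best, f_best = x.copy(), f_x
            T *= 0.99                         # geometric cooling schedule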

  20. Study on loading path optimization of internal high pressure forming process

    NASA Astrophysics Data System (ADS)

    Jiang, Shufeng; Zhu, Hengda; Gao, Fusheng

    2017-09-01

    In the internal high pressure forming process, there is no analytical formula relating the process parameters to the forming results. This article uses numerical simulation to obtain several sets of input parameters and the corresponding output results, uses a BP neural network to fit their mapping relationship, and combines the evaluation parameters by weighted summation into a single formula that evaluates forming quality. The trained BP neural network is then embedded in a particle swarm optimization, with the quality-evaluation formula serving as the fitness function, and the optimization is carried out over the admissible range of each parameter. The results show that the parameters obtained by the BP neural network algorithm and the particle swarm optimization algorithm meet practical requirements. The method can solve the optimization of process parameters in the internal high pressure forming process.
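
    The surrogate-plus-swarm pattern can be sketched as follows: a small neural network is fitted to simulated input/output samples and then used as the fitness function of a plain particle swarm optimization. scikit-learn's MLPRegressor stands in for the BP network, and the sample data and quality score are hypothetical.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(8)

        # Simulation samples: inputs = loading-path parameters (e.g. pressure rate,
        # axial feed), output = weighted quality score (all hypothetical).
        X = rng.random((60, 2))
        y = (X[:, 0] - 0.6) ** 2 + 2.0 * (X[:, 1] - 0.3) ** 2 + 0.01 * rng.standard_normal(60)

        surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                 random_state=0).fit(X, y)   # stands in for the BP network

        # Plain PSO minimizing the surrogate prediction.
        n, w, c1, c2 = 30, 0.7, 1.5, 1.5
        pos = rng.random((n, 2))
        vel = np.zeros((n, 2))
        pbest = pos.copy()
        pbest_f = surrogate.predict(pos)
        gbest = pbest[np.argmin(pbest_f)]
        for _ in range(100):
            r1, r2 = rng.random((2, n, 2))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, 1.0)
            f = surrogate.predict(pos)
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)]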

  1. Approximate approach for optimization space flights with a low thrust on the basis of sufficient optimality conditions

    NASA Astrophysics Data System (ADS)

    Salmin, Vadim V.

    2017-01-01

    Low-thrust flight mechanics is a new chapter of space flight mechanics, covering the full range of problems of trajectory optimization, motion control laws, and spacecraft design parameters. Tasks associated with taking additional factors into account in mathematical models of spacecraft motion become increasingly important, as do additional restrictions on the possibilities of thrust vector control. The complication of the mathematical models of controlled motion leads to difficulties in solving optimization problems. The author proposes methods for finding approximately optimal controls and evaluating their optimality based on analytical solutions. These methods are based on the principle of extending the class of admissible states and controls and on sufficient conditions for the absolute minimum. The developed estimation procedures make it possible to determine how close a found solution is to the optimum and indicate ways to improve it. The author describes estimation procedures for approximately optimal control laws in space flight mechanics problems, in particular for optimizing low-thrust flights between non-coplanar circular orbits, optimizing the control angle and trajectory of the spacecraft during interorbital flights, and optimizing low-thrust flights between arbitrary elliptical Earth-satellite orbits.

  2. Nuclear Electric Vehicle Optimization Toolset (NEVOT): Integrated System Design Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Qualls, A. L.; Bancroft, S.; Molvik, Greg

    2003-01-01

    The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major Nuclear Electric Propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a Genetic Algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be conceived of through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.

  3. A rapid and low energy consumption method to decolorize the high concentration triphenylmethane dye wastewater: operational parameters optimization for the ultrasonic-assisted ozone oxidation process.

    PubMed

    Zhou, Xian-Jiao; Guo, Wan-Qian; Yang, Shan-Shan; Ren, Nan-Qi

    2012-02-01

    This research set up an ultrasonic-assisted ozone oxidation process (UAOOP) to decolorize triphenylmethane dye wastewater. Five factors - temperature, initial pH, reaction time, ultrasonic power (low frequency, 20 kHz), and ozone concentration - were investigated. Response surface methodology was used to find the major factors influencing the color removal rate and the interactions between these factors, and to optimize the operating parameters. Under the optimized conditions (reaction temperature 39.81 °C, initial pH 5.29, ultrasonic power 60 W, and ozone concentration 0.17 g/L), the highest color removal was achieved with a 10 min reaction time at an initial MG concentration of 1000 mg/L. The optimal results indicated that the UAOOP is a rapid, efficient, and low-energy-consumption technique to decolorize high-concentration MG wastewater. The predicted model was approximately in accordance with the experimental cases, with correlation coefficients R2 and adjusted R2 of 0.9103 and 0.8386. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.

  4. Control of minimum member size in parameter-free structural shape optimization by a medial axis approximation

    NASA Astrophysics Data System (ADS)

    Schmitt, Oliver; Steinmann, Paul

    2018-06-01

    We introduce a manufacturing constraint for controlling the minimum member size in structural shape optimization problems, which is for example of interest for components fabricated in a molding process. In a parameter-free approach, whereby the coordinates of the FE boundary nodes are used as design variables, the challenging task is to find a generally valid definition for the thickness of non-parametric geometries in terms of their boundary nodes. Therefore we use the medial axis, which is the union of all points with at least two closest points on the boundary of the domain. Since the effort for the exact computation of the medial axis of geometries given by their FE discretization increases strongly with the number of surface elements, we use the distance function instead to approximate the medial axis by a cloud of points. The approximation is demonstrated on three 2D examples. Moreover, the formulation of a minimum thickness constraint is applied to a sensitivity-based shape optimization problem of one 2D and one 3D model.
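
    The point-cloud approximation of the medial axis via the distance function can be sketched on a pixelized domain: local maxima of the Euclidean distance transform approximate the medial axis, and twice the distance value there approximates the local member thickness. The geometry below is illustrative, not an FE discretization.

        import numpy as np
        from scipy.ndimage import distance_transform_edt, maximum_filter

        # Pixelized 2D domain (a plate with a hole); True = inside the shape.
        yy, xx = np.mgrid[0:200, 0:400]
        domain = (yy > 40) & (yy < 160) & ~((xx - 200) ** 2 + (yy - 100) ** 2 < 30 ** 2)

        dist = distance_transform_edt(domain)        # distance to the boundary
        # Medial-axis approximation: points where the distance field is a local
        # maximum; the local member thickness there is about 2 * dist.
        ridge = (dist == maximum_filter(dist, size=3)) & domain
        ridge &= dist > 2                            # drop near-boundary plateaus
        thickness = 2 * dist[ridge]
        print(thickness.min())   # candidate for a minimum-member-size constraint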

  5. Complex motion measurement using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Jianjun; Tu, Dan; Shen, Zhenkang

    1997-12-01

    Genetic algorithm (GA) is an optimization technique that provides an untraditional approach to many nonlinear, complicated problems. The notion of motion measurement using a genetic algorithm arises from the fact that motion measurement is virtually an optimization process based on some criteria. In this paper, we propose a complex motion measurement method using a genetic algorithm based on a block-matching criterion. The following three problems are mainly discussed and solved: (1) an adaptive method is applied to modify the control parameters of the GA, which are critical to its performance, together with an elitism strategy; (2) an evaluation function for motion measurement is derived for the GA based on the block-matching technique; (3) a hill-climbing (HC) method is employed in a hybrid scheme to assist the GA's search for the global optimal solution. Some other related problems are also discussed. Experimental results are given at the end of the paper. We employ six motion parameters for measurement in our experiments. The results show that the performance of our GA is good: it can find the object motion accurately and rapidly.

  6. Optimization of Micro Metal Injection Molding By Using Grey Relational Grade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, M. H. I.; Precision Process Research Group, Dept. of Mechanical and Materials Engineering, Faculty of Engineering, Universiti Kebangsaan Malaysia; Muhamad, N.

    2011-01-17

    Micro metal injection molding (µMIM), which is a variant of the MIM process, is a promising method for producing near-net-shape metallic micro components of complex geometry. In this paper, µMIM is applied to produce 316L stainless steel micro components. Due to the highly stringent characteristics of µMIM properties, the study emphasizes optimization of the process parameters, where the Taguchi method associated with Grey Relational Analysis (GRA) is implemented, as it represents a novel approach to the investigation of multiple performance characteristics. The basic idea of GRA is to find a grey relational grade (GRG) that can be used to convert the multi-objective optimization case, here density and strength, into a single-objective case. Considering the form 'the larger the better', the results show that the injection time (D) is the most significant parameter, followed by injection pressure (A), holding time (E), mold temperature (C), and injection temperature (B). Analysis of variance (ANOVA) is also employed to strengthen the significance of each parameter involved in this study.

  7. Two-Swim Operators in the Modified Bacterial Foraging Algorithm for the Optimal Synthesis of Four-Bar Mechanisms

    PubMed Central

    Hernández-Ocaña, Betania; Pozos-Parra, Ma. Del Pilar; Mezura-Montes, Efrén; Portilla-Flores, Edgar Alfredo; Vega-Alvarado, Eduardo; Calva-Yáñez, Maria Bárbara

    2016-01-01

    This paper presents two swim operators to be added to the chemotaxis process of the modified bacterial foraging optimization algorithm to solve three instances of the synthesis of four-bar planar mechanisms. One swim favors exploration while the second one promotes fine movements in the neighborhood of each bacterium. The combined effect of the new operators aims to increase the production of better solutions during the search; as a consequence, the ability of the algorithm to escape from local optimum solutions is enhanced. The algorithm is tested through four experiments and its results are compared against two BFOA-based algorithms and also against a differential evolution algorithm designed for mechanical design problems. The overall results indicate that the proposed algorithm outperforms the other BFOA-based approaches and finds highly competitive mechanisms with a single set of parameter values and fewer evaluations in the first synthesis problem, compared with the mechanisms obtained by the differential evolution algorithm, which required a parameter fine-tuning process for each optimization problem. PMID:27057156

  8. Two-Swim Operators in the Modified Bacterial Foraging Algorithm for the Optimal Synthesis of Four-Bar Mechanisms.

    PubMed

    Hernández-Ocaña, Betania; Pozos-Parra, Ma Del Pilar; Mezura-Montes, Efrén; Portilla-Flores, Edgar Alfredo; Vega-Alvarado, Eduardo; Calva-Yáñez, Maria Bárbara

    2016-01-01

    This paper presents two swim operators to be added to the chemotaxis process of the modified bacterial foraging optimization algorithm to solve three instances of the synthesis of four-bar planar mechanisms. One swim favors exploration while the second one promotes fine movements in the neighborhood of each bacterium. The combined effect of the new operators aims to increase the production of better solutions during the search; as a consequence, the ability of the algorithm to escape from local optimum solutions is enhanced. The algorithm is tested through four experiments and its results are compared against two BFOA-based algorithms and also against a differential evolution algorithm designed for mechanical design problems. The overall results indicate that the proposed algorithm outperforms the other BFOA-based approaches and finds highly competitive mechanisms with a single set of parameter values and fewer evaluations in the first synthesis problem, compared with the mechanisms obtained by the differential evolution algorithm, which required a parameter fine-tuning process for each optimization problem.

  9. Control of minimum member size in parameter-free structural shape optimization by a medial axis approximation

    NASA Astrophysics Data System (ADS)

    Schmitt, Oliver; Steinmann, Paul

    2017-09-01

    We introduce a manufacturing constraint for controlling the minimum member size in structural shape optimization problems, which is for example of interest for components fabricated in a molding process. In a parameter-free approach, whereby the coordinates of the FE boundary nodes are used as design variables, the challenging task is to find a generally valid definition for the thickness of non-parametric geometries in terms of their boundary nodes. Therefore we use the medial axis, which is the union of all points with at least two closest points on the boundary of the domain. Since the effort for the exact computation of the medial axis of geometries given by their FE discretization increases strongly with the number of surface elements, we use the distance function instead to approximate the medial axis by a cloud of points. The approximation is demonstrated on three 2D examples. Moreover, the formulation of a minimum thickness constraint is applied to a sensitivity-based shape optimization problem of one 2D and one 3D model.

  10. Central composite design optimization of pilot plant fluidized-bed heterogeneous Fenton process for degradation of an azo dye.

    PubMed

    Aghdasinia, Hassan; Bagheri, Rasoul; Vahid, Behrouz; Khataee, Alireza

    2016-11-01

    Optimization of Acid Yellow 36 (AY36) degradation by heterogeneous Fenton process in a recirculated fluidized-bed reactor was studied using central composite design (CCD). Natural pyrite was applied as the catalyst characterized by X-ray diffraction and scanning electron microscopy. The CCD model was developed for the estimation of degradation efficiency as a function of independent operational parameters including hydrogen peroxide concentration (0.5-2.5 mmol/L), initial AY36 concentration (5-25 mg/L), pH (3-9) and catalyst dosage (0.4-1.2 mg/L). The obtained data from the model are in good agreement with the experimental data (R2 = 0.964). Moreover, this model is applicable not only to determine the optimized experimental conditions for maximum AY36 degradation, but also to find individual and interactive effects of the mentioned parameters. Finally, gas chromatography-mass spectroscopy (GC-MS) was utilized for the identification of some degradation intermediates and a plausible degradation pathway was proposed.

  11. Techno-economical optimization of Reactive Blue 19 removal by combined electrocoagulation/coagulation process through MOPSO using RSM and ANFIS models.

    PubMed

    Taheri, M; Alavi Moghaddam, M R; Arami, M

    2013-10-15

    In this research, Response Surface Methodology (RSM) and Adaptive Neuro Fuzzy Inference System (ANFIS) models were applied to the optimization of Reactive Blue 19 removal using a combined electrocoagulation/coagulation process through Multi-Objective Particle Swarm Optimization (MOPSO). By applying RSM, the effects of five independent parameters, including applied current, reaction time, initial dye concentration, initial pH, and dosage of Poly Aluminum Chloride, were studied. According to the RSM results, all the independent parameters are equally important to dye removal efficiency. In addition, ANFIS was applied to model dye removal efficiency and operating costs. High R2 values (≥85%) indicate that the predictions of the RSM and ANFIS models are acceptable for both responses. ANFIS was also used in MOPSO to find the best techno-economical Reactive Blue 19 elimination conditions according to the RSM design. Through MOPSO and the selected ANFIS model, minimum and maximum dye removal efficiencies of 58.27% and 99.67% were obtained, respectively. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Optimization of the parameters for obtaining zirconia-alumina coatings, made by flame spraying from results of numerical simulation

    NASA Astrophysics Data System (ADS)

    Ferrer, M.; Vargas, F.; Peña, G.

    2017-12-01

    The K-Sommerfeld values (K) and the melting percentage (%F) obtained by numerical simulation with the Jets et Poudres software were used to find the spraying parameters for zirconia-alumina coatings deposited by flame thermal spraying, in order to obtain coatings with morphological and structural properties good enough for use as thermal insulation. The experimental results show the relationship between the Sommerfeld parameter and the porosity of the zirconia-alumina coatings. The lowest porosity is obtained when the K-Sommerfeld value is close to 45 with an oxidant flame; in contrast, when superoxidant flames are used, K values are close to 52, which improves wear resistance.

  13. Chaos minimization in DC-DC boost converter using circuit parameter optimization

    NASA Astrophysics Data System (ADS)

    Sudhakar, N.; Natarajan, Rajasekar; Gourav, Kumar; Padmavathi, P.

    2017-11-01

    DC-DC converters are prone to several types of nonlinear phenomena, including bifurcation, quasi-periodicity, intermittency, and chaos. These undesirable effects must be controlled to ensure stable periodic operation of the converter. In this paper, an effective solution for controlling chaos in a solar-fed DC-DC boost converter is proposed. Chaos control is achieved using optimal circuit parameters obtained through the Bacterial Foraging Optimization Algorithm (BFA). The optimization yields suitable parameters in minimal computational time. The obtained results are compared with the operation of a traditional boost converter, and the BFA-optimized parameters ensure that the converter operates within the controllable region. To elaborate the study, bifurcation analyses with optimized and unoptimized parameters are also presented.

  14. IMNN: Information Maximizing Neural Networks

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets vastly simplifies both frequentist and Bayesian inference, but important information may be inadvertently missed; likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of a Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  15. Investigation of statistical iterative reconstruction for dedicated breast CT

    PubMed Central

    Makeev, Andrey; Glick, Stephen J.

    2013-01-01

    Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: hyperbolic potential and anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectrum produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with 280 μm microcalcification spheres embedded in it was used to model the attenuation properties of the woman's uncompressed breast in a pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge preservation threshold in the penalized objective function with the hyperbolic potential, which resulted in low-noise images with high-contrast microcalcifications preserved. In terms of the numerical observer detectability index, the PML method with optimal parameters yielded substantially improved performance (by a factor of greater than 10) compared to FBP. The hyperbolic prior was also observed to be superior to the TV norm. A few of the best-performing parameter pairs for the PML method also demonstrated superior performance for various radiation doses. In fact, using PML with certain parameter values results in better images, acquired using a 2 mGy dose, than FBP-reconstructed images acquired using a 6 mGy dose. Conclusions: A range of optimal free parameters for the PML algorithm with hyperbolic and TV norm-based potentials is presented for the microcalcification detection task in dedicated breast CT. The reported values can be used as starting values of the free parameters when SIR techniques are used for image reconstruction. Significant improvement in image quality can be achieved by using PML with an optimal combination of parameters, as compared to FBP. Importantly, these results suggest improved detection of microcalcifications can be obtained by using PML with a lower radiation dose to the patient than using FBP with a higher dose. PMID:23927318

  16. Proceedings of the Workshop on Identification and Control of Flexible Space Structures, Volume 3

    NASA Technical Reports Server (NTRS)

    Rodriguez, G. (Editor)

    1985-01-01

    The results of a workshop on identification and control of flexible space structures are reported. This volume deals mainly with control theory and methodologies as they apply to space stations and large antennas. Integration and dynamics and control experimental findings are reported. Among the areas of control theory discussed were feedback, optimization, and parameter identification.

  17. Constraints on rapidity-dependent initial conditions from charged-particle pseudorapidity densities and two-particle correlations

    NASA Astrophysics Data System (ADS)

    Ke, Weiyao; Moreland, J. Scott; Bernhard, Jonah E.; Bass, Steffen A.

    2017-10-01

    We study the initial three-dimensional spatial configuration of the quark-gluon plasma (QGP) produced in relativistic heavy-ion collisions using centrality and pseudorapidity-dependent measurements of the medium's charged particle density and two-particle correlations. A cumulant-generating function is first used to parametrize the rapidity dependence of local entropy deposition and extend arbitrary boost-invariant initial conditions to nonzero beam rapidities. The model is then compared to p +Pb and Pb + Pb charged-particle pseudorapidity densities and two-particle pseudorapidity correlations and systematically optimized using Bayesian parameter estimation to extract high-probability initial condition parameters. The optimized initial conditions are then compared to a number of experimental observables including the pseudorapidity-dependent anisotropic flows, event-plane decorrelations, and flow correlations. We find that the form of the initial local longitudinal entropy profile is well constrained by these experimental measurements.

  18. Self-Adaptive Stepsize Search Applied to Optimal Structural Design

    NASA Astrophysics Data System (ADS)

    Nolle, L.; Bland, J. A.

    Structural engineering often involves the design of space frames that are required to resist predefined external forces without exhibiting plastic deformation. The weight of the structure, and hence the weight of its constituent members, has to be as low as possible for economic reasons without violating any of the load constraints. Design spaces are usually vast and the computational cost of analyzing a single design is usually high, so not every possible design can be evaluated for real-world problems. In this work, a standard structural design problem, the 25-bar problem, has been solved using self-adaptive stepsize search (SASS), a relatively new search heuristic. This algorithm has only one control parameter and therefore overcomes a drawback of modern search heuristics, namely the need to first find a set of optimum control parameter settings for the problem at hand. On this problem, SASS outperforms simulated annealing, genetic algorithms, tabu search, and ant colony optimization.
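
    A minimal sketch of the idea behind SASS, assuming a simple grow/shrink adaptation rule (the published algorithm's exact rule may differ); the stepsize is the algorithm's only piece of adaptive state:

    ```python
    import numpy as np

    def sass_minimize(f, x0, iters=5000, grow=2.0, shrink=0.5, s0=1.0):
        """Self-adaptive stepsize search sketch: widen the search after a
        successful move, narrow it after a failure."""
        x = np.asarray(x0, dtype=float)
        fx, s = f(x), s0
        for _ in range(iters):
            candidate = x + s * np.random.randn(x.size)  # neighbor at scale s
            fc = f(candidate)
            if fc < fx:
                x, fx, s = candidate, fc, s * grow       # success: widen
            else:
                s *= shrink                              # failure: narrow
            s = min(max(s, 1e-12), 1e6)                  # keep s bounded
        return x, fx

    # Example: minimize a sphere function in 25 dimensions.
    best_x, best_f = sass_minimize(lambda v: float(np.sum(v**2)), np.ones(25))
    ```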

  19. Analysis of a Segmented Annular Coplanar Capacitive Tilt Sensor with Increased Sensitivity.

    PubMed

    Guo, Jiahao; Hu, Pengcheng; Tan, Jiubin

    2016-01-21

    An investigation of a segmented annular coplanar capacitor is presented. We focus on its theoretical model, and a mathematical expression for the capacitance value is derived by solving the Laplace equation with a Hankel transform. The finite element method is employed to verify the analytical result. Different control parameters are discussed, and the contribution of each to the capacitance value is obtained. On this basis, we analyze and optimize the structure parameters of a segmented coplanar capacitive tilt sensor, and three models with different positions of the electrode gap are fabricated and tested. The experimental results show that the model whose electrode-gap position is 10 mm from the electrode center realizes a high sensitivity of 0.129 pF/° with a non-linearity of <0.4% FS (full scale of ±40°). This finding accommodates a variety of measurement requirements and yields an optimized structure for practical designs.

  20. Correlation pattern recognition: optimal parameters for quality standards control of chocolate marshmallow candy

    NASA Astrophysics Data System (ADS)

    Flores, Jorge L.; García-Torales, G.; Ponce Ávila, Cristina

    2006-08-01

    This paper describes an in situ image recognition system designed to inspect the quality standards of chocolate pops during their production. The essence of the recognition system is the localization of events (i.e., defects) in the input images that affect the quality standards of the pops. To this end, processing modules based on correlation filtering and image segmentation are employed to measure the quality standards. Accordingly, we designed the correlation filter and defined a set of features from the correlation plane. The desired values of these parameters are obtained by exploiting information about objects to be rejected, in order to achieve the optimal discrimination capability of the system. Based on this set of features, a pop can be correctly classified. The efficacy of the system has been tested thoroughly under laboratory conditions using at least 50 images containing 3 different types of possible defects.
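
    A rough illustration of the correlation-plane idea (a plain FFT matched filter plus one peak feature; the paper's filter design and feature set are more elaborate):

    ```python
    import numpy as np

    def correlation_plane(scene, template):
        """Cross-correlate a grayscale scene with a reference template in
        the frequency domain; peaks mark likely matches."""
        t = np.zeros_like(scene, dtype=float)
        t[:template.shape[0], :template.shape[1]] = template
        return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(t))))

    def peak_to_sidelobe(plane, guard=5):
        """One candidate feature: correlation peak height relative to the
        statistics of the surrounding sidelobe region."""
        i, j = np.unravel_index(np.argmax(plane), plane.shape)
        mask = np.ones(plane.shape, dtype=bool)
        mask[max(0, i - guard):i + guard + 1, max(0, j - guard):j + guard + 1] = False
        sidelobe = plane[mask]
        return (plane[i, j] - sidelobe.mean()) / sidelobe.std()
    ```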

  1. Development of an Algorithm for Automatic Analysis of the Impedance Spectrum Based on a Measurement Model

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kiyoshi; Suzuki, Tohru S.

    2018-03-01

    A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept and the complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampled data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial impedance spectra presented by a resistor, an inductor, a resistor connected in parallel with a capacitor, and a resistor connected in parallel with an inductor. The adequacy of the model is determined by a simple artificial-intelligence function applied to the output of the Levenberg-Marquardt module. Through iterated model modifications, the program finds an adequate equivalent-circuit model without any user input to the equivalent-circuit model.
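
    A minimal sketch of that hypothesis (illustrative parameter bookkeeping, not the paper's implementation): the model impedance is the series sum of the four partial-impedance types, evaluated at angular frequency ω:

    ```python
    import numpy as np

    def model_impedance(omega, R0, L0, rc_pairs, rl_pairs):
        """Z(w) = R0 + jwL0 plus parallel R-C and parallel R-L elements.
        rc_pairs and rl_pairs are lists of (R, C) and (R, L) tuples."""
        Z = R0 + 1j * omega * L0
        for R, C in rc_pairs:   # resistor in parallel with a capacitor
            Z = Z + R / (1.0 + 1j * omega * R * C)
        for R, L in rl_pairs:   # resistor in parallel with an inductor
            Z = Z + 1j * omega * R * L / (R + 1j * omega * L)
        return Z
    ```

    A complex least-squares fit then adjusts these element values until the model reproduces the measured spectrum.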

  2. Effects of Sampling and Spatio/Temporal Granularity in Traffic Monitoring on Anomaly Detectability

    NASA Astrophysics Data System (ADS)

    Ishibashi, Keisuke; Kawahara, Ryoichi; Mori, Tatsuya; Kondoh, Tsuyoshi; Asano, Shoichiro

    We quantitatively evaluate how sampling and spatio/temporal granularity in traffic monitoring affect the detectability of anomalous traffic. These parameters also affect the monitoring burden, so network operators face a trade-off between the monitoring burden and detectability and need to know the optimal parameter values. We derive equations to calculate the false positive ratio and false negative ratio for given values of the sampling rate, granularity, statistics of normal traffic, and volume of anomalies to be detected. Specifically, assuming that the normal traffic has a Gaussian distribution, which is parameterized by its mean and standard deviation, we analyze how sampling and monitoring granularity change these distribution parameters. This analysis is based on observation of backbone traffic, which is spatially uncorrelated and exhibits temporal long-range dependence. We then derive the equations for detectability. With those equations, we can answer practical questions that arise in actual network operations: what sampling rate to set in order to detect a given volume of anomaly, or, if that rate is too high for actual operation, what granularity is optimal for a given lower limit on the sampling rate.
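
    A simplified sketch of such a detectability calculation (my illustration, not the authors' derivation: binomial packet sampling of Gaussian-distributed traffic and a fixed alarm threshold):

    ```python
    from math import sqrt
    from statistics import NormalDist

    def detection_ratios(m, s, anomaly, p, threshold):
        """Normal traffic per interval ~ N(m, s^2); sampling at rate p scales
        the mean to p*m and adds binomial thinning variance p*(1-p)*m on top
        of the scaled variance (p*s)^2. An alarm fires when the sampled
        volume exceeds `threshold`."""
        mu0, var0 = p * m, (p * s) ** 2 + p * (1 - p) * m
        mu1 = p * (m + anomaly)
        var1 = (p * s) ** 2 + p * (1 - p) * (m + anomaly)
        fpr = 1.0 - NormalDist(mu0, sqrt(var0)).cdf(threshold)  # false alarms
        fnr = NormalDist(mu1, sqrt(var1)).cdf(threshold)        # missed anomalies
        return fpr, fnr
    ```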

  3. Optimal Bayesian Adaptive Design for Test-Item Calibration.

    PubMed

    van der Linden, Wim J; Ren, Hao

    2015-06-01

    An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.

  4. Application of dragonfly algorithm for optimal performance analysis of process parameters in turn-mill operations- A case study

    NASA Astrophysics Data System (ADS)

    Vikram, K. Arun; Ratnam, Ch; Lakshmi, VVK; Kumar, A. Sunny; Ramakanth, RT

    2018-02-01

    Meta-heuristic multi-response optimization methods are widely used to obtain Pareto-optimal solutions to multi-objective problems. This work focuses on the optimal multi-response evaluation of process parameters for responses such as surface roughness (Ra), surface hardness (H), and tool vibration displacement amplitude (Vib) while performing tangential and orthogonal turn-mill operations on an A-axis Computer Numerical Control vertical milling center. Tool speed, feed rate, and depth of cut are considered as process parameters; brass specimens are machined under dry conditions with high-speed steel end-milling cutters using a Taguchi design of experiments (DOE). The Dragonfly algorithm is used to optimize the objectives 'Ra', 'H', and 'Vib' and to identify the optimal multi-response process parameter combination. The results obtained from the multi-objective dragonfly algorithm (MODA) are then compared with another multi-response optimization technique, viz. grey relational analysis (GRA).

  5. Distributed Generation Planning using Peer Enhanced Multi-objective Teaching-Learning based Optimization in Distribution Networks

    NASA Astrophysics Data System (ADS)

    Selvam, Kayalvizhi; Vinod Kumar, D. M.; Siripuram, Ramakanth

    2017-04-01

    In this paper, an optimization technique called the peer enhanced teaching-learning based optimization (PeTLBO) algorithm is used in a multi-objective problem domain. The PeTLBO algorithm is parameter-free, which reduces the computational burden. The proposed peer enhanced multi-objective TLBO (PeMOTLBO) algorithm has been utilized to find a set of non-dominated optimal solutions [distributed generation (DG) location and sizing in a distribution network]. The objectives considered are real power loss and voltage deviation, subject to voltage limits and a maximum penetration level of DG in the distribution network. Since the DG considered is capable of injecting real and reactive power into the distribution network, the power factor is taken as 0.85 leading. The proposed peer enhanced multi-objective optimization technique provides different trade-off solutions; to find the best compromise solution, a fuzzy set theory approach has been used. The effectiveness of the proposed PeMOTLBO is tested on the IEEE 33-bus and Indian 85-bus distribution systems. The performance is validated with Pareto fronts and two performance metrics (C-metric and S-metric) by comparison with the robust multi-objective technique non-dominated sorting genetic algorithm-II and with the basic TLBO.

  6. Application of genetic algorithm in modeling on-wafer inductors for up to 110 GHz

    NASA Astrophysics Data System (ADS)

    Liu, Nianhong; Fu, Jun; Liu, Hui; Cui, Wenpu; Liu, Zhihong; Liu, Linlin; Zhou, Wei; Wang, Quan; Guo, Ao

    2018-05-01

    In this work, the genetic algorithm has been introduced into parameter extraction for on-wafer inductors operating at up to 110 GHz, and nine independent parameters of the equivalent circuit model are optimized together. With the genetic algorithm, the model with the optimized parameters gives better fitting accuracy than with the preliminary parameters without optimization. In particular, the fitting accuracy of the Q value improves significantly after the optimization.

  7. Weak-value amplification and optimal parameter estimation in the presence of correlated noise

    NASA Astrophysics Data System (ADS)

    Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.

    2017-11-01

    We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.

  8. Optimization of Bioactive Ingredient Extraction from Chinese Herbal Medicine Glycyrrhiza glabra: A Comparative Study of Three Optimization Models

    PubMed Central

    Li, Xiaohong; Zhang, Yuyan

    2018-01-01

    The ultraviolet spectrophotometric method is often used for determining the content of glycyrrhizic acid from the Chinese herbal medicine Glycyrrhiza glabra. Based on the traditional single-variable approach, four extraction parameters, ammonia concentration, ethanol concentration, circumfluence time, and liquid-solid ratio, are adopted as the independent extraction variables. In the present work, a central composite design of four factors and five levels is applied to design the extraction experiments. Subsequently, prediction models based on response surface methodology, artificial neural networks, and genetic algorithm-artificial neural networks are developed to analyze the obtained experimental data, while the genetic algorithm is utilized to find the optimal extraction parameters for the well-established models. The optimal extraction conditions are found to be: ammonia concentration 0.595%, ethanol concentration 58.45%, circumfluence time 2.5 h, and liquid-solid ratio 11.065:1. Under these conditions, the model prediction is 381.24 mg, the experimental average is 376.46 mg, and the discrepancy is 4.78 mg. For the first time, a comparative study of these three approaches is conducted for the evaluation and optimization of the effects of the independent extraction variables. Furthermore, it is demonstrated that the combination of a genetic algorithm and artificial neural networks provides a more reliable and more accurate strategy for the design and optimization of glycyrrhizic acid extraction from Glycyrrhiza glabra. PMID:29887907

  9. Optimization of Bioactive Ingredient Extraction from Chinese Herbal Medicine Glycyrrhiza glabra: A Comparative Study of Three Optimization Models.

    PubMed

    Yu, Li; Jin, Weifeng; Li, Xiaohong; Zhang, Yuyan

    2018-01-01

    The ultraviolet spectrophotometric method is often used for determining the content of glycyrrhizic acid from the Chinese herbal medicine Glycyrrhiza glabra. Based on the traditional single-variable approach, four extraction parameters, ammonia concentration, ethanol concentration, circumfluence time, and liquid-solid ratio, are adopted as the independent extraction variables. In the present work, a central composite design of four factors and five levels is applied to design the extraction experiments. Subsequently, prediction models based on response surface methodology, artificial neural networks, and genetic algorithm-artificial neural networks are developed to analyze the obtained experimental data, while the genetic algorithm is utilized to find the optimal extraction parameters for the well-established models. The optimal extraction conditions are found to be: ammonia concentration 0.595%, ethanol concentration 58.45%, circumfluence time 2.5 h, and liquid-solid ratio 11.065:1. Under these conditions, the model prediction is 381.24 mg, the experimental average is 376.46 mg, and the discrepancy is 4.78 mg. For the first time, a comparative study of these three approaches is conducted for the evaluation and optimization of the effects of the independent extraction variables. Furthermore, it is demonstrated that the combination of a genetic algorithm and artificial neural networks provides a more reliable and more accurate strategy for the design and optimization of glycyrrhizic acid extraction from Glycyrrhiza glabra.

  10. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of the winding products, a mechanical performance measure (tensile strength) and a physical property (void content), were calculated. The paper then presents an integrated methodology that combines multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for the manufacturing of winding products.
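
    A minimal sketch of the local single-parameter step (the paper's exact sensitivity index may differ): estimate a dimensionless sensitivity of a response such as tensile strength to one parameter by a central finite difference:

    ```python
    import numpy as np

    def relative_sensitivity(model, x, i, rel_step=0.01):
        """Elasticity-style local sensitivity (dY/Y)/(dX/X) of a scalar
        response model(x) to parameter x[i]; assumes x[i] and model(x)
        are nonzero."""
        x = np.asarray(x, dtype=float)
        h = rel_step * x[i]
        up, dn = x.copy(), x.copy()
        up[i] += h
        dn[i] -= h
        dy_dx = (model(up) - model(dn)) / (2.0 * h)
        return dy_dx * x[i] / model(x)
    ```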

  11. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of the winding products, a mechanical performance measure (tensile strength) and a physical property (void content), were calculated. The paper then presents an integrated methodology that combines multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for the manufacturing of winding products. PMID:29385048

  12. Optimization of microphysics in the Unified Model, using the Micro-genetic algorithm.

    NASA Astrophysics Data System (ADS)

    Jang, J.; Lee, Y.; Lee, H.; Lee, J.; Joo, S.

    2016-12-01

    This study focuses on parameter optimization of the microphysics in the Unified Model (UM) using the Micro-genetic algorithm (Micro-GA). Optimization of the UM microphysics is needed because microphysics in a Numerical Weather Prediction (NWP) model is important to Quantitative Precipitation Forecasting (QPF). The Micro-GA searches for optimal parameters on the basis of a fitness function. Five target parameters are chosen: x1 and x2, related to the raindrop size distribution; the cloud-rain correlation coefficient; the surface droplet number; and the droplet taper height. The fitness function is based on skill scores, namely BIAS and the Critical Success Index (CSI). An interface between the UM and the Micro-GA is developed and applied to three precipitation cases in Korea: (i) heavy rainfall in the southern area caused by typhoon NAKRI, (ii) heavy rainfall in the Youngdong area, and (iii) heavy rainfall in the Seoul metropolitan area. When the optimized result is compared to the control result (using the UM default values, CNTL), the optimized result leads to improvements in the precipitation forecast, especially for heavy rainfall at late forecast times. We also analyze the skill scores of the precipitation forecasts at various thresholds for CNTL, the optimized result, and experiments varying each of the five optimized parameters individually. Generally, the improvement is maximized when the five optimized parameters are used simultaneously. Therefore, this study demonstrates the ability to improve Korean precipitation forecasts by optimizing the microphysics in the UM.
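
    For reference, the two skill scores driving the Micro-GA fitness can be computed from a 2x2 contingency table; the sketch below (hypothetical names, threshold in the same units as the rain fields) is one straightforward way to do it:

    ```python
    import numpy as np

    def precipitation_skill(forecast, observed, threshold):
        """BIAS = (hits + false alarms) / (hits + misses);
        CSI  = hits / (hits + misses + false alarms)."""
        f = np.asarray(forecast) >= threshold
        o = np.asarray(observed) >= threshold
        hits = int(np.sum(f & o))
        misses = int(np.sum(~f & o))
        false_alarms = int(np.sum(f & ~o))
        bias = (hits + false_alarms) / max(hits + misses, 1)
        csi = hits / max(hits + misses + false_alarms, 1)
        return bias, csi
    ```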

  13. A Novel Optimization Technique to Improve Gas Recognition by Electronic Noses Based on the Enhanced Krill Herd Algorithm

    PubMed Central

    Wang, Li; Jia, Pengfei; Huang, Tailai; Duan, Shukai; Yan, Jia; Wang, Lidan

    2016-01-01

    An electronic nose (E-nose) is an intelligent system that we use in this paper to distinguish three indoor pollutant gases (benzene (C6H6), toluene (C7H8), formaldehyde (CH2O)) and carbon monoxide (CO). The algorithm is a key part of an E-nose system, which is mainly composed of data processing and pattern recognition. In this paper, we employ a support vector machine (SVM) to distinguish the indoor pollutant gases; two of its parameters need to be optimized, so in order to improve the performance of the SVM, in other words, to achieve a higher gas recognition rate, an effective enhanced krill herd algorithm (EKH) based on a novel decision-weighting-factor computing method is proposed to optimize the two SVM parameters. Krill herd (KH) is an effective method in practice; however, on occasion it cannot escape the influence of local best solutions, so it cannot always find the global optimum. In addition, its search ability relies fully on randomness, so it cannot always converge rapidly. To address these issues we propose an enhanced KH (EKH) to improve the global search and convergence speed of KH. To obtain a more accurate model of krill behavior, an updated crossover operator is added to the approach, which keeps the krill group diverse in the early iterations and provides good local search ability in the later iterations. The recognition results of EKH are compared with those of other optimization algorithms (including KH, chaotic KH (CKH), quantum-behaved particle swarm optimization (QPSO), particle swarm optimization (PSO), and the genetic algorithm (GA)), and EKH is found to be better than the other considered methods. The results verify that EKH not only significantly improves the performance of our E-nose system, but also provides a good starting point and theoretical basis for further study of improved krill algorithms in other E-nose application areas. PMID:27529247

  14. [Temporal and spatial heterogeneity analysis of optimal value of sensitive parameters in ecological process model: The BIOME-BGC model as an example].

    PubMed

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable parameter values are adopted has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameter values has received less attention. In this paper, the BIOME-BGC model was used as an example. For evergreen broad-leaved forest, deciduous broad-leaved forest, and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. The objective function was constructed using the simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index, and a temporal-spatial heterogeneity judgment index to quantitatively analyze the temporal and spatial heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of the BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented temporal and spatial heterogeneity to degrees that varied with vegetation type. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity. In addition, under the three vegetation types the temporal heterogeneity of the optimal values showed a significant linear correlation with the spatial heterogeneity. According to the temporal and spatial heterogeneity of the optimal values, the parameters of the BIOME-BGC model can be classified in order to adopt different parameter strategies in practical applications. These conclusions could help in understanding the parameters and optimal values of ecological process models, and provide a reference for obtaining reasonable parameter values in model applications.

  15. New methods versus the smart application of existing tools in the design of water distribution network

    NASA Astrophysics Data System (ADS)

    Cisty, Milan; Bajtek, Zbynek; Celar, Lubomir; Soldanova, Veronika

    2017-04-01

    Finding effective ways to build irrigation systems that meet irrigation demands and also achieve positive environmental and economic outcomes requires, among other activities, the development of new modelling tools. Due to the high costs associated with the necessary material and the installation of an irrigation water distribution system (WDS), it is essential to optimize the design of the WDS while the hydraulic requirements of the network (e.g., the required pressure at irrigation machines) are satisfied. In this work, a multi-step approach to the optimal design of large irrigation water distribution networks is proposed, in which the optimization is accomplished in two phases. In the first phase, suboptimal solutions are searched for; in the second phase, the optimization problem is solved on a reduced search space based on these solutions, which significantly supports finding an optimal solution. The first phase consists of several runs of NSGA-II, varying its parameters for every run, i.e., changing the population size, the number of generations, and the crossover and mutation parameters. This is done with the aim of obtaining different sub-optimal solutions that have a relatively low cost. These sub-optimal solutions are subsequently used in the second phase of the proposed methodology, in which the final optimization run is built on the sub-optimal solutions from the previous phase. The purpose of the second phase is to improve the results of the first phase by searching through the reduced search space. The reduction is based on the minimum and maximum diameters of each pipe across all the networks from the first stage; in this phase, NSGA-II does not consider diameters outside this range (see the sketch below). After the second-phase NSGA-II computations, the best result published so far for the Balerma benchmark network, which was used for methodology testing, was achieved in the presented work. The knowledge gained from these computational experiments lies not in offering a new advanced heuristic or hybrid optimization method for water distribution networks, but in the fact that it is possible to obtain very good results with simple, known methods if they are used methodologically well. ACKNOWLEDGEMENT This work was supported by the Slovak Research and Development Agency under Contract No. APVV-15-0489 and by the Scientific Grant Agency of the Ministry of Education of the Slovak Republic and the Slovak Academy of Sciences, Grant No. 1/0665/15.
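
    The search-space reduction itself is simple; a sketch under the stated assumptions (each phase-1 network is a vector of pipe diameters) is:

    ```python
    def reduced_search_space(solutions):
        """Phase-2 bounds: for every pipe, keep only diameters between the
        minimum and maximum seen across the sub-optimal phase-1 networks.
        `solutions` is a list of equal-length diameter vectors."""
        n_pipes = len(solutions[0])
        lower = [min(sol[i] for sol in solutions) for i in range(n_pipes)]
        upper = [max(sol[i] for sol in solutions) for i in range(n_pipes)]
        return list(zip(lower, upper))  # per-pipe (min, max) diameter bounds
    ```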

  16. Optimization of CO2 laser cutting parameters on Austenitic type Stainless steel sheet

    NASA Astrophysics Data System (ADS)

    Parthiban, A.; Sathish, S.; Chandrasekaran, M.; Ravikumar, R.

    2017-03-01

    Thin AISI 316L stainless steel sheet is widely used in sheet metal processing industries for specific applications. CO2 laser cutting is one of the most popular sheet metal cutting processes for cutting sheets in different profiles. In the present work, cutting parameters such as laser power (2000-4000 W), cutting speed (3500-5500 mm/min), and assist gas pressure (0.7-0.9 MPa) were varied for cutting 2 mm thick AISI 316L stainless steel sheet. The experimentation was conducted based on a Box-Behnken design. The aim of this work is to develop a mathematical model of kerf width for straight and curved profiles through response surface methodology, and the developed models for the straight and curved profiles have been compared. The quadratic models show the best agreement with the experimental data, and the shape of the profile plays a substantial role in minimizing the kerf width. Finally, numerical optimization has been used to find the optimum laser cutting parameters for both straight and curved profile cuts.
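
    To illustrate the response-surface step, a minimal sketch (hypothetical data arrays; the paper's fitted coefficients are not reproduced) fits a full quadratic kerf-width model to the Box-Behnken results by least squares:

    ```python
    import numpy as np

    def quadratic_design_matrix(X):
        """Second-order model in power p, speed v, gas pressure g:
        intercept, linear, interaction, and squared terms."""
        p, v, g = X[:, 0], X[:, 1], X[:, 2]
        return np.column_stack([np.ones(len(X)), p, v, g,
                                p * v, p * g, v * g, p**2, v**2, g**2])

    def fit_kerf_model(X, y):
        """X: (n, 3) coded factor settings; y: measured kerf widths."""
        coef, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
        return coef
    ```

    The fitted polynomial can then be minimized numerically over the factor ranges to locate the optimum cutting parameters.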

  17. Performance evaluation of Titanium nitride coated tool in turning of mild steel

    NASA Astrophysics Data System (ADS)

    Srinivas, B.; Pramod Kumar, G.; Cheepu, Muralimohan; Jagadeesh, N.; kumar, K. Ravi; Haribabu, S.

    2018-03-01

    The growing demand for biodegradable materials has opened an avenue for using vegetable oils, coconut oil, etc., as alternatives to conventional coolants in machining operations. At present, the demand in manufacturing industries for surface quality is increasing rapidly, along with dimensional accuracy and geometric tolerances. The present study examines the influence of cutting parameters on surface roughness during the turning of mild steel with a TiN-coated carbide tool, using groundnut oil and soluble oil as coolants. The results showed that the vegetable oil gave a finer surface finish compared with soluble oil. The cutting parameters have been optimized with the Taguchi technique. The main objective is to optimize the cutting parameters, reduce surface roughness, and correspondingly increase tool life by applying a coating to the carbide inserts. The coating adds cost but is more economical than changing tools frequently. Plots were generated and analysed to find the relationships between the parameters, which were confirmed by comparing the predicted and experimental results.
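
    For the Taguchi step, surface roughness is a smaller-is-better response; a minimal sketch of the corresponding signal-to-noise ratio is:

    ```python
    import numpy as np

    def sn_smaller_is_better(y):
        """Taguchi S/N ratio for a smaller-is-better response such as
        surface roughness: S/N = -10 * log10(mean(y^2)). The factor level
        with the highest mean S/N across trials is taken as optimal."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y**2))
    ```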

  18. Box-Behnken Design of Experiments Investigation of Hydroxyapatite Synthesis for Orthopedic Applications

    NASA Astrophysics Data System (ADS)

    Kehoe, S.; Stokes, J.

    2011-03-01

    Physicochemical properties of hydroxyapatite (HAp) synthesized by the chemical precipitation method are heavily dependent on the chosen process parameters. A Box-Behnken three-level experimental design was therefore chosen to determine the optimum set of process parameters and their effects on various HAp characteristics. These effects were quantified using design of experiments (DoE) to develop mathematical models, based on the Box-Behnken design, in terms of the chemical precipitation process parameters. Findings from this research show that HAp possessing optimum powder characteristics for orthopedic application via a thermal spray technique can be prepared using the following chemical precipitation process parameters: reaction temperature 60 °C, ripening time 48 h, and stirring speed 1500 rpm using high reagent concentrations. Ripening time and stirring speed significantly affected the final phase purity under the experimental conditions of the Box-Behnken design. An increase in both the ripening time (36-48 h) and stirring speed (1200-1500 rpm) was found to result in an increase of phase purity from 47(±2)% to 85(±2)%. Crystallinity, crystallite size, lattice parameters, and mean particle size were also optimized within the research to find settings that achieve results suitable for FDA regulations.

  19. The choice of sample size: a mixed Bayesian / frequentist approach.

    PubMed

    Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John

    2009-04-01

    Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.

  20. The modification of hybrid method of ant colony optimization, particle swarm optimization and 3-OPT algorithm in traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Hertono, G. F.; Ubadah; Handari, B. D.

    2018-03-01

    The traveling salesman problem (TSP) is the well-known problem of finding, for a given set of vertices, the shortest tour that visits every vertex exactly once, except the first. This paper discusses three modified methods for solving the TSP by combining Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and the 3-Opt algorithm. ACO is used to find solutions of the TSP, while PSO is implemented to find the best values of the parameters α and β used in ACO. The 3-Opt algorithm is then used to reduce the total tour length of the feasible solutions obtained by ACO. In the first modification, 3-Opt is used to reduce the total tour length of the feasible solutions obtained at each iteration; in the second modification, 3-Opt is applied to the entire set of solutions obtained over all iterations; in the third modification, 3-Opt is applied to the distinct solutions obtained at each iteration. Results are tested using 6 benchmark problems taken from TSPLIB by calculating the relative error to the best known solution as well as the running time. Among these modifications, only the second and third give satisfactory results, although the second needs more execution time compared to the third.
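
    For reference, the ACO step that the parameters α and β control is the probabilistic city-selection rule; a standard sketch (not the paper's exact variant) is:

    ```python
    import numpy as np

    def choose_next_city(current, unvisited, tau, dist, alpha, beta):
        """An ant at `current` picks city j with probability proportional to
        tau[current, j]**alpha * (1/dist[current, j])**beta, so alpha weights
        pheromone and beta weights closeness -- the two values PSO tunes."""
        cities = np.asarray(list(unvisited))
        w = tau[current, cities] ** alpha * (1.0 / dist[current, cities]) ** beta
        return int(np.random.choice(cities, p=w / w.sum()))
    ```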

  1. River bathymetry estimation based on the floodplains topography.

    NASA Astrophysics Data System (ADS)

    Bureš, Luděk; Máca, Petr; Roub, Radek; Pech, Pavel; Hejduk, Tomáš; Novák, Pavel

    2017-04-01

    A topographic model including river bathymetry (bed topography) is required for hydrodynamic simulation, water quality modelling, flood inundation mapping, sediment transport, and ecological and geomorphological assessments. The most common way to create river bathymetry is spatial interpolation of discrete point or cross-section data. The quality of the generated bathymetry depends on the quality of the measurements, on the technology used, and on the size of the input dataset. Extensive measurements are often time consuming and expensive. Another option for creating river bathymetry is mathematical modelling. In the presented contribution we created a river bathymetry model based on analytical curves that are bent into the shape of the cross sections. For the best description of the river bathymetry we need to know the values of the model parameters, which we find using global optimization methods based on heuristics inspired by natural processes. We use a new type of DE (differential evolution) to find solutions of the inverse problem for the parameters of the mathematical model of river bed surfaces (see the sketch below). The presented analysis discusses the dependence of the model parameters on selected characteristics: (1) topographic characteristics (slope and curvature in the left and right floodplains) determined on the basis of the DTM 5G digital terrain model; (2) the optimization scheme; (3) the type of analytical curves used. The novel approach is applied to three reaches of the Vltava River in the Czech Republic. Each reach is described by a point field measured with an ADCP probe (RiverSurveyor M9). This work was supported by the Technology Agency of the Czech Republic, programme Alpha (project TA04020042 - New technologies bathymetry of rivers and reservoirs to determine their storage capacity and monitor the amount and dynamics of sediments) and the Internal Grant Agency of the Faculty of Environmental Sciences (CULS) (IGA/20164233). Keywords: bathymetry, global optimization, bed topography. References: Merwade, Venkatesh. "Effect of spatial trends on interpolation of river bathymetry." Journal of Hydrology, 371.1, 169-181, 2009. Legleiter, Carl J., and Phaedon C. Kyriakidis. "Spatial prediction of river channel topography by kriging." Earth Surface Processes and Landforms, 33.6, 841-867, 2008. P. Maca, P. Pech, and J. Pavlasek. "Comparing the Selected Transfer Functions and Local Optimization Methods for Neural Network Flood Runoff Forecast." Mathematical Problems in Engineering, vol. 2014, Article ID 782351, 10 pages, 2014. M. Jakubcova, P. Maca, and P. Pech. "A Comparison of Selected Modifications of the Particle Swarm Optimization Algorithm." Journal of Applied Mathematics, vol. 2014, Article ID 293087, 10 pages, 2014.
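
    As an illustration of that inverse problem, the sketch below fits a hypothetical power-law bed-shape curve to surveyed cross-section points with SciPy's standard differential evolution (a stand-in for the authors' modified DE):

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def fit_cross_section(y_obs, z_obs, bounds):
        """Fit z(y) = z0 + a * |y - c|**b to points (y_obs, z_obs), where y
        is the distance across the channel and z the bed elevation."""
        def sse(params):
            z0, a, c, b = params
            z_fit = z0 + a * np.abs(y_obs - c) ** b
            return float(np.sum((z_fit - z_obs) ** 2))
        return differential_evolution(sse, bounds, seed=0).x

    # bounds, e.g.: [(170.0, 180.0), (0.0, 5.0), (0.0, 50.0), (0.5, 4.0)]
    ```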

  2. Constraining neutron guide optimizations with phase-space considerations

    NASA Astrophysics Data System (ADS)

    Bertelsen, Mads; Lefmann, Kim

    2016-09-01

    We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to that of guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter describes the expected focusing ability of the guide to be optimized, ranging from perfectly focusing to no correlation between position and velocity. The second parameter controls the neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints, which demonstrate higher signal-to-noise than conventional optimizations. Furthermore, the parameter controlling neutron intake is explored, showing that the simulated optimal neutron intake is close to the analytical prediction when the guide is assumed to be dominated by multiple scattering events.

  3. Real-time parameter optimization based on neural network for smart injection molding

    NASA Astrophysics Data System (ADS)

    Lee, H.; Liau, Y.; Ryu, K.

    2018-03-01

    The manufacturing industry has been facing several challenges, including sustainability, performance, and quality of production. Manufacturers attempt to enhance the competitiveness of companies by implementing CPS (Cyber-Physical Systems) through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology) at the manufacturing process level. The injection molding process has a short cycle time and high productivity, features that make it suitable for mass production. In addition, this process is used to produce precise parts in various industry fields such as automobiles, optics, and medical devices. The injection molding process involves a mixture of discrete and continuous variables, all of which must be considered in order to optimize quality. Furthermore, finding optimal parameter settings that yield optimum product quality is time-consuming, since process parameters cannot easily be corrected during process execution. In this research, we propose a neural network based real-time process parameter optimization methodology that sets optimal process parameters by using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, in contrast with the pre-production parameter optimization of typical studies.

  4. Importance of optimizing chromatographic conditions and mass spectrometric parameters for supercritical fluid chromatography/mass spectrometry.

    PubMed

    Fujito, Yuka; Hayakawa, Yoshihiro; Izumi, Yoshihiro; Bamba, Takeshi

    2017-07-28

    Supercritical fluid chromatography/mass spectrometry (SFC/MS) has great potential for high-throughput, simultaneous analysis of a wide variety of compounds, and it has been widely used in recent years. The use of MS for detection provides the advantages of high sensitivity and high selectivity. However, the sensitivity of MS detection depends on the chromatographic conditions and MS parameters. Thus, optimization of MS parameters corresponding to the SFC conditions is mandatory for maximizing performance when connecting SFC to MS. The aim of this study was to reveal a way to decide the optimum composition of the mobile phase and the flow rate of the make-up solvent for MS detection across a wide range of compounds. Additionally, we also present the basic concept for determining the optimum values of the MS parameters, focusing on MS detection sensitivity in SFC/MS analysis. To verify the versatility of these findings, a total of 441 pesticides with a wide range of polarity (log Pow from -4.21 to 7.70) and pKa (acidic, neutral, and basic) were examined. In this study, a new SFC-MS interface was used, which can transfer the entire volume of eluate into the MS by directly coupling the SFC with the MS. This enabled us to compare the sensitivity and optimum MS parameters between LC/MS and SFC/MS for the same sample volume introduced into the MS. As a result, it was found that the optimum values of some MS parameters were completely different from those of LC/MS, and that SFC/MS-specific optimization of the analytical conditions is required. Lastly, we evaluated the sensitivity of SFC/MS using fully optimized analytical conditions. We confirmed that SFC/MS showed much higher sensitivity than LC/MS when the analytical conditions were fully optimized for SFC/MS; the higher sensitivity also increased the number of compounds that could be detected with good repeatability in real sample analysis. This result indicates that SFC/MS has potential for practical use in the multiresidue analysis of a wide range of compounds requiring high sensitivity. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Vibrations of a thin cylindrical shell stiffened by rings with various stiffness

    NASA Astrophysics Data System (ADS)

    Nesterchuk, G. A.

    2018-05-01

    The problem of vibrations of a thin-walled elastic cylindrical shell reinforced by frames of different rigidity is investigated. The solution for the case of clamped shell edges was obtained by asymptotic methods and refined by the finite element method. Rings with zero eccentricity and stiffness varying along the generatrix of the shell cylinder are considered. Varying the coefficients of the distribution functions of frame rigidity and refining the parameters makes it possible to find correction factors for the approximate analytical formulas.

  6. Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut

    2017-04-01

    Numerical modelling requires calibration to predict future states. A standard method is inverse calibration, where multi-objective optimization algorithms are generally used to find a solution, e.g. an optimal set of van Genuchten-Mualem (VGM) parameters for predicting water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. The PA-DDS method is a recently developed multi-objective optimization algorithm that combines Dynamically Dimensioned Search (DDS) and the Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (the Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance is evaluated using observed and simulated soil moisture at two soil layers in southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe efficiency (NSE), as sketched below. Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the results of the Marquardt-Levenberg method, we obtained better calibration results for the deeper soil sensors; however, the VGM parameters were similar to those of previous studies. Both methods are equally computationally efficient. We expect that a direct implementation of PA-DDS into HYDRUS-1D would reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone modelling with multiple soil layers and is a potential tool for the calibration of heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model with complex vadose zone settings, with more soil layers, and against measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization
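
    The two calibration metrics are simple to state; a minimal sketch:

    ```python
    import numpy as np

    def rmse_nse(observed, simulated):
        """Root mean squared error and Nash-Sutcliffe efficiency
        (NSE = 1 is a perfect fit; NSE = 0 is no better than the mean)."""
        obs = np.asarray(observed, dtype=float)
        sim = np.asarray(simulated, dtype=float)
        rmse = np.sqrt(np.mean((sim - obs) ** 2))
        nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
        return rmse, nse
    ```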

  7. Estimation of transformation parameters for microarray data.

    PubMed

    Durbin, Blythe; Rocke, David M

    2003-07-22

    Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
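
    One common parameterization of the generalized-log transform (a sketch; the cited papers differ in details such as offsets) is:

    ```python
    import numpy as np

    def glog(y, c):
        """glog(y) = ln(y + sqrt(y^2 + c)), with c > 0 the transformation
        parameter to estimate. As c -> 0 this approaches ln(2y), so it acts
        like a log at high intensities yet stays defined at and below zero."""
        y = np.asarray(y, dtype=float)
        return np.log(y + np.sqrt(y**2 + c))
    ```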

  8. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1990-01-01

    An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.

  9. The ESSENCE Supernova Survey: Survey Optimization, Observations, and Supernova Photometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miknaitis, Gajus; Pignata, G.; Rest, A.

    We describe the implementation and optimization of the ESSENCE supernova survey, which we have undertaken to measure the equation of state parameter of the dark energy. We present a method for optimizing the survey exposure times and cadence to maximize our sensitivity to the dark energy equation of state parameter w = P/(ρc^2) for a given fixed amount of telescope time. For our survey on the CTIO 4m telescope, measuring the luminosity distances and redshifts for supernovae at modest redshifts (z ≈ 0.5 ± 0.2) is optimal for determining w. We describe the data analysis pipeline, based on using reliable and robust image subtraction to find supernovae automatically and in near real-time. Since making cosmological inferences with supernovae relies crucially on accurate measurement of their brightnesses, we describe our efforts to establish a thorough calibration of the CTIO 4m natural photometric system. In its first four years, ESSENCE has discovered and spectroscopically confirmed 102 type Ia SNe, at redshifts from 0.10 to 0.78, identified through an impartial, effective methodology for spectroscopic classification and redshift determination. We present the resulting light curves for all type Ia supernovae found by ESSENCE and used in our measurement of w, presented in Wood-Vasey et al. (2007).

  10. Optimal Policy of Cross-Layer Design for Channel Access and Transmission Rate Adaptation in Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    He, Hao; Wang, Jun; Zhu, Jiang; Li, Shaoqian

    2010-12-01

    In this paper, we investigate the cross-layer design of joint channel access and transmission rate adaptation in CR networks with multiple channels for both centralized and decentralized cases. Our target is to maximize the throughput of the CR network under a transmission power constraint while taking spectrum sensing errors into account. In the centralized case, this problem is formulated as a special constrained Markov decision process (CMDP), which can be solved by a standard linear programming (LP) method. As the complexity of finding the optimal policy by LP increases exponentially with the size of the action space and state space, we further apply action-set reduction and state aggregation to reduce the complexity without loss of optimality. Meanwhile, for convenience of implementation, we also consider pure policy design and analyze the corresponding characteristics. In the decentralized case, where only local information is available and there is no coordination among the CR users, we prove the existence of the constrained Nash equilibrium and obtain the optimal decentralized policy. Finally, for the case in which the traffic load parameters of the licensed users are unknown to the CR users, we propose two methods to estimate the parameters for two different settings. Numerical results validate the theoretical analysis.

  11. Using modified fruit fly optimisation algorithm to perform the function test and case studies

    NASA Astrophysics Data System (ADS)

    Pan, Wen-Tsao

    2013-06-01

    Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on Darwinian theory, and it is a common research method; it has grown to include concepts of animal foraging behaviour and group behaviour. The main contribution of this paper is to reinforce the global search ability of the fruit fly optimization algorithm (FOA) in order to avoid getting trapped in local extrema. This study discusses three common evolutionary computation methods and compares them with the modified fruit fly optimization algorithm (MFOA). It further investigates the ability to compute the extreme values of three mathematical functions, the algorithm execution speed, and the forecasting ability of models built using optimized general regression neural network (GRNN) parameters. The findings indicate that there is no obvious difference between particle swarm optimization and the MFOA in the ability to compute extreme values; however, both are better than the artificial fish swarm algorithm and FOA. In addition, the MFOA performs better than particle swarm optimization in algorithm execution speed, and the forecasting ability of the model built using the MFOA's GRNN parameters is better than that of the other three forecasting models.
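
    For orientation, the basic FOA loop for a one-dimensional parameter looks roughly like this (a sketch of the standard algorithm; the MFOA modifies the search step to escape local extrema):

    ```python
    import numpy as np

    def foa_minimize(f, iters=200, pop=30, scale=1.0):
        """Flies scatter around the swarm location; each fly's candidate
        value is its 'smell concentration' 1/distance-to-origin; the swarm
        relocates to the fly with the best objective value."""
        x_axis, y_axis = np.random.rand(2)
        best_s, best_val = None, np.inf
        for _ in range(iters):
            x = x_axis + scale * (np.random.rand(pop) - 0.5)
            y = y_axis + scale * (np.random.rand(pop) - 0.5)
            s = 1.0 / np.sqrt(x**2 + y**2)          # candidate parameters
            vals = np.array([f(si) for si in s])
            i = int(np.argmin(vals))
            if vals[i] < best_val:                  # follow the best smell
                best_val, best_s = float(vals[i]), float(s[i])
                x_axis, y_axis = x[i], y[i]
        return best_s, best_val
    ```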

  12. Simultaneous optimization of micro-heliostat geometry and field layout using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lazardjani, Mani Yousefpour; Kronhardt, Valentina; Dikta, Gerhard; Göttsche, Joachim

    2016-05-01

    A new optimization tool for micro-heliostat (MH) geometry and field layout is presented. The method aims at simultaneous performance improvement and cost reduction through iteration of heliostat geometry and field layout parameters. This tool was developed primarily for the optimization of a novel micro-heliostat concept developed at the Solar-Institut Jülich (SIJ); however, the underlying optimization approach can be used for any heliostat type. During the optimization, the performance is calculated using the ray-tracing tool SolCal, and the costs of the heliostats are calculated with a detailed cost function. A genetic algorithm is used to change heliostat geometry and field layout in an iterative process. Starting from an initial setup, the optimization tool generates several configurations of heliostat geometries and field layouts. For each configuration a cost-performance ratio is calculated, on the basis of which the best geometry and field layout can be selected in each optimization step. To find the best configuration, this step is repeated until no significant improvement in the results is observed.

  13. Optimization of Gas Metal Arc Welding Process Parameters

    NASA Astrophysics Data System (ADS)

    Kumar, Amit; Khurana, M. K.; Yadav, Pradeep K.

    2016-09-01

    This study presents the application of the Taguchi method combined with grey relational analysis to optimize the process parameters of gas metal arc welding (GMAW) of AISI 1020 carbon steel for multiple quality characteristics (bead width, bead height, weld penetration, and heat affected zone). An L9 orthogonal array has been implemented for the fabrication of joints. The experiments were conducted according to combinations of voltage (V), current (A), and welding speed (Ws). The results revealed that welding speed is the most significant process parameter. By analyzing the grey relational grades, optimal parameters are obtained, and significant factors are identified using ANOVA. The welding parameters speed, welding current, and voltage have been optimized for AISI 1020 using the GMAW process. To fortify the robustness of the experimental design, a confirmation test was performed at the selected optimal process parameter setting. Observations from this method may be useful for automotive sub-assembly, shipbuilding, and vessel fabricators and operators to obtain optimal welding conditions.
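
    A minimal sketch of the grey relational step (hypothetical array layout: rows are L9 trials, columns are responses; the distinguishing coefficient ζ = 0.5 is the usual choice):

    ```python
    import numpy as np

    def grey_relational_grade(responses, larger_better, zeta=0.5):
        """Normalize each response to [0, 1], measure deviation from the
        ideal sequence (all ones), convert to grey relational coefficients,
        and average them into one grade per trial (higher is better)."""
        r = np.asarray(responses, dtype=float)
        norm = np.empty_like(r)
        for j in range(r.shape[1]):
            lo, hi = r[:, j].min(), r[:, j].max()
            norm[:, j] = ((r[:, j] - lo) if larger_better[j]
                          else (hi - r[:, j])) / (hi - lo)
        delta = 1.0 - norm
        coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
        return coef.mean(axis=1)
    ```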

  14. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE PAGES

    Xi, Maolong; Lu, Dan; Gui, Dongwei; ...

    2016-11-27

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial that is fast to evaluate, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method achieves a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
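
    The QPSO update applied to the surrogate can be sketched as follows. A simple quadratic stands in for the sparse-grid polynomial surrogate, and the contraction-expansion schedule is one common choice, not necessarily the paper's.

    ```python
    import numpy as np

    def qpso(f, dim=7, n_particles=30, n_iter=500, lo=-5.0, hi=5.0, seed=4):
        """Minimal Quantum-behaved PSO sketch for minimizing f."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, (n_particles, dim))
        pbest = x.copy()
        pval = np.apply_along_axis(f, 1, x)
        g = pbest[pval.argmin()].copy()
        for t in range(n_iter):
            beta = 1.0 - 0.5 * t / n_iter          # contraction-expansion coefficient
            mbest = pbest.mean(axis=0)             # mean best position
            phi = rng.random((n_particles, dim))
            p = phi * pbest + (1 - phi) * g        # local attractor per particle
            u = rng.random((n_particles, dim)) + 1e-12
            sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
            x = p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
            x = np.clip(x, lo, hi)
            val = np.apply_along_axis(f, 1, x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()].copy()
        return g, pval.min()

    # In the paper the expensive RZWQM2 run is replaced by a cheap sparse-grid
    # polynomial surrogate; a simple quadratic stands in for it here.
    surrogate = lambda z: float(np.sum((z - 1.0) ** 2))
    print(qpso(surrogate))
    ```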

  15. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    NASA Astrophysics Data System (ADS)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial that is fast to evaluate, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method achieves a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.

  16. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xi, Maolong; Lu, Dan; Gui, Dongwei

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial that is fast to evaluate, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method achieves a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.

  17. Guaranteed convergence of the Hough transform

    NASA Astrophysics Data System (ADS)

    Soffer, Menashe; Kiryati, Nahum

    1995-01-01

    The straight-line Hough transform using the normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into the problem of finding the global maximum of a two-dimensional function over a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function over a bounded domain cannot be found by a finite number of function evaluations; convergence to the global maximum can be guaranteed only if sufficient a priori knowledge about the smoothness of the objective function is available. The extraction of a priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough transform to the global maximum. An image model with several application-dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed, providing theoretical support for bottom-up processing. Due to the randomness of errors and noise, the convergence guarantees are probabilistic.
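
    For reference, here is a minimal straight-line Hough transform with the normal parameterization and plain discrete voting, rather than the continuous kernel analyzed in the paper; the grid sizes and test data are illustrative.

    ```python
    import numpy as np

    def hough_lines(points, n_theta=180, n_rho=200, rho_max=None):
        """Straight-line Hough transform with normal parameterization:
        rho = x*cos(theta) + y*sin(theta); exhaustive voting on a grid."""
        pts = np.asarray(points, dtype=float)
        if rho_max is None:
            rho_max = np.hypot(*pts.max(axis=0))
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        acc = np.zeros((n_rho, n_theta), dtype=int)
        for x, y in pts:
            rho = x * np.cos(thetas) + y * np.sin(thetas)
            idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
            ok = (idx >= 0) & (idx < n_rho)
            acc[idx[ok], np.arange(n_theta)[ok]] += 1
        r, t = np.unravel_index(acc.argmax(), acc.shape)
        rho_best = r / (n_rho - 1) * 2 * rho_max - rho_max
        return rho_best, thetas[t]

    # Noisy points near the line y = x (rho near 0, theta near 3*pi/4).
    xs = np.linspace(0, 10, 50)
    print(hough_lines(np.column_stack([xs, xs + 0.05 * np.random.randn(50)])))
    ```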

  18. Neuromusculoskeletal Model Calibration Significantly Affects Predicted Knee Contact Forces for Walking

    PubMed Central

    Serrancolí, Gil; Kinney, Allison L.; Fregly, Benjamin J.; Font-Llagunes, Josep M.

    2016-01-01

    Though walking impairments are prevalent in society, clinical treatments are often ineffective at restoring lost function. For this reason, researchers have begun to explore the use of patient-specific computational walking models to develop more effective treatments. However, the accuracy with which models can predict internal body forces in muscles and across joints depends on how well relevant model parameter values can be calibrated for the patient. This study investigated how knowledge of internal knee contact forces affects calibration of neuromusculoskeletal model parameter values and subsequent prediction of internal knee contact and leg muscle forces during walking. Model calibration was performed using a novel two-level optimization procedure applied to six normal walking trials from the Fourth Grand Challenge Competition to Predict In Vivo Knee Loads. The outer-level optimization adjusted time-invariant model parameter values to minimize passive muscle forces, reserve actuator moments, and model parameter value changes with (Approach A) and without (Approach B) tracking of experimental knee contact forces. Using the current guess for model parameter values but no knee contact force information, the inner-level optimization predicted time-varying muscle activations that were close to experimental muscle synergy patterns and consistent with the experimental inverse dynamic loads (both approaches). For all the six gait trials, Approach A predicted knee contact forces with high accuracy for both compartments (average correlation coefficient r = 0.99 and root mean square error (RMSE) = 52.6 N medial; average r = 0.95 and RMSE = 56.6 N lateral). In contrast, Approach B overpredicted contact force magnitude for both compartments (average RMSE = 323 N medial and 348 N lateral) and poorly matched contact force shape for the lateral compartment (average r = 0.90 medial and −0.10 lateral). Approach B had statistically higher lateral muscle forces and lateral optimal muscle fiber lengths but lower medial, central, and lateral normalized muscle fiber lengths compared to Approach A. These findings suggest that poorly calibrated model parameter values may be a major factor limiting the ability of neuromusculoskeletal models to predict knee contact and leg muscle forces accurately for walking. PMID:27210105

  19. Evolutionary algorithms for the optimization of advective control of contaminated aquifer zones

    NASA Astrophysics Data System (ADS)

    Bayer, Peter; Finkel, Michael

    2004-06-01

    Simple genetic algorithms (SGAs) and derandomized evolution strategies (DESs) are employed to adapt well capture zones for the hydraulic optimization of pump-and-treat systems. A hypothetical contaminated site in a heterogeneous aquifer serves as an application template. On the basis of the results from numerical flow modeling, particle tracking is applied to delineate the pathways of the contaminants. The objective is to find the minimum pumping rate of up to eight recharge wells within a downgradient well placement area. Both the well coordinates and the pumping rates are subject to optimization, leading to a mixed discrete-continuous problem. This article discusses the ideal formulation of the objective function, for which the number of particles and the total pumping rate are used as decision criteria. Boundary updating is introduced, which enables the reorganization of the decision space limits by incorporating experience from previous optimization runs. Throughout the study the algorithms' capabilities are evaluated in terms of the number of model runs needed to identify optimal and suboptimal solutions. Despite the complexity of the problem, both evolutionary algorithm variants prove suitable for finding suboptimal solutions. The DES with weighted recombination turns out to be the ideal algorithm for finding optimal solutions. Although it works with real-coded decision parameters, it proves suitable for adjusting discrete well positions. In principle, the representation of well positions as binary strings in the SGA is ideal. However, even if the SGA takes advantage of bookkeeping, the necessarily fine discretization of pumping rates results in long binary strings, which escalates the number of model runs needed to find an optimal solution. Since the SGA string lengths increase with the number of wells, the DES gains superiority, particularly for larger numbers of wells. As a self-adaptive algorithm, the DES proves to be a more robust optimization method for the selected advective control problem than the SGA variants of this study, exhibiting a less stochastic search, which is reflected in the smaller variability of the found solutions.
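
    The weighted-recombination idea can be sketched as follows. Note that a genuine DES derandomizes the step-size adaptation, whereas this toy uses a fixed decay, and all settings and the test function are assumptions.

    ```python
    import numpy as np

    def weighted_es(f, dim=8, lam=16, mu=4, sigma=0.5, n_gen=300, seed=6):
        """Sketch of a (mu/mu_w, lambda) evolution strategy with weighted
        recombination of the mu best offspring (log-rank weights)."""
        rng = np.random.default_rng(seed)
        w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
        w /= w.sum()                                  # recombination weights
        mean = rng.uniform(-2, 2, dim)
        for _ in range(n_gen):
            z = rng.standard_normal((lam, dim))
            pop = mean + sigma * z
            order = np.argsort([f(x) for x in pop])[:mu]
            mean = w @ pop[order]                     # weighted recombination
            # Crude step-size decay; a real DES derandomizes this adaptation.
            sigma *= 0.995
        return mean, f(mean)

    sphere = lambda x: float(np.sum(x ** 2))
    print(weighted_es(sphere))
    ```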

  20. Idealized Experiments for Optimizing Model Parameters Using a 4D-Variational Method in an Intermediate Coupled Model of ENSO

    NASA Astrophysics Data System (ADS)

    Gao, Chuan; Zhang, Rong-Hua; Wu, Xinrong; Sun, Jichang

    2018-04-01

    Large biases exist in real-time ENSO prediction, which can be attributed to uncertainties in initial conditions and model parameters. Previously, a 4D variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation. The strength of the thermocline effect on SST (referred to simply as "the thermocline effect") is represented by an introduced parameter, αTe. A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments having their initial condition optimized only, and having their initial condition plus this additional model parameter optimized, are compared. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.
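
    The twin-experiment idea of jointly optimizing an initial condition and a model parameter can be illustrated with a toy model. The damped oscillator below is a hypothetical stand-in for the ICM, and a derivative-free minimizer replaces the adjoint-based 4D-Var machinery; all names and values are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy stand-in for the ICM: a damped oscillator whose restoring-term
    # strength alpha plays the role of the optimized model parameter.
    def simulate(x0, alpha, n_steps=200, dt=0.05):
        x, v, traj = x0, 0.0, []
        for _ in range(n_steps):
            v += dt * (-alpha * x - 0.1 * v)
            x += dt * v
            traj.append(x)
        return np.array(traj)

    # Twin experiment: "observations" come from a run with known truth.
    true_x0, true_alpha = 1.0, 2.0
    obs = simulate(true_x0, true_alpha)
    obs = obs + 0.01 * np.random.default_rng(7).standard_normal(len(obs))

    # 4D-Var-style cost: misfit between model trajectory and observations,
    # minimized jointly over the initial condition and the model parameter.
    cost = lambda p: float(np.sum((simulate(p[0], p[1]) - obs) ** 2))
    res = minimize(cost, x0=[0.5, 1.0], method="Nelder-Mead")
    print(res.x)  # should approach (true_x0, true_alpha)
    ```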

  1. Using a 4D-Variational Method to Optimize Model Parameters in an Intermediate Coupled Model of ENSO

    NASA Astrophysics Data System (ADS)

    Gao, C.; Zhang, R. H.

    2017-12-01

    Large biases exist in real-time ENSO prediction, which is attributed to uncertainties in initial conditions and model parameters. Previously, a four-dimensional variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation, written as Te = αTe × FTe(SL). The introduced parameter αTe represents the strength of the thermocline effect on sea surface temperature (SST; referred to as the thermocline effect). A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments having only the initial condition optimized and having the initial condition plus this additional model parameter optimized are compared. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameter and initial condition together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.

  2. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.

  3. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  4. Distance-to-Agreement Investigation of Tomotherapy's Bony Anatomy-Based Autoregistration and Planning Target Volume Contour-Based Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suh, Steve, E-mail: ssuh@coh.org; Schultheiss, Timothy E.

    Purpose: To compare Tomotherapy's megavoltage computed tomography bony anatomy autoregistration with the best achievable registration, assuming no deformation and perfect knowledge of planning target volume (PTV) location. Methods and Materials: Distance-to-agreement (DTA) of the PTV was determined by applying a rigid-body shift to the PTV region of interest of the prostate from its reference position, assuming no deformations. The PTV region of interest of the prostate was extracted from the patient archives. The reference position was set by the 6-degrees-of-freedom (dof) optimization results (x, y, z, roll, pitch, and yaw) from the previous study at this institution. The DTA and the compensating parameters were calculated from the shift of the PTV from the reference 6-dof optimization to the 4-dof (x, y, z, and roll) optimization. In this study, the effectiveness of Tomotherapy's 4-dof bony anatomy-based autoregistration was compared with the idealized 4-dof PTV contour-based optimization. Results: The maximum DTA (maxDTA) of the bony anatomy-based autoregistration was 3.2 ± 1.9 mm, with a maximum value of 8.0 mm. The maxDTA of the contour-based optimization was 1.8 ± 1.3 mm, with a maximum value of 5.7 mm. Pearson correlation of the compensating parameters between the two 4-dof optimization algorithms shows a small but statistically significant correlation in y and z (0.236 and 0.300, respectively), whereas the correlation in x and roll is very weak (0.062 and 0.025, respectively). Conclusions: We find an average improvement of approximately 1 mm in maxDTA on the PTV going from the 4-dof bony anatomy-based autoregistration to the 4-dof contour-based optimization. Pearson correlation analysis of the two 4-dof optimizations suggests that uncertainties due to deformation and inadequate resolution account for much of the compensating parameters, but pitch variation also makes a statistically significant contribution.

  5. Periodic orbits of hybrid systems and parameter estimation via AD.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guckenheimer, John.; Phipps, Eric Todd; Casey, Richard

    Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the associated problems impede the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential-algebraic equations is discretized via multiple shooting using a high-degree Taylor series integration method [GM00, Phi03]. Numerical solutions to the shooting equations are then estimated by a Newton process yielding an approximate periodic orbit. A metric is defined for computing the distance between two given periodic orbits, which is then minimized using a trust-region minimization algorithm [DS83] to find optimal fits of the model to a reference orbit [Cas04]. There are two different yet related goals that motivate the algorithmic choices listed above. The first is to provide a simple yet powerful framework for studying periodic motions in mechanical systems. Formulating mechanically correct equations of motion for systems of interconnected rigid bodies, while straightforward, is a time-consuming, error-prone process. Much of this difficulty stems from computing the acceleration of each rigid body in an inertial reference frame. The acceleration is computed most easily in a redundant set of coordinates giving the spatial positions of each body, since the acceleration is just the second derivative of these positions. Rather than providing explicit formulas for these derivatives, automatic differentiation can be employed to compute these quantities efficiently during the course of a simulation. The feasibility of these ideas was investigated by applying these techniques to the problem of locating stable walking motions for a disc-foot passive walking machine [CGMR01, Gar99, McG91]. The second goal of this work was to investigate the application of smooth optimization methods to periodic orbit parameter estimation problems in neural oscillations.
Others [BB93, FUS93, VB99] have favored non-continuous optimization methods such as genetic algorithms, stochastic search methods, simulated annealing and brute-force random searches because of their perceived suitability to the landscape of typical objective functions in parameter space, particularly for multi-compartmental neural models. Here we argue that a carefully formulated optimization problem is amenable to Newton-like methods and has a sufficiently smooth landscape in parameter space that these methods can be an efficient and effective alternative. The plan of this paper is as follows. In Section 1 we provide a definition of hybrid systems that is the basis for modeling systems with discontinuities or discrete transitions. Sections 2, 3, and 4 briefly describe the Taylor series integration, periodic orbit tracking, and parameter estimation algorithms. For full treatments of these algorithms, we refer the reader to [Phi03, Cas04, CPG04]. The software implementation of these algorithms is briefly described in Section 5, with particular emphasis on the automatic differentiation software ADMC++. Finally, these algorithms are applied to the bipedal walking and Hodgkin-Huxley based neural oscillation problems discussed above in Section 6.

  6. A modified Leslie-Gower predator-prey interaction model and parameter identifiability

    NASA Astrophysics Data System (ADS)

    Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed

    2018-01-01

    In this work, bifurcation and a systematic approach to the estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is established by applying the fluctuation lemma. The system undergoes Hopf bifurcation with respect to the intrinsic growth rate of predators (s) and the prey reserve (m). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports our theoretical study. To estimate the unknown parameters from the data, an optimization procedure (a pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are also compared with true noise-free data. It is found that the system dynamics with the true set of parameter values is similar to that with the estimated parameter values. Numerical simulations are presented to substantiate the analytical findings.
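
    A minimal sketch of parameter identification by pseudo-random search is shown below, fitting a hypothetical logistic-growth model to noisy synthetic data; the paper's search scheme and predator-prey model are more elaborate, and all names and values here are illustrative.

    ```python
    import numpy as np

    def random_search(cost, lo, hi, n_iter=5000, seed=8):
        """Pseudo-random search sketch: keep the best of uniformly sampled
        parameter vectors (the paper's scheme may differ in detail)."""
        rng = np.random.default_rng(seed)
        best_p, best_c = None, np.inf
        for _ in range(n_iter):
            p = rng.uniform(lo, hi)
            c = cost(p)
            if c < best_c:
                best_p, best_c = p, c
        return best_p, best_c

    # Hypothetical identification target: logistic growth rate r and
    # carrying capacity K, recovered from noisy observations.
    t = np.linspace(0, 10, 50)
    def model(p):
        r, K = p
        return K / (1 + (K - 1) * np.exp(-r * t))   # logistic curve, x(0) = 1
    data = model([0.8, 5.0]) + 0.05 * np.random.default_rng(9).standard_normal(50)
    sse = lambda p: float(np.sum((model(p) - data) ** 2))
    print(random_search(sse, lo=np.array([0.1, 1.0]), hi=np.array([2.0, 10.0])))
    ```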

  7. Hybrid computer optimization of systems with random parameters

    NASA Technical Reports Server (NTRS)

    White, R. C., Jr.

    1972-01-01

    A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.

  8. The Hull Method for Selecting the Number of Common Factors

    ERIC Educational Resources Information Center

    Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L.

    2011-01-01

    A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…
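
    The fit-versus-complexity balance can be sketched as a scree-type elbow search, as below. The full Hull method first restricts attention to models on the convex hull of the fit/complexity plot, which this toy omits, and the fit values and parameter counts are invented for illustration.

    ```python
    import numpy as np

    def hull_select(fit, npar):
        """Hull-style elbow search: scan candidate models ordered by number
        of free parameters and pick the point where the fit gain per extra
        parameter drops most sharply (largest scree-test value st)."""
        f, p = np.asarray(fit, float), np.asarray(npar, float)
        best_i, best_st = None, -np.inf
        for i in range(1, len(f) - 1):
            st = ((f[i] - f[i - 1]) / (p[i] - p[i - 1])) / (
                (f[i + 1] - f[i]) / (p[i + 1] - p[i])
            )
            if st > best_st:
                best_i, best_st = i, st
        return best_i

    # Illustrative CFI-like fit values for 1..6 extracted factors (made up).
    fit = [0.70, 0.88, 0.95, 0.96, 0.965, 0.968]
    npar = [10, 20, 29, 37, 44, 50]
    print("factors:", hull_select(fit, npar) + 1)  # expect the elbow at 3
    ```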

  9. Toward a preoperative planning tool for brain tumor resection therapies.

    PubMed

    Coffey, Aaron M; Miga, Michael I; Chen, Ishita; Thompson, Reid C

    2013-01-01

    Neurosurgical procedures involving tumor resection require surgical planning such that the surgical path to the tumor is determined to minimize the impact on healthy tissue and brain function. This work demonstrates a predictive tool to aid neurosurgeons in planning tumor resection therapies by finding an optimal model-selected patient orientation that minimizes lateral brain shift in the field of view. Such orientations may facilitate tumor access and removal, possibly reduce the need for retraction, and could minimize the impact of brain shift on image-guided procedures. In this study, preoperative magnetic resonance images were utilized in conjunction with pre- and post-resection laser range scans of the craniotomy and cortical surface to produce patient-specific finite element models of intraoperative shift for 6 cases. These cases were used to calibrate a model (i.e., provide general rules for the application of patient positioning parameters) as well as determine the current model-based framework predictive capabilities. Finally, an objective function is proposed that minimizes shift subject to patient position parameters. Patient positioning parameters were then optimized and compared to our neurosurgeon as a preliminary study. The proposed model-driven brain shift minimization objective function suggests an overall reduction of brain shift by 23 % over experiential methods. This work recasts surgical simulation from a trial-and-error process to one where options are presented to the surgeon arising from an optimization of surgical goals. To our knowledge, this is the first realization of an evaluative tool for surgical planning that attempts to optimize surgical approach by means of shift minimization in this manner.

  10. Assessment of Multiresolution Segmentation for Extracting Greenhouses from WORLDVIEW-2 Imagery

    NASA Astrophysics Data System (ADS)

    Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.

    2016-06-01

    The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. In this context, the object-based image analysis (OBIA) approach has proven to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. Thus, the first step of OBIA, referred to as segmentation, is to delineate objects of interest. Determination of an optimal segmentation is crucial for a good performance of the second stage of OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by eCognition software for delineating greenhouses from WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality based on the discrepancy between reference polygons and corresponding image segments was carried out to identify the optimal setting of the multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).
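
    A minimal sketch of the three discrepancy indices, following their commonly used definitions (after Liu et al.); the area and count inputs below are hypothetical, not the study's measurements.

    ```python
    import numpy as np

    def discrepancy(ref_areas, over_areas, n_ref, n_seg):
        """Sketch of the three indices:
        PSE = oversegmented area falling outside references / reference area
        NSR = |number of references - corresponding segments| / references
        ED2 = Euclidean combination of the two."""
        pse = sum(over_areas) / sum(ref_areas)
        nsr = abs(n_ref - n_seg) / n_ref
        ed2 = float(np.hypot(pse, nsr))
        return pse, nsr, ed2

    # Hypothetical numbers: 50 reference greenhouses totalling 12,000 m^2,
    # 180 m^2 of segment area spilling outside them, 46 matched segments.
    print(discrepancy(ref_areas=[12000.0], over_areas=[180.0], n_ref=50, n_seg=46))
    ```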

  11. Imaging diagnostics in ovarian cancer: magnetic resonance imaging and a scoring system guiding choice of primary treatment.

    PubMed

    Kasper, Sigrid M; Dueholm, Margit; Marinovskij, Edvard; Blaakær, Jan

    2017-03-01

    To analyze the ability of magnetic resonance imaging (MRI) and systematic evaluation at surgery to predict optimal cytoreduction in primary advanced ovarian cancer, and to develop a preoperative scoring system for cancer staging. Preoperative MRI and standard laparotomy were performed in 99 women with either ovarian or primary peritoneal cancer. Using univariate and multivariate logistic regression analysis of a systematic description of the tumor in nine abdominal compartments obtained by MRI and during surgery, plus clinical parameters, a scoring system was designed to predict non-optimal cytoreduction. Non-optimal cytoreduction at operation was predicted by: (A) presence of comorbidities in ASA groups 3 or 4; (B) tumor presence in multiple different compartments; and (C) the number of specified sites of organ involvement. The score includes: number of compartments involved (1-9 points); more than one subdiaphragmatic location with presence of tumor (1 point); deep organ involvement of liver (1 point), porta hepatis (1 point), spleen (1 point), mesentery/vessels (1 point), cecum/ileocecal region (1 point), rectum/vessels (1 point); and ASA groups 3 and 4 (2 points). Use of the scoring system based on operative findings gave an area under the curve (AUC) of 91% (85-98%) for patients in whom optimal cytoreduction could not be achieved. The score AUC obtained by MRI was 84% (76-92%); 43% of non-optimal cytoreduction patients were identified, with only 8% of potentially operable patients falsely evaluated as unsuitable for optimal cytoreduction at the most favorable cut-off value. Tumor in individual locations did not predict operability. This systematic scoring system based on operative findings and MRI may predict non-optimal cytoreduction. MRI is able to assess ovarian cancer with peritoneal carcinomatosis with satisfactory concordance with laparotomy findings. The scoring system could be useful as a clinical guideline and should be evaluated and developed further in larger studies.

  12. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, the TLBO algorithm is free of algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used for the implementation of the proposed algorithms. Experimental results show that TLBO is more accurate in estimating the filter parameters than the BB-BC optimization algorithm and has a faster convergence rate than the PSO algorithm. TLBO is preferable where accuracy is more essential than convergence speed.
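
    The teacher and learner phases of TLBO can be sketched as follows; the objective is a placeholder for the IIR parameter-identification error surface, and the bounds and population settings are assumptions.

    ```python
    import numpy as np

    def tlbo(f, dim=10, pop=20, n_iter=300, lo=-2.0, hi=2.0, seed=11):
        """Minimal teaching-learning-based optimization sketch (minimization)."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(lo, hi, (pop, dim))
        val = np.apply_along_axis(f, 1, X)
        for _ in range(n_iter):
            teacher = X[val.argmin()]
            mean = X.mean(axis=0)
            for i in range(pop):
                # Teacher phase: move toward the teacher, away from the mean.
                tf = rng.integers(1, 3)            # teaching factor, 1 or 2
                cand = np.clip(X[i] + rng.random(dim) * (teacher - tf * mean), lo, hi)
                if f(cand) < val[i]:
                    X[i], val[i] = cand, f(cand)
                # Learner phase: learn from a randomly chosen peer.
                j = rng.integers(pop)
                step = X[i] - X[j] if val[i] < val[j] else X[j] - X[i]
                cand = np.clip(X[i] + rng.random(dim) * step, lo, hi)
                if f(cand) < val[i]:
                    X[i], val[i] = cand, f(cand)
        return X[val.argmin()], val.min()

    # Stand-in for the IIR parameter-identification error surface.
    print(tlbo(lambda x: float(np.sum(x ** 2))))
    ```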

  13. Modeling antibiotic treatment in hospitals: A systematic approach shows benefits of combination therapy over cycling, mixing, and mono-drug therapies.

    PubMed

    Tepekule, Burcu; Uecker, Hildegard; Derungs, Isabel; Frenoy, Antoine; Bonhoeffer, Sebastian

    2017-09-01

    Multiple treatment strategies are available for empiric antibiotic therapy in hospitals, but neither clinical studies nor theoretical investigations have yielded a clear picture of when which strategy is optimal and why. Extending earlier work of others and ourselves, we present a mathematical model capturing treatment strategies using two drugs, i.e., the multi-drug therapies referred to as cycling, mixing, and combination therapy, as well as monotherapy with either drug. We randomly sample a large parameter space to determine the conditions governing the success or failure of these strategies. We find that combination therapy tends to outperform the other treatment strategies. Using linear discriminant analysis and particle swarm optimization, we find that the most important parameters determining the success or failure of combination therapy relative to the other treatment strategies are the de novo rate of emergence of double resistance in patients infected with sensitive bacteria and the fitness costs associated with double resistance. The rate at which double resistance is imported into the hospital via patients admitted from the outside community has little influence, as all treatment strategies are affected equally. The parameter sets for which combination therapy fails tend to fall into areas of low biological plausibility, as they are characterised by very high rates of de novo emergence of resistance to both drugs compared to a single drug, and by a cost of double resistance considerably smaller than the sum of the costs of single resistance.

  14. A New Analysis of the Two Classical ZZ Ceti White Dwarfs GD 165 and Ross 548. II. Seismic Modeling

    NASA Astrophysics Data System (ADS)

    Giammichele, N.; Fontaine, G.; Brassard, P.; Charpinet, S.

    2016-03-01

    We present the second of a two-part seismic analysis of the bright, hot ZZ Ceti stars GD 165 and Ross 548. In this second part, we report the results of detailed searches in parameter space for identifying an optimal model for each star that can account well for the observed periods, while being consistent with the spectroscopic constraints derived in our first paper. We find optimal models for each target that reproduce the six observed periods to within ∼0.3% on average. We also find that there is a sensitivity to the core composition for Ross 548, while there is practically none for GD 165. Our optimal model of Ross 548, with its thin envelope, indeed shows weight functions for some confined modes that extend relatively deep into the interior, thus explaining the sensitivity of the period spectrum to the core composition in that star. In contrast, our optimal seismic model of its spectroscopic sibling, GD 165, with its thick envelope, does not trap/confine modes very efficiently, and we find weight functions for all six observed modes that do not extend into the deep core, hence accounting for the lack of sensitivity in that case. Furthermore, we exploit the observed multiplet structure, which we ascribe to rotation. We are able to map the rotation profile in GD 165 (Ross 548) over the outermost ∼20% (∼5%) of its radius, and we find that the profile is consistent with solid-body rotation.

  15. Aggregation Pheromone System: A Real-parameter Optimization Algorithm using Aggregation Pheromones as the Base Metaphor

    NASA Astrophysics Data System (ADS)

    Tsutsui, Shigeyosi

    This paper proposes an aggregation pheromone system (APS) for solving real-parameter optimization problems using the collective behavior of individuals that communicate via aggregation pheromones. APS was tested on several test functions commonly used in evolutionary computation. The results showed that APS could solve real-parameter optimization problems fairly well. A sensitivity analysis of the control parameters of APS is also presented.

  16. An assessment of separable fluid connector system parameters to perform a connector system design optimization study

    NASA Technical Reports Server (NTRS)

    Prasthofer, W. P.

    1974-01-01

    The key to optimization of design where there are a large number of variables, all of which may not be known precisely, lies in the mathematical tool of dynamic programming developed by Bellman. This methodology can lead to optimized solutions to the design of critical systems in a minimum amount of time, even when there are a great number of acceptable configurations to be considered. To demonstrate the usefulness of dynamic programming, an analytical method is developed for evaluating the relationship among existing numerous connector designs to find the optimum configuration. The data utilized in the study were generated from 900 flanges designed for six subsystems of the S-1B stage of the Saturn 1B space carrier vehicle.

  17. Optimal adaptive control for quantum metrology with time-dependent Hamiltonians.

    PubMed

    Pang, Shengshi; Jordan, Andrew N

    2017-03-09

    Quantum metrology has been studied for a wide range of systems with time-independent Hamiltonians. For systems with time-dependent Hamiltonians, however, due to the complexity of dynamics, little has been known about quantum metrology. Here we investigate quantum metrology with time-dependent Hamiltonians to bridge this gap. We obtain the optimal quantum Fisher information for parameters in time-dependent Hamiltonians, and show proper Hamiltonian control is generally necessary to optimize the Fisher information. We derive the optimal Hamiltonian control, which is generally adaptive, and the measurement scheme to attain the optimal Fisher information. In a minimal example of a qubit in a rotating magnetic field, we find a surprising result that the fundamental limit of T^2 time scaling of quantum Fisher information can be broken with time-dependent Hamiltonians, which reaches T^4 in estimating the rotation frequency of the field. We conclude by considering level crossings in the derivatives of the Hamiltonians, and point out additional control is necessary for that case.

  18. Optimization of OT-MACH Filter Generation for Target Recognition

    NASA Technical Reports Server (NTRS)

    Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive-step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of two performance measures, correlation peak height and peak-to-sidelobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing have shown preliminary success at finding an approximation of the optimal filter in terms of alpha, beta, and gamma values. This corresponded to a substantial improvement in detection performance, with the true positive rate increasing for the same number of average false positives per image.
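
    An adaptive-step search over the three filter parameters might look like the following sketch; the composite score below is a hypothetical stand-in for the correlation peak height and peak-to-sidelobe metrics, and the step schedule is one plausible choice, not necessarily JPL's.

    ```python
    import numpy as np

    def coordinate_descent(score, p0, step=0.1, n_iter=60):
        """Adaptive-step search sketch over (alpha, beta, gamma): try a step
        in each coordinate; grow the step on success, shrink it on failure."""
        p, best = np.asarray(p0, float), score(p0)
        for _ in range(n_iter):
            improved = False
            for k in range(len(p)):
                for sign in (+1.0, -1.0):
                    q = p.copy()
                    q[k] += sign * step
                    s = score(q)
                    if s > best:                  # maximize the composite metric
                        p, best, improved = q, s, True
            step = step * 1.5 if improved else step * 0.5
        return p, best

    # Hypothetical composite of peak height and peak-to-sidelobe ratio,
    # peaking at alpha=0.3, beta=0.7, gamma=0.1 purely for illustration.
    score = lambda p: -np.sum((np.asarray(p) - [0.3, 0.7, 0.1]) ** 2)
    print(coordinate_descent(score, [0.5, 0.5, 0.5]))
    ```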

  19. An improved grey wolf optimizer algorithm for the inversion of geoelectrical data

    NASA Astrophysics Data System (ADS)

    Li, Si-Yu; Wang, Shu-Ming; Wang, Peng-Fei; Su, Xiao-Lu; Zhang, Xin-Song; Dong, Zhi-Hui

    2018-05-01

    The grey wolf optimizer (GWO) is a novel bionics algorithm inspired by the social hierarchy and prey-seeking behaviors of grey wolves. The GWO algorithm is easy to implement because of its basic concept, simple formulas, and small number of parameters. This paper develops a GWO algorithm with a nonlinear convergence factor and an adaptive location-updating strategy and applies this improved grey wolf optimizer (IGWO) algorithm to geophysical inversion problems using magnetotelluric (MT), DC resistivity and induced polarization (IP) methods. Numerical tests in MATLAB 2010b on forward-modeled data and observed data show that the IGWO algorithm can find the global minimum and rarely gets stuck in local minima. For further study, results inverted using the IGWO are contrasted with those of particle swarm optimization (PSO) and the simulated annealing (SA) algorithm. The comparison reveals that the IGWO and PSO perform similarly, and both balance exploration and exploitation better than the SA for a given number of iterations.
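
    The GWO position update, with one plausible nonlinear convergence factor, can be sketched as follows; the paper's exact nonlinear schedule and adaptive location-updating strategy may differ, and the misfit function is a placeholder for a real geoelectrical objective.

    ```python
    import numpy as np

    def igwo(f, dim=5, wolves=20, n_iter=200, lo=-10.0, hi=10.0, seed=13):
        """GWO sketch with a nonlinear convergence factor (an assumption;
        the paper's schedule and update strategy may differ)."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(lo, hi, (wolves, dim))
        for t in range(n_iter):
            fit = np.apply_along_axis(f, 1, X)
            alpha, beta, delta = X[np.argsort(fit)[:3]]   # three leading wolves
            a = 2.0 * (1.0 - (t / n_iter) ** 2)           # nonlinear decay, 2 -> 0
            for i in range(wolves):
                parts = []
                for L in (alpha, beta, delta):
                    A = 2 * a * rng.random(dim) - a
                    C = 2 * rng.random(dim)
                    D = np.abs(C * L - X[i])              # distance to the leader
                    parts.append(L - A * D)
                X[i] = np.clip(np.mean(parts, axis=0), lo, hi)
        fit = np.apply_along_axis(f, 1, X)
        return X[fit.argmin()], fit.min()

    # Stand-in for a geoelectrical misfit functional.
    print(igwo(lambda m: float(np.sum((m - 3.0) ** 2))))
    ```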

  20. Lunar Habitat Optimization Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    SanScoucie, M. P.; Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Long-duration surface missions to the Moon and Mars will require bases to accommodate habitats for the astronauts. Transporting the materials and equipment required to build the necessary habitats is costly and difficult. The materials chosen for the habitat walls play a direct role in protection against hazards such as meteoroid impacts and radiation. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design space. Clearly, an optimization method is warranted for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat wall design tool utilizing genetic algorithms (GAs) has been developed. GAs use a "survival of the fittest" philosophy in which the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multiobjective formulation covering up-mass, heat loss, structural analysis, meteoroid impact protection, and radiation protection. This Technical Publication presents the research and development of this tool as well as a technique for finding the optimal GA search parameters.

  1. Optimal adaptive control for quantum metrology with time-dependent Hamiltonians

    PubMed Central

    Pang, Shengshi; Jordan, Andrew N.

    2017-01-01

    Quantum metrology has been studied for a wide range of systems with time-independent Hamiltonians. For systems with time-dependent Hamiltonians, however, due to the complexity of dynamics, little has been known about quantum metrology. Here we investigate quantum metrology with time-dependent Hamiltonians to bridge this gap. We obtain the optimal quantum Fisher information for parameters in time-dependent Hamiltonians, and show proper Hamiltonian control is generally necessary to optimize the Fisher information. We derive the optimal Hamiltonian control, which is generally adaptive, and the measurement scheme to attain the optimal Fisher information. In a minimal example of a qubit in a rotating magnetic field, we find a surprising result that the fundamental limit of T^2 time scaling of quantum Fisher information can be broken with time-dependent Hamiltonians, which reaches T^4 in estimating the rotation frequency of the field. We conclude by considering level crossings in the derivatives of the Hamiltonians, and point out additional control is necessary for that case. PMID:28276428

  2. Selecting and optimizing eco-physiological parameters of Biome-BGC to reproduce observed woody and leaf biomass growth of Eucommia ulmoides plantation in China using Dakota optimizer

    NASA Astrophysics Data System (ADS)

    Miyauchi, T.; Machimura, T.

    2013-12-01

    In simulations using an ecosystem process model, the adjustment of parameters is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements in models consisting of various ecosystem processes. In this study, we applied a general-purpose optimization tool to the parameter optimization of an ecosystem model and examined its validity by comparing the simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site is dry temperate. Leaf and above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree-ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and parameters not selected for optimization were left at their defaults. 11-year normal simulations were performed following a spin-up run. In order to select the parameters to optimize, we analyzed the sensitivity of leaf and above- and below-ground woody biomass to the eco-physiological parameters. Following the selection, optimization of the parameters was performed using the Dakota optimizer. Dakota is an optimizer developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the objective function, we calculated the sum of relative errors between simulated and measured leaf and above- and below-ground woody carbon in each of the eleven years. In an alternative run, errors in the last year (at the field survey) were weighted for priority. We compared several gradient-based global optimization methods in Dakota, starting from the default parameters of Biome-BGC. The sensitivity analysis showed that the carbon allocation parameters between coarse root and leaf and between stem and leaf, as well as the SLA, contributed most to changes in both leaf and woody biomass; these parameters were selected for optimization. The measured leaf and above- and below-ground woody biomass carbon densities in the last year were 0.22, 1.81 and 0.86 kgC m-2, respectively, whereas those simulated in the non-optimized control case using all default parameters were 0.12, 2.26 and 0.52 kgC m-2, respectively. After optimizing the parameters, the simulated values improved to 0.19, 1.81 and 0.86 kgC m-2, respectively. The coliny global optimization method gave better fitness than the efficient global and ncsu direct methods. The optimized parameters showed higher carbon allocation rates to coarse roots and leaves and a lower SLA than the default parameters, which is consistent with the general water-physiological response in a dry climate. The simulation using the weighted objective function produced closer agreement with the measurements in the last year at the cost of lower fitness during the previous years.

  3. Analytical investigation of third grade nanofluidic flow over a Riga plate using Cattaneo-Christov model

    NASA Astrophysics Data System (ADS)

    Naseem, Anum; Shafiq, Anum; Zhao, Lifeng; Farooq, M. U.

    2018-06-01

    This article addresses third-grade nanofluid flow induced by a Riga plate; Cattaneo-Christov theory is adopted to investigate thermal and mass diffusion with the incorporation of the newly prominent zero nanoparticle mass flux condition. The governing system of equations is nondimensionalized through relevant similarity transformations, and significant findings are obtained using the optimal homotopy analysis method. The behaviors of the governing parameters for the velocity, temperature and concentration profiles are depicted graphically and also verified through three-dimensional patterns for some parameters. Values of the skin friction coefficient and Nusselt number are reported with appropriate discussion. The current results reveal that the temperature and concentration profiles decline when the thermal and concentration relaxation parameters, respectively, are augmented.

  4. Entangling measurements for multiparameter estimation with two qubits

    NASA Astrophysics Data System (ADS)

    Roccia, Emanuele; Gianani, Ilaria; Mancino, Luca; Sbroscia, Marco; Somma, Fabrizia; Genoni, Marco G.; Barbieri, Marco

    2018-01-01

    Carefully tailoring the quantum state of probes offers the capability of investigating matter at unprecedented precision. Rarely, however, is the interaction with the sample fully encompassed by a single parameter, and the information contained in the probe needs to be partitioned among multiple parameters. There exist, then, practical bounds on the ultimate joint-estimation precision set by the unavailability of a single optimal measurement for all parameters. Here, we discuss how these considerations are modified for two-level quantum probes (qubits) by the use of two copies and entangling measurements. We find that the joint estimation of phase and phase diffusion benefits from such collective measurements, while for multiple phases no enhancement can be observed. We demonstrate this in a proof-of-principle photonics setup.

  5. Transcranial Electrical Neuromodulation Based on the Reciprocity Principle

    PubMed Central

    Fernández-Corazza, Mariano; Turovets, Sergei; Luu, Phan; Anderson, Erik; Tucker, Don

    2016-01-01

    A key challenge in multi-electrode transcranial electrical stimulation (TES) or transcranial direct current stimulation (tDCS) is to find a current injection pattern that delivers the necessary current density at a target and minimizes it in the rest of the head, which is mathematically modeled as an optimization problem. Such an optimization with the Least Squares (LS) or Linearly Constrained Minimum Variance (LCMV) algorithms is generally computationally expensive and requires multiple independent current sources. Based on the reciprocity principle in electroencephalography (EEG) and TES, it could be possible to find the optimal TES patterns quickly whenever the solution of the forward EEG problem is available for a brain region of interest. Here, we investigate the reciprocity principle as a guideline for finding optimal current injection patterns in TES that comply with safety constraints. We define four different trial cortical targets in a detailed seven-tissue finite element head model, and analyze the performance of the reciprocity family of TES methods in terms of electrode density, targeting error, focality, intensity, and directionality, using the LS and LCMV solutions as the reference standards. It is found that the reciprocity algorithms show good performance comparable to the LCMV and LS solutions. Comparing the 128 and 256 electrode cases, we found that the use of greater electrode density improves the focality, directionality, and intensity parameters. The results show that the reciprocity principle can be used to quickly determine optimal current injection patterns in TES and can help to simplify TES protocols so that they are consistent with hardware and software availability and with safety constraints. PMID:27303311
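
    The LS-style formulation can be sketched as an equality-constrained quadratic program, as below. The random matrix is a hypothetical stand-in for an FEM-derived forward model, and the constraint set (unit target density, zero net injected current) is a simplification of the full safety-constrained problem.

    ```python
    import numpy as np

    rng = np.random.default_rng(14)
    n_elec, n_nodes = 16, 400
    # Hypothetical linear forward model: current density at brain nodes as a
    # function of injected electrode currents (a real model comes from FEM).
    G = rng.standard_normal((n_nodes, n_elec))
    target = 7                                  # index of the target node

    # Minimize off-target current density energy while delivering unit
    # density at the target, subject to zero net injected current.
    off = np.delete(G, target, axis=0)
    A = off.T @ off + 1e-6 * np.eye(n_elec)     # off-target energy (regularized)
    # Equality constraints G[target] @ s = 1 and ones @ s = 0, solved via
    # Lagrange multipliers in one KKT system.
    C = np.vstack([G[target], np.ones(n_elec)])
    KKT = np.block([[2 * A, C.T], [C, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(n_elec), [1.0, 0.0]])
    s = np.linalg.solve(KKT, rhs)[:n_elec]
    print("net current:", s.sum(), "target density:", G[target] @ s)
    ```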

  6. Transcranial Electrical Neuromodulation Based on the Reciprocity Principle.

    PubMed

    Fernández-Corazza, Mariano; Turovets, Sergei; Luu, Phan; Anderson, Erik; Tucker, Don

    2016-01-01

    A key challenge in multi-electrode transcranial electrical stimulation (TES) or transcranial direct current stimulation (tDCS) is to find a current injection pattern that delivers the necessary current density at a target and minimizes it in the rest of the head, which is mathematically modeled as an optimization problem. Such an optimization with the Least Squares (LS) or Linearly Constrained Minimum Variance (LCMV) algorithms is generally computationally expensive and requires multiple independent current sources. Based on the reciprocity principle in electroencephalography (EEG) and TES, it could be possible to find the optimal TES patterns quickly whenever the solution of the forward EEG problem is available for a brain region of interest. Here, we investigate the reciprocity principle as a guideline for finding optimal current injection patterns in TES that comply with safety constraints. We define four different trial cortical targets in a detailed seven-tissue finite element head model, and analyze the performance of the reciprocity family of TES methods in terms of electrode density, targeting error, focality, intensity, and directionality, using the LS and LCMV solutions as the reference standards. It is found that the reciprocity algorithms show good performance comparable to the LCMV and LS solutions. Comparing the 128 and 256 electrode cases, we found that the use of greater electrode density improves the focality, directionality, and intensity parameters. The results show that the reciprocity principle can be used to quickly determine optimal current injection patterns in TES and can help to simplify TES protocols so that they are consistent with hardware and software availability and with safety constraints.

  7. Hybrid protection algorithms based on game theory in multi-domain optical networks

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Wu, Jingjing; Hou, Weigang; Liu, Yejun; Zhang, Lincong; Li, Hongming

    2011-12-01

    With increasing network size, the optical backbone is divided into multiple domains, each with its own network operator and management policy. At the same time, failures in an optical network may lead to huge data loss, since each wavelength carries a large amount of traffic. Therefore, survivability in multi-domain optical networks is very important. However, existing survivability algorithms achieve only a unilateral optimization of the profit of either users or network operators, and thus cannot find the double-win optimal solution that considers economic factors for both users and network operators. In this paper, we therefore develop a multi-domain network model involving multiple Quality of Service (QoS) parameters. After presenting a link evaluation approach based on fuzzy mathematics, we propose a game model to find the optimal solution that maximizes the user's utility, the network operator's utility, and the joint utility of user and network operator. Since the problem of finding the double-win optimal solution is NP-complete, we propose two new hybrid protection algorithms, the Intra-domain Sub-path Protection (ISP) algorithm and the Inter-domain End-to-end Protection (IEP) algorithm. In ISP and IEP, hybrid protection means that an intelligent algorithm based on Bacterial Colony Optimization (BCO) and a heuristic algorithm are used to solve survivability in intra-domain routing and inter-domain routing, respectively. Simulation results show that ISP and IEP have similar comprehensive utility. In addition, ISP has better resource utilization efficiency, lower blocking probability, and higher network operator's utility, while IEP has better user's utility.

  8. Optimization of the Neutrino Factory, revisited

    NASA Astrophysics Data System (ADS)

    Agarwalla, Sanjib K.; Huber, Patrick; Tang, Jian; Winter, Walter

    2011-01-01

    We perform the baseline and energy optimization of the Neutrino Factory including the latest simulation results on the magnetized iron detector (MIND). We also consider the impact of τ decays, generated by ν_μ → ν_τ or ν_e → ν_τ appearance, on the mass hierarchy, CP violation, and θ_13 discovery reaches, which we find to be negligible for the considered detector. For the baseline-energy optimization for small sin² 2θ_13, we qualitatively recover the results with earlier simulations of the MIND detector. We find optimal baselines of about 2500 km to 5000 km for the CP violation measurement, where now values of E_μ as low as about 12 GeV may be possible. However, for large sin² 2θ_13, we demonstrate that the lower threshold and the backgrounds reconstructed at lower energies allow in fact for muon energies as low as 5 GeV at considerably shorter baselines, such as FNAL-Homestake. This implies that with the latest MIND analysis, low- and high-energy versions of the Neutrino Factory are just two different versions of the same experiment optimized for different parts of the parameter space. Apart from a green-field study of the updated detector performance, we discuss specific implementations for the two-baseline Neutrino Factory, where the considered detector sites are taken to be currently discussed underground laboratories. We find that reasonable setups can be found for the Neutrino Factory source in Asia, Europe, and North America, and that a triangular-shaped storage ring is possible in all cases based on geometrical arguments only.

  9. Noise in solid-state nanopores

    PubMed Central

    Smeets, R. M. M.; Keyser, U. F.; Dekker, N. H.; Dekker, C.

    2008-01-01

    We study ionic current fluctuations in solid-state nanopores over a wide frequency range and present a complete description of the noise characteristics. At low frequencies (f ≲ 100 Hz) we observe 1/f-type noise. We analyze this low-frequency noise at different salt concentrations and find that the noise power remarkably scales linearly with the inverse number of charge carriers, in agreement with Hooge's relation. We find a Hooge parameter α = (1.1 ± 0.1) × 10⁻⁴. In the high-frequency regime (f ≳ 1 kHz), we can model the increase in current power spectral density with frequency through a calculation of the Johnson noise. Finally, we use these results to compute the signal-to-noise ratio for DNA translocation for different salt concentrations and nanopore diameters, yielding the parameters for optimal detection efficiency. PMID:18184817

  10. Noise in solid-state nanopores.

    PubMed

    Smeets, R M M; Keyser, U F; Dekker, N H; Dekker, C

    2008-01-15

    We study ionic current fluctuations in solid-state nanopores over a wide frequency range and present a complete description of the noise characteristics. At low frequencies (f ≲ 100 Hz) we observe 1/f-type noise. We analyze this low-frequency noise at different salt concentrations and find that the noise power remarkably scales linearly with the inverse number of charge carriers, in agreement with Hooge's relation. We find a Hooge parameter α = (1.1 ± 0.1) × 10⁻⁴. In the high-frequency regime (f ≳ 1 kHz), we can model the increase in current power spectral density with frequency through a calculation of the Johnson noise. Finally, we use these results to compute the signal-to-noise ratio for DNA translocation for different salt concentrations and nanopore diameters, yielding the parameters for optimal detection efficiency.
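
    For reference, the Hooge relation that both records above invoke is commonly written as below; the notation (current power spectral density S_I, ionic current I, number of charge carriers N_c, frequency f, Hooge parameter α) is ours, chosen to match the standard 1/f-noise literature.

    ```latex
    \[
      \frac{S_I(f)}{I^2} \;=\; \frac{\alpha}{N_c\, f},
      \qquad \alpha \approx (1.1 \pm 0.1) \times 10^{-4}
      \ \text{(as measured above)}.
    \]
    ```

    The reported linear scaling of the low-frequency noise power with the inverse number of charge carriers is exactly the 1/N_c dependence in this expression.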

  11. Continuous spatial public goods game with self and peer punishment based on particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Quan, Ji; Yang, Xiukang; Wang, Xianjia

    2018-07-01

    How cooperative behavior emerges and evolves in human society remains a puzzle. It has been observed that the sense of guilt rooted in free-riding and the sense of justice for punishing free-riders are prevalent in the real world. Inspired by this observation, two punishment mechanisms, called self-punishment and peer punishment, are introduced into the spatial public goods game in this paper. For each mechanism, we introduce a corresponding parameter to describe the level of individual or social tolerance. For each individual, whether it punishes others or is punished by others depends on the corresponding tolerance parameter. We focus on the effects of the two kinds of tolerance parameters on the cooperation of the population. The particle swarm optimization (PSO)-based learning rule is used to describe the strategy-updating process of individuals, and our model considers both memory and imitation. Via simulation experiments, we find that both punishment mechanisms can facilitate the promotion of cooperation to a large extent. For self-punishment, and for most parameters of peer punishment, the smaller the tolerance parameter, the more conducive it is to promoting cooperation. These results can help us better understand the prevalence of cooperation in the real world.

  12. Evaluation of the pre-posterior distribution of optimized sampling times for the design of pharmacokinetic studies.

    PubMed

    Duffull, Stephen B; Graham, Gordon; Mengersen, Kerrie; Eccleston, John

    2012-01-01

    Information theoretic methods are often used to design studies that aim to learn about pharmacokinetic and linked pharmacokinetic-pharmacodynamic systems. These design techniques, such as D-optimality, provide the optimum experimental conditions. The performance of the optimum design will depend on the ability of the investigator to comply with the proposed study conditions. However, in clinical settings it is not possible to comply exactly with the optimum design and hence some degree of unplanned suboptimality occurs due to error in the execution of the study. In addition, due to the nonlinear relationship of the parameters of these models to the data, the designs are also locally dependent on an arbitrary choice of a nominal set of parameter values. A design that is robust to both study conditions and uncertainty in the nominal set of parameter values is likely to be of use clinically. We propose an adaptive design strategy to account for both execution error and uncertainty in the parameter values. In this study we investigate designs for a one-compartment first-order pharmacokinetic model. We do this in a Bayesian framework using Markov-chain Monte Carlo (MCMC) methods. We consider log-normal prior distributions on the parameters and investigate several prior distributions on the sampling times. An adaptive design was used to find the sampling window for the current sampling time conditional on the actual times of all previous samples.

  13. Optimally oriented grooves on dental implants improve bone quality around implants under repetitive mechanical loading.

    PubMed

    Kuroshima, Shinichiro; Nakano, Takayoshi; Ishimoto, Takuya; Sasaki, Muneteru; Inoue, Maaya; Yasutake, Munenori; Sawase, Takashi

    2017-01-15

    The aim was to investigate the effect of groove designs on bone quality under controlled repetitive load conditions for optimizing dental implant design. Anodized Ti-6Al-4V alloy implants with -60° and +60° grooves around the neck were placed in the proximal tibial metaphysis of rabbits. Repetitive mechanical loading was applied via the implants (50 N, 3 Hz, 1800 cycles, 2 days/week) starting 12 weeks after surgery and continuing for 8 weeks. Bone quality, defined as osteocyte density and the degree of biological apatite (BAp) c-axis/collagen fiber alignment, was then evaluated. Groove designs did not affect bone quality without mechanical loading; however, repetitive mechanical loading significantly increased bone-to-implant contact, bone mass, and bone mineral density (BMD). In +60° grooves, the BAp c-axis/collagen fibers preferentially aligned along the groove direction with mechanical loading. Moreover, osteocyte density was significantly higher both inside and in the region adjacent to the +60° grooves, but not the -60° grooves. These results suggest that the +60° grooves successfully transmitted the load to the bone tissues surrounding the implants through the grooves. An optimally oriented groove structure on the implant surface is thus a promising way to achieve bone tissue with appropriate bone quality. This is the first report to propose the optimal design of grooves on the necks of dental implants for improving bone quality parameters as well as BMD. The findings suggest that not only BMD but also bone quality could be a useful clinical parameter in implant dentistry. Although the paradigm of bone quality has shifted from density-based assessments to structural evaluations of bone, clarifying bone quality based on structural bone evaluations remains challenging in implant dentistry. In this study, we first demonstrated that the optimal design of dental implant necks improved bone quality, defined as osteocyte density and the degree of preferential alignment of the biological apatite c-axis/collagen fibers, using light microscopy, polarized light microscopy, and a microbeam X-ray diffractometer system, after application of a controlled mechanical load. Our new findings suggest that bone quality around dental implants could become a new clinical parameter, alongside bone mineral density, to completely account for bone strength in implant dentistry. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  14. A parameters optimization method for planar joint clearance model and its application for dynamics simulation of reciprocating compressor

    NASA Astrophysics Data System (ADS)

    Hai-yang, Zhao; Min-qiang, Xu; Jin-dong, Wang; Yong-bo, Li

    2015-05-01

    To improve the accuracy of dynamic response simulation for mechanisms with joint clearance, a parameter optimization method for the planar joint clearance contact force model is presented in this paper, and the optimized parameters are applied to the dynamic response simulation of a mechanism with an oversized joint clearance fault. By studying the effect of increased clearance on the parameters of the joint clearance contact force model, a relation between model parameters at different clearances was derived. Then the dynamic equation of a two-stage reciprocating compressor with four joint clearances was developed using the Lagrange method, and a multi-body dynamic model built in ADAMS software was used to solve this equation. To obtain a simulated dynamic response much closer to that of experimental tests, the parameters of the joint clearance model, instead of using the designed values, were optimized by a genetic algorithm approach. Finally, the optimized parameters were applied to simulate the dynamic response of the model with the oversized joint clearance fault according to the derived parameter relation. The dynamic response of the experimental test verified the effectiveness of this application.

  15. Impact factors and the optimal parameter of acoustic structure quantification in the assessment of liver fibrosis.

    PubMed

    Huang, Yang; Liu, Guang-Jian; Liao, Bing; Huang, Guang-Liang; Liang, Jin-Yu; Zhou, Lu-Yao; Wang, Fen; Li, Wei; Xie, Xiao-Yan; Wang, Wei; Lu, Ming-De

    2015-09-01

    The aims of the present study are to assess the impact factors on acoustic structure quantification (ASQ) ultrasound and to find the optimal parameter for the assessment of liver fibrosis. Twenty healthy volunteers underwent ASQ examinations to evaluate impact factors in ASQ image acquisition and analysis. An additional 113 patients with liver diseases underwent standardized ASQ examinations, and the results were compared with histologic staging of liver fibrosis. We found that the right liver displayed lower values of ASQ parameters than the left (p = 0.000-0.021). Receive gain had no significant impact except at gain 70 (p = 0.193-1.000). With regard to the diameter of vessels involved in the regions of interest, the group ≤2.0 mm differed significantly from the group 2.1-5.0 mm (p = 0.000-0.033) and the group >5.0 mm (p = 0.000-0.062). However, the region-of-interest size (p = 0.438-1.000) and depth (p = 0.072-0.764) had no statistically significant impact. Good intra- and inter-operator reproducibility was found in both image acquisition and offline image analysis. In the liver fibrosis study, the focal disturbance ratio had the highest correlation with histologic fibrosis stage (r = 0.67, p < 0.001). In conclusion, the testing position, receive gain, and involved vessels were the main impact factors in ASQ examinations, and the focal disturbance ratio was the optimal parameter for the assessment of liver fibrosis. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  16. Current and efficiency of Brownian particles under oscillating forces in entropic barriers

    NASA Astrophysics Data System (ADS)

    Nutku, Ferhat; Aydıner, Ekrem

    2015-04-01

    In this study, considering a temporally unbiased force and different forms of oscillating forces, we investigate the current and efficiency of Brownian particles in an entropic tube structure and present the numerically obtained results. We show that different force forms give rise to different current and efficiency profiles in different optimized parameter intervals. We find that an unbiased oscillating force and an unbiased temporal force lead to a current and efficiency that depend on these parameters. We also observe that the current and efficiency caused by temporal and different oscillating forces reach their maximum and minimum values in different parameter intervals. We conclude that the current or efficiency can be controlled dynamically by adjusting the parameters of the entropic barriers and the applied force. Project supported by the Funds from Istanbul University (Grant No. 45662).

  17. Performance Analysis of Hybrid Electric Vehicle over Different Driving Cycles

    NASA Astrophysics Data System (ADS)

    Panday, Aishwarya; Bansal, Hari Om

    2017-02-01

    This article aims to find the nature and response of a hybrid vehicle over various standard driving cycles. Road profile parameters play an important role in determining fuel efficiency. The typical parameters of a road profile can be reduced to a useful smaller set using principal component analysis and independent component analysis; the resulting reduced data set may yield a more appropriate and important parameter cluster. With the reduced parameter set, fuel economies over various driving cycles are ranked using the TOPSIS and VIKOR multi-criteria decision-making methods. The ranking trend is then compared with the fuel economies achieved after driving the vehicle over the respective roads. The control strategy responsible for the power split is optimized using a genetic algorithm. A 1RC battery model and a modified SOC estimation method are used in the simulation, and improved results compared with the defaults are obtained.

  18. Exact results for the finite time thermodynamic uncertainty relation

    NASA Astrophysics Data System (ADS)

    Manikandan, Sreekanth K.; Krishnamurthy, Supriya

    2018-03-01

    We obtain exact results for the recently discovered finite-time thermodynamic uncertainty relation for the dissipated work W_d in a stochastically driven system with non-Gaussian work statistics, both in the steady-state and transient regimes, by obtaining exact expressions for any moment of W_d at arbitrary times. The uncertainty function (the Fano factor of W_d) is bounded from below by 2k_B T as expected, for all times τ, in both steady-state and transient regimes. The lower bound is reached at τ = 0 as well as when certain system parameters vanish (corresponding to an equilibrium state). Surprisingly, we find that the uncertainty function also reaches a constant value at large τ for all the cases we have looked at. For a system starting and remaining in steady state, the uncertainty function increases monotonically, as a function of τ as well as other system parameters, implying that the large-τ value is also an upper bound. For the same system in the transient regime, however, we find that the uncertainty function can have a local minimum at an accessible time τ_m, for a range of parameter values. The large-τ value for the uncertainty function is hence not a bound in this case. The non-monotonicity suggests, rather counter-intuitively, that there might be an optimal time for the working of microscopic machines, as well as an optimal configuration in the phase space of parameter values. Our solutions show that the ratios of higher moments of the dissipated work are also bounded from below by 2k_B T. For another model, also solvable by our methods, which never reaches a steady state, the uncertainty function is, in some cases, bounded from below by a value less than 2k_B T.
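
    In compact form, the bound discussed above can be written as follows (our transcription; Q is the uncertainty function, i.e., the Fano factor of the dissipated work):

    ```latex
    \[
      Q(\tau) \;\equiv\; \frac{\operatorname{Var}\!\left[W_d(\tau)\right]}
                              {\langle W_d(\tau) \rangle}
      \;\ge\; 2 k_B T \qquad \text{for all } \tau,
    \]
    ```

    with equality at τ = 0 and in the equilibrium limit, per the abstract.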

  19. An application of the Krylov-FSP-SSA method to parameter fitting with maximum likelihood

    NASA Astrophysics Data System (ADS)

    Dinh, Khanh N.; Sidje, Roger B.

    2017-12-01

    Monte Carlo methods such as the stochastic simulation algorithm (SSA) have traditionally been employed in gene regulation problems. However, there has been increasing interest to directly obtain the probability distribution of the molecules involved by solving the chemical master equation (CME). This requires addressing the curse of dimensionality that is inherent in most gene regulation problems. The finite state projection (FSP) seeks to address the challenge and there have been variants that further reduce the size of the projection or that accelerate the resulting matrix exponential. The Krylov-FSP-SSA variant has proved numerically efficient by combining, on one hand, the SSA to adaptively drive the FSP, and on the other hand, adaptive Krylov techniques to evaluate the matrix exponential. Here we apply this Krylov-FSP-SSA to a mutual inhibitory gene network synthetically engineered in Saccharomyces cerevisiae, in which bimodality arises. We show numerically that the approach can efficiently approximate the transient probability distribution, and this has important implications for parameter fitting, where the CME has to be solved for many different parameter sets. The fitting scheme amounts to an optimization problem of finding the parameter set so that the transient probability distributions fit the observations with maximum likelihood. We compare five optimization schemes for this difficult problem, thereby providing further insights into this approach of parameter estimation that is often applied to models in systems biology where there is a need to calibrate free parameters. Work supported by NSF grant DMS-1320849.

  20. Finding-specific display presets for computed radiography soft-copy reading.

    PubMed

    Andriole, K P; Gould, R G; Webb, W R

    1999-05-01

    Much work has been done to optimize the display of cross-sectional modality imaging examinations for soft-copy reading (i.e., window/level tissue presets, and format presentations such as tile and stack modes, four-on-one, nine-on-one, etc). Less attention has been paid to the display of digital forms of the conventional projection x-ray. The purpose of this study is to assess the utility of providing presets for computed radiography (CR) soft-copy display, based not on the window/level settings, but on processing applied to the image optimized for visualization of specific findings, pathologies, etc (e.g., pneumothorax, tumor, tube location). It is felt that digital display of CR images based on finding-specific processing presets has the potential to: speed reading of digital projection x-ray examinations on soft copy; improve diagnostic efficacy; standardize display across examination type, clinical scenario, important key findings, and significant negatives; facilitate image comparison; and improve confidence in and acceptance of soft-copy reading. Clinical chest images are acquired using an Agfa-Gevaert (Mortsel, Belgium) ADC 70 CR scanner and Fuji (Stamford, CT) 9000 and AC2 CR scanners. Those demonstrating pertinent findings are transferred over the clinical picture archiving and communications system (PACS) network to a research image processing station (Agfa PS5000), where the optimal image-processing settings per finding, pathologic category, etc, are developed in conjunction with a thoracic radiologist, by manipulating the multiscale image contrast amplification (Agfa MUSICA) algorithm parameters. Soft-copy display of images processed with finding-specific settings is compared with the standard default image presentation for 50 cases of each category. Comparison is scored using a 5-point scale, with the positive scale denoting that the standard presentation is preferred over the finding-specific processing, the negative scale denoting that the finding-specific processing is preferred over the standard presentation, and zero denoting no difference. Processing settings have been developed for several findings including pneumothorax and lung nodules, and clinical cases are currently being collected in preparation for formal clinical trials. Preliminary results indicate a preference for the optimized-processing presentation of images over the standard default, particularly by inexperienced radiology residents and referring clinicians.

  1. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and considerably accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to χ², obtained from the Taylor series expansion of χ², is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions which are multi-linear combinations of nonlinear functions.
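
    The core of the GA-MLR idea, globally optimizing only the nonlinear parameters while solving for the linear amplitudes exactly at every evaluation, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it uses SciPy's differential evolution as a stand-in for the GA, and synthetic biexponential decay data with made-up lifetimes.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Synthetic biexponential decay: y(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)
    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 200)
    y = 2.0 * np.exp(-t / 0.8) + 0.5 * np.exp(-t / 3.5) \
        + 0.01 * rng.standard_normal(t.size)

    def chi2(taus):
        """Nonlinear params (lifetimes) -> best chi^2 with amplitudes from MLR."""
        basis = np.exp(-t[:, None] / np.asarray(taus)[None, :])  # design matrix
        amps, *_ = np.linalg.lstsq(basis, y, rcond=None)          # linear step
        resid = y - basis @ amps
        return float(resid @ resid)

    # Global search over the nonlinear parameters only (2-D instead of 4-D).
    result = differential_evolution(chi2, bounds=[(0.1, 2.0), (2.0, 10.0)], seed=0)
    print(result.x)  # recovered lifetimes, ~[0.8, 3.5]
    ```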

  2. Synthesis, optimization, and characterization of molecularly imprinted nanoparticles

    NASA Astrophysics Data System (ADS)

    Rostamizadeh, Kobra; Abdollahi, Hamid; Parsajoo, Cobra

    2013-04-01

    Nanoparticles of molecularly imprinted polymers (MIPs) were prepared by the precipitation polymerization method. Glucose was used as the template molecule. The impact of different process parameters on the preparation of nanoparticles was investigated in order to reach the maximum binding capacity of the MIPs. Experimental data based on a uniform design were analyzed using an artificial neural network to find the optimal condition. The results showed that the binding ability of MIP nanoparticles prepared under the optimum condition was much higher than that of the corresponding non-imprinted nanoparticles (NIPs). The findings also demonstrated the high glucose selectivity of the imprinted nanoparticles. The particle size of the MIP nanoparticles was about 557.6 nm, and Brunauer-Emmett-Teller analysis confirmed that the particle pores were mesopores and macropores of around 40 nm, possessing higher volume, surface area, and size uniformity compared to the corresponding NIPs.

  3. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    NASA Astrophysics Data System (ADS)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered regarding unknown parameters that are sensitive to data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
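
    For intuition, a bare-bones orthogonal matching pursuit loop is sketched below; the stabilized variant described above additionally regularizes the solution and chooses the sparsity level from a convergence criterion, which this toy version does not attempt. All data here are synthetic.

    ```python
    import numpy as np

    def omp(A, y, sparsity):
        """Plain orthogonal matching pursuit (SOMP adds a stabilization step
        on top of this greedy loop; see the paper for details)."""
        residual = y.copy()
        support = []
        for _ in range(sparsity):
            # Pick the column most correlated with the current residual.
            idx = int(np.argmax(np.abs(A.T @ residual)))
            if idx not in support:
                support.append(idx)
            # Re-fit on the selected support and update the residual.
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(2)
    A = rng.standard_normal((60, 200))
    x_true = np.zeros(200)
    x_true[[5, 42, 101]] = [1.5, -2.0, 0.7]
    x_hat = omp(A, A @ x_true, sparsity=3)  # recovers the three nonzeros
    ```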

  4. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
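
    As a small illustration of FIM-based design scoring (not the paper's code), the sketch below computes the Fisher Information Matrix for the Verhulst-Pearl logistic model at a candidate set of sampling times via finite-difference sensitivities, then evaluates the D-optimal (log-determinant) and E-optimal (smallest-eigenvalue) criteria; the parameter values and noise level are placeholders.

    ```python
    import numpy as np

    def fim_logistic(times, K=17.5, r=0.7, x0=0.1, sigma=1.0):
        """FIM for x(t) = K*x0 / (x0 + (K - x0)*exp(-r*t)) under i.i.d.
        Gaussian noise, via central-difference sensitivities.
        Parameter values are illustrative, not those of the paper."""
        theta = np.array([K, r, x0])
        def model(p, t):
            return p[0] * p[2] / (p[2] + (p[0] - p[2]) * np.exp(-p[1] * t))
        J = np.empty((len(times), 3))
        for j in range(3):
            h = 1e-6 * max(abs(theta[j]), 1.0)
            dp = np.zeros(3); dp[j] = h
            J[:, j] = (model(theta + dp, times) - model(theta - dp, times)) / (2 * h)
        return J.T @ J / sigma**2

    F = fim_logistic(np.linspace(0, 25, 10))
    print(np.log(np.linalg.det(F)))   # D-optimality score (maximize over designs)
    print(np.linalg.eigvalsh(F)[0])   # E-optimality score (smallest eigenvalue)
    ```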

  5. Multiresponse Optimization of Process Parameters in Turning of GFRP Using TOPSIS Method

    PubMed Central

    Parida, Arun Kumar; Routara, Bharat Chandra

    2014-01-01

    Taguchi's design of experiment is utilized to optimize the process parameters in turning operation with dry environment. Three parameters, cutting speed (v), feed (f), and depth of cut (d), with three different levels are taken for the responses like material removal rate (MRR) and surface roughness (R_a). The machining is conducted with Taguchi L9 orthogonal array, and based on the S/N analysis, the optimal process parameters for surface roughness and MRR are calculated separately. Considering the larger-the-better approach, optimal process parameters for material removal rate are cutting speed at level 3, feed at level 2, and depth of cut at level 3, that is, v3-f2-d3. Similarly for surface roughness, considering smaller-the-better approach, the optimal process parameters are cutting speed at level 1, feed at level 1, and depth of cut at level 3, that is, v1-f1-d3. Results of the main effects plot indicate that depth of cut is the most influencing parameter for MRR but cutting speed is the most influencing parameter for surface roughness and feed is found to be the least influencing parameter for both the responses. The confirmation test is conducted for both MRR and surface roughness separately. Finally, an attempt has been made to optimize the multiresponses using technique for order preference by similarity to ideal solution (TOPSIS) with Taguchi approach. PMID:27437503
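
    The two Taguchi S/N ratios used above have standard closed forms; a minimal sketch follows, with illustrative replicate values that are not the paper's measurements.

    ```python
    import numpy as np

    def sn_larger_the_better(y):
        """Taguchi S/N ratio for responses to maximize (e.g., MRR), in dB."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y**2))

    def sn_smaller_the_better(y):
        """Taguchi S/N ratio for responses to minimize (e.g., R_a), in dB."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y**2))

    # Illustrative replicate measurements for one L9 run:
    print(sn_larger_the_better([120.0, 135.0, 128.0]))  # MRR replicates
    print(sn_smaller_the_better([1.8, 2.1, 1.9]))       # R_a replicates
    ```

    The run with the highest S/N ratio for each response identifies the optimal level combination, which is how the v3-f2-d3 and v1-f1-d3 settings above are obtained.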

  6. OTIS 3.2 Software Released

    NASA Technical Reports Server (NTRS)

    Riehl, John P.; Sjauw, Waldy K.

    2004-01-01

    Trajectory, mission, and vehicle engineers concern themselves with finding the best way for an object to get from one place to another. These engineers rely upon special software to assist them in this. For a number of years, many engineers have used the OTIS program for this assistance. With OTIS, an engineer can fully optimize trajectories for airplanes, launch vehicles like the space shuttle, interplanetary spacecraft, and orbital transfer vehicles. OTIS provides four modes of operation, with each mode providing successively stronger optimization capability. The most powerful mode uses a mathematical method called implicit integration to solve what engineers and mathematicians call the optimal control problem. OTIS 3.2, which was developed at the NASA Glenn Research Center, is the latest release of this industry workhorse and features new capabilities for parameter optimization and mission design. OTIS stands for Optimal Control by Implicit Simulation, and it is implicit integration that makes OTIS so powerful at solving trajectory optimization problems. Why is this so important? The optimization process not only determines how to get from point A to point B, but it can also determine how to do this with the least amount of propellant, with the lightest starting weight, or in the fastest time possible while avoiding certain obstacles along the way. There are numerous conditions that engineers can use to define optimal, or best. OTIS provides a framework for defining the starting and ending points of the trajectory (point A and point B), the constraints on the trajectory (requirements like "avoid these regions where obstacles occur"), and what is being optimized (e.g., minimize propellant). The implicit integration method can find solutions to very complicated problems when there is not a lot of information available about what the optimal trajectory might be. The method was first developed for solving two-point boundary value problems and was adapted for use in OTIS. Implicit integration usually allows OTIS to find solutions to problems much faster than programs that use explicit integration and parametric methods. Consequently, OTIS is best suited to solving very complicated and highly constrained problems.

  7. Improving flood forecasting capability of physically based distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2015-10-01

    Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive model parameters from the terrain properties directly, so there was no need to calibrate model parameters; unfortunately, the uncertainties associated with this parameter derivation are very high, which has impacted their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the PSO algorithm, and to test and improve its performance; the second is to explore the possibility of improving the capability of physically based distributed hydrological models in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of the PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved Particle Swarm Optimization (PSO) algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting a linearly decreasing inertia weight strategy to change the inertia weight, and an arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments in southern China with different sizes, and the results show that the improved PSO algorithm could be used for Liuxihe model parameter optimization effectively and could largely improve the model capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
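
    A minimal sketch of PSO with the linearly decreasing inertia weight described above is given below; the arccosine-based adjustment of the acceleration coefficients is omitted (constant c1, c2 are used instead), and the objective, bounds, and other details are illustrative. The particle number (20) and iteration count (30) follow the values the study reports as appropriate.

    ```python
    import numpy as np

    def pso_minimize(f, bounds, n_particles=20, n_iter=30,
                     w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
        """Minimal PSO with a linearly decreasing inertia weight
        (w goes from w_max to w_min over the run)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        x = rng.uniform(lo, hi, (n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)]
        for k in range(n_iter):
            w = w_max - (w_max - w_min) * k / (n_iter - 1)  # linear decrease
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(p) for p in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            gbest = pbest[np.argmin(pbest_f)]
        return gbest, pbest_f.min()

    # Toy objective standing in for a flood-forecast error measure.
    best, val = pso_minimize(lambda p: np.sum((p - 1.5)**2), [(-5, 5)] * 4)
    ```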

  8. Investigation of statistical iterative reconstruction for dedicated breast CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makeev, Andrey; Glick, Stephen J.

    2013-08-15

    Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: the hyperbolic potential and the anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectrum produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with 280 μm microcalcification spheres embedded in it was used to model the attenuation properties of an uncompressed breast in a pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge preservation threshold in the penalized objective function with the hyperbolic potential, which resulted in low noise images with high contrast microcalcifications preserved. In terms of the numerical observer detectability index, the PML method with optimal parameters yielded substantially improved performance (by a factor of greater than 10) compared to FBP. The hyperbolic prior was also observed to be superior to the TV norm. A few of the best-performing parameter pairs for the PML method also demonstrated superior performance at various radiation doses. In fact, using PML with certain parameter values results in better images, acquired using a 2 mGy dose, than FBP-reconstructed images acquired using a 6 mGy dose. Conclusions: A range of optimal free parameters for the PML algorithm with hyperbolic and TV norm-based potentials is presented for the microcalcification detection task in dedicated breast CT. The reported values can be used as starting values of the free parameters when SIR techniques are used for image reconstruction. Significant improvement in image quality can be achieved by using PML with an optimal combination of parameters, as compared to FBP. Importantly, these results suggest improved detection of microcalcifications can be obtained by using PML with a lower radiation dose to the patient than using FBP with a higher dose.
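
    The hyperbolic (edge-preserving) potential referred to above is commonly written as follows; this is our transcription of the standard form, with β the roughness penalty weight and δ the edge-preservation threshold, i.e., the two free parameters the study tunes:

    ```latex
    \[
      \hat{\mu} \;=\; \arg\max_{\mu}\; L(y;\mu)
        \;-\; \beta \sum_{j} \sum_{k \in \mathcal{N}_j} \psi(\mu_j - \mu_k),
      \qquad
      \psi(t) \;=\; \delta^2 \left( \sqrt{1 + (t/\delta)^2} - 1 \right),
    \]
    ```

    which behaves quadratically for |t| ≪ δ (smoothing noise) and linearly for |t| ≫ δ (preserving edges such as microcalcification boundaries).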

  9. Estimation and detection information trade-off for x-ray system optimization

    NASA Astrophysics Data System (ADS)

    Cushing, Johnathan B.; Clarkson, Eric W.; Mandava, Sagar; Bilgin, Ali

    2016-05-01

    X-ray Computed Tomography (CT) systems perform complex imaging tasks that involve both detecting and estimating system parameters; for example, a baggage imaging system performs threat detection while also generating reconstructions. This leads to a desire to optimize both the detection and estimation performance of a system, but most metrics focus on only one of these aspects. When making design choices, there is a need for a concise metric that considers both detection and estimation information parameters and then provides the user with the collection of possible optimal outcomes. In this paper, a graphical analysis of the Estimation and Detection Information Trade-off (EDIT) will be explored. EDIT produces curves that allow a decision to be made for system optimization based on design constraints and the costs associated with estimation and detection. EDIT analyzes the system in the estimation-information and detection-information space, where the user is free to pick their own method of calculating these measures. The user of EDIT can choose any desired figure of merit for detection information and estimation information; the EDIT curves will then provide the collection of optimal outcomes. The paper first looks at two methods of creating EDIT curves. The curves can be calculated by evaluating a wide variety of systems and finding the optimal system that maximizes a figure of merit. EDIT can also be found as an upper bound of the information from a collection of systems. These two methods allow the user to choose a method of calculation that best fits the constraints of their actual system.

  10. The Design of Large Geothermally Powered Air-Conditioning Systems Using an Optimal Control Approach

    NASA Astrophysics Data System (ADS)

    Horowitz, F. G.; O'Bryan, L.

    2010-12-01

    The direct use of geothermal energy from Hot Sedimentary Aquifer (HSA) systems for large scale air-conditioning projects involves many tradeoffs. Aspects contributing towards making design decisions for such systems include: the inadequately known permeability and thermal distributions underground; the combinatorial complexity of selecting pumping and chiller systems to match the underground conditions to the air-conditioning requirements; the future price variations of the electricity market; any uncertainties in future Carbon pricing; and the applicable discount rate for evaluating the financial worth of the project. Expanding upon the previous work of Horowitz and Hornby (2007), we take an optimal control approach to the design of such systems. By building a model of the HSA system, the drilling process, the pumping process, and the chilling operations, along with a specified objective function, we can write a Hamiltonian for the system. Using the standard techniques of optimal control, we use gradients of the Hamiltonian to find the optimal design for any given set of permeabilities, thermal distributions, and the other engineering and financial parameters. By using this approach, optimal system designs could potentially evolve in response to the actual conditions encountered during drilling. Because the granularity of some current models is so coarse, we will be able to compare our optimal control approach to an exhaustive search of parameter space. We will present examples from the conditions appropriate for the Perth Basin of Western Australia, where the WA Geothermal Centre of Excellence is involved with two large air-conditioning projects using geothermal water from deep aquifers at 75 to 95 degrees C.
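
    Schematically, the optimal-control formulation described above pairs a running objective with adjoint-weighted system dynamics; in generic notation (not the authors'), with state x (e.g., aquifer temperatures and pressures), control u (e.g., pumping and chiller settings), and adjoint λ:

    ```latex
    \[
      H(x, u, \lambda, t) \;=\; \ell(x, u, t) \;+\; \lambda^{\mathsf{T}} f(x, u, t),
      \qquad
      \dot{\lambda} = -\,\frac{\partial H}{\partial x},
      \qquad
      \frac{\partial H}{\partial u} = 0 \ \text{along the optimal design},
    \]
    ```

    and gradients of H with respect to the design variables are what drive the search described in the abstract.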

  11. An application of PSO algorithm for multi-criteria geometry optimization of printed low-pass filters based on conductive periodic structures

    NASA Astrophysics Data System (ADS)

    Steckiewicz, Adam; Butrylo, Boguslaw

    2017-08-01

    In this paper we discuss the results of a multi-criteria optimization scheme as well as numerical calculations of periodic conductive structures with selected geometries. Thin printed structures embedded on a flexible dielectric substrate may be applied as simple, cheap, passive low-pass filters with an adjustable cutoff frequency in the low (up to 1 MHz) radio frequency range. The analysis of electromagnetic phenomena in the presented structures was realized on the basis of a three-dimensional numerical model of three proposed geometries of periodic elements. The finite element method (FEM) was used to obtain the solution of the electromagnetic harmonic field. Equivalent lumped electrical parameters of the printed cells obtained in this manner determine the shape of the amplitude transmission characteristic of the low-pass filter. The nonlinear influence of the printed cell geometry on the equivalent parameters of the cell's electric model makes it difficult to find the desired optimal solution. Therefore, the optimization problem of estimating the optimal cell geometry, posed as approximating a prescribed amplitude transmission characteristic with an adjusted cutoff frequency, was solved by the particle swarm optimization (PSO) algorithm. A dynamically adjusted inertia factor was also introduced into the algorithm to improve convergence to the global extremum of the multimodal objective function. Numerical results as well as PSO simulation results were characterized in terms of the approximation accuracy of predefined amplitude characteristics in the pass-band, stop-band, and at the cutoff frequency. Three geometries of varying degrees of complexity were considered and their use in signal processing systems was evaluated.

  12. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    PubMed

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
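
    The mechanism can be stated as a one-line estimator (schematic form, our notation): the optimal estimate minimizes the posterior expected cost rather than maximizing the likelihood,

    ```latex
    \[
      \hat{\theta}(y) \;=\; \arg\min_{\hat{\theta}}
        \int C\!\left(\hat{\theta} - \theta\right) p(\theta \mid y)\, d\theta,
    \]
    ```

    so when the cost C penalizes errors in one direction more heavily, the estimate is systematically shifted away from the maximum-likelihood value toward the cheaper side, which is the bias the model attributes to perception.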

  13. Using genetic algorithms to achieve an automatic and global optimization of analogue methods for statistical downscaling of precipitation

    NASA Astrophysics Data System (ADS)

    Horton, Pascal; Weingartner, Rolf; Obled, Charles; Jaboyedoff, Michel

    2017-04-01

    Analogue methods (AMs) rely on the hypothesis that similar situations, in terms of atmospheric circulation, are likely to result in similar local or regional weather conditions. These methods consist of sampling a certain number of past situations, based on different synoptic-scale meteorological variables (predictors), in order to construct a probabilistic prediction for a local weather variable of interest (predictand). They are often used for daily precipitation prediction, either in the context of real-time forecasting, reconstruction of past weather conditions, or future climate impact studies. The relationship between predictors and predictands is defined by several parameters (predictor variable, spatial and temporal windows used for the comparison, analogy criteria, and number of analogues), which are often calibrated by means of a semi-automatic sequential procedure that has strong limitations. AMs may include several subsampling levels (e.g. first sorting a set of analogues in terms of circulation, then restricting to those with a similar moisture status). The parameter space of the AMs can be very complex, with substantial co-dependencies between the parameters. Thus, global optimization techniques are likely to be necessary for calibrating most AM variants, as they can optimize all parameters of all analogy levels simultaneously. Genetic algorithms (GAs) were found to be successful in finding optimal values of AM parameters. They take parameter inter-dependencies into account and objectively select parameters that previously had to be chosen manually (such as the pressure levels and the temporal windows of the predictor variables), thus obviating the need to assess a high number of combinations. The performance scores of the optimized methods increased compared to reference methods, even more so for days with high precipitation totals. The resulting parameters were found to be relevant and spatially coherent. Moreover, they were obtained automatically and objectively, which reduces the effort invested in exploration attempts when adapting the method to a new region or to a new predictand. In addition, the approach allowed for new degrees of freedom, such as a weighting between the pressure levels and non-overlapping spatial windows. Genetic algorithms were then used further in order to automatically select predictor variables and analogy criteria. This resulted in interesting outputs, providing new predictor-criterion combinations. However, some limitations of the approach were encountered, and expert input is likely to remain necessary. Nevertheless, letting GAs explore a dataset for the best predictor for a predictand of interest is certainly a useful tool, particularly when applied to a new predictand or a new region with different climatic characteristics.

  14. Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data

    PubMed Central

    2017-01-01

    In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations were coded as a set of real-number m-dimensional vectors as the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via the particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier with a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is performed with more epochs and the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capabilities of the PSO algorithm are exploited to determine an optimal solution that is close to the global optimum. We conducted several experiments on hand-written characters and biological activity prediction datasets to show that the DNN classifiers trained by the network configurations expressed by the final solutions of the PSO algorithm, employed to construct an ensemble model and individual classifier, outperform the random approach in terms of the generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. PMID:29236718

  15. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    PubMed

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Development of a Composite Tailoring Procedure for Airplane Wings

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi

    2000-01-01

    The quest for finding optimum solutions to engineering problems has existed for a long time. In modern times, the development of optimization as a branch of applied mathematics is regarded to have originated in the works of Newton, Bernoulli and Euler. Venkayya has presented a historical perspective on optimization in [1]. The term 'optimization' is defined by Ashley [2] as a procedure "...which attempts to choose the variables in a design process so as formally to achieve the best value of some performance index while not violating any of the associated conditions or constraints". Ashley presented an extensive review of practical applications of optimization in the aeronautical field till about 1980 [2]. It was noted that there existed an enormous amount of published literature in the field of optimization, but its practical applications in industry were very limited. Over the past 15 years, though, optimization has been widely applied to address practical problems in aerospace design [3-5]. The design of high performance aerospace systems is a complex task. It involves the integration of several disciplines such as aerodynamics, structural analysis, dynamics, and aeroelasticity. The problem involves multiple objectives and constraints pertaining to the design criteria associated with each of these disciplines. Many important trade-offs exist between the parameters involved which are used to define the different disciplines. Therefore, the development of multidisciplinary design optimization (MDO) techniques, in which different disciplines and design parameters are coupled into a closed loop numerical procedure, seems appropriate to address such a complex problem. The importance of MDO in successful design of aerospace systems has been long recognized. Recent developments in this field have been surveyed by Sobieszczanski-Sobieski and Haftka [6].

  17. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a departure from previous modeling efforts, which focused on addressing uncertainty in physical parameters (e.g., soil porosity); the present work instead deals with uncertainty in the mathematical simulator itself (arising from model residuals). Compared to existing modeling approaches, in which only parameter uncertainty is considered, the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering a confidence level for optimal remediation strategies to system designers, and reducing the computational cost of the optimization process. 2009 Elsevier B.V. All rights reserved.

  18. Feature selection and classifier parameters estimation for EEG signals peak detection using particle swarm optimization.

    PubMed

    Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has assessed how much each peak feature contributes to a good and generalizable model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework searches for the combination of available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, compared to the framework without feature selection adaptation. Additionally, the framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model.
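
    A minimal sketch of PSO-driven feature selection under stated assumptions: the binary (sigmoid-transfer) position rule, the toy nearest-centroid fitness function, and all hyperparameters below are illustrative placeholders, not the study's classifier or settings:

      import numpy as np

      rng = np.random.default_rng(0)

      def fitness(mask, X, y):
          # Placeholder objective: accuracy of a trivial nearest-centroid rule
          # on the selected features; a real study would train its peak
          # classifier here instead.
          sel = mask.astype(bool)
          if not sel.any():
              return 0.0
          Xs = X[:, sel]
          c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
          pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
          return float((pred == y).mean())

      def binary_pso(X, y, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
          n_feat = X.shape[1]
          pos = (rng.random((n_particles, n_feat)) > 0.5).astype(float)  # 0/1 masks
          vel = rng.normal(0.0, 0.1, (n_particles, n_feat))
          pbest = pos.copy()
          pbest_fit = np.array([fitness(p, X, y) for p in pos])
          gbest = pbest[pbest_fit.argmax()].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, n_feat))
              vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
              # Sigmoid transfer: re-sample each bit with probability sigma(vel).
              pos = (rng.random((n_particles, n_feat)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
              fit = np.array([fitness(p, X, y) for p in pos])
              better = fit > pbest_fit
              pbest[better], pbest_fit[better] = pos[better], fit[better]
              gbest = pbest[pbest_fit.argmax()].copy()
          return gbest.astype(bool)

      X = rng.normal(size=(100, 12))
      y = (X[:, 0] + X[:, 3] > 0).astype(int)
      print(np.flatnonzero(binary_pso(X, y)))  # indices of selected features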

  19. Inference of Vohradský's Models of Genetic Networks by Solving Two-Dimensional Function Optimization Problems

    PubMed Central

    Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko

    2013-01-01

    The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of Vohradský's model, this study proposes a new method that defines the problem as several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method can estimate the parameters of Vohradský's models more effectively, with sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175
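
    For orientation, a forward simulation of a commonly cited form of Vohradský's neural-network-style rate law, dz_i/dt = k1_i * sigmoid(sum_j W_ij z_j + b_i) - k2_i z_i. The network weights, rate constants, and initial levels below are invented for illustration; this is not the study's two-dimensional decomposition itself:

      import numpy as np
      from scipy.integrate import solve_ivp

      def vohradsky_rhs(t, z, W, b, k1, k2):
          # dz_i/dt = k1_i * sigmoid(sum_j W_ij z_j + b_i) - k2_i * z_i
          return k1 / (1.0 + np.exp(-(W @ z + b))) - k2 * z

      # Illustrative 3-gene network (all values hypothetical).
      W  = np.array([[0.0, -2.0, 0.0],
                     [3.0,  0.0, 0.0],
                     [0.0,  2.5, 0.0]])
      b  = np.array([1.0, -1.5, 0.5])
      k1 = np.array([2.0, 1.5, 1.0])   # maximal synthesis rates
      k2 = np.array([0.5, 0.4, 0.3])   # degradation rate constants
      z0 = np.array([0.1, 0.1, 0.1])

      sol = solve_ivp(vohradsky_rhs, (0.0, 20.0), z0, args=(W, b, k1, k2))
      print(sol.y[:, -1])  # expression levels at t = 20

    Fitting such a model to time-series data means choosing W, b, k1 and k2 so that simulated trajectories match the measurements, which is the optimization problem the study decomposes gene by gene.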

  20. Optimizing Protein-Protein van der Waals Interactions for the AMBER ff9x/ff12 Force Field.

    PubMed

    Chapman, Dail E; Steck, Jonathan K; Nerenberg, Paul S

    2014-01-14

    The quality of molecular dynamics (MD) simulations relies heavily on the accuracy of the underlying force field. In recent years, considerable effort has been put into developing more accurate dihedral angle potentials for MD force fields, but relatively little work has focused on the nonbonded parameters, many of which are two decades old. In this work, we assess the accuracy of protein-protein van der Waals interactions in the AMBER ff9x/ff12 force field. Across a test set of 44 neat organic liquids containing the moieties present in proteins, we find root-mean-square (RMS) errors of 1.26 kcal/mol in enthalpy of vaporization and 0.36 g/cm³ in liquid densities. We then optimize the van der Waals radii and well depths for all of the relevant atom types using these observables, which lowers the RMS errors in enthalpy of vaporization and liquid density of our validation set to 0.59 kcal/mol (53% reduction) and 0.019 g/cm³ (46% reduction), respectively. Limitations in our parameter optimization were evident for certain atom types, however, and we discuss the implications of these observations for future force field development.
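
    A schematic of the fitting loop this describes, with one key simplification: a hypothetical analytic surrogate stands in for the per-liquid MD simulations the real optimization requires. The parameter vector, targets, and weights are all illustrative:

      import numpy as np
      from scipy.optimize import minimize

      # Toy surrogate for the expensive step: in the actual workflow each
      # evaluation requires an MD simulation of every neat liquid; here a
      # made-up analytic map takes (radius, well depth) pairs to observables.
      def predict_properties(params):
          r, eps = params[::2], params[1::2]   # per-atom-type radius, well depth
          hvap = 5.0 * eps * r                 # fake, monotone response surfaces
          rho = 1.2 - 0.15 * r + 0.3 * eps
          return hvap, rho

      def objective(params, hvap_ref, rho_ref, w_hvap=1.0, w_rho=100.0):
          hvap, rho = predict_properties(params)
          # Weighted sum of squared errors over the training liquids; density
          # is up-weighted because its numeric scale is much smaller.
          return (w_hvap * np.mean((hvap - hvap_ref) ** 2)
                  + w_rho * np.mean((rho - rho_ref) ** 2))

      hvap_ref = np.array([8.0, 10.5])        # kcal/mol, illustrative targets
      rho_ref = np.array([0.85, 0.95])        # g/cm^3, illustrative targets
      x0 = np.array([1.7, 0.11, 1.9, 0.15])   # initial radii/depths, two atom types
      res = minimize(objective, x0, args=(hvap_ref, rho_ref), method="Nelder-Mead")
      print(res.x)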

  1. Performance analysis of complex repairable industrial systems using PSO and fuzzy confidence interval based methodology.

    PubMed

    Garg, Harish

    2013-03-01

    The main objective of the present paper is to propose a methodology for analyzing the behavior of complex repairable industrial systems. In real-life situations, it is difficult to find the most optimal design policies for MTBF (mean time between failures), MTTR (mean time to repair) and related costs by utilizing available resources and uncertain data. For this, an availability-cost optimization model has been constructed for determining the optimal design parameters for improving the system design efficiency. The uncertainties in the data related to each component of the system are estimated with the help of fuzzy and statistical methodology in the form of triangular fuzzy numbers. Using these data, the various reliability parameters, which affect the system performance, are obtained in the form of fuzzy membership functions by the proposed confidence interval based fuzzy Lambda-Tau (CIBFLT) methodology. The computed results by CIBFLT are compared with the existing fuzzy Lambda-Tau methodology. Sensitivity analysis on the system MTBF has also been addressed. The methodology has been illustrated through a case study of the washing unit, a key part of the paper industry. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
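
    To illustrate the interval arithmetic that underlies such fuzzy reliability analyses (not the CIBFLT method itself), a sketch of alpha-cuts of triangular fuzzy MTBF/MTTR propagated through the steady-state availability formula, with hypothetical numbers:

      def alpha_cut(tfn, alpha):
          # Interval of a triangular fuzzy number (lo, mode, hi) at level alpha.
          lo, m, hi = tfn
          return lo + alpha * (m - lo), hi - alpha * (hi - m)

      def availability_interval(mtbf_tfn, mttr_tfn, alpha):
          # Steady-state availability A = MTBF / (MTBF + MTTR) is increasing
          # in MTBF and decreasing in MTTR, so endpoints pair up as below.
          m_lo, m_hi = alpha_cut(mtbf_tfn, alpha)
          r_lo, r_hi = alpha_cut(mttr_tfn, alpha)
          return m_lo / (m_lo + r_hi), m_hi / (m_hi + r_lo)

      mtbf = (900.0, 1000.0, 1100.0)   # hours, hypothetical +/-10% spread
      mttr = (9.0, 10.0, 11.0)         # hours, hypothetical +/-10% spread
      for a in (0.0, 0.5, 1.0):
          print(a, availability_interval(mtbf, mttr, a))

    At alpha = 1 the interval collapses to the crisp availability; lower alpha levels widen it, which is how the fuzzy membership function of the reliability parameter is traced out.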

  2. An optimized method for measuring fatty acids and cholesterol in stable isotope-labeled cells

    PubMed Central

    Argus, Joseph P.; Yu, Amy K.; Wang, Eric S.; Williams, Kevin J.; Bensinger, Steven J.

    2017-01-01

    Stable isotope labeling has become an important methodology for determining lipid metabolic parameters of normal and neoplastic cells. Conventional methods for fatty acid and cholesterol analysis have one or more issues that limit their utility for in vitro stable isotope-labeling studies. To address this, we developed a method optimized for measuring both fatty acids and cholesterol from small numbers of stable isotope-labeled cultured cells. We demonstrate quantitative derivatization and extraction of fatty acids from a wide range of lipid classes using this approach. Importantly, cholesterol is also recovered, albeit at a modestly lower yield, affording the opportunity to quantitate both cholesterol and fatty acids from the same sample. Although we find that background contamination can interfere with quantitation of certain fatty acids in low amounts of starting material, our data indicate that this optimized method can be used to accurately measure mass isotopomer distributions for cholesterol and many fatty acids isolated from small numbers of cultured cells. Application of this method will facilitate acquisition of lipid parameters required for quantifying flux and provide a better understanding of how lipid metabolism influences cellular function. PMID:27974366

  3. Optimal apparent damping as a function of the bandwidth of an array of vibration absorbers.

    PubMed

    Vignola, Joseph; Glean, Aldo; Judge, John; Ryan, Teresa

    2013-08-01

    The transient response of a resonant structure can be altered by the attachment of one or more substantially smaller resonators. Considered here is a coupled array of damped harmonic oscillators whose resonant frequencies are distributed across a frequency band that encompasses the natural frequency of the primary structure. Vibration energy introduced to the primary structure, which has little to no intrinsic damping, is transferred into and trapped by the attached array. It is shown that, when the properties of the array are optimized to reduce the settling time of the primary structure's transient response, the apparent damping is approximately proportional to the bandwidth of the array (the span of resonant frequencies of the attached oscillators). Numerical simulations were conducted using an unconstrained nonlinear minimization algorithm to find system parameters that result in the fastest settling time. This minimization was conducted for a range of system characteristics including the overall bandwidth of the array, the ratio of the total array mass to that of the primary structure, and the distributions of mass, stiffness, and damping among the array elements. This paper reports optimal values of these parameters and demonstrates that the resulting minimum settling time decreases with increasing bandwidth.
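
    A compressed sketch of this kind of simulation-based minimization, reduced to a single decision variable (a uniform per-absorber damping coefficient); the mass ratio, bandwidth, and settling tolerance are illustrative, not the paper's optimized values:

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import minimize_scalar

      # Primary oscillator (unit mass/stiffness, no intrinsic damping) with an
      # attached array of N small oscillators whose natural frequencies span a
      # band around the primary resonance.
      N, mu, band = 8, 0.05, 0.2                      # count, mass ratio, bandwidth
      m = np.full(N, mu / N)                          # equal mass distribution
      wn = 1.0 + band * np.linspace(-0.5, 0.5, N)     # absorber natural frequencies
      k = m * wn**2

      def rhs(t, s, c):
          x, v, y, u = s[0], s[1], s[2:2 + N], s[2 + N:]
          f = k * (x - y) + c * (v - u)               # coupling force per absorber
          return np.concatenate(([v, -x - f.sum()], u, f / m))

      def settling_time(c, t_end=400.0, tol=0.02):
          # Time after which |x| stays within 2% of the unit initial displacement.
          s0 = np.zeros(2 + 2 * N); s0[0] = 1.0
          sol = solve_ivp(rhs, (0, t_end), s0, args=(np.full(N, c),), max_step=0.1)
          above = np.nonzero(np.abs(sol.y[0]) > tol)[0]
          return sol.t[above[-1]] if len(above) else 0.0

      best = minimize_scalar(settling_time, bounds=(1e-4, 0.05), method="bounded")
      print(best.x, best.fun)   # per-absorber damping and resulting settling time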

  4. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm by which one can both capture the non-linearity in data and find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that captures complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs, while also providing a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all-possible-subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829
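
    A toy sketch of the general idea of combining predictor transformation with subset search: an exhaustive search over a squared-term expansion, scored by AIC so larger subsets must earn their extra parameters. The transform set, data, and scoring choice are illustrative, not the article's algorithm:

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(1)

      def aic(X, y):
          # Ordinary least squares fit scored by AIC.
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          mse = np.mean((y - X @ beta) ** 2)
          return len(y) * np.log(mse) + 2 * X.shape[1]

      def expand(X):
          # Augment each original predictor with a simple non-linear transform.
          cols, names = [], []
          for j in range(X.shape[1]):
              cols += [X[:, j], X[:, j] ** 2]
              names += [f"x{j}", f"x{j}^2"]
          return np.column_stack(cols), names

      X = rng.normal(size=(200, 3))
      y = 2 * X[:, 0] - X[:, 1] ** 2 + rng.normal(0, 0.1, 200)

      Z, names = expand(X)
      best = None
      for size in range(1, 4):              # exhaustive search over small subsets
          for idx in combinations(range(Z.shape[1]), size):
              design = np.column_stack([np.ones(len(y)), Z[:, list(idx)]])
              score = aic(design, y)
              if best is None or score < best[0]:
                  best = (score, [names[i] for i in idx])
      print(best)                           # expect x0 and x1^2 to be selected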

  5. Development of a pump-turbine runner based on multiobjective optimization

    NASA Astrophysics Data System (ADS)

    Xuhe, W.; Baoshan, Z.; Lei, T.; Jie, Z.; Shuliang, C.

    2014-03-01

    As a key component of a reversible pump-turbine unit, the pump-turbine runner rotates in the pump or turbine direction according to the demand of the power grid, so higher efficiencies under both operating modes are of great importance for energy saving. In the present paper, a multiobjective optimization design strategy, which includes a 3D inverse design method, CFD calculations, the response surface method (RSM) and a multiobjective genetic algorithm (MOGA), is introduced to develop a model pump-turbine runner for a middle-high head pumped storage plant. Parameters controlling the blade shape, such as blade loading and blade lean angle at the high-pressure side, are chosen as input parameters, while runner efficiencies under both pump and turbine modes are selected as objective functions. In order to validate the availability of the optimization design system, one runner configuration from the Pareto front was manufactured for experimental research. Test results show that the highest unit efficiency is 91.0% under turbine mode and 90.8% under pump mode for the designed runner, with corresponding prototype efficiencies of 93.88% and 93.27%, respectively. Viscous CFD calculations for the full passage model are also conducted, aimed at identifying the hydraulic improvements through internal flow analysis.
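
    The selection step at the end of such a MOGA run reduces to filtering candidates for Pareto optimality. A minimal sketch with invented (pump efficiency, turbine efficiency) pairs, both objectives maximized:

      import numpy as np

      def pareto_front(points):
          # Indices of non-dominated points when every objective is maximized:
          # a point is dominated if some other point is >= in all objectives
          # and > in at least one.
          pts = np.asarray(points, dtype=float)
          keep = []
          for i, p in enumerate(pts):
              dominated = np.any(np.all(pts >= p, axis=1) & np.any(pts > p, axis=1))
              if not dominated:
                  keep.append(i)
          return keep

      # Hypothetical (pump efficiency, turbine efficiency) pairs for candidates.
      cands = [(90.1, 90.5), (90.8, 91.0), (91.2, 90.2), (90.5, 90.9), (89.9, 91.1)]
      front = pareto_front(cands)
      print([cands[i] for i in front])

    Any runner on the printed front trades pump efficiency against turbine efficiency; the manufactured configuration would be chosen from among these.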

  6. Transcranial Direct Current Stimulation in Stroke Rehabilitation: A Review of Recent Advancements

    PubMed Central

    Gomez Palacio Schjetnan, Andrea; Faraji, Jamshid; Metz, Gerlinde A.; Tatsuno, Masami; Luczak, Artur

    2013-01-01

    Transcranial direct current stimulation (tDCS) is a promising technique to treat a wide range of neurological conditions including stroke. The pathological processes following stroke may provide an exemplary system to investigate how tDCS promotes neuronal plasticity and functional recovery. Changes in synaptic function after stroke, such as reduced excitability, formation of aberrant connections, and deregulated plastic modifications, have been postulated to impede recovery from stroke. However, if tDCS could counteract these negative changes by influencing the system's neurophysiology, it would contribute to the formation of functionally meaningful connections and the maintenance of existing pathways. This paper is aimed at providing a review of underlying mechanisms of tDCS and its application to stroke. In addition, to maximize the effectiveness of tDCS in stroke rehabilitation, future research needs to determine the optimal stimulation protocols and parameters. We discuss how stimulation parameters could be optimized based on electrophysiological activity. In particular, we propose that cortical synchrony may represent a biomarker of tDCS efficacy to indicate communication between affected areas. Understanding the mechanisms by which tDCS affects the neural substrate after stroke and finding ways to optimize tDCS for each patient are key to effective rehabilitation approaches. PMID:23533955

  7. Optimized linear motor and digital PID controller setup used in Mössbauer spectrometer

    NASA Astrophysics Data System (ADS)

    Kohout, Pavel; Kouřil, Lukáš; Navařík, Jakub; Novák, Petr; Pechoušek, Jiří

    2014-10-01

    Optimization of a linear motor and digital PID controller setup used in a Mössbauer spectrometer is presented. A velocity driving system with a digital PID feedback subsystem was developed in the LabVIEW graphical environment and deployed on an sbRIO real-time hardware device (National Instruments). The most important data acquisition processes are performed as real-time deterministic tasks on an FPGA chip. The system drives a velocity transducer of the double-loudspeaker type through a power amplifier circuit. A series of calibration measurements was performed, together with analysis of the velocity error signal, to find the optimal settings of the P, I, D parameters. The shape and characteristics of the velocity error signal are analyzed in detail. Remote applications for controlling and monitoring the PID system from a computer or smartphone were also developed. With the best P, I, D parameters set, a calibration spectrum of an α-Fe sample was collected with an average nonlinearity of the velocity scale below 0.08% and a spectral linewidth below 0.30 mm/s. The result is a powerful and comprehensive velocity driving system.
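
    A textbook discrete PID update of the kind such a feedback subsystem iterates at each sample; the gains, sample period, and first-order actuator stand-in below are illustrative, not the spectrometer's tuned values:

      class PID:
          # Standard positional PID: u = kp*e + ki*integral(e) + kd*de/dt.
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral = 0.0
              self.prev_err = 0.0

          def step(self, setpoint, measured):
              err = setpoint - measured
              self.integral += err * self.dt
              deriv = (err - self.prev_err) / self.dt
              self.prev_err = err
              return self.kp * err + self.ki * self.integral + self.kd * deriv

      # Track a triangular velocity reference (the usual Mössbauer drive
      # waveform shape) with a crude first-order actuator model.
      pid, dt, v = PID(kp=2.0, ki=5.0, kd=0.01, dt=1e-3), 1e-3, 0.0
      for n in range(1000):
          ref = 4.0 * abs((n * dt) % 1.0 - 0.5) - 1.0   # triangle wave in [-1, 1]
          u = pid.step(ref, v)
          v += (u - v) * dt / 0.05                      # actuator time constant 50 ms
      print(abs(ref - v))                               # final tracking error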

  8. The effect of stimulus strength on binocular rivalry rate in healthy individuals: Implications for genetic, clinical and individual differences studies.

    PubMed

    Law, Phillip C F; Miller, Steven M; Ngo, Trung T

    2017-11-01

    Binocular rivalry (BR) occurs when conflicting images concurrently presented to corresponding retinal locations of each eye stochastically alternate in perception. Anomalies of BR rate have been examined in a range of clinical psychiatric conditions. In particular, slow BR rate has been proposed as an endophenotype for bipolar disorder (BD) to improve power in large-scale genome-wide association studies. Examining the validity of BR rate as a BD endophenotype however requires large-scale datasets (n = 1000s to 10,000s), a standardized testing protocol, and optimization of stimulus parameters to maximize separation between BD and healthy groups. Such requirements are indeed relevant to all clinical psychiatric BR studies. Here we address the issue of stimulus optimization by examining the effect of stimulus parameter variation on BR rate and mixed-percept duration (MPD) in healthy individuals. We aimed to identify the stimulus parameters that induced the fastest BR rates with the least MPD. Employing a repeated-measures within-subjects design, 40 healthy adults completed four BR tasks using orthogonally drifting grating stimuli that varied in drift speed and aperture size. Pairwise comparisons were performed to determine modulation of BR rate and MPD by these stimulus parameters, and individual variation of such modulation was also assessed. Among the stimulus parameters examined, we found that 8 cycles/s drift speed in a 1.5° aperture induced the fastest BR rate without increasing MPD, but that BR rate with this stimulus configuration was not substantially different from BR rate with the stimulus parameters we have used in previous studies (i.e., 4 cycles/s drift speed in a 1.5° aperture). In addition to contributing to stimulus optimization issues, the findings have implications for Levelt's Proposition IV of binocular rivalry dynamics and individual differences in such dynamics. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Parameter Optimization Analysis of Prolonged Analgesia Effect of tDCS on Neuropathic Pain Rats

    PubMed Central

    Wen, Hui-Zhong; Gao, Shi-Hao; Zhao, Yan-Dong; He, Wen-Juan; Tian, Xue-Long; Ruan, Huai-Zhen

    2017-01-01

    Background: Transcranial direct current stimulation (tDCS) is widely used to treat human nerve disorders and neuropathic pain by modulating the excitability of the cortex. The effectiveness of tDCS is influenced by its stimulation parameters, but there have been no systematic studies to help guide the selection of different parameters. Objective: This study aims to assess the effects of tDCS of the primary motor cortex (M1) on chronic neuropathic pain in rats and to test for the optimal parameter combinations for analgesia. Methods: Using the chronic constriction injury (CCI) model of chronic neuropathic pain, we measured pain thresholds before and after anodal tDCS (A-tDCS) under different parameter conditions, including stimulation intensity, stimulation time, intervention time, and electrode location (M1 ipsilateral or contralateral to the ligated paw, in male and female CCI models). Results: Following the application of A-tDCS over M1, we observed that the antinociceptive effects depended on the stimulation parameters. First, we found that repetitive A-tDCS had a longer analgesic effect than a single stimulus, and that both ipsilateral tDCS (ip-tDCS) and contralateral tDCS (con-tDCS) produce a long-lasting analgesic effect on neuropathic pain. Second, the antinociceptive effects were intensity- and time-dependent: high intensities worked better than low intensities and long stimulus durations worked better than short stimulus durations. Third, the timing of the intervention after injury affected the stimulation outcome; early use of tDCS was an effective method to prevent the development of pain, and more frequent intervention induced more analgesia in CCI rats. Finally, similar antinociceptive effects of con- and ip-tDCS were observed in both sexes of CCI rats. Conclusion: Optimized tDCS protocols for producing antinociceptive effects were developed. These findings should be taken into consideration when using tDCS to produce analgesic effects in clinical applications. PMID:28659772

  10. On developing an optimal design procedure for a bimorph piezoelectric cantilever energy harvester under a predefined volume

    NASA Astrophysics Data System (ADS)

    Aboulfotoh, Noha; Twiefel, Jens

    2018-06-01

    A typical vibration harvester is tuned to operate at resonance in order to maximize the power output. There are many design parameter sets for tuning the harvester to a specific frequency, even for simple geometries. This work studies the impact of the geometrical parameters on the harvested power while keeping the resonance frequency constant, in order to find the combination of parameters that optimizes the power under a predefined volume. A bimorph piezoelectric cantilever is considered for the study. It consists of two piezoelectric layers and a middle non-piezoelectric layer, and holds a tip mass. A theoretical model was derived to obtain the system parameters and the power as functions of the design parameters. Formulas for the optimal load resistance that provide maximum power capability at the resonance and anti-resonance frequencies were derived. The influence of the width on the power is studied, considering a constant mass ratio (between the tip mass and the mass of the beam), which keeps the resonance frequency constant while changing the width. The influence of the ratio between the thickness of the middle layer and that of the piezoelectric layer is also studied. It is assumed that the total thickness of the cantilever is constant and that the middle layer has the same mechanical properties (elasticity and density) as the piezoelectric layer, which keeps the resonance frequency constant while changing the thickness ratio. Finally, the influence of increasing the free length as well as the mass ratio on the power is investigated; this is done first by increasing each of them individually and second by increasing both simultaneously, together with the total thickness, under the condition of maintaining a constant resonance frequency. Based on the analysis of these influences, recommendations as to how to maximize the geometrical parameters within the available volume and mass are presented.
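
    For intuition about the optimal-load result, a sketch of the simplest uncoupled equivalent-circuit view: a sinusoidal current source in parallel with the piezo capacitance C_p feeding a resistive load R, for which P(R) = I0²R / (2(1 + (wRC_p)²)) peaks at R_opt = 1/(wC_p). The paper's derived formulas account for more physics (electromechanical coupling), and the component values here are invented:

      import numpy as np

      I0, f, Cp = 1e-3, 100.0, 50e-9        # A, Hz, F (all illustrative)
      w = 2 * np.pi * f

      R = np.logspace(3, 6, 1000)           # candidate loads, 1 kOhm to 1 MOhm
      P = I0**2 * R / (2 * (1 + (w * R * Cp)**2))   # average load power

      print(R[P.argmax()], 1 / (w * Cp))    # numeric optimum vs analytic R_opt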

  11. New generation photoelectric converter structure optimization using nano-structured materials

    NASA Astrophysics Data System (ADS)

    Dronov, A.; Gavrilin, I.; Zheleznyakova, A.

    2014-12-01

    In the present work, the influence of anodizing process parameters on the geometric parameters of PAOT was studied with the aim of optimizing and increasing ETA-cell efficiency. Optimal geometrical parameters were obtained from the calculations. Parameters such as anodizing current density, electrolyte composition and temperature, as well as the anodic oxidation process time, were selected for this investigation. Using the optimized TiO2 photoelectrode layer, with a porous layer thickness of 3.6 μm and pore diameters above 80 nm, the ETA-cell efficiency was increased threefold compared to a non-nanostructured TiO2 photoelectrode.

  12. Optimization of multilayer neural network parameters for speaker recognition

    NASA Astrophysics Data System (ADS)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e., to determine whether the voice of an unknown speaker (the wanted person) belongs to a group of reference speakers from the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network has been used for speaker identification in this research. An artificial neural network (ANN) requires parameters to be set, such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the numbers of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by these parameter settings, and different roles require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find parameters for the neural network with the highest precision and shortest validation time. The inputs to the neural networks are Mel-frequency cepstral coefficients (MFCCs), parameters that describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data were split into training, testing and validation sets in a 70/15/15% ratio. The result of the research described in this article is the parameter setting for the multilayer neural network for four speakers.
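
    A compact sketch of this kind of parameter search, assuming scikit-learn's MLPClassifier and synthetic stand-in features in place of the study's MFCC inputs; the grid values are illustrative:

      from sklearn.datasets import make_classification
      from sklearn.model_selection import GridSearchCV, train_test_split
      from sklearn.neural_network import MLPClassifier

      # Stand-in data: in the study the inputs are MFCC vectors per speaker;
      # here a synthetic 4-class problem plays that role.
      X, y = make_classification(n_samples=600, n_features=13, n_informative=10,
                                 n_classes=4, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      grid = GridSearchCV(
          MLPClassifier(max_iter=1000, random_state=0),
          param_grid={
              "hidden_layer_sizes": [(16,), (32,), (32, 16)],
              "activation": ["logistic", "tanh", "relu"],
              "learning_rate_init": [1e-3, 1e-2],
          },
          cv=3,
      )
      grid.fit(X_tr, y_tr)
      print(grid.best_params_, grid.score(X_te, y_te))

    In practice one would also time each fitted model's prediction step, since the study trades accuracy against validation time rather than optimizing accuracy alone.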

  13. Quantum Adiabatic Brachistochrone

    NASA Astrophysics Data System (ADS)

    Rezakhani, A. T.; Kuo, W.-J.; Hamma, A.; Lidar, D. A.; Zanardi, P.

    2009-08-01

    We formulate a time-optimal approach to adiabatic quantum computation (AQC). A corresponding natural Riemannian metric is also derived, through which AQC can be understood as the problem of finding a geodesic on the manifold of control parameters. This geometrization of AQC is demonstrated through two examples, where we show that it leads to improved performance of AQC, and sheds light on the roles of entanglement and curvature of the control manifold in algorithmic performance.

  14. Quantum adiabatic brachistochrone.

    PubMed

    Rezakhani, A T; Kuo, W-J; Hamma, A; Lidar, D A; Zanardi, P

    2009-08-21

    We formulate a time-optimal approach to adiabatic quantum computation (AQC). A corresponding natural Riemannian metric is also derived, through which AQC can be understood as the problem of finding a geodesic on the manifold of control parameters. This geometrization of AQC is demonstrated through two examples, where we show that it leads to improved performance of AQC, and sheds light on the roles of entanglement and curvature of the control manifold in algorithmic performance.

  15. Optimal networks of future gravitational-wave telescopes

    NASA Astrophysics Data System (ADS)

    Raffai, Péter; Gondán, László; Heng, Ik Siong; Kelecsényi, Nándor; Logue, Josh; Márka, Zsuzsa; Márka, Szabolcs

    2013-08-01

    We aim to find the optimal site locations for a hypothetical network of 1-3 triangular gravitational-wave telescopes. We define the following N-telescope figures of merit (FoMs) and construct three corresponding metrics: (a) capability of reconstructing the signal polarization; (b) accuracy in source localization; and (c) accuracy in reconstructing the parameters of a standard binary source. We also define a combined metric that takes into account the three FoMs with practically equal weight. After constructing a geomap of possible telescope sites, we give the optimal 2-telescope networks for the four FoMs separately in example cases where the location of the first telescope has been predetermined. We found that, based on the combined metric, placing the first telescope in Australia provides the most options for optimal site selection when extending the network with a second instrument. We suggest geographical regions where a potential second and third telescope could be placed to get optimal network performance in terms of our FoMs. Additionally, we use a similar approach to find the optimal location and orientation for the proposed LIGO-India detector within a five-detector network with Advanced LIGO (Hanford), Advanced LIGO (Livingston), Advanced Virgo, and KAGRA. We found that the FoMs do not change greatly across sites within India, though the network can suffer a significant loss in reconstructing signal polarizations if the orientation angle of an L-shaped LIGO-India is not set to the optimal value of ~58.2° (+ k × 90°), measured counterclockwise from East to the bisector of the arms.

  16. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    PubMed

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as on the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for the conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems, without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
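
    A toy sketch of the core idea of coupling simulated annealing with Pareto ranking, on a one-parameter problem with two conflicting objectives. The acceptance rule, cooling schedule, and archive handling are simplified placeholders, not the JuPOETs algorithm (which is also written in Julia, not Python):

      import math
      import random

      random.seed(0)

      def dominates(a, b):
          # a dominates b when no objective is worse and at least one is
          # better (minimization).
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def pareto_rank(point, archive):
          # Number of archive members dominating the point (0 = non-dominated).
          return sum(dominates(q, point) for q in archive)

      def objectives(x):
          # Two deliberately conflicting toy objectives.
          return ((x - 1.0) ** 2, (x + 1.0) ** 2)

      archive, x, T = [], 0.0, 1.0
      for step in range(2000):
          cand = x + random.gauss(0.0, 0.3)
          r_new = pareto_rank(objectives(cand), archive)
          r_old = pareto_rank(objectives(x), archive)
          # Accept moves toward (or onto) the front; occasionally accept worse
          # moves with a temperature-controlled probability, as in annealing.
          if r_new <= r_old or random.random() < math.exp(-(r_new - r_old) / T):
              x = cand
              f = objectives(x)
              archive = [q for q in archive if not dominates(f, q)]
              if not any(dominates(q, f) for q in archive):
                  archive.append(f)
          T *= 0.999
      print(len(archive), sorted(archive)[:3])   # archived tradeoff points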

  17. Mesoscale Interfacial Dynamics in Magnetoelectric Nanocomposites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khachaturyan, Armen G.

    Theory and modeling of the chessboard-like self-assembly of vertically aligned columnar nanostructures in films has been developed. By means of modeling and three-dimensional computational simulations, we proposed a novel self-assembly process that can produce good chessboard nanostructure architectures through a pseudo-spinodal decomposition of an epitaxial film under optimal thermodynamic and crystallographic conditions (appropriate choice of the temperature, composition of the film, and crystal lattice parameters of the film and substrate). These conditions are formulated. The obtained results have been published in Nano Letters. Based on the principles of the formation of chessboard nanostructured films, we are currently trying to find good decomposing material systems that satisfy the optimal conditions to produce the chessboard nanostructure architecture. In addition, we are performing 'computer experiments' to look for appropriate materials with the chessboard columnar nanostructures, as potential candidates for the engineering of optical devices, high-efficiency multiferroics, and high-density magnetic perpendicular recording media. We are also currently investigating the magnetoelectric response of multiferroic chessboard nanostructures under applied electric/magnetic fields. A unified 3-dimensional phase field theory of the strain-mediated magnetoelectric effect in magnetoelectric composites has been developed. The theory is based on the established equivalency paradigm: we proved, by using a variational principle, that the exact values of the electric, magnetic and strain fields in a magnetoelectric composite of arbitrary morphology, and their coupled magneto-electric-mechanical response, can be evaluated by considering an equivalent homogeneous system with specially chosen effective eigenstrain, polarization and magnetization. These equivalency parameters are spatially inhomogeneous fields, which are obtained by solving the time-dependent Ginzburg-Landau equations. The paper summarizing these results is to be submitted to JAP. We are currently using the computational model based on the unified phase field theory to predict the local and overall response of magnetoelectric composites of arbitrary configuration under applied fields, and to find the optimal composite microstructure that produces the strongest ME coupling. We have developed modeling and simulations to support Dr. S. Priya's efforts to produce the strongest ME coupling by searching for the optimal configuration of applied electric/magnetic fields and the microstructure of polycrystalline multiferroics. An analytical model demonstrates that the optimization of the magnetoelectric (ME) coupling of a laminar magnetic/piezoelectric polycrystalline composite can be obtained by a proper choice of the magnetic and electric poling directions and the directions of the applied a.c. fields. The results have been published in JAP. Our next step is to determine the domain of optimal parameters and configurations by using our optimization theory and computational modeling.

  18. Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs

    NASA Astrophysics Data System (ADS)

    Chitsazan, N.; Tsai, F. T.

    2012-12-01

    Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer framework, where each layer targets one source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been used to account for parameter uncertainty, but many recent studies suggest that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance-constrained programming along with HBMA can therefore provide a rigorous tool for groundwater remediation design under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated the prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances, for two reasons. First, with the single best model, variances that stem from uncertainty in the model structure are ignored. Second, using the best model when its model weight is not dominant may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desired reliability; however, under the single best model, the calculated reliability will differ from the desired reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance-constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels of the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate was altered. Thus, we concluded that the optimal pumping rate is sensitive to the prediction variance; the prediction variance in turn changes with the extraction rate, and a very high extraction rate drives the prediction variances of chloride concentration at the production wells toward zero regardless of which HBMA models are used.
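
    To make the reliability idea concrete, a toy Monte Carlo sketch of checking a chance constraint P(concentration <= limit) >= beta under model averaging. The three "models", their weights, and the linear concentration responses are invented stand-ins, not the study's groundwater simulators:

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical averaged predictive density: model m predicts chloride
      # concentration at a production well as Normal(mu_m(q), sigma_m) for
      # pumping rate q, and carries posterior weight w.
      models = [
          {"w": 0.5, "mu": lambda q: 250.0 - 0.8 * q, "sigma": 15.0},
          {"w": 0.3, "mu": lambda q: 260.0 - 0.6 * q, "sigma": 25.0},
          {"w": 0.2, "mu": lambda q: 240.0 - 1.0 * q, "sigma": 10.0},
      ]

      def reliability(q, limit=200.0, n=20000):
          # Sample a model per draw according to its weight, then sample the
          # predicted concentration; the constraint holds if the fraction of
          # draws under the limit reaches the chosen reliability level.
          idx = rng.choice(len(models), size=n, p=[m["w"] for m in models])
          mus = np.array([m["mu"](q) for m in models])[idx]
          sig = np.array([m["sigma"] for m in models])[idx]
          return float((rng.normal(mus, sig) <= limit).mean())

      for q in (40, 60, 80):   # candidate scavenger-well pumping rates
          print(q, reliability(q))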

  19. Optimal Control Method of Robot End Position and Orientation Based on Dynamic Tracking Measurement

    NASA Astrophysics Data System (ADS)

    Liu, Dalong; Xu, Lijuan

    2018-01-01

    In order to improve the accuracy of robot pose positioning and control, this paper proposes a pose optimization control method based on dynamic tracking measurement of the robot's actual D-H parameters, with the measured parameters fed back to the robot for compensation. Based on the geometrical parameters obtained by pose tracking measurement, an improved multi-sensor information fusion method using an extended Kalman filter with continuous self-optimizing regression is applied, and the geometric relationships between joint axes are used for the kinematic parameters in the model. The resulting link model parameters are fed back to the robot in a timely manner to implement parameter correction and compensation; finally, the optimal attitude angle is obtained and optimized pose control is realized. Experiments were performed on an independently developed 6R joint robot with dynamic tracking control. The simulation results show that the control method improves robot positioning accuracy, and that it has the advantages of versatility, simplicity, and ease of operation.
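
    A minimal Kalman-filter skeleton of the predict/update cycle such a fusion scheme runs. The two-state constant-rate model below happens to be linear, so the EKF Jacobians are exact; the state, noise levels, and measurement model are illustrative, not the paper's D-H-parameter formulation:

      import numpy as np

      def ekf_step(x, P, z, dt, Q, R):
          # Predict with a constant-rate motion model for (angle, angular rate).
          F = np.array([[1.0, dt],
                        [0.0, 1.0]])
          x = F @ x
          P = F @ P @ F.T + Q
          # Update with a direct angle measurement z = H x + noise.
          H = np.array([[1.0, 0.0]])
          y = z - H @ x                      # innovation
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
          x = x + K @ y
          P = (np.eye(2) - K @ H) @ P
          return x, P

      rng = np.random.default_rng(0)
      x, P = np.zeros(2), np.eye(2)
      Q, R, dt = 1e-4 * np.eye(2), np.array([[0.01]]), 0.01
      for k in range(200):
          truth = 0.5 * k * dt               # joint angle ramping at 0.5 rad/s
          z = np.array([truth + rng.normal(0, 0.1)])
          x, P = ekf_step(x, P, z, dt, Q, R)
      print(x)  # estimated [angle, rate], should approach [1.0, 0.5]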

  20. Alkyl piperidine and piperazine hydroxamic acids as HDAC inhibitors.

    PubMed

    Rossi, Cristina; Porcelloni, Marina; D'Andrea, Piero; Fincham, Christopher I; Ettorre, Alessandro; Mauro, Sandro; Squarcia, Antonella; Bigioni, Mario; Parlani, Massimo; Nardelli, Federica; Binaschi, Monica; Maggi, Carlo A; Fattori, Daniela

    2011-04-15

    We report here the strategy used in our research group to find a new class of histone deacetylase (HDAC) inhibitors. A series of N-substituted 4-alkylpiperazine and 4-alkylpiperidine hydroxamic acids, corresponding to the basic structure of HDAC inhibitors (zinc binding moiety-linker-capping group) has been designed, prepared, and tested for HDAC inhibition. Linker length and aromatic capping group connection were systematically varied to find the optimal geometric parameters. A new series of submicromolar inhibitors was thus identified, which showed antiproliferative activity on HCT-116 colon carcinoma cells. Copyright © 2011 Elsevier Ltd. All rights reserved.
