Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study contains multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and contains four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. 
Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problem.
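The linear-Gaussian special case makes the Shannon entropy difference (SD) metric easy to illustrate. The sketch below is a minimal, hypothetical 1-D version (not the paper's multi-dimensional EnKF workflow): it scores each candidate measurement by the entropy reduction a Gaussian update would deliver and picks the most informative one.

```python
import math

def gaussian_entropy(var):
    # Differential entropy of a 1-D Gaussian (first/second-order statistics only).
    return 0.5 * math.log(2.0 * math.pi * math.e * var)

def entropy_difference(prior_var, noise_var):
    # Linear-Gaussian update: posterior precision = prior precision + data precision.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    return gaussian_entropy(prior_var) - gaussian_entropy(post_var)

def best_measurement(prior_var, candidate_noise_vars):
    # Pick the candidate observation whose Shannon entropy difference is largest.
    return max(range(len(candidate_noise_vars)),
               key=lambda i: entropy_difference(prior_var, candidate_noise_vars[i]))
```

In this toy setting the least noisy observation always wins; in the paper's nonlinear flow problems the ranking instead comes from ensemble statistics.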
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-09-21
To exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and convergence rate of the machine learning model. Our paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.
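The DSP idea of dynamically adjusting synchronization between workers and the parameter server can be sketched as a stale-synchronous condition plus a monitoring rule. Both the staleness bound and the adjustment heuristic below are illustrative assumptions, not the paper's actual DFFT formulas.

```python
def allowed_to_proceed(worker_clock, global_min_clock, staleness):
    # A worker may start its next iteration only if it is within `staleness`
    # clocks of the slowest worker (stale-synchronous condition).
    return worker_clock - global_min_clock <= staleness

def adjust_staleness(iter_times, base=1, max_staleness=8):
    # Hypothetical monitoring rule: widen the staleness bound when worker
    # speeds diverge, tighten it when they are balanced.
    spread = max(iter_times) / min(iter_times)
    return min(max_staleness, max(base, round(spread) - 1))
```

With balanced workers the rule degenerates to near-synchronous training; with a straggler it lets fast workers run ahead, trading gradient freshness for utilization.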
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-01-01
To exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and convergence rate of the machine learning model. Our paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors. PMID:28934163
NASA Astrophysics Data System (ADS)
Monica, Z.; Sękala, A.; Gwiazda, A.; Banaś, W.
2016-08-01
Nowadays a key issue is to reduce the energy consumption of road vehicles. Different strategies of energy optimization can be found. The most popular, though not sophisticated, is so-called eco-driving, which emphasizes particular driver behavior. In a more sophisticated variant, driver behavior is supported by a control system that measures driving parameters and suggests proper operations to the driver. Another strategy applies various engineering solutions that aid optimization of the energy consumption process; such systems take into consideration different parameters measured in real time and then take proper action according to procedures loaded into the control computer of the vehicle. The third strategy is based on optimization of the designed vehicle, taking into account especially the main sub-systems of the technical means. In this approach the optimal level of energy consumption by a vehicle is obtained through the synergetic results of individual optimization of particular constructional sub-systems of the vehicle. Three main sub-systems can be distinguished: the structural one, the drive one and the control one. In the case of the structural sub-system, optimization of the energy consumption level is related to optimization of the weight parameter and of the aerodynamic parameter; the result is an optimized vehicle body. Regarding the drive sub-system, the optimization of the energy consumption level is related to the fuel or power consumption, using previously elaborated physical models. Finally, the optimization of the control sub-system consists in determining optimal control parameters.
Yan, Bin-Jun; Guo, Zheng-Tai; Qu, Hai-Bin; Zhao, Bu-Chang; Zhao, Tao
2013-06-01
In this work, a feedforward control strategy based on the concept of quality by design was established for the manufacturing process of traditional Chinese medicine, to reduce the impact of the quality variation of raw materials on the drug. In this research, the ethanol precipitation process of Danhong injection was taken as an application case of the established method. A Box-Behnken design of experiments was conducted. Mathematical models relating the attributes of the concentrate, the process parameters and the quality of the supernatants produced were established. Then an optimization model for calculating the best process parameters based on the attributes of the concentrate was built. The quality of the supernatants produced by ethanol precipitation with optimized and non-optimized process parameters was compared. The results showed that using the feedforward control strategy for process parameter optimization can control the quality of the supernatants effectively. The proposed feedforward control strategy can enhance the batch-to-batch consistency of the supernatants produced by ethanol precipitation.
Optimal robust control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems, and the existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: an SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show the proposed optimal robust control method can maintain safe SOFC system operation with maximum efficiency under load and uncertainty variations.
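The Monte Carlo extraction of uncertain-parameter boundaries can be sketched as follows; the sampling distribution for the load current and the 95% coverage level are illustrative assumptions, not the paper's values.

```python
import random

def monte_carlo_bounds(sample_fn, n=5000, coverage=0.95, seed=0):
    # Draw Monte Carlo samples of an uncertain parameter (e.g. load current)
    # and extract the central `coverage` interval as its robust boundaries.
    rng = random.Random(seed)
    samples = sorted(sample_fn(rng) for _ in range(n))
    lo = samples[int((1.0 - coverage) / 2.0 * n)]
    hi = samples[int((1.0 + coverage) / 2.0 * n) - 1]
    return lo, hi

# Hypothetical load current: nominally 100 A with Gaussian variation.
bounds = monte_carlo_bounds(lambda rng: rng.gauss(100.0, 5.0))
```

The resulting interval would then bound the uncertainty set handed to the robust optimizer and controllers.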
Optimal strategy analysis based on robust predictive control for inventory system with random demand
NASA Astrophysics Data System (ADS)
Saputra, Aditya; Widowati, Sutrisno
2017-12-01
In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e., the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed with generated random inventory data in MATLAB, where the inventory level must be controlled as close as possible to a chosen set point. From the results, the robust predictive control model provides the optimal strategy, i.e., the optimal product volume that should be purchased, and the inventory level followed the given set point.
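The set-point-tracking idea can be sketched with a one-step certainty-equivalent rule rather than the paper's full robust predictive controller; the inventory dynamics x[k+1] = x[k] + u[k] - d[k] follow the abstract, while the demand distribution and numbers are illustrative.

```python
import random

def order_quantity(inventory, set_point, mean_demand):
    # Certainty-equivalent one-step rule: order the volume that drives the
    # expected next inventory level to the set point:
    #   x[k+1] = x[k] + u[k] - d[k]  =>  u[k] = set_point - x[k] + E[d]
    return max(0.0, set_point - inventory + mean_demand)

def simulate(x0, set_point, mean_demand, steps, seed=1):
    # Roll the closed loop forward under random (Gaussian, truncated) demand.
    rng = random.Random(seed)
    x, levels = x0, []
    for _ in range(steps):
        u = order_quantity(x, set_point, mean_demand)
        d = max(0.0, rng.gauss(mean_demand, 2.0))
        x = x + u - d
        levels.append(x)
    return levels
```

A genuine robust predictive controller would instead optimize over a horizon against worst-case or expected cost; this sketch only shows why the closed-loop level hovers around the set point.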
Simple Example of Backtest Overfitting (SEBO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
In the field of mathematical finance, a "backtest" is the usage of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions or even billions of variations of a proposed strategy, and pick the best-performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e., on another dataset), because the parameters of the investment strategy have been overfit to the in-sample data, a situation known as "backtest overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given an input random walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. Then the tool tests the resulting "optimal" strategy on a second random walk time series. In most runs using our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.
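The simulator's procedure can be sketched end to end: exhaustively fit a simple strategy on one random walk, then note that its in-sample optimality is purely by construction. The moving-average crossover family below is an illustrative stand-in for the tool's actual strategy, not a description of it.

```python
import random

def random_walk(n, seed):
    # Synthetic price path: by construction there is nothing to predict.
    rng = random.Random(seed)
    prices = [100.0]
    for _ in range(n - 1):
        prices.append(prices[-1] + rng.gauss(0.0, 1.0))
    return prices

def ma(prices, k, i):
    # Moving average of the k prices ending at index i.
    return sum(prices[i - k + 1:i + 1]) / k

def crossover_pnl(prices, fast, slow):
    # Long 1 unit whenever the fast moving average is above the slow one.
    pnl = 0.0
    for i in range(slow, len(prices) - 1):
        if ma(prices, fast, i) > ma(prices, slow, i):
            pnl += prices[i + 1] - prices[i]
    return pnl

def fit_strategy(prices):
    # Exhaustively search integer parameters and keep the in-sample "optimum".
    grid = [(f, s) for f in range(2, 10) for s in range(10, 30, 5)]
    return max(grid, key=lambda p: crossover_pnl(prices, *p))
```

Re-running `crossover_pnl` with the fitted parameters on a second, independently seeded walk typically gives a much worse result, which is exactly the overfitting effect the tool demonstrates.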
Energy optimization for upstream data transfer in 802.15.4 beacon-enabled star formulation
NASA Astrophysics Data System (ADS)
Liu, Hua; Krishnamachari, Bhaskar
2008-08-01
Energy saving is one of the major concerns for low-rate personal area networks. This paper models energy consumption for beacon-enabled, time-slotted medium access control combined with sleep scheduling in a star network formation for the IEEE 802.15.4 standard. We investigate two different upstream (data transfer from devices to a network coordinator) strategies: a) tracking strategy: the devices wake up and check status (track the beacon) in each time slot; b) non-tracking strategy: nodes only wake up upon data arrival and stay awake until the data is transmitted to the coordinator. We consider the tradeoff between energy cost and average data transmission delay for both strategies. Both scenarios are formulated as optimization problems and the optimal solutions are discussed. Our results show that different data arrival rates and system parameters (such as contention access period interval, upstream speed, etc.) favor different strategies in terms of energy optimization under maximum delay constraints. Hence, according to the application and system settings, each node may choose a different strategy to achieve energy optimization from both a self-interested view and a system view. We give the relations among the tunable parameters via formulas and plots to illustrate which strategy is better under the corresponding parameters. There are two main points emphasized in our results with delay constraints: on one hand, when the system settings are fixed by the coordinator, nodes in the network can intelligently change their strategies according to the corresponding application data arrival rate; on the other hand, when the nodes' applications are known by the coordinator, the coordinator can tune the system parameters to achieve optimal system energy consumption.
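The tracking versus non-tracking tradeoff can be sketched with a toy per-slot cost model; all cost constants and the linear arrival model are illustrative assumptions, not the paper's formulation.

```python
def tracking_energy(slots, beacon_cost, tx_cost, arrival_rate):
    # Wake every slot to track the beacon, plus one transmission per arrival.
    return slots * beacon_cost + arrival_rate * slots * tx_cost

def non_tracking_energy(slots, idle_cost, tx_cost, arrival_rate, mean_wait):
    # Sleep until data arrives, then stay awake `mean_wait` slots before the
    # transmission to the coordinator completes.
    return arrival_rate * slots * (mean_wait * idle_cost + tx_cost)

def better_strategy(slots, beacon_cost, idle_cost, tx_cost, arrival_rate, mean_wait):
    # Pick whichever strategy costs less energy under the given parameters.
    t = tracking_energy(slots, beacon_cost, tx_cost, arrival_rate)
    n = non_tracking_energy(slots, idle_cost, tx_cost, arrival_rate, mean_wait)
    return "tracking" if t < n else "non-tracking"
```

Even this crude model reproduces the qualitative conclusion: light traffic favors non-tracking (rare wake-ups beat per-slot beacon tracking), while heavy traffic favors tracking.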
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2011-01-01
Orbit maintenance is the series of burns performed during a mission to ensure the orbit satisfies mission constraints. Low-altitude missions often require non-trivial orbit maintenance Delta V due to sizable orbital perturbations and minimum altitude thresholds. A strategy is presented for minimizing this Delta V using impulsive burn parameter optimization. An initial estimate for the burn parameters is generated by considering a feasible solution to the orbit maintenance problem. A low-lunar orbit example demonstrates the Delta V savings from the feasible solution to the optimal solution. The strategy's extensibility to more complex missions is discussed, as well as the limitations of its use.
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2011-01-01
Orbit maintenance is the series of burns performed during a mission to ensure the orbit satisfies mission constraints. Low-altitude missions often require non-trivial orbit maintenance Delta V due to sizable orbital perturbations and minimum altitude thresholds. A strategy is presented for minimizing this Delta V using impulsive burn parameter optimization. An initial estimate for the burn parameters is generated by considering a feasible solution to the orbit maintenance problem. An example demonstrates the Delta V savings from the feasible solution to the optimal solution.
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, different from previous modeling efforts: the previous ones focused on addressing uncertainty in physical parameters (e.g. soil porosity), while this one aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (where only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence levels of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.
Optimal control strategy for electricity production at an isolated site (Strategie de commande optimale de la production electrique dans un site isole)
NASA Astrophysics Data System (ADS)
Barris, Nicolas
Hydro-Quebec manages more than 20 isolated power grids all over the province. The grids are located in small villages where the electricity demand is rather small. Those villages being far away from each other and from the main electricity production facilities, energy is produced locally using diesel generators. Electricity production costs at the isolated power grids are very high due to elevated diesel prices and transportation costs. However, the price of electricity is the same for the entire province, with no regard to the production costs of the electricity consumed. These two factors combined result in yearly exploitation losses for Hydro-Quebec. For any given village, several diesel generators are required to satisfy the demand. When the load increases, it becomes necessary to increase the capacity, either by adding a generator to the production or by switching to a more powerful generator. The same thing happens when the load decreases. Every decision regarding changes in the production is included in the control strategy, which is based on predetermined parameters. These parameters were specified according to empirical studies and the knowledge base of the engineers managing the isolated power grids, but without any optimisation approach. The objective of the presented work is to minimize the diesel consumption by optimizing the parameters included in the control strategy. Its impact would be to limit the exploitation losses generated by the isolated power grids, and the CO2-equivalent emissions, without adding new equipment or completely changing the nature of the strategy. To satisfy this objective, the isolated power grid simulator OPERA is used along with the optimization library NOMAD and the data of three villages in northern Quebec. The preliminary optimization instance for the first village showed that some modifications to the existing control strategy must be made to better achieve the minimization objective. 
The main optimization processes consist of three different optimization approaches: the optimization of one set of parameters for all the villages, the optimization of one set of parameters per village, and the optimization of one set of parameters per diesel generator configuration per village. In the first scenario, the optimization of one set of parameters for all the villages leads to compromises for all three villages without allowing a full potential reduction for any village. Therefore, it is proven that applying one set of parameters to all the villages is not suitable for finding an optimal solution. In the second scenario, the optimization of one set of parameters per village allows an improvement over the previous results. At this point, it is shown that it is crucial to remove from the production the less efficient configurations when they are next to more efficient configurations. In the third scenario, the optimization of one set of parameters per configuration per village requires a very large number of function evaluations but does not result in any satisfying solution. To improve the performance of the optimization, the problem structure is exploited. Two different approaches are considered: optimizing one set of parameters at a time, and optimizing the different rules included in the control strategy one at a time. In both cases, results are similar but calculation costs differ, the second method being much more cost efficient. The optimal values of the latter rules' parameters can be directly linked to the efficient transition points that favor an efficient operation of the isolated power grids. Indeed, these transition points are defined in such a way that the high-efficiency zone of every configuration is used. Therefore, it seems possible to directly identify these optimal transition points on the graphs and define the parameters in the control strategy without even having to run any optimization process. 
The diesel consumption reduction for all three villages is about 1.9%. Considering elevated diesel costs and the existence of about 20 other isolated power grids, the use of the developed methods together with a calibration of OPERA would allow a substantial reduction of Hydro-Quebec's annual deficit. Also, since one of the developed methods is very cost effective and produces equivalent results, it could be possible to use it during other processes; for example, when buying new equipment for the grid it could be possible to assess its full potential, under an optimized control strategy, and improve the net present value.
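The efficient transition points between generator configurations can be sketched with hypothetical affine fuel curves; the curve parameters and capacities below are illustrative, not OPERA's calibrated values.

```python
def fuel_rate(load_kw, idle_lph, slope):
    # Hypothetical affine fuel curve for one diesel configuration: L/h = a + b*load.
    return idle_lph + slope * load_kw

def transition_point(cfg_small, cfg_large, capacity_small):
    # Lowest load (kW) at which the larger configuration burns less fuel,
    # i.e. the efficient transition point between the two configurations.
    for load in range(1, capacity_small + 1):
        if fuel_rate(load, *cfg_large) < fuel_rate(load, *cfg_small):
            return load
    return capacity_small
```

With a small unit (5 L/h idle, 0.30 L/kWh marginal) and a large unit (12 L/h idle, 0.22 L/kWh marginal), the curves cross just above 87 kW, so the control strategy should switch configurations near that load.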
Non-adaptive and adaptive hybrid approaches for enhancing water quality management
NASA Astrophysics Data System (ADS)
Kalwij, Ineke M.; Peralta, Richard C.
2008-09-01
Using optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, that combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure optimal strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. An AGCT characteristic is that TS control parameters are unchanging during optimization. However, TS parameter values that are ideal for optimization commencement can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters during run-time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem. For the hypothetical area, AGCT optimized both well locations and pumping rates. 
The parameters are useful starting values because using trial-and-error to identify an ideal combination of control parameter values for a new optimization problem can be time consuming. For comparison, AGA, AGCT, and GC are applied to optimize pumping rates for assumed well locations of a complex large-scale contaminant transport and remediation optimization problem at Blaine Naval Ammunition Depot (NAD). Both hybrid approaches converged more closely to the optimal solution than the non-hybrid AGA. GC averaged 18.79% better convergence than AGCT, and 31.9% better than AGA, within the same computation time (12.5 days). AGCT averaged 13.1% better convergence than AGA. The GC can significantly reduce the burden of employing computationally intensive hydrologic simulation models within a limited time period and for real-world optimization problems. Although demonstrated for a groundwater quality problem, it is also applicable to other arenas, such as managing salt water intrusion and surface water contaminant loading.
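A toy version of the AGCT combination, minimizing a sphere function: an elitist GA whose offspring are rejected (up to a retry cap) when they fall within a tabu radius of recently visited solutions, pushing the search toward unexplored regions. Population sizes, the tabu radius, and the retry cap are illustrative assumptions, far simpler than the paper's TS mechanisms.

```python
import random

def sphere(x):
    # Toy objective standing in for the pumping-strategy cost function.
    return sum(v * v for v in x)

def agct_minimize(f, dim, pop_size=20, gens=60, tabu_size=50,
                  tabu_radius=0.05, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    tabu = []

    def is_tabu(x):
        # Reject candidates too close (Euclidean) to a recently visited solution.
        return any(sum((a - b) ** 2 for a, b in zip(x, t)) ** 0.5 < tabu_radius
                   for t in tabu)

    for _ in range(gens):
        pop.sort(key=f)
        children = [list(p) for p in pop[:2]]         # elitism
        attempts = 0
        while len(children) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)          # parents from best half
            child = [(a + b) / 2.0 + rng.gauss(0.0, 0.1)
                     for a, b in zip(p1, p2)]         # crossover + mutation
            attempts += 1
            if not is_tabu(child) or attempts > 200:  # tabu acceptance test
                children.append(child)
                tabu.append(child)
                del tabu[:-tabu_size]
        pop = children
    return min(pop, key=f)
```

The GC variant described above would additionally adapt `tabu_radius` and friends at run-time from convergence feedback instead of holding them fixed.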
A quasi-dense matching approach and its calibration application with Internet photos.
Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei
2015-03-01
This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiviews with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.
Optimal placement of tuning masses for vibration reduction in helicopter rotor blades
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1988-01-01
Described are methods for reducing vibration in helicopter rotor blades by determining optimum sizes and locations of tuning masses through formal mathematical optimization techniques. An optimization procedure is developed which employs the tuning masses and corresponding locations as design variables which are systematically changed to achieve low values of shear without a large mass penalty. The finite-element structural analysis of the blade and the optimization formulation require development of discretized expressions for two performance parameters: modal shaping parameter and modal shear amplitude. Matrix expressions for both quantities and their sensitivity derivatives are developed. Three optimization strategies are developed and tested. The first is based on minimizing the modal shaping parameter which indirectly reduces the modal shear amplitudes corresponding to each harmonic of airload. The second strategy reduces these amplitudes directly, and the third strategy reduces the shear as a function of time during a revolution of the blade. The first strategy works well for reducing the shear for one mode responding to a single harmonic of the airload, but has been found in some cases to be ineffective for more than one mode. The second and third strategies give similar results and show excellent reduction of the shear with a low mass penalty.
Mwanga, Gasper G; Haario, Heikki; Capasso, Vicenzo
2015-03-01
The main scope of this paper is to study optimal control practices for malaria, by discussing the implementation of a catalog of optimal control strategies in the presence of parameter uncertainties, which are typical of infectious disease data. In this study we focus on a deterministic mathematical model for the transmission of malaria, including in particular asymptomatic carriers and two age classes in the human population. A partial qualitative analysis of the relevant ODE system has been carried out, leading to a realistic threshold parameter. For the deterministic model under consideration, four possible control strategies have been analyzed: the use of long-lasting treated mosquito nets, indoor residual spraying, and screening and treatment of symptomatic and asymptomatic individuals. The numerical results show that, using optimal control, the disease can be brought to a stable disease-free equilibrium when all four controls are used. The Incremental Cost-Effectiveness Ratio (ICER) for all possible combinations of the disease-control measures is determined. The numerical simulations of the optimal control in the presence of parameter uncertainty demonstrate its robustness: the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for designing cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty in model parameters. Copyright © 2014 Elsevier Inc. All rights reserved.
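The ICER used to rank the control measures can be sketched directly from its definition: the incremental cost of each strategy divided by its incremental health effect, relative to the next-less-effective strategy. The strategy names and numbers below are made up for illustration.

```python
def icer_table(strategies):
    # strategies: iterable of (name, total_cost, health_effect) tuples.
    # ICER_i = (cost_i - cost_{i-1}) / (effect_i - effect_{i-1}),
    # computed between consecutive strategies ordered by health effect.
    ordered = sorted(strategies, key=lambda s: s[2])
    table = []
    for (_, c0, e0), (name, c1, e1) in zip(ordered, ordered[1:]):
        table.append((name, (c1 - c0) / (e1 - e0)))
    return table

# Hypothetical example: strategy C averts more cases than B, but each extra
# averted case costs more at the margin.
example = icer_table([("A", 100.0, 10.0), ("B", 200.0, 15.0), ("C", 450.0, 20.0)])
```

In a full cost-effectiveness analysis, dominated strategies (costlier and less effective than an alternative) would be removed before recomputing the ratios.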
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm.
Amoshahy, Mohammad Javad; Shamsi, Mousa; Sedaaghi, Mohammad Hossein
2016-01-01
Particle swarm optimization (PSO) is an evolutionary computing method based on the intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate parameter selection strategies for fine-tuning its parameters. Inertia weight (IW) is one of PSO's parameters, used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy because, for each problem, we can construct an increasing or decreasing inertia weight strategy with suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate.
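A hedged sketch of an exponential inertia-weight schedule in the spirit of FEIW (the paper's exact formula is not reproduced here), together with the standard PSO velocity update it plugs into; `w_start`, `w_end` and the shape parameter `alpha` are assumed names and values.

```python
def feiw(t, t_max, w_start=0.9, w_end=0.4, alpha=1.5):
    # Hypothetical exponential schedule: decays from w_start at t=0 to w_end
    # at t=t_max; alpha bends the curve toward faster or slower early decay.
    return w_start * (w_end / w_start) ** ((t / t_max) ** alpha)

def velocity_update(v, x, pbest, gbest, w, c1=2.0, c2=2.0, r1=0.5, r2=0.5):
    # Standard PSO velocity update using the scheduled inertia weight w
    # (r1, r2 are fixed here for determinism; normally uniform random draws).
    return [w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
            for vi, xi, pi, gi in zip(v, x, pbest, gbest)]
```

A large early weight keeps particles exploring; the exponential decay then shifts the balance toward exploitation near `t_max`, which is the role the abstract assigns to the inertia weight.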
On the robust optimization to the uncertain vaccination strategy problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaerani, D., E-mail: d.chaerani@unpad.ac.id; Anggriani, N., E-mail: d.chaerani@unpad.ac.id; Firdaniza, E-mail: d.chaerani@unpad.ac.id
2014-02-21
In order to prevent an epidemic of infectious diseases, vaccination coverage needs to be minimized while the basic reproduction number is kept below 1; that is, we seek the smallest vaccination coverage that still confines the epidemic to the small number of people already infected. In this paper, we discuss the vaccination strategy problem of minimizing vaccination coverage when the basic reproduction number is assumed to be an uncertain parameter lying between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). For the case where parameter uncertainty is involved, Tanner et al. (see [9]) propose an optimal solution using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that the resulting problem can be solved by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented.
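For a single linear constraint, the ellipsoidal robust counterpart the abstract alludes to has a well-known closed form: a^T x <= b for all a in {a_bar + P u : ||u|| <= 1} is equivalent to a_bar^T x + ||P^T x|| <= b. A sketch with illustrative numbers (not data from the paper):

```python
import math

def robust_feasible(x, a_bar, P, b):
    """Check the robust counterpart of a^T x <= b when a lies in the
    ellipsoid {a_bar + P u : ||u||_2 <= 1} (standard RO construction;
    the numbers used below are illustrative only)."""
    nominal = sum(ai * xi for ai, xi in zip(a_bar, x))
    # Worst-case extra term over the ellipsoid: || P^T x ||_2
    Ptx = [sum(P[i][j] * x[i] for i in range(len(x)))
           for j in range(len(P[0]))]
    return nominal + math.sqrt(sum(v * v for v in Ptx)) <= b

ok = robust_feasible([1.0, 1.0], [1.0, 1.0], [[0.5, 0.0], [0.0, 0.5]], 3.0)
```

Because the worst case is absorbed into a single second-order-cone term, the robust problem stays tractable, which is the polynomial-time guarantee the RO methodology provides.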
A new real-time guidance strategy for aerodynamic ascent flight
NASA Astrophysics Data System (ADS)
Yamamoto, Takayuki; Kawaguchi, Jun'ichiro
2007-12-01
Reusable launch vehicles are conceived to constitute the future space transportation system. If these vehicles use air-breathing propulsion and take off horizontally using lift, their optimal steering exhibits completely different behavior from that of conventional rocket flight. In this paper, a new guidance strategy is proposed. The method derives from the optimality condition for steering, and an analysis concludes that the steering function takes a form comprising linear and logarithmic terms that involve only four parameters. Parameter optimization of this method shows that the achieved terminal horizontal velocity is almost the same as that obtained by direct numerical optimization, which supports the parameterized linear-logarithmic steering law. It is also shown that a simple linear relation exists between the terminal states and the parameters to be corrected; this relation makes it easy to determine the parameters that satisfy the terminal boundary conditions in real time. The paper presents guidance results for practical application cases. The results show that the guidance performs well and satisfies the specified terminal boundary conditions. The strategy guarantees a robust solution in real time without any optimization process, and it is found to be quite practical.
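The abstract names the steering law's shape (linear plus logarithmic, four parameters) but not its exact form. One hypothetical parameterization, purely to illustrate what a four-parameter linear-logarithmic steering function looks like:

```python
import math

def steering_angle(t, p0, p1, p2, p3):
    """A hypothetical four-parameter steering law with a linear term and
    a logarithmic term (the paper's exact functional form is not given
    in the abstract, so this shape is an assumption). Valid for t < p3."""
    return p0 + p1 * t + p2 * math.log(p3 - t)

# Evaluate a sample steering profile over the first part of the burn.
profile = [steering_angle(t, 0.1, -0.01, 0.5, 120.0) for t in range(0, 100, 10)]
```

In the paper's scheme the four parameters would then be corrected through the reported linear relation between terminal-state errors and parameter adjustments, enabling real-time guidance without re-running an optimizer.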
Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.
Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis
2008-10-01
We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
SPECT System Optimization Against A Discrete Parameter Space
Meng, L. J.; Li, N.
2013-01-01
In this paper, we present an analytical approach for optimizing the design of a static SPECT system, or optimizing the sampling strategy of variable/adaptive SPECT imaging hardware, against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced the artificial concept of a virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of virtual detectors, one can convert the task of system optimization into a process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) can be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resulting optimum ITD provides a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and helps to identify the system configuration or sampling strategy that leads to optimum imaging performance. Although we use SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609
Optimal control of information epidemics modeled as Maki Thompson rumors
NASA Astrophysics Data System (ADS)
Kandhway, Kundan; Kuri, Joy
2014-12-01
We model the spread of information in a homogeneously mixed population using the Maki Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations in which the population's interest in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
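The controlled dynamics described above can be sketched with a simple Euler integration of the Maki-Thompson model, using a constant control as a stand-in for the paper's time-varying optimal signal; the rate constants and control level below are illustrative assumptions:

```python
def simulate_maki_thompson(x0, y0, z0, k=0.5, u=0.1, dt=0.01, T=20.0):
    """Euler simulation of Maki-Thompson rumor dynamics with a constant
    advertising control u that converts ignorants (x) and stiflers (z)
    into spreaders (y). A simplified stand-in for the paper's optimal,
    time-varying control; all rates are illustrative."""
    x, y, z = x0, y0, z0
    for _ in range(int(T / dt)):
        contact = k * x * y          # ignorant-spreader contacts
        stifle = k * y * (y + z)     # spreaders turning into stiflers
        dx = -contact - u * x
        dy = contact - stifle + u * x + u * z
        dz = stifle - u * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    return x, y, z

x, y, z = simulate_maki_thompson(0.99, 0.01, 0.0)
```

The derivatives sum to zero, so the population fractions are conserved; the actual optimal control replaces the constant `u` with the profile produced by the forward-backward sweep.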
Optimal resource allocation strategy for two-layer complex networks
NASA Astrophysics Data System (ADS)
Ma, Jinlong; Wang, Lixin; Li, Sufeng; Duan, Congwen; Liu, Yu
2018-02-01
We study traffic dynamics on two-layer complex networks, focusing on a delivery-capacity allocation strategy to enhance the traffic capacity measured by the critical value Rc. Given a limited total packet-delivering capacity, we propose an allocation strategy that balances the capacities of non-hub nodes and hub nodes to optimize the data flow. At the optimal value of the parameter αc, the maximal network capacity is reached because most nodes are assigned an appropriate share of the delivery capacity by the proposed strategy. Our work should help network service providers design networks with optimal traffic dynamics.
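A common form for such degree-tunable allocations, shown here as an illustration (the paper's exact rule is not reproduced in the abstract), distributes the fixed total capacity in proportion to a power of node degree, with the exponent playing the role of the tuning parameter α:

```python
def allocate_capacity(degrees, total_capacity, alpha):
    """Distribute a fixed total delivery capacity across nodes in
    proportion to degree**alpha. alpha = 0 gives a uniform split;
    larger alpha concentrates capacity on hubs. Illustrative form,
    not necessarily the paper's exact allocation rule."""
    weights = [d ** alpha for d in degrees]
    total_weight = sum(weights)
    return [total_capacity * w / total_weight for w in weights]

# Four nodes with degrees 1, 2, 4, 8 sharing a total capacity of 30.
caps = allocate_capacity([1, 2, 4, 8], 30.0, 1.0)
```

Sweeping `alpha` and measuring the resulting critical packet-generation rate is how an optimal balance between hub and non-hub capacity would be located.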
Optimal Trajectories Generation in Robotic Fiber Placement Systems
NASA Astrophysics Data System (ADS)
Gao, Jiuchun; Pashkevich, Anatol; Caro, Stéphane
2017-06-01
The paper proposes a methodology for optimal trajectory generation in robotic fiber placement systems, together with a strategy for tuning the parameters of the optimization algorithm at hand. The presented technique transforms the original continuous problem into a discrete one in which time-optimal motions are generated by dynamic programming. The developed tuning strategy substantially reduces computing time and yields trajectories satisfying industrial constraints. The feasibility and advantages of the proposed methodology are confirmed by an application example.
Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu
2017-01-01
This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy, which simultaneously performs parameter tuning for the SVM and feature selection. In the improved FOA, a chaotic particle initializes the fruit fly swarm location and replaces the distance expression the fruit fly uses to find the food source. In addition, the proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support its superiority. Moreover, the algorithm is successfully applied in an SVM to perform both parameter tuning and feature selection on real-world classification problems. This method, called chaotic fruit fly optimization algorithm (CIFOA)-SVM, has been shown to be a more robust and effective optimization method than other well-known methods, particularly in solving the medical diagnosis problem and the credit card problem.
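For readers unfamiliar with the baseline the paper improves on, the core FOA loop can be sketched in a few lines: flies scatter around the swarm location, a "smell concentration" s = 1/distance is scored, and the swarm flies to the best individual. The objective below is a toy stand-in, not the SVM tuning problem:

```python
import random

def foa_minimize(f, iters=200, pop=20, seed=0):
    """Bare-bones fruit fly optimization (the baseline FOA, without the
    paper's chaotic initialization or mutation strategy): minimizes f(s)
    where s = 1/distance-to-origin of each fly."""
    rng = random.Random(seed)
    ax, ay = rng.random(), rng.random()       # swarm location
    best_val, best_s = float("inf"), None
    for _ in range(iters):
        for _ in range(pop):
            x = ax + rng.uniform(-1.0, 1.0)   # random osphresis search
            y = ay + rng.uniform(-1.0, 1.0)
            d = (x * x + y * y) ** 0.5 or 1e-12
            s = 1.0 / d                       # smell concentration
            val = f(s)
            if val < best_val:                # vision phase: move swarm
                best_val, best_s = val, s     # to the best fly found
                ax, ay = x, y
    return best_s, best_val

s_best, v_best = foa_minimize(lambda s: (s - 0.3) ** 2)
```

The paper's chaotic initialization replaces the `rng.random()` seeding of the swarm location, and its mutation strategy adds a second generative mechanism at the osphresis phase.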
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2015-10-01
Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of fine-resolution grid cells, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive their parameters directly from terrain properties, so there was no need to calibrate them; unfortunately, the uncertainty associated with this way of deriving parameters is very high, which has hindered their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: first, to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the PSO algorithm, to test its competence and to improve its performance; second, to explore the possibility of improving the flood forecasting capability of physically based distributed hydrological models through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that can be used for all PBDHMs. Then, taking the Liuxihe model, a physically based distributed hydrological model proposed for catchment flood forecasting, as the study model, an improved Particle Swarm Optimization (PSO) algorithm is developed for parameter optimization of the Liuxihe model. The improvements consist of adopting a linearly decreasing inertia weight strategy and an arccosine function strategy to adjust the acceleration coefficients.
This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can greatly improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
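The two named PSO improvements, a linearly decreasing inertia weight and arccosine-scheduled acceleration coefficients, can be sketched as iteration-dependent schedules; the endpoint constants below are common PSO defaults and are assumptions, not values from the paper:

```python
import math

def pso_schedules(t, t_max, w_max=0.9, w_min=0.4,
                  c1_start=2.5, c1_end=0.5, c2_start=0.5, c2_end=2.5):
    """Plausible forms of the strategies the abstract names: inertia
    weight decreases linearly, and the acceleration coefficients follow
    an arccosine curve (c1 shrinks, c2 grows). Exact formulas and
    constants are assumptions."""
    frac = t / t_max
    w = w_max - (w_max - w_min) * frac            # linear decrease
    s = math.acos(2.0 * frac - 1.0) / math.pi     # arccosine: 1 -> 0
    c1 = c1_end + (c1_start - c1_end) * s         # cognitive coefficient
    c2 = c2_end + (c2_start - c2_end) * s         # social coefficient
    return w, c1, c2

w0, c1_0, c2_0 = pso_schedules(0, 100)
wT, c1_T, c2_T = pso_schedules(100, 100)
```

Early iterations thus favor exploration (high inertia, high cognitive weight), while late iterations favor exploitation around the swarm's best position.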
A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy
Wen, Hui; Xie, Weixin; Pei, Jihong
2016-01-01
This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network and a BP network in cascade, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space; the adaptive RBF network is used for nonlinear kernel mapping and the BP network for nonlinear classification. The optimized learning strategy is as follows. First, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and their parameters, and a heterogeneous-sample repulsive force is designed to further optimize the parameters of each generated RBF hidden node; the optimized structure-adaptive RBF network then performs adaptive nonlinear mapping of the sample space. Next, the number of adaptively generated RBF hidden nodes determines the number of subsequent BP input nodes, and the overall SAHRBF-BP classifier is built up. Finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Compared with other algorithms applied to different data sets, experiments show the superiority of SAHRBF-BP. Especially on most low-dimensional data sets with large numbers of samples, the classification performance of SAHRBF-BP outperforms other SLFN training algorithms. PMID:27792737
NASA Astrophysics Data System (ADS)
Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta
2016-06-01
With the increasing trend toward automation in modern manufacturing industry, human intervention in routine, repetitive and data-specific manufacturing activities is greatly reduced. In this paper, an attempt has been made to reduce human intervention in the selection of optimal cutting tools and process parameters for metal cutting applications using Artificial Intelligence techniques. Generally, the selection of appropriate cutting tools and parameters in metal cutting is carried out by an experienced technician or cutting tool expert, based on his knowledge or an extensive search of a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in data books and tool catalogues with an intelligent knowledge-based selection system. This system employs artificial-intelligence-based techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. The intelligence-based optimal tool selection strategy was developed using MathWorks Matlab Version 7.11.0 and implemented. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for the selection of appropriate cutting tools and the optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.
Dynamic optimal strategies in transboundary pollution game under learning by doing
NASA Astrophysics Data System (ADS)
Chang, Shuhua; Qin, Weihua; Wang, Xinyu
2018-01-01
In this paper, we present a transboundary pollution game, in which emission permits trading and pollution abatement costs under learning by doing are considered. In this model, the abatement cost mainly depends on the level of pollution abatement and the experience of using pollution abatement technology. We use optimal control theory to investigate the optimal emission paths and the optimal pollution abatement strategies under cooperative and noncooperative games, respectively. Additionally, the effects of parameters on the results have been examined.
An Energy Integrated Dispatching Strategy of Multi- energy Based on Energy Internet
NASA Astrophysics Data System (ADS)
Jin, Weixia; Han, Jun
2018-01-01
The energy internet is a new way of using energy: it achieves energy efficiency and low cost by scheduling a variety of different forms of energy. Particle Swarm Optimization (PSO) is an advanced algorithm with few parameters, high computational precision and fast convergence speed. By improving the parameters ω, c1 and c2, the convergence speed and calculation accuracy of PSO can be improved. The objective of the optimization model is the lowest fuel cost while meeting the electricity, heating and cooling loads after all renewable energy has been absorbed. Because energy structures and prices differ across regions, the optimization strategy needs to be determined according to the algorithm and the model.
A new inertia weight control strategy for particle swarm optimization
NASA Astrophysics Data System (ADS)
Zhu, Xianming; Wang, Hongbo
2018-04-01
Particle Swarm Optimization is a swarm intelligence algorithm inspired by the behavior of bird flocks. The inertia weight, one of the most important parameters of PSO, is crucial because it balances the algorithm's exploration and exploitation performance. This paper proposes a new inertia weight control strategy, and PSO with this new strategy is tested on four benchmark functions. The results show that the new strategy provides the PSO with better performance.
Influence of Fallible Item Parameters on Test Information During Adaptive Testing.
ERIC Educational Resources Information Center
Wetzel, C. Douglas; McBride, James R.
Computer simulation was used to assess the effects of item parameter estimation errors on different item selection strategies used in adaptive and conventional testing. To determine whether these effects reduced the advantages of certain optimal item selection strategies, simulations were repeated in the presence and absence of item parameter…
Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
Comparison of empirical strategies to maximize GENEHUNTER lod scores.
Chen, C H; Finch, S J; Mendell, N R; Gordon, D
1999-01-01
We compare four strategies for finding the settings of genetic parameters that maximize the lod scores reported in GENEHUNTER 1.2. The four strategies are iterated complete factorial designs, iterated orthogonal Latin hypercubes, evolutionary operation, and numerical optimization. The genetic parameters that are set are the phenocopy rate, penetrance, and disease allele frequency; both recessive and dominant models are considered. We selected the optimization of a recessive model on the Collaborative Study on the Genetics of Alcoholism (COGA) data of chromosome 1 for complete analysis. Convergence to a setting producing a local maximum required the evaluation of over 100 settings (for a time budget of 800 minutes on a Pentium II 300 MHz PC). Two notable local maxima were detected, suggesting the need for a more extensive search before claiming that a global maximum has been found. The orthogonal Latin hypercube design was the best strategy for finding areas that produced high lod scores with small numbers of evaluations. Numerical optimization starting from a region producing high lod scores was the strategy that found the highest observed maximum.
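The space-filling property that makes Latin hypercubes efficient here, every one-dimensional stratum sampled exactly once, is easy to sketch. Note this shows only a plain random Latin hypercube; the orthogonal variant used in the study additionally imposes orthogonality between columns:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """One random Latin hypercube in [0, 1)^n_dims: each axis is cut
    into n_samples equal strata and each stratum is used exactly once
    per dimension (plain LH; the study's orthogonal variant adds
    further structure)."""
    rng = random.Random(seed)
    pts = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)                    # permute strata per axis
        for i in range(n_samples):
            pts[i][d] = (strata[i] + rng.random()) / n_samples
    return pts

# Ten settings over three genetic parameters scaled to [0, 1).
design = latin_hypercube(10, 3)
```

Each of the ten points can then be rescaled to the ranges of phenocopy rate, penetrance, and disease allele frequency before evaluating lod scores.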
Robust and fast nonlinear optimization of diffusion MRI microstructure models.
Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A
2017-07-15
Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to the comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols.
The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade initializing or fixing parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Wang, Y; Harrison, M; Clark, B J
2006-02-10
An optimization strategy for the separation of an acidic mixture employing a monolithic stationary phase is presented, with the aid of experimental design and response surface methodology (RSM). An orthogonal array design (OAD) OA16(2^15) was used to choose the significant parameters for the optimization. The significant factors were optimized using a central composite design (CCD), and quadratic models between the dependent and independent parameters were built. The mathematical models were tested on a number of simulated data sets and had a coefficient of R^2 > 0.97 (n = 16). On applying the optimization strategy, the factor effects were visualized as three-dimensional (3D) response surfaces and contour plots. The optimal condition was achieved in less than 40 min using the monolithic packing with a mobile phase of methanol/20 mM phosphate buffer pH 2.7 (25.5/74.5, v/v). The method showed good agreement between the experimental data and the predicted values throughout the studied parameter space and was suitable for optimization studies on the monolithic stationary phase for acidic compounds.
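A central composite design such as the one used above combines three point types in coded units: 2^k factorial corners, 2k axial points, and replicated centre points. A minimal generator (the factor count and alpha value here are generic, not the study's specific design):

```python
from itertools import product

def central_composite_design(k, alpha=None, n_center=1):
    """Coded-unit points of a central composite design: 2**k factorial
    corners at +/-1, 2k axial points at +/-alpha, and n_center centre
    points. alpha defaults to the rotatable value (2**k)**0.25."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for sign in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = sign
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return corners + axial + center

design = central_composite_design(2)  # 4 corners + 4 axial + 1 centre
```

Fitting a quadratic model to responses measured at these points is what yields the 3D response surfaces and contour plots the abstract describes.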
Adapted random sampling patterns for accelerated MRI.
Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf
2011-02-01
Variable-density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions, such as polynomials of different order, to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters, which is very time-consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns from the power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and its generalization potential is tested using a range of reference images. Quantitative evaluation of the downsampling experiments is performed using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out, and superior when the model parameters are not optimized. However, no random sampling pattern showed superior performance compared with conventional Cartesian subsampling for the considered reconstruction strategy.
Dynamic Portfolio Strategy Using Clustering Approach
Ren, Fei; Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian
2017-01-01
The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market. PMID:28129333
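The central/peripheral distinction above rests on the topology of a minimum spanning tree built from correlation distances d = sqrt(2(1 - rho)). A sketch with a hypothetical four-stock correlation matrix (the real study uses more stocks and five topological criteria, not just degree):

```python
import math

def mst_edges(dist):
    """Kruskal's minimum spanning tree over a full distance matrix,
    with node indices as vertices."""
    n = len(dist)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    edges = sorted((dist[i][j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    mst = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                        # joining two components
            parent[ri] = rj
            mst.append((i, j))
    return mst

# Hypothetical correlation matrix for four stocks.
corr = [[1.0, 0.8, 0.2, 0.1],
        [0.8, 1.0, 0.3, 0.2],
        [0.2, 0.3, 1.0, 0.7],
        [0.1, 0.2, 0.7, 1.0]]
dist = [[math.sqrt(2.0 * (1.0 - corr[i][j])) for j in range(4)]
        for i in range(4)]
edges = mst_edges(dist)
degree = [0] * 4
for i, j in edges:
    degree[i] += 1
    degree[j] += 1
# High-degree nodes are "central" stocks, leaves are "peripheral".
```

Degree is only one of the five topological parameters the paper uses; betweenness centrality and the three distance criteria refine the same central-versus-peripheral classification.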
The quasi-optimality criterion in the linear functional strategy
NASA Astrophysics Data System (ADS)
Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey
2018-07-01
The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications that include the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules, taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and a stochastic setup and verify that for mildly ill-posed problems and Gaussian noise these conditions are satisfied almost surely, whereas in the severely ill-posed case, in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
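The quasi-optimality principle itself is simple to state: compute regularized solutions on a geometric grid of regularization parameters and select the parameter at which consecutive solutions change the least. A minimal sketch for Tikhonov regularization of a diagonal (SVD-reduced) operator follows; the singular values, data, and grid are invented for illustration and are not from the paper.

```python
def tikhonov(svals, data, alpha):
    # Tikhonov solution for a diagonal operator: x_i = s_i * y_i / (s_i^2 + alpha)
    return [s * y / (s * s + alpha) for s, y in zip(svals, data)]

def quasi_optimality(svals, data, alphas):
    """Heuristic rule: minimize ||x(alpha_{k+1}) - x(alpha_k)|| over a geometric grid."""
    sols = [tikhonov(svals, data, a) for a in alphas]
    diffs = [sum((u - v) ** 2 for u, v in zip(sols[k + 1], sols[k])) ** 0.5
             for k in range(len(sols) - 1)]
    k_star = min(range(len(diffs)), key=lambda k: diffs[k])
    return alphas[k_star]

svals = [1.0, 0.5, 0.1]                       # singular values of the forward operator
data = [1.02, 0.48, 0.11]                     # noisy data for the true solution (1, 1, 1)
alphas = [0.5 * 0.7 ** k for k in range(25)]  # geometric grid, largest alpha first
alpha_qo = quasi_optimality(svals, data, alphas)
```

In the linear functional strategy one would apply the same rule to the functional values ell(x_alpha) rather than to the full solutions; the smoothness-weighted modifications the authors propose change the norm in which the consecutive differences are measured.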
NASA Astrophysics Data System (ADS)
Abedini, M. J.; Nasseri, M.; Burn, D. H.
2012-04-01
In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls the nearby samples to be included in the location-specific estimation procedure. Almost all geostatistical software available in the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose these parameters. The main thesis of the current study is that the selection of search strategy parameters has to be driven by data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm is exploited to search the feature space for appropriate, either local or global, search strategy parameters. Radius of circle/sphere and/or radii of standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of numerical results showed that definition of search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics when compared with that based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by anisotropic axes—via GA was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.
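The core idea of letting a genetic algorithm pick search-strategy parameters can be sketched with a minimal real-coded GA over a single decision variable (say, a search radius). The fitness function below is a toy stand-in for a cross-validation statistic; in the actual method the decision vector would also include ellipse radii and orientation, and fitness would come from kriging cross-validation.

```python
import random

def genetic_search(fitness, lo, hi, pop_size=20, gens=40, seed=1):
    """Minimal real-coded GA maximizing `fitness` over one bounded decision variable."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]               # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                      # arithmetic crossover
            child += rng.gauss(0.0, 0.05 * (hi - lo))  # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: a cross-validation score that peaks at radius 3.0 (hypothetical)
best = genetic_search(lambda r: -(r - 3.0) ** 2, 0.0, 10.0)
```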
Product modular design incorporating preventive maintenance issues
NASA Astrophysics Data System (ADS)
Gao, Yicong; Feng, Yixiong; Tan, Jianrong
2016-03-01
Traditional modular design methods lead to product maintenance problems, because the module form of a system is created according to either the function requirements or the manufacturing considerations. For solving these problems, a new modular design method is proposed that considers not only the traditional function-related attributes but also maintenance-related ones. First, modularity parameters and modularity scenarios for product modularity are defined. Then the reliability and economic assessment models of product modularity strategies are formulated with the introduction of the effective working age of modules. A mathematical model is then used to evaluate the differences among the modules of the product so that the optimal module configuration can be established. After that, a multi-objective optimization problem based on metrics for the preventive maintenance interval difference degree and preventive maintenance economics is formulated for modular optimization. A multi-objective GA is utilized to rapidly approximate the Pareto set of optimal modularity strategy trade-offs between preventive maintenance cost and preventive maintenance interval difference degree. Finally, a coordinate CNC boring machine is adopted to depict the process of product modularity. In addition, two factorial design experiments based on the modularity parameters are constructed and analyzed. These experiments investigate the impacts of these parameters on the optimal modularity strategies and the structure of the modules. The research proposes a new modular design method, which may help to improve the maintainability of products in modular design.
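The Pareto set the GA approximates is just the nondominated subset of candidate strategies. For two minimization objectives (preventive maintenance cost, interval-difference degree) the filter can be sketched directly; the strategy points below are invented.

```python
def pareto_front(points):
    """Return the nondominated points of a bi-objective minimization problem."""
    front = []
    for p in points:
        # p is kept unless some other point is at least as good in both objectives
        if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points):
            front.append(p)
    return front

# Hypothetical (maintenance cost, interval-difference degree) per modularity strategy
strategies = [(10, 5), (8, 7), (12, 3), (9, 6), (11, 4), (10, 4)]
front = pareto_front(strategies)
```

A multi-objective GA such as NSGA-II applies this ranking repeatedly while evolving the population, which is far cheaper than enumerating all modularity strategies.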
Optimal Design of Material and Process Parameters in Powder Injection Molding
NASA Astrophysics Data System (ADS)
Ayad, G.; Barriere, T.; Gelin, J. C.; Song, J.; Liu, B.
2007-04-01
The paper is concerned with optimization and parametric identification for the different stages of the Powder Injection Molding process, which consists first of the injection of a powder mixture with a polymer binder, followed by sintering of the resulting powder part by solid-state diffusion. The first part describes an original methodology to optimize the process and geometry parameters in the injection stage, based on the combination of design of experiments and adaptive Response Surface Modeling. The second part of the paper describes the proposed identification strategy for the sintering stage, using the identification of sintering parameters from dilatometric curves followed by the optimization of the sintering process. The proposed approaches are applied to the optimization of material and process parameters for manufacturing a ceramic femoral implant, and it is demonstrated that they give satisfactory results.
An interval programming model for continuous improvement in micro-manufacturing
NASA Astrophysics Data System (ADS)
Ouyang, Linhan; Ma, Yizhong; Wang, Jianjun; Tu, Yiliu; Byun, Jai-Hyun
2018-03-01
Continuous quality improvement in micro-manufacturing processes relies on optimization strategies that relate an output performance to a set of machining parameters. However, when determining the optimal machining parameters in a micro-manufacturing process, the economics of continuous quality improvement and decision makers' preference information are typically neglected. This article proposes an economic continuous improvement strategy based on an interval programming model. The proposed strategy differs from previous studies in two ways. First, an interval programming model is proposed to measure the quality level, where decision makers' preference information is considered in order to determine the weight of location and dispersion effects. Second, the proposed strategy is a more flexible approach since it considers the trade-off between the quality level and the associated costs, and leaves engineers a larger decision space through adjusting the quality level. The proposed strategy is compared with its conventional counterparts using an Nd:YLF laser beam micro-drilling process.
NASA Astrophysics Data System (ADS)
Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter
2017-01-01
The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.
NASA Astrophysics Data System (ADS)
Reyes, J. J.; Adam, J. C.; Tague, C.
2016-12-01
Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. 
Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
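The Latin hypercube scheme used to generate the parameter sets can be sketched in a few lines: each parameter's range is cut into as many equal strata as there are samples, and each stratum is used exactly once per parameter. The sketch samples the unit cube; in practice each coordinate is rescaled to the bounds of the corresponding model parameter.

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Latin hypercube sample in [0, 1]^n_params: one point per stratum per parameter."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing of strata across parameters
        columns.append([(s + rng.random()) / n_samples for s in strata])
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

# e.g. 10 parameter sets over 3 hypothetical model parameters
sets = latin_hypercube(10, 3)
```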
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2016-01-01
Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve catchment hydrological process simulation and prediction capability. Early PBDHMs were assumed to derive model parameters directly from terrain properties, so that no parameter calibration would be needed. Unfortunately, the uncertainties associated with this derivation are very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for PBDHMs in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, to test its competence, and to improve its performance; the second is to explore the possibility of improving PBDHM capability in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model, a physically based distributed hydrological model proposed for catchment flood forecasting, as the study model, an improved PSO algorithm is developed for parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of a linearly decreasing inertia weight strategy to change the inertia weight and an arccosine function strategy to adjust the acceleration coefficients.
This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can substantially improve model capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
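The linearly decreasing inertia weight the authors adopt is a standard PSO refinement; a sketch using their reported settings (20 particles, 30 iterations) on a toy objective follows. The arccosine schedule for the acceleration coefficients is specific to the paper, so fixed coefficients are used here, and the objective is a stand-in for the flood-forecast error surface, not the Liuxihe model.

```python
import random

def pso(f, bounds, n_particles=20, iters=30,
        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """PSO minimizer with a linearly decreasing inertia weight (w_max -> w_min)."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)  # linear inertia schedule
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - x[d])
                            + c2 * rng.random() * (gbest[d] - x[d]))
                x[d] = min(bounds[d][1], max(bounds[d][0], x[d] + vs[i][d]))
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
                if f(x) < f(gbest):
                    gbest = x[:]
    return gbest

best = pso(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2, [(-5, 5), (-5, 5)])
```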
Predicting distant failure in early stage NSCLC treated with SBRT using clinical parameters.
Zhou, Zhiguo; Folkert, Michael; Cannon, Nathan; Iyengar, Puneeth; Westover, Kenneth; Zhang, Yuanyuan; Choy, Hak; Timmerman, Robert; Yan, Jingsheng; Xie, Xian-J; Jiang, Steve; Wang, Jing
2016-06-01
The aim of this study is to predict early distant failure in early stage non-small cell lung cancer (NSCLC) treated with stereotactic body radiation therapy (SBRT) using clinical parameters by machine learning algorithms. The dataset used in this work includes 81 early stage NSCLC patients with at least 6 months of follow-up who underwent SBRT between 2006 and 2012 at a single institution. The clinical parameters (n=18) for each patient include demographic parameters, tumor characteristics, treatment fraction schemes, and pretreatment medications. Three predictive models were constructed based on different machine learning algorithms: (1) artificial neural network (ANN), (2) logistic regression (LR), and (3) support vector machine (SVM). Furthermore, to select an optimal clinical parameter set for model construction, three strategies were adopted: (1) a clonal selection algorithm (CSA) based selection strategy; (2) the sequential forward selection (SFS) method; and (3) a statistical analysis (SA) based strategy. Five-fold cross-validation was used to validate the performance of each predictive model. Accuracy was assessed by the area under the receiver operating characteristic (ROC) curve (AUC); the sensitivity and specificity of the system were also evaluated. The AUCs for ANN, LR and SVM were 0.75, 0.73, and 0.80, respectively. The sensitivity values for ANN, LR and SVM were 71.2%, 72.9% and 83.1%, while the specificity values for ANN, LR and SVM were 59.1%, 63.6% and 63.6%, respectively. Meanwhile, the CSA based strategy outperformed SFS and SA in terms of AUC, sensitivity and specificity. Based on clinical parameters, the SVM with the CSA optimal parameter set selection strategy achieves better performance than other strategies for predicting distant failure in lung SBRT patients.
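AUC, the headline metric of studies like this one, has a simple rank interpretation that can be computed directly: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. The labels and scores below are invented, not patient data.

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney formulation: P(score_pos > score_neg), ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                  # 1 = distant failure, 0 = no failure
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]      # classifier scores for each patient
```

For real datasets this O(P*N) double loop is usually replaced by a rank-sum computation, but the result is identical.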
Cost related sensitivity analysis for optimal operation of a grid-parallel PEM fuel cell power plant
NASA Astrophysics Data System (ADS)
El-Sharkh, M. Y.; Tanrioven, M.; Rahman, A.; Alam, M. S.
Fuel cell power plants (FCPP) as a combined source of heat, power and hydrogen (CHP&H) can be considered a potential option to supply both thermal and electrical loads. Hydrogen produced by the FCPP can be stored for future use or sold for profit. In such a system, the tariff rates for purchasing or selling electricity, the fuel cost for the FCPP/thermal load, and the hydrogen selling price are the main factors that affect the operational strategy. This paper presents a hybrid evolutionary programming and Hill-Climbing based approach to evaluate the impact of changes in the above-mentioned cost parameters on the optimal operational strategy of the FCPP. The optimal operational strategy of the FCPP for different tariffs is achieved through the estimation of the following: hourly generated power, the amount of thermal power recovered, power trade with the local grid, and the quantity of hydrogen that can be produced. Results show the importance of optimizing system cost parameters in order to minimize overall operating cost.
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for the solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available, so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions; one-dimensional search options include polynomial interpolation and the Golden Section method. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to override these, if desired.
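One of the one-dimensional search options named above, the Golden Section method, is compact enough to sketch (in Python rather than ADS's FORTRAN): the bracketing interval shrinks by the inverse golden ratio each iteration, and only one new function evaluation per step is needed in an efficient implementation.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by Golden Section interval reduction."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c               # keep [a, d]; old c becomes new d
            c = b - invphi * (b - a)
        else:
            a, c = c, d               # keep [c, b]; old d becomes new c
            d = a + invphi * (b - a)
    return (a + b) / 2

x = golden_section(lambda t: (t - 2.0) ** 2, 0.0, 5.0)
```

This sketch re-evaluates f at both interior points each pass for clarity; production code caches the surviving evaluation.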
Optimal control of an invasive species using a reaction-diffusion model and linear programming
Bonneau, Mathieu; Johnson, Fred A.; Smith, Brian J.; Romagosa, Christina M.; Martin, Julien; Mazzotti, Frank J.
2017-01-01
Managing an invasive species is particularly challenging as little is generally known about the species’ biological characteristics in its new habitat. In practice, removal of individuals often starts before the species is studied to provide the information that will later improve control. Therefore, the locations and the amount of control have to be determined in the face of great uncertainty about the species characteristics and with a limited amount of resources. We propose framing spatial control as a linear programming optimization problem. This formulation, paired with a discrete reaction-diffusion model, permits calculation of an optimal control strategy that minimizes the remaining number of invaders for a fixed cost or that minimizes the control cost for containment or protecting specific areas from invasion. We propose computing the optimal strategy for a range of possible model parameters, representing current uncertainty on the possible invasion scenarios. Then, a best strategy can be identified depending on the risk attitude of the decision-maker. We use this framework to study the spatial control of the Argentine black and white tegus (Salvator merianae) in South Florida. There is uncertainty about tegu demography and we considered several combinations of model parameters, exhibiting various dynamics of invasion. For a fixed one-year budget, we show that the risk-averse strategy, which optimizes the worst-case scenario of tegus’ dynamics, and the risk-neutral strategy, which optimizes the expected scenario, both concentrated control close to the point of introduction. A risk-seeking strategy, which optimizes the best-case scenario, focuses more on models where eradication of the species in a cell is possible and consists of spreading control as much as possible. 
For the establishment of a containment area, assuming exponential growth, we show that with current control methods it might not be possible to implement such a strategy for some of the models that we considered. Including different possible models allows an examination of how the strategy is expected to perform in different scenarios. Then, a strategy that accounts for the risk attitude of the decision-maker can be designed.
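The discrete reaction-diffusion dynamics underlying the optimization can be sketched as a one-step update on a 1-D grid: each cell grows, loses removed individuals to control, and exchanges a fixed fraction with its neighbours. The growth rate, diffusion fraction, and removal numbers below are invented for illustration and are not tegu parameters.

```python
def step(pop, r, d, removal):
    """One reaction-diffusion step: growth at rate r, control removal,
    then a fraction d of each cell's population diffuses to each neighbour."""
    n = len(pop)
    grown = [max(0.0, (1 + r) * p - c) for p, c in zip(pop, removal)]
    new = []
    for i in range(n):
        stay = (1 - 2 * d) * grown[i]
        inflow = (d * (grown[i - 1] if i > 0 else 0.0)
                  + d * (grown[i + 1] if i < n - 1 else 0.0))
        new.append(stay + inflow)
    return new

# Invasion from a single introduction point, without and with local control
pop = [0.0, 0.0, 100.0, 0.0, 0.0]
no_ctrl = step(pop, r=0.5, d=0.1, removal=[0.0] * 5)
ctrl = step(pop, r=0.5, d=0.1, removal=[0.0, 0.0, 60.0, 0.0, 0.0])
```

The linear-programming formulation then chooses the `removal` vector, subject to a budget, to minimize the total population after several such steps.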
Optimal tyre usage for a Formula One car
NASA Astrophysics Data System (ADS)
Tremlett, A. J.; Limebeer, D. J. N.
2016-10-01
Variations in track temperature, surface conditions and layout have led tyre manufacturers to produce a range of rubber compounds for race events. Each compound has unique friction and durability characteristics. Efficient tyre management over a full race distance is a crucial component of a competitive race strategy. A minimum lap time optimal control calculation and a thermodynamic tyre wear model are used to establish optimal tyre warming and tyre usage strategies. Lap time sensitivities demonstrate that relatively small changes in control strategy can lead to significant reductions in the associated wear metrics. The illustrated methodology shows how vehicle setup parameters can be optimised for minimum tyre usage.
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data.
It is hoped that this study provides new insight into developing more accurate and reliable biological models from limited and low-quality experimental data.
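The AIC comparison used for model selection reduces, for least-squares fits with Gaussian errors, to a one-line formula: AIC = n ln(RSS/n) + 2k, where lower is better and the 2k term penalizes extra parameters. The residual sums of squares below are hypothetical.

```python
import math

def aic(rss, n_obs, k_params):
    """Akaike Information Criterion for a least-squares fit; lower is better."""
    return n_obs * math.log(rss / n_obs) + 2 * k_params

# Hypothetical fits of two candidate models to the same 50 noisy observations:
# the complex model fits only marginally better but pays a larger penalty.
aic_simple = aic(rss=12.0, n_obs=50, k_params=3)
aic_complex = aic(rss=11.5, n_obs=50, k_params=9)
```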
Parameter estimation of a pulp digester model with derivative-free optimization strategies
NASA Astrophysics Data System (ADS)
Seiça, João C.; Romanenko, Andrey; Fernandes, Florbela P.; Santos, Lino O.; Fernandes, Natércia C. P.
2017-07-01
The work concerns parameter estimation in the context of the mechanistic modelling of a pulp digester. The problem is cast as a box-bounded nonlinear global optimization problem in order to minimize the mismatch between the model outputs and the experimental data observed at a real pulp and paper plant. The MCSFilter and Simulated Annealing global optimization methods were used to solve the optimization problem. While the former took longer to converge to the global minimum, the latter terminated faster but at a significantly higher value of the objective function and thus failed to find the global solution.
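A minimal simulated-annealing loop of the kind compared here can be sketched as follows: uphill moves are accepted with probability exp(-delta/T) and the temperature cools geometrically. The multimodal objective is a toy stand-in for the model-plant mismatch, not the digester model.

```python
import math, random

def simulated_annealing(f, x0, lo, hi, t0=1.0, cooling=0.95, iters=500, seed=0):
    """Box-bounded simulated annealing for a 1-D objective (minimization)."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.1 * (hi - lo))))
        fc = f(cand)
        # always accept downhill; accept uphill with Metropolis probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best

# Multimodal toy mismatch function; the quadratic term pulls toward x = 2
best = simulated_annealing(lambda x: (x - 2) ** 2 + math.sin(5 * x), 8.0, 0.0, 10.0)
```

The abstract's outcome is consistent with a cooling schedule that freezes too early: with too little exploration time, the chain can terminate quickly in a non-global basin.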
Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens
2009-11-01
In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units: the fermentor, the cell-retention system (tangential microfiltration), and the vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that, despite the very similar solutions obtained with both strategies, the deterministic model suffered from problems such as lack of convergence and high computational time. The optimization strategy using the statistical model, by contrast, proved robust and fast, making it more suitable for the flash fermentation process and recommended for real-time applications coupling optimization and control.
Influence of signal processing strategy in auditory abilities.
Melo, Tatiana Mendes de; Bevilacqua, Maria Cecília; Costa, Orozimbo Alves; Moret, Adriane Lima Mortari
2013-01-01
The signal processing strategy is a parameter that may influence the auditory performance of cochlear implant users, and it is important to optimize this parameter to provide better speech perception, especially in difficult listening situations. The aim was to evaluate individuals' auditory performance using two different signal processing strategies. This was a prospective study with 11 prelingually deafened children with open-set speech recognition. A within-subjects design was used to compare performance with standard HiRes and HiRes 120 at three different time points. During test sessions, each subject's performance was evaluated by warble-tone sound-field thresholds and speech perception evaluation, in quiet and in noise. In quiet, children S1, S4, S5 and S7 showed better performance with the HiRes 120 strategy, and children S2, S9 and S11 showed better performance with the HiRes strategy. In noise, it was likewise observed that some children performed better using the HiRes 120 strategy and others with HiRes. Not all children presented the same pattern of response to the different strategies used in this study, which reinforces the need to look at optimizing cochlear implant clinical programming.
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented and compared to those from an alternative sensor selection strategy.
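The selection idea can be illustrated with a brute-force sketch: score each candidate sensor suite by the trace of (H^T H)^{-1}, a crude least-squares proxy for the Kalman-filter mean squared estimation error, and keep the suite with the smallest score. The 4x2 sensitivity matrix (four candidate sensors, two health parameters) is invented, and this stands in for, rather than reproduces, the paper's error equations.

```python
from itertools import combinations

def estimation_mse(rows):
    """Trace of (H^T H)^{-1} for a 2-parameter least-squares problem."""
    a = sum(r[0] * r[0] for r in rows)
    b = sum(r[0] * r[1] for r in rows)
    d = sum(r[1] * r[1] for r in rows)
    det = a * d - b * b
    return (a + d) / det  # trace of the inverse of a 2x2 matrix

# Hypothetical sensitivities of four sensors to two health parameters
H = [(1.0, 0.1), (0.2, 1.0), (0.9, 0.8), (0.1, 0.2)]
suite = min(combinations(range(4), 3),
            key=lambda idx: estimation_mse([H[i] for i in idx]))
```

Sensor 3 contributes little independent information, so the best three-sensor suite excludes it.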
Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas
2013-01-01
Introduction: Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although on the one hand, digital pathology and new bioimaging technologies find their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as, genetic algorithms or coordinate descents, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often result in a local maximum. PMID:23766941
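The local-maximum failure mode of hill climbing noted in the conclusion is easy to demonstrate, together with the simplest remedy of random restarts. The two-peak "segmentation performance" function below is synthetic, not a real parameter-space landscape.

```python
import random

def hill_climb(f, x0, step=0.1, iters=200):
    """Greedy hill climbing: move to a neighbour only if it improves f (maximization)."""
    x = x0
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) > f(x):
                x = cand
    return x

def restarts(f, lo, hi, n_starts=20, seed=0):
    """Rerun hill climbing from many random starts and keep the best endpoint."""
    rng = random.Random(seed)
    return max((hill_climb(f, rng.uniform(lo, hi)) for _ in range(n_starts)), key=f)

# Synthetic performance surface: local peak at x = 1, global peak at x = 4
f = lambda x: -0.5 * (x - 1) ** 2 if x < 2.5 else 1.0 - 0.5 * (x - 4) ** 2
stuck = hill_climb(f, 0.0)     # climbs only the nearby local peak
best = restarts(f, 0.0, 6.0)   # random restarts reach the global peak
```

Genetic algorithms play the same role as the restarts here: their population-level exploration lets them leave local performance maxima that pure hill climbing cannot.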
What is the Optimal Strategy for Adaptive Servo-Ventilation Therapy?
Imamura, Teruhiko; Kinugawa, Koichiro
2018-05-23
Clinical advantages of adaptive servo-ventilation (ASV) therapy have been reported in selected heart failure patients with or without sleep-disordered breathing, whereas multicenter randomized controlled trials could not demonstrate such advantages. Considering this discrepancy, optimal patient selection and device setting may be keys to successful ASV therapy. Hemodynamic and echocardiographic parameters indicating pulmonary congestion, such as elevated pulmonary capillary wedge pressure, have been reported as predictors of a good response to ASV therapy. Recently, parameters indicating right ventricular dysfunction have also been reported as good predictors. An optimal device setting, with appropriate pressure support applied for an appropriate duration, may also be key. A large-scale prospective trial with optimal patient selection and optimal device setting is warranted.
Balancing on tightropes and slacklines
Paoletti, P.; Mahadevan, L.
2012-01-01
Balancing on a tightrope or a slackline is an example of a neuromechanical task where the whole body both drives and responds to the dynamics of the external environment, often on multiple timescales. Motivated by a range of neurophysiological observations, here we formulate a minimal model for this system and use optimal control theory to design a strategy for maintaining an upright position. Our analysis of the open and closed-loop dynamics shows the existence of an optimal rope sag where balancing requires minimal effort, consistent with qualitative observations and suggestive of strategies for optimizing balancing performance while standing and walking. Our consideration of the effects of nonlinearities, potential parameter coupling and delays on the overall performance shows that although these factors change the results quantitatively, the existence of an optimal strategy persists. PMID:22513724
Parameters optimization for the energy management system of hybrid electric vehicle
NASA Astrophysics Data System (ADS)
Tseng, Chyuan-Yow; Hung, Yi-Hsuan; Tsai, Chien-Hsiung; Huang, Yu-Jen
2007-12-01
Hybrid electric vehicles (HEV) have been widely studied recently due to their high potential for reducing fuel consumption, exhaust emissions, and noise. Because it comprises two power sources, an HEV requires an energy management system (EMS) to distribute the power optimally across various driving conditions. The ITRI in Taiwan has developed an HEV consisting of a 2.2 L internal combustion engine (ICE), an 18 kW motor/generator (M/G), a 288 V battery pack, and a continuously variable transmission (CVT). The task of the present study is to design an energy management strategy of the EMS for this HEV. Owing to the nonlinear nature of the system and the absence of a known system model, a simplex-method-based energy management strategy is proposed. The simplex method is an optimization strategy generally used to find optimal parameters for un-modeled systems. The way to apply the simplex method to the design of the EMS is presented. The feasibility of the proposed method was verified by performing numerical simulations on the FTP75 drive cycle.
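The derivative-free simplex method referred to here is commonly the Nelder-Mead direct search; a minimal sketch follows, with a hypothetical two-parameter quadratic standing in for the un-modeled EMS objective (all names and numbers are illustrative, not ITRI's model).

```python
def nelder_mead(f, start, step=0.5, iters=200):
    # Minimal Nelder-Mead simplex: reflection, expansion, contraction, shrink.
    n = len(start)
    simplex = [list(start)]
    for i in range(n):
        p = list(start)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - worst[i] for i in range(n)]     # reflect worst
        if f(refl) < f(best):
            expa = [3 * centroid[i] - 2 * worst[i] for i in range(n)]  # expand
            simplex[-1] = expa if f(expa) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [(centroid[i] + worst[i]) / 2 for i in range(n)]   # contract
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink every vertex toward the best one
                simplex = [best] + [[(p[i] + best[i]) / 2 for i in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

# Hypothetical stand-in for "fuel consumption as a function of two EMS gains".
fuel = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2 + 5.0
```

Because it only needs objective evaluations, the same loop works when `f` is a black-box simulation run rather than a formula.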
Active model-based balancing strategy for self-reconfigurable batteries
NASA Astrophysics Data System (ADS)
Bouchhima, Nejmeddine; Schnierle, Marc; Schulte, Sascha; Birke, Kai Peter
2016-08-01
This paper describes a novel balancing strategy for self-reconfigurable batteries where the discharge and charge rates of each cell can be controlled. While much effort has been focused on improving the hardware architecture of self-reconfigurable batteries, energy equalization algorithms have not been systematically optimized in terms of maximizing the efficiency of the balancing system. Our approach includes aspects of such optimization theory. We develop a balancing strategy for optimal control of the discharge rate of battery cells. We first formulate the cell balancing as a nonlinear optimal control problem, which is then modeled as a network program. Using dynamic programming techniques and MATLAB's vectorization feature, we solve the optimal control problem by generating the optimal battery operation policy for a given drive cycle. The simulation results show that the proposed strategy efficiently balances the cells over the life of the battery, an obvious advantage that is absent in conventional approaches. Our algorithm is shown to be robust when tested against different influencing parameters varying over a wide spectrum on different drive cycles. Furthermore, owing to its short computation time and proven low sensitivity to inaccurate power predictions, our strategy can be integrated into a real-time system.
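The paper's network-program formulation is not reproduced here, but the dynamic-programming idea can be sketched on a toy problem with invented numbers: allocate integer discharge units across cells so that the post-discharge states of charge come out as even as possible, a simple proxy for cell balancing.

```python
from functools import lru_cache

def balance_discharge(soc, demand):
    # soc: tuple of per-cell states of charge (integer units, hypothetical);
    # demand: total units to discharge this step. DP over cells minimizes the
    # squared deviation of each post-discharge SOC from the common average.
    n = len(soc)
    target = (sum(soc) - demand) / n

    @lru_cache(maxsize=None)
    def dp(i, remaining):
        # Best (cost, allocation) for cells i..n-1 given 'remaining' units.
        if i == n:
            return (0.0, ()) if remaining == 0 else (float("inf"), ())
        best = (float("inf"), ())
        for d in range(0, min(soc[i], remaining) + 1):
            cost, plan = dp(i + 1, remaining - d)
            cost += (soc[i] - d - target) ** 2
            if cost < best[0]:
                best = (cost, (d,) + plan)
        return best

    return dp(0, demand)[1]
```

For cells at (9, 5, 4) units and a demand of 6, the plan (5, 1, 0) leaves every cell at 4 units, i.e. perfectly balanced.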
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
Optimization Strategies for Single-Stage, Multi-Stage and Continuous ADRs
NASA Technical Reports Server (NTRS)
Shirron, Peter J.
2014-01-01
Adiabatic Demagnetization Refrigerators (ADR) have many advantages that are prompting a resurgence in their use in spaceflight and laboratory applications. They are solid-state coolers capable of very high efficiency and very wide operating range. However, their low energy storage density translates to larger mass for a given cooling capacity than is possible with other refrigeration techniques. The interplay between refrigerant mass and other parameters such as magnetic field and heat transfer points in multi-stage ADRs gives rise to a wide parameter space for optimization. This paper first presents optimization strategies for single ADR stages, focusing primarily on obtaining the largest cooling capacity per stage mass, then discusses the optimization of multi-stage and continuous ADRs in the context of the coordinated heat transfer that must occur between stages. The goal for the latter is usually to obtain the largest cooling power per mass or volume, but there can also be many secondary objectives, such as limiting instantaneous heat rejection rates and producing intermediate temperatures for cooling of other instrument components.
NASA Astrophysics Data System (ADS)
Zhou, Daming; Al-Durra, Ahmed; Gao, Fei; Ravey, Alexandre; Matraji, Imad; Godoy Simões, Marcelo
2017-10-01
Energy management strategy plays a key role in Fuel Cell Hybrid Electric Vehicles (FCHEVs), as it directly affects the efficiency and performance of the energy storage in FCHEVs. For example, with a suitable energy distribution controller, the fuel cell system can be maintained in a high-efficiency region, thereby reducing hydrogen consumption. In this paper, an energy management strategy for online driving cycles is proposed based on a combination of the parameters of three offline-optimized fuzzy logic controllers using a data fusion approach. The fuzzy logic controllers are optimized offline for three typical driving scenarios: highway, suburban and city. To classify the patterns of online driving cycles, a Probabilistic Support Vector Machine (PSVM) is used to provide probabilistic classification results. Based on the classification results of the online driving cycle, the parameters of the offline-optimized fuzzy logic controllers are then fused using Dempster-Shafer (DS) evidence theory, in order to calculate the final parameters for the online fuzzy logic controller. Three experimental validations using a Hardware-In-the-Loop (HIL) platform with different-sized FCHEVs have been performed. Experimental comparison results show that the proposed PSVM-DS based online controller achieves relatively stable operation and a higher fuel cell system efficiency in real driving cycles.
Inverse design of bulk morphologies in block copolymers using particle swarm optimization
NASA Astrophysics Data System (ADS)
Khadilkar, Mihir; Delaney, Kris; Fredrickson, Glenn
Multiblock polymers are a versatile platform for creating a large range of nanostructured materials with novel morphologies and properties. However, achieving desired structures or property combinations is difficult due to a vast design space comprised of parameters including monomer species, block sequence, block molecular weights and dispersity, copolymer architecture, and binary interaction parameters. Navigating through such vast design spaces to achieve an optimal formulation for a target structure or property set requires an efficient global optimization tool wrapped around a forward simulation technique such as self-consistent field theory (SCFT). We report on such an inverse design strategy utilizing particle swarm optimization (PSO) as the global optimizer and SCFT as the forward prediction engine. To avoid metastable states in forward prediction, we utilize pseudo-spectral variable cell SCFT initiated from a library of defect free seeds of known block copolymer morphologies. We demonstrate that our approach allows for robust identification of block copolymers and copolymer alloys that self-assemble into a targeted structure, optimizing parameters such as block fractions, blend fractions, and Flory chi parameters.
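A minimal particle swarm optimizer of the kind used as the global wrapper here can be sketched as follows, with a toy quadratic standing in for the SCFT forward prediction (the objective, bounds and hyperparameters are all illustrative).

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    # Minimal PSO: each particle keeps its personal best and is pulled toward
    # both that and the swarm-wide best found so far.
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Hypothetical "distance to target morphology" as a function of, say,
# a block fraction and a Flory chi parameter (both rescaled to [0, 1]).
mismatch = lambda x: (x[0] - 0.25) ** 2 + (x[1] - 0.6) ** 2
```

In the inverse-design setting, `f` would be replaced by a full SCFT evaluation of how far the predicted structure is from the target.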
Lee, Chris P; Chertow, Glenn M; Zenios, Stefanos A
2006-01-01
Patients with end-stage renal disease (ESRD) require dialysis to maintain survival. The optimal timing of dialysis initiation in terms of cost-effectiveness has not been established. We developed a simulation model of individuals progressing towards ESRD and requiring dialysis. It can be used to analyze dialysis strategies and scenarios, and it was embedded in an optimization framework to derive improved strategies. Actual (historical) and simulated survival curves and hospitalization rates were virtually indistinguishable. The model overestimated transplantation costs (by 10%), but this was related to confounding by Medicare coverage. To assess the model's robustness, we examined several dialysis strategies while input parameters were perturbed. Under all 38 scenarios, the relative rankings remained unchanged. An improved policy for a hypothetical patient was derived using an optimization algorithm. The model produces reliable results and is robust. It enables the cost-effectiveness analysis of dialysis strategies.
Intelligent Space Tube Optimization for speeding ground water remedial design.
Kalwij, Ineke M; Peralta, Richard C
2008-01-01
An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space tube data, and optimizes using an advanced genetic algorithm (AGA) with NSS. Stage 1 speeds evaluating assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include space tube radius and number of strategies used to train NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the NSS-AGA stage 1 optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable for many heuristic optimization settings in which the numerical simulator is computationally intensive, and one would like to reduce that burden.
NASA Astrophysics Data System (ADS)
He, Zhengbing; Chen, Bokui; Jia, Ning; Guan, Wei; Lin, Benchuan; Wang, Binghong
2014-12-01
To alleviate traffic congestion, a variety of route guidance strategies have been proposed for intelligent transportation systems. A number of strategies have been introduced and investigated on a symmetric two-route traffic network over the past decade. To evaluate the strategies in a more general scenario, this paper examines eight prevalent strategies on an asymmetric two-route traffic network with different slowdown behaviors on the alternative routes. The results show that only the mean velocity feedback strategy (MVFS) is able to equalize travel time, i.e. to approximate user optimality (UO), while the others fail because they cannot establish a relation between the feedback parameters and travel time. The paper helps better understand these strategies, and suggests MVFS if the authority intends to achieve user optimality.
NASA Astrophysics Data System (ADS)
Wibowo, Y. T.; Baskoro, S. Y.; Manurung, V. A. T.
2018-02-01
Plastic-based products have spread all over the world and into many aspects of life; their ability to substitute for other materials keeps growing, and the use of plastics has become unavoidable. Plastic-based mass production requires an injection process and hence a mold. The milling of plastic mold steel was done using HSS end mill cutting tools, which are widely used in small and medium enterprises because they can be re-sharpened and are relatively inexpensive. Studies on tool geometry show that it has an important effect on quality improvement. Cutting speed, feed rate, depth of cut and tool radius are the input parameters, in addition to the tool path strategy. This paper aims to investigate these input parameters and the cutting tool behavior under several different tool path strategies. For the sake of experimental efficiency, the Taguchi method and ANOVA were used. The responses studied are surface roughness and cutting behavior. By achieving the expected quality directly, no additional finishing process is required; the optimal combination of machining parameters delivers the expected roughness with reduced cutting time. In practice, however, SMEs do not yet use such data optimally for cost reduction.
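The Taguchi analysis used above can be sketched with invented data: for each run of a small orthogonal array, compute the "smaller is better" signal-to-noise ratio of the measured roughness, then pick, for each factor, the level with the highest mean S/N (the array and Ra values below are hypothetical, not the study's measurements).

```python
import math

# Hypothetical L4 array: three 2-level factors (cutting speed, feed rate,
# depth of cut) and two repeated roughness readings Ra per run.
runs = [  # (speed_level, feed_level, depth_level, Ra readings)
    (1, 1, 1, [0.82, 0.85]),
    (1, 2, 2, [1.10, 1.15]),
    (2, 1, 2, [0.70, 0.72]),
    (2, 2, 1, [0.95, 0.99]),
]

def sn_smaller_is_better(ys):
    # Taguchi S/N ratio for a "smaller is better" response (dB).
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

def best_levels(runs):
    # For each factor, average the S/N over the runs at each level and
    # return the level with the highest mean S/N.
    best = []
    for factor in range(3):
        per_level = {}
        for run in runs:
            per_level.setdefault(run[factor], []).append(
                sn_smaller_is_better(run[3]))
        best.append(max(per_level,
                        key=lambda lv: sum(per_level[lv]) / len(per_level[lv])))
    return best
```

With these numbers the recommended setting is speed level 2, feed level 1, depth level 2; ANOVA would then apportion how much each factor contributes to the S/N variation.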
Daud, Muhamad Zalani; Mohamed, Azah; Hannan, M. A.
2014-01-01
This paper presents an evaluation of an optimal DC bus voltage regulation strategy for grid-connected photovoltaic (PV) system with battery energy storage (BES). The BES is connected to the PV system DC bus using a DC/DC buck-boost converter. The converter facilitates the BES power charge/discharge to compensate for the DC bus voltage deviation during severe disturbance conditions. In this way, the regulation of DC bus voltage of the PV/BES system can be enhanced as compared to the conventional regulation that is solely based on the voltage-sourced converter (VSC). For the grid side VSC (G-VSC), two control methods, namely, the voltage-mode and current-mode controls, are applied. For control parameter optimization, the simplex optimization technique is applied for the G-VSC voltage- and current-mode controls, including the BES DC/DC buck-boost converter controllers. A new set of optimized parameters are obtained for each of the power converters for comparison purposes. The PSCAD/EMTDC-based simulation case studies are presented to evaluate the performance of the proposed optimized control scheme in comparison to the conventional methods. PMID:24883374
Numerical solution of a conspicuous consumption model with constant control delay☆
Huschto, Tony; Feichtinger, Gustav; Hartl, Richard F.; Kort, Peter M.; Sager, Sebastian; Seidl, Andrea
2011-01-01
We derive optimal pricing strategies for conspicuous consumption products in periods of recession. To that end, we formulate and investigate a two-stage economic optimal control problem that takes uncertainty of the recession period length and delay effects of the pricing strategy into account. This non-standard optimal control problem is difficult to solve analytically, and solutions depend on the variable model parameters. Therefore, we use a numerical result-driven approach. We propose a structure-exploiting direct method for optimal control to solve this challenging optimization problem. In particular, we discretize the uncertainties in the model formulation by using scenario trees and target the control delays by introduction of slack control functions. Numerical results illustrate the validity of our approach and show the impact of uncertainties and delay effects on optimal economic strategies. During the recession, delayed optimal prices are higher than the non-delayed ones. In the normal economic period, however, this effect is reversed and optimal prices with a delayed impact are smaller compared to the non-delayed case. PMID:22267871
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Solikhin
2016-06-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem in which the demand and purchasing cost parameters are random. For each time period, using the proposed model, we select the optimal supplier and calculate the optimal product volume to purchase from that supplier so that the inventory level is located as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. The results show that, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well.
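A toy version of such a stochastic dynamic program can be written down directly: two hypothetical suppliers with different prices and capacities, uniformly random demand, and a linear penalty on deviation from the inventory reference (every constant below is invented for illustration, not taken from the paper).

```python
from functools import lru_cache

PRICES = {"A": 3.0, "B": 4.0}   # unit purchase cost per supplier (hypothetical)
CAPACITY = {"A": 2, "B": 4}     # max units per order
DEMANDS = (0, 1, 2)             # equally likely demand realizations
REF = 2                         # inventory reference level
HOLDING = 5.0                   # penalty per unit deviation from REF

@lru_cache(maxsize=None)
def expected_cost(inv, periods_left):
    # Minimum expected cost over the remaining horizon, plus the first-period
    # action (supplier, order volume) achieving it.
    if periods_left == 0:
        return (0.0, None)
    best = (float("inf"), None)
    for sup, price in PRICES.items():
        for q in range(CAPACITY[sup] + 1):
            cost = price * q
            for d in DEMANDS:  # expectation over random demand
                nxt = max(0, inv + q - d)
                step = HOLDING * abs(nxt - REF)
                cost += (step + expected_cost(nxt, periods_left - 1)[0]) / len(DEMANDS)
            if cost < best[0]:
                best = (cost, (sup, q))
    return best
```

Starting empty with three periods to go, the recursion picks the cheaper supplier A and orders up to the reference level; at the reference with one period left, ordering nothing is optimal.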
Uncertainty in BMP evaluation and optimization for watershed management
NASA Astrophysics Data System (ADS)
Chaubey, I.; Cibin, R.; Sudheer, K.; Her, Y.
2012-12-01
Use of computer simulation models has increased substantially to make watershed management decisions and to develop strategies for water quality improvements. These models are often used to evaluate the potential benefits of various best management practices (BMPs) for reducing losses of pollutants from source areas into receiving waterbodies. Similarly, use of simulation models in optimizing the selection and placement of best management practices under single (maximization of crop production or minimization of pollutant transport) and multiple objective functions has increased recently. One of the limitations of the currently available assessment and optimization approaches is that the BMP strategies are considered deterministic. Uncertainties in input data (e.g. measured precipitation, streamflow, sediment, nutrient and pesticide losses, land use) and model parameters may result in considerable uncertainty in watershed response under various BMP options. We have developed and evaluated options to include uncertainty in BMP evaluation and optimization for watershed management. We have also applied these methods to evaluate uncertainty in ecosystem services from mixed land use watersheds. In this presentation, we will discuss methods to quantify uncertainties in BMP assessment and optimization solutions due to uncertainties in model inputs and parameters. We used a watershed model (Soil and Water Assessment Tool, or SWAT) to simulate the hydrology and water quality in a mixed land use watershed located in the Midwest USA. The SWAT model was also used to represent various BMPs in the watershed needed to improve water quality. SWAT model parameters, land use change parameters, and climate change parameters were considered uncertain. It was observed that model parameters, land use and climate changes resulted in considerable uncertainties in BMP performance in reducing P, N, and sediment loads.
In addition, climate change scenarios also affected uncertainties in SWAT simulated crop yields. Considerable uncertainties in the net cost and the water quality improvements resulted due to uncertainties in land use, climate change, and model parameter values.
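One common way to quantify such uncertainty is Monte Carlo propagation: sample the uncertain inputs and parameters, run the model for each draw, and summarize the output distribution by percentiles. The sketch below uses a deliberately trivial surrogate in place of SWAT, and every coefficient and distribution is hypothetical.

```python
import random

def simulate_p_load(rainfall_mm, removal_eff):
    # Toy surrogate for a watershed model: annual P load (kg) after a BMP,
    # linear in rainfall and reduced by the BMP removal efficiency.
    return 0.05 * rainfall_mm * (1.0 - removal_eff)

def mc_uncertainty(n=10000, seed=42):
    # Propagate input and parameter uncertainty by repeated sampling;
    # return the 5th percentile, median, and 95th percentile of the load.
    rng = random.Random(seed)
    loads = []
    for _ in range(n):
        rainfall = rng.gauss(900.0, 120.0)                   # uncertain climate input
        eff = min(0.9, max(0.1, rng.gauss(0.5, 0.15)))       # uncertain BMP parameter
        loads.append(simulate_p_load(rainfall, eff))
    loads.sort()
    return loads[int(0.05 * n)], loads[n // 2], loads[int(0.95 * n)]
```

The 5th-95th percentile band is the kind of interval that replaces a single deterministic BMP performance estimate in an uncertainty-aware assessment.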
An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbourhood searching strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data.
We hope this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data. PMID:23593445
NASA Astrophysics Data System (ADS)
Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan
2016-07-01
This paper proposes an epilepsy detection and closed-loop control strategy based on Particle Swarm Optimization (PSO) algorithm. The proposed strategy can effectively suppress the epileptic spikes in neural mass models, where the epileptiform spikes are recognized as the biomarkers of transitions from the normal (interictal) activity to the seizure (ictal) activity. In addition, the PSO algorithm shows capabilities of accurate estimation for the time evolution of key model parameters and practical detection for all the epileptic spikes. The estimation effects of unmeasurable parameters are improved significantly compared with unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be inhibited immediately by adopting the proportion-integration controller. Besides, numerical simulations are carried out to illustrate the effectiveness of the proposed method as well as the potential value for the model-based early seizure detection and closed-loop control treatment design.
Zhang, Ridong; Tao, Jili; Lu, Renquan; Jin, Qibing
2018-02-01
Modeling of distributed parameter systems is difficult because of their nonlinearity and infinite-dimensional characteristics. Based on principal component analysis (PCA), a hybrid modeling strategy that consists of a decoupled linear autoregressive exogenous (ARX) model and a nonlinear radial basis function (RBF) neural network model is proposed. The spatial-temporal output is first decomposed into a few dominant spatial basis functions and finite-dimensional temporal series by PCA. Then, a decoupled ARX model is designed to model the linear dynamics of the dominant modes of the time series. The nonlinear residual part is subsequently parameterized by RBFs, where a genetic algorithm is utilized to optimize their hidden-layer structure and parameters. Finally, the nonlinear spatial-temporal dynamic system is obtained after the time/space reconstruction. Simulation results of a catalytic rod and a heat conduction equation demonstrate the effectiveness of the proposed strategy compared to several other methods.
Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós
2014-01-01
Localization-based super-resolution microscopy image quality depends on several factors such as dye choice and labeling strategy, microscope quality and user-defined parameters such as frame rate and number as well as the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments. PMID:24688813
Optimization of 15 parameters influencing the long-term survival of bacteria in aquatic systems
NASA Technical Reports Server (NTRS)
Obenhuber, D. C.
1993-01-01
NASA is presently engaged in the design and development of a water reclamation system for the future space station. A major concern in processing water is the control of microbial contamination. As a means of developing an optimal microbial control strategy, studies were undertaken to determine the type and amount of contamination which could be expected in these systems under a variety of changing environmental conditions. A laboratory-based Taguchi optimization experiment was conducted to determine the ideal settings for 15 parameters which influence the survival of six bacterial species in aquatic systems. The experiment demonstrated that the bacterial survival period could be decreased significantly by optimizing environmental conditions.
Investigation of earthquake factor for optimum tuned mass dampers
NASA Astrophysics Data System (ADS)
Nigdeli, Sinan Melih; Bekdaş, Gebrail
2012-09-01
In this study the optimum parameters of tuned mass dampers (TMD) are investigated under earthquake excitations. The optimization was carried out with the Harmony Search (HS) algorithm, a metaheuristic method inspired by musical performance. The HS results are also compared with those of another documented method, and the inferior solutions are eliminated; in this way, the best optimum results are obtained. During the optimization, the optimum TMD parameters were searched for single degree of freedom (SDOF) structure models with different periods. The optimization was done for different earthquakes separately and the results were compared.
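A minimal Harmony Search can be sketched on a toy two-parameter TMD-style objective (a frequency ratio and a damping ratio; the objective, bounds and HS settings below are illustrative, not the study's structural model): new "harmonies" are composed from memory with probability `hmcr`, optionally pitch-adjusted with probability `par`, otherwise drawn at random, and the worst memory entry is replaced when beaten.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=3):
    # Minimal Harmony Search minimizer over box-bounded parameters.
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = rng.choice(memory)[d]          # pick from harmony memory
                if rng.random() < par:             # small pitch adjustment
                    x += rng.uniform(-0.01, 0.01) * (hi - lo)
                x = min(hi, max(lo, x))
            else:
                x = rng.uniform(lo, hi)            # fresh random note
            new.append(x)
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):
            memory[worst] = new                    # replace worst harmony
    return min(memory, key=f)

# Hypothetical "peak response" proxy with its minimum at a frequency ratio
# of 0.95 and a damping ratio of 0.12.
response = lambda x: (x[0] - 0.95) ** 2 + (x[1] - 0.12) ** 2
```

In the study, `f` would instead be the peak structural response of the SDOF model under a recorded earthquake excitation.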
Tepekule, Burcu; Uecker, Hildegard; Derungs, Isabel; Frenoy, Antoine; Bonhoeffer, Sebastian
2017-09-01
Multiple treatment strategies are available for empiric antibiotic therapy in hospitals, but neither clinical studies nor theoretical investigations have yielded a clear picture of when which strategy is optimal and why. Extending earlier work by others and ourselves, we present a mathematical model capturing treatment strategies using two drugs, i.e., the multi-drug therapies referred to as cycling, mixing, and combination therapy, as well as monotherapy with either drug. We randomly sample a large parameter space to determine the conditions determining success or failure of these strategies. We find that combination therapy tends to outperform the other treatment strategies. By using linear discriminant analysis and particle swarm optimization, we find that the most important parameters determining success or failure of combination therapy relative to the other treatment strategies are the de novo rate of emergence of double resistance in patients infected with sensitive bacteria and the fitness costs associated with double resistance. The rate at which double resistance is imported into the hospital via patients admitted from the outside community has little influence, as all treatment strategies are affected equally. The parameter sets for which combination therapy fails tend to fall into areas with low biological plausibility, as they are characterised by very high rates of de novo emergence of resistance to both drugs compared to a single drug, and a cost of double resistance considerably smaller than the sum of the costs of single resistance.
Identification of vehicle suspension parameters by design optimization
NASA Astrophysics Data System (ADS)
Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.
2014-05-01
The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or sometimes unavailable. This article proposes an efficient approach to identify the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation evolution strategy (CMA-ES) is utilized to improve the agreement between simulation and experimental results in the kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. A case study employing a McPherson strut suspension system is modelled in a multi-body dynamic system. Three kinematic and compliance tests are examined, namely, vertical parallel wheel travel, opposite wheel travel and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. Then, a dynamic summation of rank values is used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective optimization problem. The optimized results show a significant improvement in the correlation between the simulated model and the experimental model. Once an accurate representation of the vehicle suspension model is achieved, further analysis, such as ride and handling performance, can be implemented for further optimization.
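The summation-of-rank-values idea used to collapse the 30 objectives into one pseudo-objective can be illustrated with a static sketch (a simplification: the paper's variant is dynamic, and the candidate scores below are invented):

```python
def rank_sum(population_objs):
    """Collapse multiple objectives into one pseudo-objective by ranking
    every candidate on each objective (all minimized) and summing the
    ranks; a lower total marks a better all-round candidate."""
    n = len(population_objs)
    totals = [0] * n
    for j in range(len(population_objs[0])):
        order = sorted(range(n), key=lambda i: population_objs[i][j])
        for rank, i in enumerate(order):
            totals[i] += rank
    return totals

# Three candidate designs scored on two objectives (invented numbers):
objs = [(1.0, 9.0), (2.0, 1.0), (3.0, 3.0)]
totals = rank_sum(objs)
print(totals)  # → [2, 1, 3]
```

The single-objective optimizer then simply minimizes the rank total, so candidates that are merely adequate on every criterion can beat candidates that excel on one criterion but fail on others.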
Flexible modulation of risk attitude during decision-making under quota.
Fujimoto, Atsushi; Takahashi, Hidehiko
2016-10-01
Risk attitude is often regarded as an intrinsic parameter of individual personality. However, ethological studies have reported state-dependent strategy optimization irrespective of individual preference. To synthesize these two contrasting literatures, we developed a novel gambling task that dynamically manipulated the quota severity (the outcome required to clear the task) over the course of choice trials, and conducted a task-fMRI study in human participants. The participants showed their individual risk preference when they had no quota constraint ('individual-preference mode'), while they adopted a state-dependent optimal strategy when they needed to achieve a quota ('strategy-optimization mode'). fMRI analyses illustrated that the interplay among prefrontal areas and salience-network areas reflected the quota severity and the utilization of the optimal strategy, shedding light on the neural substrates of quota-dependent risk attitude. Our results demonstrate the complex nature of risk-sensitive decision-making and may provide a new perspective for the understanding of problematic risky behaviors in humans. Copyright © 2016 Elsevier Inc. All rights reserved.
Enders, Philip; Adler, Werner; Schaub, Friederike; Hermann, Manuel M; Diestelhorst, Michael; Dietlein, Thomas; Cursiefen, Claus; Heindl, Ludwig M
2017-10-24
To compare a simultaneously optimized continuous minimum rim surface parameter between Bruch's membrane opening (BMO) and the internal limiting membrane to the standard sequential minimization used for calculating the BMO minimum rim area in spectral domain optical coherence tomography (SD-OCT). In this case-control, cross-sectional study, 704 eyes of 445 participants underwent SD-OCT of the optic nerve head (ONH), visual field testing, and clinical examination. Globally and clock-hour sector-wise optimized BMO-based minimum rim area was calculated independently. Outcome parameters included BMO-globally optimized minimum rim area (BMO-gMRA) and sector-wise optimized BMO minimum rim area (BMO-MRA). BMO area was 1.89 ± 0.05 mm². Mean global BMO-MRA was 0.97 ± 0.34 mm²; mean global BMO-gMRA was 1.01 ± 0.36 mm². Both parameters correlated with r = 0.995 (P < 0.001); the mean difference was 0.04 mm² (P < 0.001). In all sectors, parameters differed by 3.0-4.2%. In receiver operating characteristics, the calculated area under the curve (AUC) to differentiate glaucoma was 0.873 for BMO-MRA, compared to 0.866 for BMO-gMRA (P = 0.004). Among ONH sectors, the temporal inferior location showed the highest AUC. Optimization strategies to calculate BMO-based minimum rim area led to significantly different results. Imposing an additional adjacency constraint within the calculation of BMO-MRA does not improve diagnostic power. Global and temporal inferior BMO-MRA performed best in differentiating glaucoma patients.
Optimal convergence in naming game with geography-based negotiation on small-world networks
NASA Astrophysics Data System (ADS)
Liu, Run-Ran; Wang, Wen-Xu; Lai, Ying-Cheng; Chen, Guanrong; Wang, Bing-Hong
2011-01-01
We propose a negotiation strategy to address the effect of geography on the dynamics of naming games over small-world networks. Communication and negotiation frequencies between two agents are determined by their geographical distance, via a parameter characterizing the correlation between interaction strength and distance. A key finding is that there exists an optimal parameter value leading to the fastest convergence to global consensus on naming. Numerical computations and a theoretical analysis are provided to substantiate this finding.
NASA Astrophysics Data System (ADS)
Bauer, Sebastian; Suchaneck, Andre; Puente León, Fernando
2014-01-01
Depending on the actual battery temperature, electrical power demands in general have a varying impact on the life span of a battery. Since the electrical energy needed to temper the battery is itself provided by the battery, the question arises of how much energy should optimally be spent on tempering at which temperature. The objective function to be optimized therefore combines two goals: maximizing life expectancy and minimizing the amount of energy used to achieve the first goal. In this paper, Pontryagin's maximum principle is used to derive a causal control strategy from such an objective function. The derivation of the causal strategy includes the determination of the major factors that govern the optimal solution calculated with the maximum principle. The optimization is calculated offline on a desktop computer for all possible vehicle parameters and major factors. For the practical implementation in the vehicle, it is sufficient to have the values of the major factors determined only roughly in advance and the offline calculation results available. This feature sidesteps the drawback of several optimization strategies that require exact knowledge of the future power demand. The resulting strategy's application is not limited to batteries in electric vehicles.
Speed and convergence properties of gradient algorithms for optimization of IMRT.
Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe
2004-05-01
Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate the strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and the steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of the use of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error, and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions which produced IMRT plans that satisfied the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented.
For both dose-volume and EUD based objective functions, Newton's method far outperforms other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but for different objective function parameters, the solution frequently gets trapped in local minima. We found that the initial intensity distribution obtained from IMRT optimization utilizing objective function parameters, which favor a specific anatomic structure, would lead to a local minimum corresponding to that structure. Our results indicate that from among the gradient algorithms tested, Newton's method appears to be the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting with the results of previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
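The speed advantage of the diagonal-Hessian Newton approximation over steepest descent can be seen on a toy ill-conditioned quadratic (purely illustrative, not an IMRT objective): scaling each component by its own curvature removes the conditioning problem that slows a single-step-size gradient method.

```python
import numpy as np

def diagonal_newton_step(grad, hess_diag, x):
    """Newton update using only the Hessian diagonal (the paper's
    approximation): each component is scaled by its own curvature."""
    return x - grad(x) / hess_diag(x)

def steepest_descent_step(grad, x, lr):
    """Plain gradient step with one global learning rate."""
    return x - lr * grad(x)

# Ill-conditioned quadratic f(x) = 0.5*(100*x0^2 + x1^2):
# the two directions differ in curvature by a factor of 100.
grad = lambda x: np.array([100.0 * x[0], x[1]])
hdiag = lambda x: np.array([100.0, 1.0])

x0 = np.array([1.0, 1.0])
x_newton = diagonal_newton_step(grad, hdiag, x0)   # reaches the minimum in one step
x_sd = steepest_descent_step(grad, x0, lr=0.01)    # step sized for the stiff direction
# x_sd barely moves the flat (low-curvature) coordinate: [0.0, 0.99]
```

With the learning rate limited by the stiffest direction, SD needs hundreds of such steps on the flat direction, which is exactly the gap the diagonal Newton method closes.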
Human-in-the-loop Bayesian optimization of wearable device parameters
Malcolm, Philippe; Speeckaert, Jozefien; Siviy, Christoper J.; Walsh, Conor J.; Kuindersma, Scott
2017-01-01
The increasing capabilities of exoskeletons and powered prosthetics for walking assistance have paved the way for more sophisticated and individualized control strategies. In response to this opportunity, recent work on human-in-the-loop optimization has considered the problem of automatically tuning control parameters based on real-time physiological measurements. However, the common use of metabolic cost as a performance metric creates significant experimental challenges due to its long measurement times and low signal-to-noise ratio. We evaluate the use of Bayesian optimization—a family of sample-efficient, noise-tolerant, and global optimization methods—for quickly identifying near-optimal control parameters. To manage experimental complexity and provide comparisons against related work, we consider the task of minimizing metabolic cost by optimizing walking step frequencies in unaided human subjects. Compared to an existing approach based on gradient descent, Bayesian optimization identified a near-optimal step frequency with a faster time to convergence (12 minutes, p < 0.01), smaller inter-subject variability in convergence time (± 2 minutes, p < 0.01), and lower overall energy expenditure (p < 0.01). PMID:28926613
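A minimal sketch of such a Bayesian-optimization loop for step frequency, assuming an invented quadratic metabolic-cost landscape, a fixed-hyperparameter RBF Gaussian process, and expected improvement as the acquisition function (the study's actual model, kernel, and protocol may differ):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def metabolic_cost(freq):
    """Invented noisy stand-in for the measured metabolic cost as a
    function of step frequency (true optimum placed at 1.8 Hz)."""
    return (freq - 1.8) ** 2 + rng.normal(0.0, 0.01)

def gp_posterior(X, y, Xs, ls=0.3, noise=1e-2):
    """GP regression with a fixed-length-scale RBF kernel."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu, np.sqrt(np.maximum(var, 1e-12))

grid = np.linspace(1.0, 2.6, 200)            # candidate frequencies (Hz)
X = np.array([1.2, 2.4])                     # two seed measurements
y = np.array([metabolic_cost(x) for x in X])

for _ in range(10):                          # ten "experiment" iterations
    mu, sd = gp_posterior(X, y, grid)
    z = (y.min() - mu) / sd
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    ei = (y.min() - mu) * cdf + sd * pdf     # expected improvement
    x_next = grid[int(np.argmax(ei))]        # next frequency to test
    X = np.append(X, x_next)
    y = np.append(y, metabolic_cost(x_next))

x_opt = X[int(np.argmin(y))]                 # near-optimal step frequency
```

The key property exploited in the paper is visible even in this sketch: the acquisition function trades off exploring uncertain frequencies against refining near the current best, so few noisy measurements are needed.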
Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R
2016-04-15
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Shafii, Mahyar; Tolson, Bryan; Shawn Matott, L.
2015-04-01
GLUE is one of the most commonly used informal methodologies for uncertainty estimation in hydrological modelling. Despite the ease-of-use of GLUE, it involves a number of subjective decisions such as the strategy for identifying the behavioural solutions. This study evaluates the impact of behavioural solution identification strategies in GLUE on the quality of model output uncertainty. Moreover, two new strategies are developed to objectively identify behavioural solutions. The first strategy considers Pareto-based ranking of parameter sets, while the second one is based on ranking the parameter sets according to an aggregated criterion. The proposed strategies, as well as the traditional strategies in the literature, are evaluated with respect to reliability (coverage of observations by the envelope of model outcomes) and sharpness (width of the envelope of model outcomes) in different numerical experiments. These experiments include multi-criteria calibration and uncertainty estimation of three rainfall-runoff models with different numbers of parameters. To demonstrate the importance of the behavioural solution identification strategy more fully, GLUE is also compared with two other informal multi-criteria calibration and uncertainty estimation methods (Pareto optimization and DDS-AU). The results show that the model output uncertainty varies with the behavioural solution identification strategy, and furthermore, a robust GLUE implementation would require considering multiple behavioural solution identification strategies and choosing the one that generates the desired balance between sharpness and reliability. The proposed objective strategies prove to be the best options in most of the case studies investigated in this research. Implementing such an approach for a high-dimensional calibration problem enables GLUE to generate robust results in comparison with Pareto optimization and DDS-AU.
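The first proposed strategy, Pareto-based ranking of parameter sets, rests on non-domination; a minimal sketch with invented error criteria:

```python
def pareto_front(objs):
    """Indices of non-dominated parameter sets (all criteria minimized):
    one objective way to mark behavioural solutions in a GLUE-style
    analysis."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [i for i, a in enumerate(objs)
            if not any(dominates(b, a) for j, b in enumerate(objs) if j != i)]

# Four parameter sets scored on two error criteria (invented numbers);
# the last set is dominated by the second and is dropped.
errs = [(0.2, 0.9), (0.5, 0.5), (0.9, 0.2), (0.8, 0.8)]
front = pareto_front(errs)
print(front)  # → [0, 1, 2]
```

Selecting behavioural solutions this way replaces the subjective likelihood threshold of traditional GLUE with a criterion that depends only on the mutual ordering of the candidate parameter sets.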
A Method of Trajectory Design for Manned Asteroid Explorations
NASA Astrophysics Data System (ADS)
Gan, Qing-Bo; Zhang, Yang; Zhu, Zheng-Fan; Han, Wei-Hua; Dong, Xin
2015-07-01
A trajectory optimization method for nuclear-electric propulsion manned asteroid explorations is presented. For launches between 2035 and 2065, based on the two-pulse single-cycle Lambert transfer orbit, the phases of departure from and return to the Earth are searched first. Then the optimal flight trajectory is selected by pruning the flight sequences in two feasible regions. Setting the flight strategy to propelling-taxiing-propelling, and taking minimal fuel consumption as the performance index, the nuclear-electric propulsion flight trajectory is optimized using a hybrid method. Finally, taking the segmentally optimized parameters as initial values and respecting the overall mission constraints, the globally optimized parameters are obtained. Numerical and graphical results are also given.
A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa
2017-06-01
High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage and slurry from one place to another. Hence, these pipes must withstand considerable pressure from the fluid they carry. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology relies on a trial-and-error approach and depends heavily upon the accumulated experience of the process engineers for determining the process control parameters, which results in less-than-optimal settings. Hence, there arises a need to determine optimal process control parameters for the pipe extrusion process that can ensure robust pipe quality and process reliability. In the proposed optimization strategy, design of experiments (DoE) is conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and the optimum values of the process control parameters are ultimately obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run is also conducted to verify the analysis; its results proved to be in synchronization with the main experimental findings, and the withstanding pressure showed a significant improvement from 0.60 to 1.004 MPa.
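For a larger-the-better response such as withstanding pressure, the Taguchi S/N ratio takes the standard form S/N = -10·log10(mean(1/y²)); a sketch with hypothetical replicate data (the pressures below are invented, not the paper's measurements):

```python
import math

def sn_larger_is_better(ys):
    """Taguchi larger-the-better signal-to-noise ratio in dB:
    S/N = -10 * log10(mean(1 / y^2)); a higher value is better."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# Hypothetical withstanding-pressure replicates (MPa) at two settings
# of one control factor; the level with the higher S/N is preferred.
level_a = [0.60, 0.62, 0.58]
level_b = [1.00, 0.98, 1.02]
b_is_better = sn_larger_is_better(level_b) > sn_larger_is_better(level_a)
print(b_is_better)  # → True
```

In a full Taguchi analysis this ratio is averaged per factor level across the orthogonal-array runs, and the level with the highest mean S/N is selected for each control parameter.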
Wu, Kang; Ding, Lijian; Zhu, Peng; Li, Shuang; He, Shan
2018-04-22
The aim of this study was to determine the cumulative effect of fermentation parameters and enhance the production of docosahexaenoic acid (DHA) by Thraustochytrium sp. ATCC 26185 using response surface methodology (RSM). Among the eight variables screened for effects on DHA production by Plackett-Burman design (PBD), the initial pH, inoculum volume, and fermentation volume were found to be most significant. The Box-Behnken design was applied to derive a statistical model for optimizing these three fermentation parameters for DHA production. The optimal parameters for maximum DHA production were an initial pH of 6.89, an inoculum volume of 4.16%, and a fermentation volume of 140.47 mL. The maximum yield of DHA was 1.68 g/L, in agreement with predicted values. An increase in DHA production was achieved by optimizing the initial pH, fermentation volume, and inoculum volume. This optimization strategy led to a significant increase in the amount of DHA produced, from 1.16 g/L to 1.68 g/L. Thraustochytrium sp. ATCC 26185 is a promising resource for microbial DHA production due to its high DHA yield and its capacity for large-scale fermentation.
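The RSM step, fitting a second-order model to designed runs and locating its stationary point, can be sketched in one factor (the pH levels and yields below are invented for illustration, not the paper's data; the Box-Behnken analysis does the same in three factors with interaction terms):

```python
import numpy as np

# Designed runs for one factor (initial pH); yields are invented (g/L).
ph = np.array([5.5, 6.0, 6.5, 7.0, 7.5])
yield_dha = np.array([1.20, 1.45, 1.62, 1.66, 1.50])

# Fit the second-order model y = b0 + b1*pH + b2*pH^2 by least squares,
# then take the stationary point of the fitted quadratic.
A = np.column_stack([np.ones_like(ph), ph, ph ** 2])
b0, b1, b2 = np.linalg.lstsq(A, yield_dha, rcond=None)[0]
ph_opt = -b1 / (2.0 * b2)     # analytic optimum of the fitted surface
```

Because b2 comes out negative here, the fitted surface is concave and the stationary point is a maximum, which is the condition RSM checks before reporting an optimal setting.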
Efficient calculation of higher-order optical waveguide dispersion.
Mores, J A; Malheiros-Silveira, G N; Fragnito, H L; Hernández-Figueroa, H E
2010-09-13
An efficient numerical strategy to compute the higher-order dispersion parameters of optical waveguides is presented. For the first time to our knowledge, a systematic study of the errors involved in the numerical calculation of higher-order dispersions is made, showing that the present strategy can accurately model those parameters. The strategy combines a full-vectorial finite element modal solver and a proper finite difference differentiation algorithm. Its performance has been carefully assessed through the analysis of several key geometries. In addition, the optimization of those higher-order dispersion parameters can also be carried out by coupling a genetic algorithm to the present scheme, as shown here through the design of a photonic crystal fiber suitable for parametric amplification applications.
NASA Astrophysics Data System (ADS)
Yuan, Yongliang; Song, Xueguan; Sun, Wei; Wang, Xiaobang
2018-05-01
The dynamic performance of a belt drive system depends on many factors, such as the efficiency, the vibration, and the optimal parameters. Conventional design considers only the basic performance of the belt drive system while ignoring its overall performance. Studying vibration characteristics together with optimization strategies is a feasible way to address these challenges. This paper proposes a new optimization strategy and takes a belt drive design optimization as a case study based on multidisciplinary design optimization (MDO). The MDO of the belt drive system is established and the corresponding sub-systems are analyzed. The multidisciplinary optimization is performed using an improved genetic algorithm. Based on the optimal results obtained from the MDO, a three-dimensional (3D) model of the belt drive system is established for dynamics simulation by virtual prototyping. Comparison of the results for different velocities and loads shows that the MDO method can effectively reduce the transverse vibration amplitude. The laws governing the vibration displacement and frequency, and the influence of velocity on transverse vibration, are obtained. Results show that the MDO method is of great help in obtaining the optimal structural parameters. Furthermore, the kinematics principle of the belt drive is obtained. The belt drive design case indicates that the proposed method can also be used to solve other engineering optimization problems efficiently.
Piehowski, Paul D; Petyuk, Vladislav A; Sandoval, John D; Burnum, Kristin E; Kiebel, Gary R; Monroe, Matthew E; Anderson, Gordon A; Camp, David G; Smith, Richard D
2013-03-01
For bottom-up proteomics, there is a wide variety of database-searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, numerous strategies are employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid-search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection--referred to as STEPS--utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true-positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
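A STEPS-style grid search over filtering criteria might look like the following sketch; the PSM tuple layout, thresholds, and decoy-based FDR estimate are simplifying assumptions for illustration, not the published implementation:

```python
from itertools import product

def steps_filter_search(psms, score_cuts, ppm_cuts, fdr_limit=0.01):
    """Grid-search filtering thresholds, STEPS-style: try every cutoff
    combination and keep the one that passes the most target IDs while
    the decoy-estimated FDR stays under the limit. Each PSM is a
    (score, mass_error_ppm, is_decoy) tuple, a simplified stand-in
    for real search-engine output."""
    best = (0, None)
    for score_cut, ppm_cut in product(score_cuts, ppm_cuts):
        kept = [p for p in psms if p[0] >= score_cut and abs(p[1]) <= ppm_cut]
        decoys = sum(1 for p in kept if p[2])
        targets = len(kept) - decoys
        fdr = decoys / targets if targets else 1.0
        if fdr <= fdr_limit and targets > best[0]:
            best = (targets, (score_cut, ppm_cut))
    return best

# Synthetic PSMs: confident hits, marginal hits, and a few decoys.
psms = ([(3.0, 2.0, False)] * 200    # high score, small mass error
        + [(1.5, 8.0, False)] * 150  # marginal targets
        + [(1.6, 5.0, True)] * 5)    # decoys sneaking past loose cuts
best = steps_filter_search(psms, score_cuts=[1.0, 2.0], ppm_cuts=[5.0, 10.0])
print(best)  # → (200, (2.0, 5.0))
```

In this synthetic example the loose cutoffs admit more targets but fail the FDR limit, so the grid search settles on the stricter combination, which is the trade-off the approach automates.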
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Brumboiu, Iulia Emilia; Prokopiou, Georgia; Kronik, Leeor; Brena, Barbara
2017-07-28
We analyse the valence electronic structure of cobalt phthalocyanine (CoPc) by means of optimally tuning a range-separated hybrid functional. The tuning is performed by modifying both the amount of short-range exact exchange (α) included in the hybrid functional and the range-separation parameter (γ), with two strategies employed for finding the optimal γ for each α. The influence of these two parameters on the structural, electronic, and magnetic properties of CoPc is thoroughly investigated. The electronic structure is found to be very sensitive to the amount and range in which the exact exchange is included. The electronic structure obtained using the optimal parameters is compared to gas-phase photo-electron data and GW calculations, with the unoccupied states additionally compared with inverse photo-electron spectroscopy measurements. The calculated spectrum with tuned γ, determined for the optimal value of α = 0.1, yields a very good agreement with both experimental results and with GW calculations that well-reproduce the experimental data.
NASA Astrophysics Data System (ADS)
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating that no single optimal parameterization exists. Hence, many experts prefer a manual approach to calibration, where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address these challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit, metric-based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1.
Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of flow parameters of a SWAT model, (Soil and Water Assessment Tool) designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
General strategy for the protection of organs at risk in IMRT therapy of a moving body
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abolfath, Ramin M.; Papiez, Lech
2009-07-15
We investigated protection strategies of organs at risk (OARs) in intensity modulated radiation therapy (IMRT). These strategies apply to delivery of IMRT to moving body anatomies that show relative displacement of OAR in close proximity to a tumor target. We formulated an efficient genetic algorithm which makes it possible to search for global minima in a complex landscape of multiple irradiation strategies delivering a given, predetermined intensity map to a target. The optimal strategy was investigated with respect to minimizing the dose delivered to the OAR. The optimization procedure developed relies on variability of all parameters available for control of radiation delivery in modern linear accelerators, including adaptation of leaf trajectories and simultaneous modification of beam dose rate during irradiation. We showed that the optimization algorithms lead to a significant reduction in the dose delivered to OAR in cases where organs at risk move relative to a treatment target.
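A bare-bones genetic algorithm of the kind described (here with tournament selection, uniform crossover, Gaussian mutation, and elitism; the "dose" objective below is a toy surrogate, not a dose calculation over leaf trajectories):

```python
import random

def genetic_minimize(fitness, dim, pop=30, gens=60, mut=0.2, seed=1):
    """Minimal GA: tournament selection, uniform crossover,
    Gaussian mutation, and elitism. Genes live in [0, 1] and stand in
    for abstract, normalized delivery parameters."""
    rng = random.Random(seed)
    P = [[rng.random() for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        nxt = [min(P, key=fitness)]                  # elitism: keep the best
        while len(nxt) < pop:
            a = min(rng.sample(P, 3), key=fitness)   # tournament of three
            b = min(rng.sample(P, 3), key=fitness)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < mut:                   # mutate one gene
                g = rng.randrange(dim)
                child[g] = min(1.0, max(0.0, child[g] + rng.gauss(0, 0.1)))
            nxt.append(child)
        P = nxt
    return min(P, key=fitness)

# Toy surrogate for "dose to the OAR": minimized when every delivery
# parameter equals 0.3 (purely illustrative).
best = genetic_minimize(lambda x: sum((xi - 0.3) ** 2 for xi in x), dim=4)
```

In the paper's setting, the chromosome would encode leaf trajectories and dose-rate modulation subject to the fixed intensity map, and the fitness would be the computed OAR dose; the search mechanics are the same.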
dos Santos, Bruno César Diniz Brito; Flumignan, Danilo Luiz; de Oliveira, José Eduardo
2012-10-01
A three-step development, optimization and validation strategy is described for gas chromatography (GC) fingerprints of Brazilian commercial diesel fuel. A suitable GC-flame ionization detection (FID) system was selected to assay a complex matrix such as diesel. The next step was to achieve acceptable chromatographic resolution with reduced analysis time, which is recommended for routine applications. Full three-level factorial designs were performed to optimize the flow rate, oven ramps, injection volume and split ratio in the GC system. Finally, several validation parameters were evaluated. The GC fingerprinting can be coupled with pattern recognition and multivariate regression analyses to determine fuel quality and fuel physicochemical parameters. This strategy can also be applied to develop fingerprints for quality control of other fuel types.
NASA Astrophysics Data System (ADS)
Zhang, Xin; Liu, Jinguo
2018-07-01
Although many motion planning strategies for missions involving space robots capturing floating targets can be found in the literature, relatively little attention has been paid to how to select the berth position where the spacecraft base hovers. In fact, the berth position is a flexible and controllable factor, and selecting a suitable berth position has a great impact on improving the efficiency of motion planning in the capture mission. Therefore, to make full use of the manoeuvrability of the space robot, this paper proposes a new viewpoint that utilizes the base berth position as an optimizable parameter to formulate a more comprehensive and effective motion planning strategy. Considering the dynamic coupling, the dynamic singularities, and the physical limitations of space robots, a unified motion planning framework based on forward kinematics and parameter optimization is developed to convert the planning problem into a parameter optimization problem. To remove the strict grasping position constraints of the capture mission, a new conception of a grasping area is proposed, which greatly simplifies the motion planning. Furthermore, by utilizing the penalty function method, a new concise objective function is constructed. The intelligent algorithm Particle Swarm Optimization (PSO) is used as the solver to determine the free parameters. Two capture cases, i.e., capturing a two-dimensional (2D) planar target and capturing a three-dimensional (3D) spatial target, are studied under this framework. The corresponding simulation results demonstrate that the proposed method is more efficient and effective for planning capture missions.
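A textbook PSO solver of the sort used here to determine the free planning parameters (e.g. the berth position); the penalized objective below is a toy stand-in for the capture-mission objective function:

```python
import random

def pso_minimize(f, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Standard PSO with inertia w, cognitive weight c1, social weight c2;
    positions are clamped to the search box."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]                       # per-particle best
    gbest = min(pbest, key=f)[:]                    # swarm best
    for _ in range(iters):
        for i in range(n):
            for d, (lo, hi) in enumerate(bounds):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
                if f(X[i]) < f(gbest):
                    gbest = X[i][:]
    return gbest

# Toy penalized objective standing in for the capture-mission cost:
best = pso_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                    bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

In the paper's framework, the particle would encode the berth position together with the other free parameters, and f would include the penalty terms derived from the dynamic and physical constraints.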
Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying
2018-01-01
Ecological process models are powerful tools for studying terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are chosen for them has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameters has received less attention. In this paper, the BIOME-BGC model was used as an example. In evergreen broad-leaved forest, deciduous broad-leaved forest and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. The objective function was constructed by using the simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index, and a combined temporal-spatial heterogeneity judgment index to quantitatively analyze the heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of the BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented spatiotemporal heterogeneity to different degrees, varying with vegetation type. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity.
In addition, the temporal heterogeneity of the optimal values of the model's sensitive parameters showed a significant linear correlation with the spatial heterogeneity under the three vegetation types. According to the temporal and spatial heterogeneity of the optimal values, the parameters of the BIOME-BGC model could be classified in order to adopt different parameter strategies in practical applications. These conclusions can help to deepen understanding of the parameters and optimal values of ecological process models, and provide a reference for obtaining reasonable parameter values in model applications.
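The simulated annealing step used to obtain the optimal parameter values can be sketched as follows. This is a generic annealing loop with a stand-in least-squares objective, not the actual BIOME-BGC objective function built from flux data:

```python
import math
import random

def simulated_annealing(objective, x0, bounds, n_iter=2000, t0=1.0, seed=0):
    """Minimize `objective` over box-constrained parameters by annealing."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    for k in range(n_iter):
        t = t0 * (1.0 - k / n_iter) + 1e-9          # linear cooling schedule
        cand = [min(hi, max(lo, xi + rng.gauss(0, 0.1 * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]  # perturb within bounds
        fc = objective(cand)
        # Metropolis rule: always accept improvements, sometimes accept worse.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

# Stand-in objective: squared misfit to a hypothetical "observed flux" optimum.
target = [0.3, 0.7]
obj = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
params, misfit = simulated_annealing(obj, [0.9, 0.1], [(0, 1), (0, 1)])
```

In the study's setting, `objective` would measure the mismatch between model output and the monthly flux observations at each site.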
Flexible operation strategy for environment control system in abnormal supply power condition
NASA Astrophysics Data System (ADS)
Liping, Pang; Guoxiang, Li; Hongquan, Qu; Yufeng, Fang
2017-04-01
This paper establishes an optimization method that can be applied to the flexible operation of the environment control system under an abnormal supply power condition. A proposed conception of lifespan is used to evaluate the depletion time of the non-regenerative substances. The optimization objective is to maximize these lifespans, and the optimization variables are the allocated powers of the subsystems. The improved Non-dominated Sorting Genetic Algorithm is adopted to obtain the Pareto optimization frontier under the constraints of the cabin environmental parameters and the adjustable operating parameters of the subsystems. Assigning the objective functions equal importance, the preferred power allocation of the subsystems can be optimized, and the corresponding running parameters of the subsystems can then be determined to ensure the maximum lifespans. A long-duration space station with three astronauts is used to show the implementation of the proposed optimization method. Three different CO2 partial pressure levels are taken into consideration in this study. The optimization results show that the proposed method can obtain the preferred power allocation for the subsystems when the supply power is at a less-than-nominal value. The method can be applied to the autonomous emergency response control of the environment control system.
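The core of a Pareto-based optimizer such as the Non-dominated Sorting Genetic Algorithm is non-dominated filtering of candidate solutions. A minimal sketch follows (the lifespan pairs are invented for illustration; the real algorithm also performs crowding-distance sorting and genetic variation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (lifespan_A, lifespan_B) pairs for candidate power allocations
# of two subsystems, in arbitrary time units.
candidates = [(40, 10), (30, 30), (10, 45), (25, 25), (35, 8)]
front = pareto_front(candidates)  # → [(40, 10), (30, 30), (10, 45)]
```

Here (25, 25) is dominated by (30, 30) and (35, 8) by (40, 10), so neither survives into the front; the remaining trade-off points are what a decision maker then chooses among.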
Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas
2009-01-01
Background: To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximative rate laws in step two as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process with its numerous choices and the mutual influence between them makes it hard to single out the best modeling approach for a given problem. Results: We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion: A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach followed by a reversible generalized mass action kinetics model.
A Langevin model is advisable to take stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: For first attempts the settings-free Tribes algorithm yields valuable results. Particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170
NASA Astrophysics Data System (ADS)
Vongsaysy, Uyxing; Bassani, Dario M.; Servant, Laurent; Pavageau, Bertrand; Wantz, Guillaume; Aziz, Hany
2014-01-01
Polymeric bulk heterojunction (BHJ) organic solar cells represent one of the most promising technologies for renewable energy with a low fabrication cost. Control over BHJ morphology is one of the key factors in obtaining high-efficiency devices. This review focuses on formulation strategies for optimizing the BHJ morphology. We address how solvent choice and the introduction of processing additives affect the morphology. We also review a number of recent studies concerning prediction methods that utilize the Hansen solubility parameters to develop efficient solvent systems.
Zhou, Hui Jun; Dan, Yock Young; Naidoo, Nasheen; Li, Shu Chuen; Yeoh, Khay Guan
2013-01-01
Gastric cancer (GC) surveillance based on oesophagogastroduodenoscopy (OGD) appears to be a promising strategy for GC prevention. By evaluating the cost-effectiveness of endoscopic surveillance in Singaporean Chinese, this study aimed to inform the implementation of such a program in a population with a low to intermediate GC risk. Using a reference strategy of no OGD intervention, we evaluated four strategies: 2-yearly OGD surveillance, annual OGD surveillance, 2-yearly OGD screening, and 2-yearly screening plus annual surveillance in Singaporean Chinese aged 50-69 years. From the perspective of the healthcare system, Markov models were built to simulate the life experience of the target population. The models projected discounted lifetime costs ($), quality-adjusted life years (QALY), and the incremental cost-effectiveness ratio (ICER) indicating the cost-effectiveness of each strategy against a Singapore willingness-to-pay of $46,200/QALY. Deterministic and probabilistic sensitivity analyses were used to identify the influential variables and their associated thresholds, and to quantify the influence of parameter uncertainties, respectively. With an ICER of $44,098/QALY, annual OGD surveillance was the optimal strategy, while 2-yearly surveillance was the most cost-effective strategy (ICER = $25,949/QALY). The screening-based strategies were either extendedly dominated or cost-ineffective. Cost-effectiveness heterogeneity across the four strategies was observed across age-gender subgroups. Eight influential parameters were identified, each with specific thresholds defining the choice of optimal strategy. Accounting for the model uncertainties, the probability that annual surveillance is the optimal strategy in Singapore was 44.5%. Endoscopic surveillance is potentially cost-effective in the prevention of GC for populations at low to intermediate risk.
Regarding program implementation, a detailed analysis of influential factors and their associated thresholds is necessary. Multiple strategies should be considered in order to recommend the right strategy for the right population.
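The ICER comparison underlying such an analysis is a simple ratio of incremental cost to incremental QALYs. Below is a sketch with hypothetical cost and QALY values; only the $46,200/QALY willingness-to-pay threshold comes from the abstract:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

WTP = 46_200  # Singapore willingness-to-pay threshold ($/QALY), from the abstract

# Hypothetical discounted lifetime costs and QALYs vs. a no-intervention reference.
ratio = icer(cost_new=14_000, qaly_new=13.3, cost_ref=5_000, qaly_ref=13.1)
cost_effective = ratio <= WTP  # $9,000 / 0.2 QALY = $45,000/QALY → cost-effective
```

A strategy is "extendedly dominated" when, as in the abstract's screening arms, a blend of two other strategies delivers more QALYs at a lower ICER.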
Vector boson fusion in the inert doublet model
NASA Astrophysics Data System (ADS)
Dutta, Bhaskar; Palacio, Guillermo; Restrepo, Diego; Ruiz-Álvarez, José D.
2018-03-01
In this paper we probe the inert Higgs doublet model at the LHC using vector boson fusion (VBF) search strategy. We optimize the selection cuts and investigate the parameter space of the model and we show that the VBF search has a better reach when compared with the monojet searches. We also investigate the Drell-Yan type cuts and show that they can be important for smaller charged Higgs masses. We determine the 3 σ reach for the parameter space using these optimized cuts for a luminosity of 3000 fb-1 .
Well-temperate phage: Optimal bet-hedging against local environmental collapses
Maslov, Sergei; Sneppen, Kim
2015-06-02
Upon infection of their bacterial hosts, temperate phages must choose between lysogenic and lytic developmental strategies. Here we apply the game-theoretic bet-hedging strategy introduced by Kelly to derive the optimal lysogenic fraction of the total population of phages as a function of the frequency and intensity of environmental downturns affecting the lytic subpopulation. The “well-temperate” phage of our title is characterized by the best long-term population growth rate. We show that it is realized when the lysogenization frequency is approximately equal to the probability of lytic population collapse. We further predict the existence of sharp boundaries in the system’s environmental, ecological, and biophysical parameters separating the regions where this temperate strategy is optimal from those dominated by purely virulent or dormant (purely lysogenic) strategies. We show that the virulent strategy works best for phages with a large diversity of hosts and access to multiple independent environments reachable by diffusion. Conversely, progressively more temperate or even dormant strategies are favored in environments that are subject to frequent and severe temporal downturns.
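The Kelly-style calculation can be sketched numerically: maximize the expected log growth rate of the phage population over the lysogenic fraction. The growth factors and collapse probability below are illustrative assumptions, not values from the paper:

```python
import math

def long_term_growth(f, p_collapse, g_lys=1.2, g_lyt=4.0):
    """Expected log growth rate per cycle for lysogenic fraction f.

    g_lys: slow growth factor of lysogens (survive downturns).
    g_lyt: fast growth factor of the lytic subpopulation.
    With probability p_collapse the lytic subpopulation is wiped out.
    """
    survive = f * g_lys + (1 - f) * g_lyt   # no collapse this cycle
    crash = f * g_lys                        # only lysogens remain
    return ((1 - p_collapse) * math.log(survive)
            + p_collapse * math.log(crash))

def optimal_lysogenic_fraction(p_collapse, grid=10_000):
    """Grid search for the Kelly-optimal hedge; avoids f = 0 (log of zero)."""
    fs = [(i + 1) / (grid + 2) for i in range(grid)]
    return max(fs, key=lambda f: long_term_growth(f, p_collapse))

f_star = optimal_lysogenic_fraction(p_collapse=0.1)
```

With these illustrative growth factors the optimum lands near the collapse probability (analytically f* = 0.1 × 4.0 / 2.8 ≈ 0.14), consistent with the abstract's claim that the lysogenization frequency should roughly match the probability of lytic collapse.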
A chaos wolf optimization algorithm with self-adaptive variable step-size
NASA Astrophysics Data System (ADS)
Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun
2017-10-01
To address the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size is proposed. The algorithm is based on the swarm intelligence of the wolf pack, fully simulating the predation behavior and prey distribution of wolves. It incorporates three intelligent behaviors: migration, summoning, and siege. The competition rule of "winner-take-all" and the update mechanism of "survival of the fittest" are further characteristics of the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical, complex nonlinear functions, and the results were compared with those of many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The results indicate that the CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate, and demonstrates high robustness and global searching ability.
NASA Astrophysics Data System (ADS)
Zhang, Min; Yang, Feng; Zhang, Dongqing; Tang, Pengcheng
2018-02-01
Connecting a large number of electric vehicles to the family microgrid will affect the operational safety of the power grid and the quality of power. Considering the family microgrid electricity price and the role of the electric vehicle as a distributed energy storage device, a two-stage optimization model is established, and an improved discrete binary particle swarm optimization algorithm is used to optimize the parameters in the model. The proposed control strategy for electric vehicle charging and discharging is of practical significance for the rational control of electric vehicles as distributed energy storage devices and for their participation in peak load regulation of power consumption.
Efficient receiver tuning using differential evolution strategies
NASA Astrophysics Data System (ADS)
Wheeler, Caleb H.; Toland, Trevor G.
2016-08-01
Differential evolution (DE) is a powerful and computationally inexpensive optimization strategy that can be used to search an entire parameter space or to converge quickly on a solution. The Kilopixel Array Pathfinder Project (KAPPa) is a heterodyne receiver system delivering 5 GHz of instantaneous bandwidth in the tuning range of 645-695 GHz. The fully automated KAPPa receiver test system finds optimal receiver tuning using performance feedback and DE. We present an adaptation of DE for use in rapid receiver characterization. The KAPPa DE algorithm is written in Python 2.7 and is fully integrated with the KAPPa instrument control, data processing, and visualization code. KAPPa develops the technologies needed to realize heterodyne focal plane arrays containing 1000 pixels. Finding optimal receiver tuning by investigating large parameter spaces is one of many challenges facing the characterization phase of KAPPa, and is a difficult task to perform by hand. Characterizing or tuning in an automated fashion, without need for human intervention, is desirable for future large-scale arrays. While many optimization strategies exist, DE is ideal under time and performance constraints because it can be set to converge to a solution rapidly with minimal computational overhead. We discuss how DE is utilized in the KAPPa system, evaluate its performance, and consider how the KAPPa DE system might be applied to future 1000-pixel array receivers.
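A minimal DE loop of the kind described can be sketched as below. This is the standard textbook DE/rand/1/bin variant with an invented toy objective, not the actual KAPPa tuning code:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """DE/rand/1/bin minimizer over box constraints."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct randomly chosen members.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)   # guarantee at least one mutated gene
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(hi, max(lo, t)) for t, (lo, hi) in zip(trial, bounds)]
            fc = f(trial)
            if fc <= cost[i]:            # greedy one-to-one replacement
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Toy stand-in for a receiver "performance" landscape: minimize a 2-D sphere.
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               [(-5, 5), (-5, 5)])
```

In a tuning application, `f` would wrap an instrument measurement (e.g., noise temperature as a function of bias and LO settings), which is why DE's small population and lack of gradient requirements make it attractive.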
Design of underwater robot lines based on a hybrid automatic optimization strategy
NASA Astrophysics Data System (ADS)
Lyu, Wenjing; Luo, Weilin
2014-09-01
In this paper, a hybrid automatic optimization strategy is proposed for the design of underwater robot lines. Isight is introduced as an integration platform. The construction of this platform is based on user programming and several commercial software packages, including UG6.0, GAMBIT2.4.6 and FLUENT12.0. An intelligent parameter optimization method, particle swarm optimization, is incorporated into the platform. To verify the proposed strategy, a simulation is conducted on the underwater robot model 5470, which originates from the DTRC SUBOFF project. With the automatic optimization platform, the minimal resistance is taken as the optimization goal; the wetted surface area as the constraint condition; and the length of the fore-body, the maximum body radius and the after-body's minimum radius as the design variables. For the CFD calculation, the RANS equations and the standard turbulence model are used for direct numerical simulation. Analysis of the simulation results shows that the platform is highly efficient and feasible. Through the platform, a variety of schemes for the design of the lines are generated and the optimal solution is achieved. The combination of the intelligent optimization algorithm and the numerical simulation ensures a global optimal solution and improves the efficiency of searching for solutions.
Design optimization of a prescribed vibration system using conjoint value analysis
NASA Astrophysics Data System (ADS)
Malinga, Bongani; Buckner, Gregory D.
2016-12-01
This article details a novel design optimization strategy for a prescribed vibration system (PVS) used to mechanically filter solids from fluids in oil and gas drilling operations. A dynamic model of the PVS is developed, and the effects of disturbance torques are detailed. This model is used to predict the effects of design parameters on system performance and efficiency, as quantified by system attributes. Conjoint value analysis, a statistical technique commonly used in marketing science, is utilized to incorporate designer preferences. This approach effectively quantifies and optimizes preference-based trade-offs in the design process. The effects of designer preferences on system performance and efficiency are simulated. This novel optimization strategy yields improvements in all system attributes across all simulated vibration profiles, and is applicable to other industrial electromechanical systems.
NASA Astrophysics Data System (ADS)
Janardhanan, S.; Datta, B.
2011-12-01
Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose where they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and the aquifer recharge are considered as uncertain values. Three dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. 
Two conflicting objectives, viz., maximizing the total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of saltwater intrusion, are considered. The salinity levels resulting at strategic locations due to this pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate pumping solution. The reliability concept is incorporated as the percentage of the total number of surrogate models that satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in the Burdekin delta area in Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints, and that as the reliability level is reduced, constraint violations increase. Thus, ensemble surrogate model based simulation-optimization was found to be useful in deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
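The reliability measure described, the percentage of ensemble surrogates whose prediction satisfies the salinity constraint, reduces to a simple count over the ensemble. A sketch with invented prediction values:

```python
def reliability(ensemble_predictions, limit):
    """Fraction of surrogate models whose predicted salinity meets the limit.

    ensemble_predictions: one concentration value per surrogate model,
    all for the same candidate pumping strategy.
    """
    ok = sum(1 for c in ensemble_predictions if c <= limit)
    return ok / len(ensemble_predictions)

# Hypothetical ensemble of 10 surrogate predictions (mg/L) vs. a 500 mg/L cap.
preds = [420, 455, 480, 510, 430, 470, 495, 505, 440, 460]
r = reliability(preds, limit=500)  # 8 of 10 surrogates satisfy → 0.8
```

A candidate pumping strategy would then be accepted at, say, the 0.99 level only if at least 99% of the surrogates predict compliant salinity, which is why higher reliability levels produce more conservative pumping solutions.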
A Novel Harmony Search Algorithm Based on Teaching-Learning Strategies for 0-1 Knapsack Problems
Tuo, Shouheng; Yong, Longquan; Deng, Fang'an
2014-01-01
To enhance the performance of the harmony search (HS) algorithm on discrete optimization problems, this paper proposes a novel harmony search algorithm based on teaching-learning (HSTL) strategies to solve 0-1 knapsack problems. In the HSTL algorithm, a method is first presented to dynamically adjust the dimension of the selected harmony vector during the optimization procedure. In addition, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to improve the performance of the HS algorithm. A further improvement in the HSTL method is that dynamic strategies are adopted to change the parameters, which effectively maintains a proper balance between global exploration power and local exploitation power. Finally, simulation experiments with 13 knapsack problems show that the HSTL algorithm can be an efficient alternative for solving 0-1 knapsack problems. PMID:24574905
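A plain binary harmony search for the 0-1 knapsack problem can be sketched as below. This shows only the HS core (harmony memory consideration, pitch adjustment, random selection); the teaching-learning and dynamic-parameter extensions of HSTL are not reproduced here:

```python
import random

def harmony_search_knapsack(values, weights, capacity, hms=10, hmcr=0.9,
                            par=0.3, iters=3000, seed=7):
    """Binary harmony search for the 0-1 knapsack problem (plain HS core)."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(x):  # infeasible harmonies score 0
        w = sum(wi for wi, xi in zip(weights, x) if xi)
        return sum(vi for vi, xi in zip(values, x) if xi) if w <= capacity else 0

    memory = [[rng.randint(0, 1) for _ in range(n)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j in range(n):
            if rng.random() < hmcr:            # harmony memory consideration
                bit = rng.choice(memory)[j]
                if rng.random() < par:         # pitch adjustment: flip the bit
                    bit = 1 - bit
            else:                              # random selection
                bit = rng.randint(0, 1)
            new.append(bit)
        worst = min(range(hms), key=lambda i: fitness(memory[i]))
        if fitness(new) > fitness(memory[worst]):
            memory[worst] = new                # replace the worst harmony
    return max(memory, key=fitness)

best = harmony_search_knapsack(values=[60, 100, 120], weights=[10, 20, 30],
                               capacity=50)
# For this classic textbook instance the optimum is items 2 and 3 (value 220).
```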
The Quantum Approximation Optimization Algorithm for MaxCut: A Fermionic View
NASA Technical Reports Server (NTRS)
Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.
2017-01-01
Farhi et al. recently proposed a class of quantum algorithms, the Quantum Approximate Optimization Algorithm (QAOA), for approximately solving combinatorial optimization problems. A level-p QAOA circuit consists of steps in which a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm. As p increases, however, the parameter search space grows quickly. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here, we analytically and numerically study parameter setting for QAOA applied to MAXCUT. For level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MAXCUT, the Ring of Disagrees, or the 1D antiferromagnetic ring, we provide an analysis for arbitrarily high level. Using a Fermionic representation, the evolution of the system under QAOA translates into quantum optimal control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of QAOA for any p. It also greatly simplifies numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional sub-manifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
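For reference, the level-p QAOA ansatz described above can be written explicitly. With the MaxCut cost Hamiltonian C and the transverse-field mixer B:

```latex
|\boldsymbol{\gamma}, \boldsymbol{\beta}\rangle
  = e^{-i\beta_p B}\, e^{-i\gamma_p C} \cdots e^{-i\beta_1 B}\, e^{-i\gamma_1 C}\, |+\rangle^{\otimes n},
\qquad
C = \sum_{(j,k) \in E} \tfrac{1}{2}\bigl(1 - Z_j Z_k\bigr),
\qquad
B = \sum_{j=1}^{n} X_j .
```

The 2p angles (γ₁, …, γₚ, β₁, …, βₚ) are the classically optimized parameters; the expectation ⟨γ, β|C|γ, β⟩ is the quantity whose level-1 closed form the paper derives for general graphs.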
NASA Astrophysics Data System (ADS)
Shah, Rahul H.
Production costs account for the largest share of the overall cost of manufacturing facilities. With the U.S. industrial sector becoming more and more competitive, manufacturers are looking for more cost- and resource-efficient working practices. Operations management and production planning have shown their capability to dramatically reduce manufacturing costs and increase system robustness. Within operations-related decision making and planning, the two fields that have proven most effective are maintenance and energy. Unfortunately, current research that integrates both is limited, and existing studies fail to consider parameter domains and optimization in joint energy- and maintenance-driven production planning. Accordingly, a production planning methodology that considers maintenance and energy is investigated. Two models are presented to achieve a well-rounded operating strategy: the first is a joint energy and maintenance production scheduling model; the second is a cost-per-part model considering maintenance, energy, and production. The proposed methodology involves a Time-of-Use electricity demand response program, buffer and holding capacity, station reliability, production rate, station rated power, and more. In practice, the scheduling problem can be used to determine a joint energy, maintenance, and production schedule, while the cost-per-part model can be used to: (1) test the sensitivity of the obtained optimal production schedule and its corresponding savings by varying key production system parameters; and (2) determine optimal system parameter combinations when using the joint energy, maintenance, and production planning model. Additionally, a factor analysis of the system parameters is conducted, and the corresponding performance of the production schedule under variable parameter conditions is evaluated.
Parameter optimization guidelines that incorporate maintenance and energy parameter decision making in the production planning framework are also discussed. A modified Particle Swarm Optimization solution technique is adopted to solve the proposed scheduling problem; the algorithm is described in detail and compared to a Genetic Algorithm. Case studies are presented to illustrate the benefits of the proposed model and the effectiveness of the Particle Swarm Optimization approach, and numerical experiments are implemented and analyzed to test the model's effectiveness. The proposed scheduling strategy can achieve savings of around 19 to 27% in cost per part when compared to the baseline scheduling scenarios. By optimizing key production system parameters from the cost-per-part model, the baseline scenarios can obtain around 20 to 35% savings in cost per part, and these savings further increase by 42 to 55% when system parameter optimization is integrated with the proposed scheduling problem. Using this method, the most influential parameters on the cost per part are the rated power from production, the production rate, and the initial machine reliabilities. The modified Particle Swarm Optimization algorithm allows greater diversity and exploration than the Genetic Algorithm for the proposed joint model, making it more computationally efficient in determining the optimal schedule: while the Genetic Algorithm achieved a solution quality of 2,279.63 at an expense of 2,300 seconds of computational effort, the proposed Particle Swarm Optimization algorithm achieved a solution quality of 2,167.26 in less than half that computation time.
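The unmodified global-best PSO at the core of the adopted solution technique can be sketched as follows; the toy objective stands in for the actual cost-per-part scheduling objective:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Global-best particle swarm minimizer (standard inertia-weight form)."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                        # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=pbest.__getitem__)
    G, gbest = P[g][:], pbest[g]                 # global best
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(dim):
                # Velocity: inertia + cognitive pull + social pull.
                V[i][k] = (w * V[i][k]
                           + c1 * rng.random() * (P[i][k] - X[i][k])
                           + c2 * rng.random() * (G[k] - X[i][k]))
                X[i][k] = min(bounds[k][1],
                              max(bounds[k][0], X[i][k] + V[i][k]))
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    G, gbest = X[i][:], fx
    return G, gbest

# Toy stand-in for the scheduling cost: minimize a 3-D sphere function.
sched, cost = pso(lambda v: sum(t * t for t in v), [(-10, 10)] * 3)
```

The "modified" PSO of the study adds diversity-preserving mechanisms on top of this loop; the sketch above is only the standard baseline it extends.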
Casian, Tibor; Iurian, Sonia; Bogdan, Catalina; Rus, Lucia; Moldovan, Mirela; Tomuta, Ioan
2017-12-01
This study proposed the development of oral lyophilisates in line with pediatric medicine development guidelines, applying risk management strategies and DoE as an integrated QbD approach. Product critical quality attributes were overviewed by generating Ishikawa diagrams for risk assessment purposes, considering process-, formulation- and methodology-related parameters. Failure Mode Effect Analysis was applied to highlight critical formulation and process parameters with an increased probability of occurrence and a high impact on product performance. To investigate the effect of qualitative and quantitative formulation variables, D-optimal designs were used for screening and optimization purposes. Process parameters related to suspension preparation and lyophilization were classified as significant factors and were controlled by implementing risk mitigation strategies. Both quantitative and qualitative formulation variables introduced in the experimental design influenced the product's disintegration time, mechanical resistance and dissolution properties, selected as CQAs. The optimum formulation selected through the Design Space exhibited an ultra-fast disintegration time (5 seconds) and a good dissolution rate (above 90%), combined with high mechanical resistance (above 600 g load). Combining FMEA and DoE allowed the science-based development of a product with respect to the defined quality target profile by providing better insight into the relevant parameters throughout the development process. The utility of risk management tools in pharmaceutical development was demonstrated.
Data-driven optimal binning for respiratory motion management in PET.
Kesner, Adam L; Meier, Joseph G; Burckhardt, Darrell D; Schwartz, Jazmin; Lynch, David A
2018-01-01
Respiratory gating has been used in PET imaging to reduce the amount of image blurring caused by patient motion. Optimal binning is an approach for using the motion-characterized data by binning it into a single, easy-to-understand and easy-to-use optimal bin. To date, optimal binning protocols have utilized externally driven motion characterization strategies that have been tuned with population-derived assumptions and parameters. In this work, we propose a new strategy for characterizing motion directly from a patient's gated scan, and for using that signal to create a patient/instance-specific optimal-bin image. Two hundred and nineteen phase-gated FDG PET scans, acquired using data-driven gating as described previously, were used as the input for this study. For each scan, a phase-amplitude motion characterization was generated and normalized using principal component analysis. A patient-specific "optimal bin" window was derived from this characterization, via methods that mirror traditional optimal window binning strategies. The resulting optimal-bin images were validated by correlating quantitative and qualitative measurements in the population of PET scans. In 53% (n = 115) of the image population, the optimal bin was determined to include 100% of the image statistics. In the remaining images, the optimal binning windows averaged 60% of the statistics and ranged between 20% and 90%. Tuning the algorithm through a single acceptance-window parameter allowed its performance in the population to be adjusted toward conservation of motion or reduced noise, enabling users to incorporate their own definition of optimal. In the population of images that were deemed appropriate for segregation, average lesion SUVmax was 7.9, 8.5, and 9.0 for nongated images, optimal-bin images, and gated images, respectively.
The Pearson correlation of FWHM measurements between optimal-bin images and gated images was better than with nongated images, 0.89 and 0.85, respectively. Generally, optimal-bin images had better resolution than the nongated images and better noise characteristics than the gated images. We extended the concept of optimal binning to a data-driven form, updating a traditionally one-size-fits-all approach to a conformal one that supports adaptive imaging. This automated strategy was implemented easily within a large population and encapsulated motion information in an easy-to-use 3D image. Its simplicity and practicality may make this or similar approaches ideal for use in clinical settings. © 2017 American Association of Physicists in Medicine.
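The central idea, selecting the gate window that keeps the most counts while its motion stays within an acceptance window, can be sketched by brute force. The function name, the displacement signal, and the acceptance threshold below are illustrative assumptions, not the paper's implementation:

```python
def optimal_bin_window(displacement, counts, acceptance):
    """Pick the contiguous gate window with the most counts whose
    displacement range stays within `acceptance` (arbitrary units).
    Returns ((start, end) inclusive indices, counts kept)."""
    n = len(displacement)
    best, best_counts = (0, 0), counts[0]
    for i in range(n):
        for j in range(i, n):
            window = displacement[i:j + 1]
            if max(window) - min(window) <= acceptance:
                c = sum(counts[i:j + 1])
                if c > best_counts:
                    best, best_counts = (i, j), c
    return best, best_counts

# 8 phase gates: displacement peaks mid-cycle; equal counts per gate
disp = [0.0, 0.2, 0.8, 1.5, 1.6, 0.9, 0.3, 0.1]
cnts = [100] * 8
win, kept = optimal_bin_window(disp, cnts, acceptance=0.4)
```

Widening `acceptance` conserves more statistics (less noise) at the price of more residual motion, which mirrors the tuning behavior described in the abstract.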
Optimal control of anthracnose using mixed strategies.
Fotsa Mbogne, David Jaures; Thron, Christopher
2015-11-01
In this paper we propose and study a spatial diffusion model for the control of anthracnose disease in a bounded domain. The model is a generalization of the one previously developed in [15]. We use the model to simulate two different types of control strategies against anthracnose disease. Strategies that employ chemical fungicides are modeled using a continuous control function; while strategies that rely on cultivational practices (such as pruning and removal of mummified fruits) are modeled with a control function which is discrete in time (though not in space). For comparative purposes, we perform our analyses for a spatially-averaged model as well as the space-dependent diffusion model. Under weak smoothness conditions on parameters we demonstrate the well-posedness of both models by verifying existence and uniqueness of the solution for the growth inhibition rate for given initial conditions. We also show that the set [0, 1] is positively invariant. We first study control by impulsive strategies, then analyze the simultaneous use of mixed continuous and pulse strategies. In each case we specify a cost functional to be minimized, and we demonstrate the existence of optimal control strategies. In the case of pulse-only strategies, we provide explicit algorithms for finding the optimal control strategies for both the spatially-averaged model and the space-dependent model. We verify the algorithms for both models via simulation, and discuss properties of the optimal solutions. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Pan; Zhang, Yi; Yan, Dong
2018-05-01
Ant Colony Algorithm (ACA) is a powerful and effective algorithm for solving combinatorial optimization problems, and it has been successfully applied to the traveling salesman problem (TSP). However, it tends to converge prematurely to non-global optimal solutions, and its computation time is long. To overcome these shortcomings, a new method is presented: an improved self-adaptive Ant Colony Algorithm based on a genetic strategy. The proposed method adopts an adaptive strategy to adjust the parameters dynamically, and employs new crossover and inversion operations taken from the genetic strategy. We also carried out experiments using well-known instances from TSPLIB. The experimental results show that the proposed method outperforms the basic Ant Colony Algorithm and several improved ACA variants in both solution quality and convergence time. The numerical results also show that the proposed optimization method can achieve results close to the currently best known solutions.
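For orientation, a minimal Ant System for the symmetric TSP is sketched below; the hybrid in the abstract layers adaptive parameter control and crossover/inversion operators on top of this basic loop. All parameter values and the toy instance are assumptions:

```python
import math, random

def aco_tsp(coords, n_ants=10, n_iter=50, alpha=1.0, beta=3.0, rho=0.5, seed=1):
    """Minimal Ant System for the symmetric TSP."""
    rng = random.Random(seed)
    n = len(coords)
    dist = [[math.dist(coords[i], coords[j]) or 1e-12 for j in range(n)]
            for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]  # pheromone matrix
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # probabilistic choice weighted by pheromone and inverse distance
                weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                           for j in unvisited]
                total = sum(w for _, w in weights)
                r, acc = rng.random() * total, 0.0
                for j, w in weights:
                    acc += w
                    if acc >= r:
                        tour.append(j)
                        unvisited.discard(j)
                        break
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporate, then deposit pheromone proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len

# 6 cities on a regular hexagon: the optimal tour is the perimeter (length 6)
pts = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6))
       for k in range(6)]
tour, length = aco_tsp(pts)
```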
Combined control-structure optimization
NASA Technical Reports Server (NTRS)
Salama, M.; Milman, M.; Bruno, R.; Scheid, R.; Gibson, S.
1989-01-01
An approach for combined control-structure optimization keyed to enhancing early design trade-offs is outlined and illustrated by numerical examples. The approach employs a homotopic strategy and appears to be effective for generating families of designs that can be used in these early trade studies. Analytical results were obtained for classes of structure/control objectives with linear quadratic Gaussian (LQG) and linear quadratic regulator (LQR) costs. For these, the researchers demonstrated that global optima can be computed for small values of the homotopy parameter. Conditions for local optima along the homotopy path were also given. Details of two numerical examples employing the LQR control cost were given, showing variations of the optimal design variables along the homotopy path. The results of the second example suggest that introducing a second homotopy parameter, relating the two parts of the control index in the LQG/LQR formulation, might serve to enlarge the family of Pareto optima, but its effect on the optimal structural shapes may be analogous to that of the original parameter lambda.
Optimal Sensor Allocation for Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann
2004-01-01
Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, and vibration sensors) to measure the system parameters. The efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes fault diagnosability, subject to specified weight, volume, power, and cost constraints, is required. The use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
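The multidimensional knapsack view of sensor allocation can be illustrated with a simple greedy ratio heuristic; the paper itself uses Lagrangian relaxation, so this sketch, with hypothetical sensor names, scores, and budgets, only conveys the problem shape:

```python
def greedy_sensor_allocation(sensors, budgets):
    """Greedy heuristic for the multidimensional knapsack view of sensor
    allocation: repeatedly pick the sensor with the best coverage-to-resource
    ratio that still fits every remaining budget (cost, power, ...).
    `sensors` maps name -> (coverage_value, resource_vector)."""
    remaining = list(budgets)
    chosen, value = [], 0.0
    candidates = dict(sensors)
    while candidates:
        def ratio(item):
            cov, res = item[1]
            used = sum(r / b for r, b in zip(res, budgets))
            return cov / used if used > 0 else float("inf")
        name, (cov, res) = max(candidates.items(), key=ratio)
        del candidates[name]
        if all(r <= rem for r, rem in zip(res, remaining)):
            chosen.append(name)
            value += cov
            remaining = [rem - r for rem, r in zip(remaining, res)]
    return chosen, value

# Hypothetical sensors: name -> (diagnosability score, (cost, power))
catalog = {
    "temp":  (4.0, (2.0, 1.0)),
    "vib":   (6.0, (5.0, 2.0)),
    "press": (3.0, (1.0, 3.0)),
    "flow":  (5.0, (4.0, 4.0)),
}
picked, score = greedy_sensor_allocation(catalog, budgets=(8.0, 6.0))
```

Lagrangian relaxation replaces the hard budget checks with penalty multipliers on the constraints, which yields bounds on the optimum that a greedy pass cannot provide.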
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces make it possible to move away from discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, a new technique is needed. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automate the work flow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach in which the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC).
This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. We repeat the optimization procedure with conservative and non-conservative assumptions for a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist of (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) applying the same strategy to the more complex, dynamic model r.avaflow.
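The nested subrange idea, enumerating every combination of subranges of two parameters and scoring each by AUROC, can be sketched as follows. The `evaluate` callback, edge values, and scores are toy assumptions standing in for actual r.randomwalk runs:

```python
from itertools import product

def auroc(scores, labels):
    """Rank-based area under the ROC curve (ties handled by midranks)."""
    pairs = sorted(zip(scores, labels))
    rank_of = [0.0] * len(pairs)
    i = 0
    while i < len(pairs):
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1
        mid = (i + j + 1) / 2.0      # midrank of the tied block (1-based)
        for k in range(i, j):
            rank_of[k] = mid
        i = j
    pos = sum(1 for _, lab in pairs if lab == 1)
    neg = len(pairs) - pos
    rank_sum = sum(r for r, (_, lab) in zip(rank_of, pairs) if lab == 1)
    return (rank_sum - pos * (pos + 1) / 2.0) / (pos * neg)

def best_subspace(p1_edges, p2_edges, evaluate):
    """Test every combination of subranges of two parameter ranges;
    `evaluate(sub1, sub2)` returns (scores, labels) from model runs in that
    subspace. Returns (AUROC, subrange1, subrange2) of the top performer."""
    best = None
    for sub1, sub2 in product(zip(p1_edges, p1_edges[1:]),
                              zip(p2_edges, p2_edges[1:])):
        scores, labels = evaluate(sub1, sub2)
        a = auroc(scores, labels)
        if best is None or a > best[0]:
            best = (a, sub1, sub2)
    return best

# Toy stand-in: pixel impact labels fixed; III scores rank them perfectly
# only when the subspace contains the "true" parameter pair (0.1, 20)
labels = [1, 1, 0, 0]
def evaluate(r1, r2):
    good = r1[0] <= 0.1 < r1[1] and r2[0] <= 20 < r2[1]
    return ([0.9, 0.8, 0.2, 0.1] if good else [0.5, 0.2, 0.8, 0.4]), labels

top = best_subspace([0.05, 0.1, 0.15, 0.2], [10, 20, 30], evaluate)
```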
Optimization of wearable microwave antenna with simplified electromagnetic model of the human body
NASA Astrophysics Data System (ADS)
Januszkiewicz, Łukasz; Barba, Paolo Di; Hausman, Sławomir
2017-12-01
In this paper the design optimization of a microwave wearable antenna is investigated. Reference is made to a specific antenna design, a wideband Vee antenna whose geometry is characterized by 6 parameters. These parameters were automatically adjusted with an evolution-strategy-based algorithm, EStra, to obtain impedance matching of the antenna located in the proximity of the human body. The antenna was designed to operate in the ISM (industrial, scientific, medical) band, which covers the frequency range of 2.4 GHz up to 2.5 GHz. The optimization procedure used a full-wave simulator based on the finite-difference time-domain method with a simplified human body model, and considered small movements of the antenna towards or away from the human body that are likely to happen during real use. The stability of the antenna parameters irrespective of the movements of the user's body is an important factor in wearable antenna design. The optimization procedure yielded good impedance matching over the given range of antenna distances with respect to the human body.
Yao, Xiaojun; Zhang, Xiaoyun; Zhang, Ruisheng; Liu, Mancang; Hu, Zhide; Fan, Botao
2002-05-16
A new method for predicting the retention indices of a diverse set of compounds from their physicochemical parameters has been proposed. The two input parameters used to represent molecular properties are the boiling point and the molar volume. Models relating the physicochemical parameters to the retention indices of the compounds are constructed by means of radial basis function neural networks (RBFNNs). To obtain the best prediction results, several strategies are also employed to optimize the topology and learning parameters of the RBFNNs. For the test set, a predictive correlation coefficient R=0.9910 and a root mean squared error of 14.1 are obtained. The results show that radial basis function networks give satisfactory prediction ability, and that their optimization is less time-consuming and easy to implement.
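An RBF network's output layer is linear, so once the centers and widths are fixed the weights follow from linear least squares. The sketch below uses synthetic two-descriptor data as a stand-in for boiling point and molar volume; the center selection, width, and target function are all assumptions, not the paper's optimized topology:

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian RBF design matrix: one column per center."""
    return np.exp(-np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
                  / (2.0 * sigma ** 2))

def rbf_fit(X, y, centers, sigma):
    """Fit output weights of a Gaussian RBF network by least squares."""
    w, *_ = np.linalg.lstsq(rbf_design(X, centers, sigma), y, rcond=None)
    return w

def rbf_predict(X, centers, sigma, w):
    return rbf_design(X, centers, sigma) @ w

# Toy surrogate: predict a "retention index" from two scaled descriptors
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(60, 2))
y = 100.0 * X[:, 0] + 50.0 * X[:, 1] ** 2      # synthetic smooth target
centers = X[::3]                                # every 3rd sample as a center
w = rbf_fit(X, y, centers, sigma=0.3)
pred = rbf_predict(X, centers, sigma=0.3, w=w)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

In practice the number of centers and the width `sigma` are exactly the topology/learning parameters the abstract optimizes.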
RTDS implementation of an improved sliding mode based inverter controller for PV system.
Islam, Gazi; Muyeen, S M; Al-Durra, Ahmed; Hasanien, Hany M
2016-05-01
This paper proposes a novel approach for testing the dynamics and control aspects of a large-scale photovoltaic (PV) system in real time, along with resolving design hindrances of controller parameters, using a Real Time Digital Simulator (RTDS). In general, the harmonic profile of a fast controller has a wide distribution due to the large bandwidth of the controller. The major contribution of this paper is that the proposed control strategy gives an improved voltage harmonic profile, distributing it more around the switching frequency, along with a fast transient response; filter design thus becomes easier. The implementation of a control strategy with high bandwidth in the small time steps of an RTDS is not straightforward, and this paper presents a sound methodology for practitioners to implement such a control scheme in RTDS. As part of the industrial process, the controller parameters are optimized using the particle swarm optimization (PSO) technique to improve the low voltage ride through (LVRT) performance under network disturbance. The response surface methodology (RSM) is well adapted to build analytical models for the recovery time (Rt), maximum percentage overshoot (MPOS), settling time (Ts), and steady-state error (Ess) of the voltage profile immediately after the inverter under disturbance. A systematic approach to controller parameter optimization is detailed. The transient performance of the PSO-based optimization method applied to the proposed sliding mode controlled PV inverter is compared with results from a genetic algorithm (GA) based optimization technique. The reported real-time implementation challenges and controller optimization procedure are applicable to other control applications in the field of renewable and distributed generation systems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Cost-effectiveness of angiographic imaging in isolated perimesencephalic subarachnoid hemorrhage.
Kalra, Vivek B; Wu, Xiao; Forman, Howard P; Malhotra, Ajay
2014-12-01
The purpose of this study is to perform a comprehensive cost-effectiveness analysis of all possible permutations of computed tomographic angiography (CTA) and digital subtraction angiography imaging strategies for both initial diagnosis and follow-up imaging in patients with perimesencephalic subarachnoid hemorrhage on noncontrast CT. Each possible imaging strategy was evaluated in a decision tree created with TreeAge Pro Suite 2014, with parameters derived from a meta-analysis of 40 studies and literature values. Base case and sensitivity analyses were performed to assess the cost-effectiveness of each strategy. A Monte Carlo simulation was conducted with distributional variables to evaluate the robustness of the optimal strategy. The base case scenario showed performing initial CTA with no follow-up angiographic studies in patients with perimesencephalic subarachnoid hemorrhage to be the most cost-effective strategy ($5422/quality adjusted life year). Using a willingness-to-pay threshold of $50 000/quality adjusted life year, the most cost-effective strategy based on net monetary benefit is CTA with no follow-up when the sensitivity of initial CTA is >97.9%, and CTA with CTA follow-up otherwise. The Monte Carlo simulation reported CTA with no follow-up to be the optimal strategy at willingness-to-pay of $50 000 in 99.99% of the iterations. Digital subtraction angiography, whether at initial diagnosis or as part of follow-up imaging, is never the optimal strategy in our model. CTA without follow-up imaging is the optimal strategy for evaluation of patients with perimesencephalic subarachnoid hemorrhage when modern CT scanners and a strict definition of perimesencephalic subarachnoid hemorrhage are used. Digital subtraction angiography and follow-up imaging are not optimal as they carry complications and associated costs. © 2014 American Heart Association, Inc.
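The probabilistic sensitivity analysis described above ranks strategies by net monetary benefit (NMB = willingness-to-pay x QALYs - cost) across distributional draws. A minimal sketch with entirely hypothetical cost/QALY distributions (not the paper's parameters) might look like:

```python
import random

def net_monetary_benefit(cost, qaly, wtp):
    """NMB = WTP x QALYs - cost; higher is better."""
    return wtp * qaly - cost

def monte_carlo_preference(strategies, wtp, n_draws=10000, seed=42):
    """Fraction of draws in which each strategy has the highest NMB.
    Each strategy maps to ((cost_mean, cost_sd), (qaly_mean, qaly_sd))."""
    rng = random.Random(seed)
    wins = {name: 0 for name in strategies}
    for _ in range(n_draws):
        best_name, best_nmb = None, float("-inf")
        for name, ((cm, cs), (qm, qs)) in strategies.items():
            nmb = net_monetary_benefit(rng.gauss(cm, cs), rng.gauss(qm, qs), wtp)
            if nmb > best_nmb:
                best_name, best_nmb = name, nmb
        wins[best_name] += 1
    return {name: wins[name] / n_draws for name in strategies}

# Hypothetical inputs only, for illustration of the method
strategies = {
    "CTA, no follow-up":   ((5400.0, 500.0), (0.980, 0.01)),
    "CTA + CTA follow-up": ((7800.0, 600.0), (0.981, 0.01)),
}
pref = monte_carlo_preference(strategies, wtp=50000.0)
```

The published analysis reports the analogous quantity: the percentage of Monte Carlo iterations in which "CTA with no follow-up" was optimal at the $50,000/QALY threshold.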
Biswas, Santanu; Subramanian, Abhishek; ELMojtaba, Ibrahim M; Chattopadhyay, Joydev; Sarkar, Ram Rup
2017-01-01
Visceral leishmaniasis (VL) is a deadly neglected tropical disease that poses a serious problem in various countries all over the world. Implementations of various intervention strategies fail to control the spread of this disease due to parasite drug resistance and the resistance of sandfly vectors to insecticide sprays. Policy makers therefore need to develop novel strategies, or resort to a combination of multiple intervention strategies, to control the spread of the disease. To address this issue, we propose an extensive SIR-type model for anthroponotic visceral leishmaniasis transmission with seasonal fluctuations modeled in the form of a periodic sandfly biting rate. Fitting the model to real data reported in South Sudan, we estimate the model parameters and compare the model predictions with known VL cases. Using optimal control theory, we study the effects of popular control strategies, namely drug-based treatment of symptomatic and PKDL-infected individuals, insecticide-treated bednets and spraying of insecticides, on the dynamics of infected human and vector populations. We show that these strategies remain ineffective in curbing the disease individually, as opposed to the use of optimal combinations of the mentioned strategies. Testing the model for different optimal combinations while considering periodic seasonal fluctuations, we find that the optimal combination of treatment of individuals and insecticide sprays performs well in controlling the disease for the time period of intervention introduced. Performing a cost-effectiveness analysis, we identify that the same strategy also proves to be efficacious and cost-effective. Finally, we suggest that our model would be helpful for policy makers to predict the best intervention strategies for specific time periods and their appropriate implementation for the elimination of visceral leishmaniasis.
A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.
2017-12-01
Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters, and often consider geodetic and seismic data jointly. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimation, we undertook the effort of developing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org), and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project.
Here, we present our strategy for developing BEAT and show application examples, in particular the effect of including the prediction uncertainty of the velocity model in subsequent source optimizations: full moment tensor, Mogi source, and a moderate strike-slip earthquake.
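BEAT delegates sampling to pymc3 and uses far more capable samplers, but the underlying idea of drawing posterior samples for a source parameter can be shown with a toy random-walk Metropolis chain. The "Green's functions", data, noise level, and step size below are all invented for illustration:

```python
import math, random

def metropolis(log_post, x0, step, n_samples, seed=7):
    """Random-walk Metropolis over a single parameter (toy sampler only)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        # accept uphill moves always, downhill moves with prob exp(dlp)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy "deformation source" problem: infer source strength s from noisy
# displacements d_i = s * g_i + noise (g_i: known Green's function values)
g = [0.5, 1.0, 1.5, 2.0]
data = [1.1, 2.0, 3.1, 3.9]          # consistent with s = 2, sd = 0.1
sd = 0.1

def log_post(s):
    # flat prior, Gaussian likelihood (up to an additive constant)
    return -sum((d - s * gi) ** 2 for d, gi in zip(data, g)) / (2 * sd * sd)

chain = metropolis(log_post, x0=0.0, step=0.2, n_samples=5000)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])   # discard burn-in
```

Propagating velocity-model uncertainty, as BEAT does, amounts to widening the likelihood with a model-prediction covariance rather than the fixed `sd` used here.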
Intelligent reservoir operation system based on evolving artificial neural networks
NASA Astrophysics Data System (ADS)
Chaves, Paulo; Chang, Fi-John
2008-06-01
We propose a novel intelligent reservoir operation system based on an evolving artificial neural network (ANN). Evolving means that the parameters of the ANN model are identified by the genetic algorithm (GA) evolutionary optimization technique. Accordingly, the ANN model represents the operational strategies of reservoir operation. The main advantages of the Evolving ANN Intelligent System (ENNIS) are as follows: (i) only a small number of parameters need to be optimized, even for long optimization horizons; (ii) multiple decision variables are easy to handle; and (iii) the operation model can be combined with other prediction models in a straightforward way. The developed intelligent system was applied to the operation of the Shihmen Reservoir in North Taiwan to investigate its applicability and practicability. The proposed method was first applied to a simple formulation of the operation of the Shihmen Reservoir, with a single objective and a single decision variable, and its results were compared to those obtained by dynamic programming. The constructed network proved to be a good operational strategy. The method was then extended and applied to the reservoir with multiple (five) decision variables. The results demonstrated that the evolving neural networks improved the operation performance of the reservoir compared to its current operational strategy. The system was capable of handling several decision variables simultaneously and provided reasonable and suitable decisions.
Varanasi, Jhansi L; Sinha, Pallavi; Das, Debabrata
2017-05-01
To selectively enrich an electrogenic mixed consortium capable of utilizing dark fermentative effluents as substrates in microbial fuel cells, and to further enhance the power outputs by optimization of influential anodic operational parameters. A maximum power density of 1.4 W/m3 was obtained by an enriched mixed electrogenic consortium in microbial fuel cells using acetate as substrate. This was further increased to 5.43 W/m3 by optimization of influential anodic parameters. By utilizing dark fermentative effluents as substrates, the maximum power densities ranged from 5.2 to 6.2 W/m3, with an average COD removal efficiency of 75% and a coulombic efficiency of 10.6%. A simple strategy is provided for the selective enrichment of electrogenic bacteria that can be used in microbial fuel cells for generating power from various dark fermentative effluents.
NASA Astrophysics Data System (ADS)
Blöcher, Johanna; Kuraz, Michal
2017-04-01
In this contribution we propose implementations of the dual permeability model with different inter-domain exchange descriptions, together with metaheuristic optimization algorithms for parameter identification and mesh optimization. We compare variants of the coupling term with different numbers of parameters to test whether a reduction of parameters is feasible. This can reduce parameter uncertainty in inverse modeling, but also allows for different conceptual models of the domain and matrix coupling. The variants of the dual permeability model are implemented in the open-source library DRUtES, written in Fortran 2003/2008, in 1D and 2D. For parameter identification we use adaptations of particle swarm optimization (PSO) and teaching-learning-based optimization (TLBO), which are population-based metaheuristics with different learning strategies. These are high-level stochastic search algorithms that do not require gradient information or a convex search space. Despite increasing computing power and parallel processing, an overly fine mesh is not feasible for parameter identification. This creates the need to find a mesh that optimizes both accuracy and simulation time. We use a bi-objective PSO algorithm to generate a Pareto front of optimal meshes to account for both objectives. The dual permeability model and the optimization algorithms were tested on virtual data and field TDR sensor readings. The TDR sensor readings showed a very steep increase during rapid rainfall events and a subsequent steep decrease. This was theorized to be an effect of artificial macroporous envelopes surrounding the TDR sensors, creating an anomalous region with distinct local soil hydraulic properties. One of our objectives is to test how well the dual permeability model can describe this infiltration behavior and which coupling term is most suitable.
Modeling and quantification of repolarization feature dependency on heart rate.
Minchole, A; Zacur, E; Pueyo, E; Laguna, P
2014-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques, such as descent optimization methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where rate adaptation of the QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In summary, an efficient method for estimating the parameters of a model characterizing rate adaptation of repolarization features has been proposed, and the Tpe interval has been shown to be rate related, with a shorter memory lag than the QT interval.
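Folding a constraint into the cost function so that plain descent methods apply is the classical penalty approach. As a hedged one-dimensional sketch (the model, bounds, and step sizes are illustrative, not the paper's repolarization model):

```python
def penalized_descent(grad_f, bounds, x0, mu=100.0, lr=0.005, n_iter=2000):
    """Fold a box constraint into the cost with a quadratic penalty
    mu * dist(x, [lo, hi])^2 so a plain gradient-descent step can be
    used without any projection or constrained solver."""
    lo, hi = bounds
    x = x0
    for _ in range(n_iter):
        g = grad_f(x)
        # penalty gradient: active only outside the feasible interval
        if x < lo:
            g += 2.0 * mu * (x - lo)
        elif x > hi:
            g += 2.0 * mu * (x - hi)
        x -= lr * g
    return x

# Toy problem: the unconstrained minimum of (x - 3)^2 is x = 3, but the
# "physiological" range is [0, 2]; the penalized solution presses against
# the upper bound (it reaches exactly 2 only as mu -> infinity)
x_star = penalized_descent(lambda x: 2.0 * (x - 3.0),
                           bounds=(0.0, 2.0), x0=0.5)
```

For this quadratic toy problem the penalized optimum is analytic, (3 + 2*mu) / (1 + mu), which makes the boundary-hugging behavior easy to verify.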
Sherwood, Carly A; Eastham, Ashley; Lee, Lik Wee; Risler, Jenni; Mirzaei, Hamid; Falkner, Jayson A; Martin, Daniel B
2009-07-01
Multiple reaction monitoring (MRM) is a highly sensitive method of targeted mass spectrometry (MS) that can be used to selectively detect and quantify peptides based on the screening of specified precursor peptide-to-fragment ion transitions. MRM-MS sensitivity depends critically on the tuning of instrument parameters, such as collision energy and cone voltage, for the generation of maximal product ion signal. Although generalized equations and values exist for such instrument parameters, there is no clear indication that optimal signal can be reliably produced for all types of MRM transitions using such an algorithmic approach. To address this issue, we have devised a workflow functional on both Waters Quattro Premier and ABI 4000 QTRAP triple quadrupole instruments that allows rapid determination of the optimal value of any programmable instrument parameter for each MRM transition. Here, we demonstrate the strategy for the optimizations of collision energy and cone voltage, but the method could be applied to other instrument parameters, such as declustering potential, as well. The workflow makes use of the incremental adjustment of the precursor and product m/z values at the hundredth decimal place to create a series of MRM targets at different collision energies that can be cycled through in rapid succession within a single run, avoiding any run-to-run variability in execution or comparison. Results are easily visualized and quantified using the MRM software package Mr. M to determine the optimal instrument parameters for each transition.
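The hundredth-decimal trick can be sketched directly: each collision energy gets its own MRM target whose m/z pair is nudged by 0.01, so every energy is acquired within one run and remains traceable. The m/z values and energy sweep below are hypothetical:

```python
def encode_ce_series(precursor_mz, product_mz, energies):
    """Create one MRM target per collision energy by nudging the m/z pair
    at the hundredth decimal place, so all energies cycle in rapid
    succession within a single run and each result maps back to the
    energy that produced it."""
    targets = []
    for k, ce in enumerate(energies):
        targets.append({
            "precursor_mz": round(precursor_mz + 0.01 * k, 2),
            "product_mz": round(product_mz + 0.01 * k, 2),
            "collision_energy": ce,
        })
    return targets

# Sweep collision energy 10-30 V in 5 V steps for one hypothetical transition
series = encode_ce_series(523.70, 871.40, energies=[10, 15, 20, 25, 30])
```

The 0.01 Th offsets are negligible for quadrupole isolation windows, which is what makes the encoding safe in practice.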
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa; Gholami, Amin
2015-06-01
Free fluid porosity and rock permeability, undoubtedly among the most critical parameters of a hydrocarbon reservoir, can be obtained by processing nuclear magnetic resonance (NMR) logs. In contrast to conventional well logs (CWLs), NMR logging is very expensive and time-consuming. Therefore, the idea of synthesizing the NMR log from CWLs is of great appeal to reservoir engineers. For this purpose, three optimization strategies are followed. First, an artificial neural network (ANN) is optimized using a hybrid genetic algorithm-pattern search (GA-PS) technique; then, a fuzzy logic (FL) model is optimized by means of GA-PS; and eventually an alternating conditional expectation (ACE) model is constructed, using the concept of a committee machine, to combine the outputs of the optimized and non-optimized FL and ANN models. Results indicated that optimizing the traditional ANN and FL models using the GA-PS technique significantly enhances their performance. Furthermore, the ACE committee of the aforementioned models produces more accurate and reliable results than either model performing alone.
Siriwardena-Mahanama, Buddhima N.; Allen, Matthew J.
2013-01-01
This review describes recent advances in strategies for tuning the water-exchange rates of contrast agents for magnetic resonance imaging (MRI). Water-exchange rates play a critical role in determining the efficiency of contrast agents; consequently, optimization of water-exchange rates, among other parameters, is necessary to achieve high efficiencies. This need has resulted in extensive research efforts to modulate water-exchange rates by chemically altering the coordination environments of the metal complexes that function as contrast agents. The focus of this review is coordination-chemistry-based strategies used to tune the water-exchange rates of lanthanide(III)-based contrast agents for MRI. Emphasis will be given to results published in the 21st century, as well as implications of these strategies on the design of contrast agents. PMID:23921796
Figueroa-Torres, Gonzalo M; Pittman, Jon K; Theodoropoulos, Constantinos
2017-10-01
Microalgal starch and lipids, carbon-based storage molecules, are useful as potential biofuel feedstocks. In this work, cultivation strategies maximising starch and lipid formation were established by developing a multi-parameter kinetic model describing microalgal growth as well as starch and lipid formation, in conjunction with laboratory-scale experiments. Growth dynamics are driven by nitrogen-limited mixotrophic conditions, known to increase cellular starch and lipid contents whilst enhancing biomass growth. Model parameters were computed by fitting model outputs to a range of experimental datasets from batch cultures of Chlamydomonas reinhardtii. The predictive capabilities of the model were established against different experimental data. The model was subsequently used to compute optimal nutrient-based cultivation strategies in terms of initial nitrogen and carbon concentrations. Model-based optimal strategies yielded a significant increase of 261% for starch (0.065 g C L⁻¹) and 66% for lipid (0.08 g C L⁻¹) production compared to base-case conditions (0.018 g C L⁻¹ starch, 0.048 g C L⁻¹ lipids).
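The fit-then-optimize workflow described above can be illustrated with a toy version of the problem: a one-parameter logistic growth model fitted to batch data by least squares (the model, values, and function names are illustrative stand-ins for the paper's multi-parameter kinetic model):

```python
def simulate_biomass(mu, x0, n_steps=100, dt=0.1, x_max=1.0):
    """Euler integration of logistic growth dx/dt = mu*x*(1 - x/x_max)."""
    x = x0
    traj = [x]
    for _ in range(n_steps):
        x += dt * mu * x * (1.0 - x / x_max)
        traj.append(x)
    return traj

def fit_mu(observed, x0, candidates):
    """Grid search for the growth rate minimizing the sum of squared
    errors against the observed trajectory."""
    def sse(mu):
        sim = simulate_biomass(mu, x0, n_steps=len(observed) - 1)
        return sum((s - o) ** 2 for s, o in zip(sim, observed))
    return min(candidates, key=sse)

# synthetic "experimental" data generated with mu = 0.5, then recovered
true_traj = simulate_biomass(0.5, 0.05)
best = fit_mu(true_traj, 0.05, [0.3, 0.4, 0.5, 0.6])
```

In the paper the same idea is scaled up: many parameters, several state variables (biomass, starch, lipids), and the fitted model is then searched over initial nutrient concentrations instead of growth rates.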
NASA Astrophysics Data System (ADS)
Fyta, Maria; Netz, Roland R.
2012-03-01
Using molecular dynamics (MD) simulations in conjunction with the SPC/E water model, we optimize ionic force-field parameters for seven different halide and alkali ions, considering a total of eight ion-pairs. Our strategy is based on simultaneously optimizing single-ion and ion-pair properties, i.e., we first fix ion-water parameters based on single-ion solvation free energies, and in a second step determine the cation-anion interaction parameters (traditionally given by mixing or combination rules) based on the Kirkwood-Buff theory without modification of the ion-water interaction parameters. In doing so, we have introduced scaling factors for the cation-anion Lennard-Jones (LJ) interaction that quantify deviations from the standard mixing rules. For the rather size-symmetric salt solutions involving bromide and chloride ions, the standard mixing rules work fine. On the other hand, for the iodide and fluoride solutions, corresponding to the largest and smallest anions considered in this work, a rescaling of the mixing rules was necessary. For iodide, the experimental activities suggest more tightly bound ion pairing than given by the standard mixing rules, which is achieved in simulations by reducing the scaling factor of the cation-anion LJ energy. For fluoride, the situation is different and the simulations show too large an attraction between fluoride and cations when compared with experimental data. For NaF, the situation can be rectified by increasing the cation-anion LJ energy. For KF, it proves necessary to increase the effective cation-anion Lennard-Jones diameter. The optimization strategy outlined in this work can be easily adapted to different kinds of ions.
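The scaled mixing rules can be sketched as follows; the σ/ε values below are placeholders, not the paper's optimized parameters, and setting both scaling factors to 1 recovers the standard Lorentz-Berthelot rules:

```python
import math

def lj_mixing(sigma_i, sigma_j, eps_i, eps_j, scale_eps=1.0, scale_sigma=1.0):
    """Lorentz-Berthelot combination rules with scaling factors that
    quantify deviations from the standard rules (scale = 1 is standard)."""
    sigma_ij = scale_sigma * 0.5 * (sigma_i + sigma_j)   # arithmetic mean
    eps_ij = scale_eps * math.sqrt(eps_i * eps_j)        # geometric mean
    return sigma_ij, eps_ij

# standard rules for a hypothetical cation-anion pair (nm, kJ/mol)
s0, e0 = lj_mixing(0.26, 0.44, 0.6, 0.4)
# rescaled cation-anion LJ energy, as done for the iodide salts in the paper
s1, e1 = lj_mixing(0.26, 0.44, 0.6, 0.4, scale_eps=0.8)
```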
USDA-ARS's Scientific Manuscript database
Improving strategies for monitoring subsurface contaminant transport includes performance comparison of competing models, developed independently or obtained via model abstraction. Model comparison and parameter discrimination involve specific performance indicators selected to better understand s...
Fractal profit landscape of the stock market.
Grönlund, Andreas; Yi, Il Gu; Kim, Beom Jun
2012-01-01
We investigate the structure of the profit landscape obtained from the most basic, fluctuation-based trading strategy applied to daily stock price data. The strategy is parameterized by only two variables, p and q: stocks are sold if the log return is bigger than p and bought if it is less than -q. Repeating this simple strategy for a long time gives the profit defined over the underlying two-dimensional parameter space of p and q. It is revealed that the local maxima in the profit landscape are spread in the form of a fractal structure. The fractal structure implies that successful strategies are neither localized to any region of the profit landscape nor spaced evenly throughout it, which makes the optimization notoriously hard and hypersensitive to partial or limited information. The concrete implication of this property is demonstrated by showing that a strategy optimized on one stock performs worse, when applied to future values or to other stocks, than a strategy that ignores fluctuations, i.e., a long-term buy-and-hold strategy.
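The two-threshold strategy and the profit landscape built from it can be sketched as follows (one plausible reading of the buy/sell rule; the accounting details and return series are illustrative):

```python
def strategy_profit(log_returns, p, q):
    """Hold one unit; sell when the log return exceeds p, buy back
    when it falls below -q; profit is the sum of log returns captured
    while holding."""
    holding = True
    profit = 0.0
    for r in log_returns:
        if holding:
            profit += r
            if r > p:        # take profit
                holding = False
        elif r < -q:         # buy back after a dip
            holding = True
    return profit

def profit_landscape(log_returns, ps, qs):
    """Evaluate the strategy on a grid of (p, q) thresholds."""
    return {(p, q): strategy_profit(log_returns, p, q) for p in ps for q in qs}

returns = [0.02, -0.03, 0.01, 0.04, -0.01, -0.05, 0.03]
grid = profit_landscape(returns, ps=[0.01, 0.03], qs=[0.02, 0.04])
```

On real daily data the grid would be much finer, and the paper's point is that the local maxima of this surface form a fractal set rather than a single smooth basin.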
NASA Astrophysics Data System (ADS)
Ma, Junjun; Xiong, Xiong; He, Feng; Zhang, Wei
2017-04-01
Stock price fluctuations are studied in this paper from an intrinsic time perspective. Events, directional changes (DC) or overshoots, are taken as the time scale of the price series. Under this directional change law, the corresponding statistical properties and parameter estimation are tested on the Chinese stock market. Furthermore, a directional change trading strategy is proposed for investing in the market portfolio of the Chinese stock market, and both in-sample and out-of-sample performance are compared among different methods of model parameter estimation. We conclude that the DC method can capture important fluctuations in the Chinese stock market and gain profit owing to the statistical property that the average upturn overshoot size is bigger than the average downturn directional change size. The optimal parameter of the DC method is not fixed, and we obtained a 1.8% annual excess return with this DC-based trading strategy.
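A minimal sketch of directional-change event detection under a threshold θ (the confirmation rule follows the usual DC definition; the price series is illustrative):

```python
def directional_changes(prices, theta):
    """Detect directional-change (DC) events: a DC is confirmed when
    the price reverses by a fraction theta from the last extreme."""
    events = []
    mode = "up"            # current trend assumption
    extreme = prices[0]    # running max (uptrend) or min (downtrend)
    for i, p in enumerate(prices[1:], start=1):
        if mode == "up":
            if p > extreme:
                extreme = p
            elif p <= extreme * (1 - theta):
                events.append(("down", i))   # downturn DC confirmed
                mode, extreme = "down", p
        else:
            if p < extreme:
                extreme = p
            elif p >= extreme * (1 + theta):
                events.append(("up", i))     # upturn DC confirmed
                mode, extreme = "up", p
    return events

prices = [100, 102, 105, 99, 97, 101, 103]
events = directional_changes(prices, theta=0.02)
```

The segment between a DC confirmation and the next extreme is the overshoot; comparing average upturn overshoots with average downturn DC sizes is what drives the trading rule in the abstract.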
Wang, Jie-sheng; Li, Shu-xia; Gao, Jie
2014-01-01
To meet the real-time fault diagnosis and optimization monitoring requirements of the polymerization kettle in the polyvinyl chloride (PVC) resin production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. Firstly, a mapping between the polymerization process data and the fault patterns is established by analyzing the production technology of the polymerization kettle equipment. The particle swarm optimization (PSO) algorithm with a new dynamic adjustment method for the inertia weight is adopted to optimize the structural parameters of the SOM neural network. Fault pattern classification of the polymerization kettle equipment then realizes the nonlinear mapping from a given symptom set to the fault set. Finally, fault diagnosis simulation experiments are conducted on industrial on-site historical data of the polymerization kettle, and the simulation results show that the proposed PSO-SOM fault diagnosis strategy is effective.
Optimisation of lateral car dynamics taking into account parameter uncertainties
NASA Astrophysics Data System (ADS)
Busch, Jochen; Bestle, Dieter
2014-02-01
Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a high influence on lateral car dynamics. This motivates the need for robust design against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem in which especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.
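Plain (non-optimal) Latin hypercube sampling of normally distributed uncertainties can be sketched as follows; the paper's optimal LHS additionally optimizes the space-filling properties of the design, which is omitted here, and the means and standard deviations are placeholders:

```python
import random
from statistics import NormalDist

def latin_hypercube_normal(n_samples, means, stds, seed=0):
    """Plain Latin hypercube: each parameter's (0, 1) range is split
    into n equiprobable strata, one uniform draw is taken per stratum,
    strata are shuffled per dimension, and the draws are mapped through
    the inverse normal CDF to get normal marginals."""
    rng = random.Random(seed)
    dims = len(means)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d in range(dims):
        u = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(u)
        nd = NormalDist(means[d], stds[d])
        for k in range(n_samples):
            samples[k][d] = nd.inv_cdf(u[k])  # map to the normal marginal
    return samples

# two uncertain vehicle parameters with hypothetical means/spreads
pts = latin_hypercube_normal(8, means=[0.0, 10.0], stds=[1.0, 2.0])
```

Each of the 8 samples then becomes one evaluation point for the response surface that replaces the expensive vehicle model.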
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum detectability index by at least 17.8%, but also yielded a higher detectability index over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed detectability index. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
Optimal experimental design for parameter estimation of a cell signaling model.
Bandara, Samuel; Schlöder, Johannes P; Eils, Roland; Bock, Hans Georg; Meyer, Tobias
2009-11-01
Differential equation models that describe the dynamic changes of biochemical signaling states are important tools to understand cellular behavior. An essential task in building such representations is to infer the affinities, rate constants, and other parameters of a model from actual measurement data. However, intuitive measurement protocols often fail to generate data that restrict the range of possible parameter values. Here we utilized a numerical method to iteratively design optimal live-cell fluorescence microscopy experiments in order to reveal pharmacological and kinetic parameters of a phosphatidylinositol 3,4,5-trisphosphate (PIP(3)) second messenger signaling process that is deregulated in many tumors. The experimental approach included the activation of endogenous phosphoinositide 3-kinase (PI3K) by chemically induced recruitment of a regulatory peptide, reversible inhibition of PI3K using a kinase inhibitor, and monitoring of the PI3K-mediated production of PIP(3) lipids using the pleckstrin homology (PH) domain of Akt. We found that an intuitively planned and established experimental protocol did not yield data from which relevant parameters could be inferred. Starting from a set of poorly defined model parameters derived from the intuitively planned experiment, we calculated concentration-time profiles for both the inducing and the inhibitory compound that would minimize the predicted uncertainty of parameter estimates. Two cycles of optimization and experimentation were sufficient to narrowly confine the model parameters, with the mean variance of estimates dropping more than sixty-fold. Thus, optimal experimental design proved to be a powerful strategy to minimize the number of experiments needed to infer biological parameters from a cell signaling assay.
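The core of designing experiments to minimize predicted parameter uncertainty can be illustrated with a D-optimal design for a two-parameter toy model (the exponential model and the numbers are illustrative; the paper handles full ODE models and iterates design and experiment):

```python
import itertools
import math

def fim_det(times, A, k, sigma=1.0):
    """Determinant of the 2x2 Fisher information matrix for the model
    y(t) = A * exp(-k * t) with parameters (A, k) and i.i.d. noise."""
    f11 = f12 = f22 = 0.0
    for t in times:
        e = math.exp(-k * t)
        sA, sk = e, -A * t * e          # sensitivities dy/dA, dy/dk
        f11 += sA * sA / sigma ** 2
        f12 += sA * sk / sigma ** 2
        f22 += sk * sk / sigma ** 2
    return f11 * f22 - f12 * f12

def d_optimal_times(candidates, n_points, A, k):
    """Pick the n-point design maximizing det(FIM) (D-optimality):
    a larger determinant means a smaller predicted joint confidence
    region for the parameter estimates."""
    return max(itertools.combinations(candidates, n_points),
               key=lambda ts: fim_det(ts, A, k))

design = d_optimal_times([0.5 * i for i in range(1, 11)], 2, A=1.0, k=0.8)
```

In the paper's iterative scheme, the design is computed at the current (poorly known) parameter estimates, the experiment is run, the parameters are re-estimated, and the cycle repeats; two such cycles sufficed to shrink the mean variance of the estimates more than sixty-fold.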
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Analysis of a Two-Dimensional Thermal Cloaking Problem on the Basis of Optimization
NASA Astrophysics Data System (ADS)
Alekseev, G. V.
2018-04-01
For a two-dimensional model of thermal scattering, inverse problems arising in the development of tools for cloaking material bodies on the basis of a mixed thermal cloaking strategy are considered. By applying the optimization approach, these problems are reduced to optimization ones in which the role of controls is played by variable parameters of the medium occupying the cloaking shell and by the heat flux through a boundary segment of the basic domain. The solvability of the direct and optimization problems is proved, and an optimality system is derived. Based on its analysis, sufficient conditions on the input data are established that ensure the uniqueness and stability of optimal solutions.
The topography of the environment alters the optimal search strategy for active particles
Volpe, Giovanni
2017-01-01
In environments with scarce resources, adopting the right search strategy can make the difference between succeeding and failing, even between life and death. At different scales, this applies to molecular encounters in the cell cytoplasm, to animals looking for food or mates in natural landscapes, to rescuers during search and rescue operations in disaster zones, and to genetic computer algorithms exploring parameter spaces. When looking for sparse targets in a homogeneous environment, a combination of ballistic and diffusive steps is considered optimal; in particular, more ballistic Lévy flights with exponent α≤1 are generally believed to optimize the search process. However, most search spaces present complex topographies. What is the best search strategy in these more realistic scenarios? Here, we show that the topography of the environment significantly alters the optimal search strategy toward less ballistic and more Brownian strategies. We consider an active particle performing a blind cruise search for nonregenerating sparse targets in a 2D space with steps drawn from a Lévy distribution with the exponent varying from α=1 to α=2 (Brownian). We show that, when boundaries, barriers, and obstacles are present, the optimal search strategy depends on the topography of the environment, with α assuming intermediate values in the whole range under consideration. We interpret these findings using simple scaling arguments and discuss their robustness to varying searcher’s size. Our results are relevant for search problems at different length scales from animal and human foraging to microswimmers’ taxis to biochemical rates of reaction. PMID:29073055
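A blind cruise search with Lévy-distributed step lengths can be sketched as follows (pure power-law step sampling in free 2D space; the boundaries, barriers, and obstacles studied in the paper are omitted):

```python
import math
import random

def levy_steps(n, alpha, l_min=1.0, seed=1):
    """Draw step lengths from a power-law (Pareto) distribution
    p(l) ~ l^(-1 - alpha) via inverse-transform sampling; alpha -> 2
    approaches Brownian-like searches, alpha <= 1 is more ballistic."""
    rng = random.Random(seed)
    # 1 - random() lies in (0, 1], avoiding a zero raised to a
    # negative power
    return [l_min * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def cruise_search_path(n_steps, alpha, seed=1):
    """2D blind cruise search: uniform random heading plus Levy step."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for l in levy_steps(n_steps, alpha, seed=seed + 1):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x += l * math.cos(phi)
        y += l * math.sin(phi)
        path.append((x, y))
    return path

path = cruise_search_path(100, alpha=1.5)
```

The paper's result is that, once obstacles truncate the long flights, the search efficiency optimum shifts away from small α toward intermediate values in the range 1 to 2.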
Quantum approximate optimization algorithm for MaxCut: A fermionic view
NASA Astrophysics Data System (ADS)
Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.
2018-02-01
Farhi et al. recently proposed a class of quantum algorithms, the quantum approximate optimization algorithm (QAOA), for approximately solving combinatorial optimization problems (E. Farhi et al., arXiv:1411.4028;
Xiao, Chuncai; Hao, Kuangrong; Ding, Yongsheng
2014-12-30
This paper develops a bi-directional prediction model, based on a support vector machine (SVM) and an improved particle swarm optimization (IPSO) algorithm (SVM-IPSO), to predict both the properties of carbon fiber and the production parameters. The predictive accuracy of the SVM is mainly dependent on its parameters, so it is crucial to select the parameters that have an important impact on prediction performance; IPSO is exploited to seek the optimal parameters for the SVM in order to improve its prediction capability. Inspired by a cell communication mechanism, we propose IPSO by incorporating information from the global best solution into the search strategy to improve exploitation, and we employ IPSO to establish the bi-directional prediction model: in the forward direction, the model takes production parameters as input and property indexes as output; in the backward direction, it takes property indexes as input and production parameters as output, in which case the model becomes a scheme design tool for novel styles of carbon fiber. Results on a set of experimental data show that the proposed model outperforms the radial basis function neural network (RNN), the basic particle swarm optimization (PSO) method, and the hybrid genetic algorithm and improved particle swarm optimization (GA-IPSO) method in most of the experiments, demonstrating the effectiveness and advantages of the SVM-IPSO model for this forecasting problem.
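A basic PSO loop of the kind being improved here can be sketched as follows; the quadratic surrogate objective stands in for an SVM cross-validation error over (C, γ), and the paper's cell-communication refinement is not reproduced:

```python
import random

def pso_minimize(objective, bounds, n_particles=12, n_iters=60, seed=3):
    """Standard PSO with inertia, cognitive, and social terms; IPSO
    additionally injects global-best information into the search
    strategy, which is only loosely echoed by the social term here."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# surrogate for an SVM cross-validation error over (C, gamma); minimum at (2, 0.5)
err = lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2
best, best_err = pso_minimize(err, bounds=[(0.1, 10.0), (0.01, 2.0)])
```

In the real setting, `objective` would train and cross-validate the SVM at each candidate parameter vector, which is exactly why swarm size and iteration budget matter.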
NASA Astrophysics Data System (ADS)
Ream, Allen E.; Slattery, John C.; Cizmas, Paul G. A.
2018-04-01
This paper presents a new method for determining the Arrhenius parameters of a reduced chemical mechanism such that it satisfies the second law of thermodynamics. The strategy is to approximate the progress of each reaction in the reduced mechanism from the species production rates of a detailed mechanism by using a linear least squares method. A series of non-linear least squares curve fittings are then carried out to find the optimal Arrhenius parameters for each reaction. At this step, the molar rates of production are written such that they comply with a theorem that provides the sufficient conditions for satisfying the second law of thermodynamics. This methodology was used to modify the Arrhenius parameters for the Westbrook and Dryer two-step mechanism and the Peters and Williams three-step mechanism for methane combustion. Both optimized mechanisms showed good agreement with the detailed mechanism for species mole fractions and production rates of most major species. Both optimized mechanisms showed significant improvement over previous mechanisms in minor species production rate prediction. Both optimized mechanisms produced no violations of the second law of thermodynamics.
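The curve-fitting step can be illustrated in its simplest form: recovering A and Ea from rate data by linear least squares on the logarithmic Arrhenius law ln k = ln A - Ea/(R T). The data below are synthetic; the paper fits modified rate expressions under additional second-law constraints:

```python
import math

R = 8.314  # J/(mol K)

def fit_arrhenius(temps, rates):
    """Regress ln k against 1/T: the slope gives -Ea/R and the
    intercept gives ln A."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope * R   # A, Ea

# synthetic rate constants from A = 1e9 s^-1, Ea = 100 kJ/mol
A_true, Ea_true = 1.0e9, 1.0e5
temps = [800.0, 900.0, 1000.0, 1100.0]
rates = [A_true * math.exp(-Ea_true / (R * t)) for t in temps]
A_fit, Ea_fit = fit_arrhenius(temps, rates)
```

The paper replaces this closed-form fit with a sequence of non-linear least squares fits because the reduced-mechanism rate laws, once constrained to satisfy the second law, are no longer linear in the unknowns.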
NASA Astrophysics Data System (ADS)
Meng, Fei; Tao, Gang; Zhang, Tao; Hu, Yihuai; Geng, Peng
2015-08-01
Shift quality is a crucial factor throughout the automobile industry. To ensure an optimal gear-shifting strategy with the best fuel economy for a stepped automatic transmission, the controller must be designed to meet the challenge of lacking a feedback sensor to measure the relevant variables. This paper focuses on a new kind of automatic transmission that uses a proportional solenoid valve to control the clutch pressure; a control strategy based on the clutch speed difference is designed for shift control during the inertia phase. First, the mechanical system is described and the system dynamic model is built. Second, the control strategy is designed based on the characterization analysis of models derived from the dynamics of the driveline and the electro-hydraulic actuator. The controller uses conventional Proportional-Integral-Derivative (PID) control theory, and a robust two-degree-of-freedom controller is also developed to determine the optimal control parameters and further improve system performance. Finally, the designed control strategy with the different controllers is implemented on a simulation model. The compared results show that the clutch speed difference tracks the desired trajectory well and that shift quality is improved effectively.
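A discrete PID loop tracking a clutch speed-difference trajectory can be sketched as follows (the first-order plant, the gains, and the ramp trajectory are illustrative, not the paper's transmission model):

```python
def pid_track(setpoints, kp, ki, kd, dt=0.01, tau=0.05):
    """Discrete PID driving a first-order plant
    (dw/dt = (u - w) / tau) to follow a speed-difference trajectory."""
    w, integral, prev_err = 0.0, 0.0, 0.0
    outputs = []
    for sp in setpoints:
        err = sp - w
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        w += dt * (u - w) / tau       # plant update
        outputs.append(w)
    return outputs

# ramp the clutch speed difference down from 100 rad/s to 0 over the
# inertia phase, then hold at zero
setpoints = [100.0 * max(0.0, 1.0 - 0.01 * i) for i in range(200)]
response = pid_track(setpoints, kp=2.0, ki=5.0, kd=0.01)
```

A two-degree-of-freedom design, as in the paper, would additionally shape the setpoint path with a feedforward filter so that tracking and disturbance rejection can be tuned separately.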
Long range personalized cancer treatment strategies incorporating evolutionary dynamics.
Yeang, Chen-Hsiang; Beckman, Robert A
2016-10-22
Current cancer precision medicine strategies match therapies to static consensus molecular properties of an individual's cancer, thus determining the next therapeutic maneuver. These strategies typically maintain a constant treatment while the cancer is not worsening. However, cancers feature complicated sub-clonal structure and dynamic evolution. We have recently shown, in a comprehensive simulation of two non-cross resistant therapies across a broad parameter space representing realistic tumors, that substantial improvement in cure rates and median survival can be obtained utilizing dynamic precision medicine strategies. These dynamic strategies explicitly consider intratumoral heterogeneity and evolutionary dynamics, including predicted future drug resistance states, and reevaluate optimal therapy every 45 days. However, the optimization is performed in single 45 day steps ("single-step optimization"). Herein we evaluate analogous strategies that think multiple therapeutic maneuvers ahead, considering potential outcomes at 5 steps ahead ("multi-step optimization") or 40 steps ahead ("adaptive long term optimization (ALTO)") when recommending the optimal therapy in each 45 day block, in simulations involving both 2 and 3 non-cross resistant therapies. We also evaluate an ALTO approach for situations where simultaneous combination therapy is not feasible ("Adaptive long term optimization: serial monotherapy only (ALTO-SMO)"). Simulations utilize populations of 764,000 and 1,700,000 virtual patients for 2 and 3 drug cases, respectively. Each virtual patient represents a unique clinical presentation including sizes of major and minor tumor subclones, growth rates, evolution rates, and drug sensitivities. While multi-step optimization and ALTO provide no significant average survival benefit, cure rates are significantly increased by ALTO. 
Furthermore, in the subset of individual virtual patients demonstrating clinically significant difference in outcome between approaches, by far the majority show an advantage of multi-step or ALTO over single-step optimization. ALTO-SMO delivers cure rates superior or equal to those of single- or multi-step optimization, in 2 and 3 drug cases respectively. In selected virtual patients incurable by dynamic precision medicine using single-step optimization, analogous strategies that "think ahead" can deliver long-term survival and cure without any disadvantage for non-responders. When therapies require dose reduction in combination (due to toxicity), optimal strategies feature complex patterns involving rapidly interleaved pulses of combinations and high dose monotherapy. This article was reviewed by Wendy Cornell, Marek Kimmel, and Andrzej Swierniak. Wendy Cornell and Andrzej Swierniak are external reviewers (not members of the Biology Direct editorial board). Andrzej Swierniak was nominated by Marek Kimmel.
Exploring the quantum speed limit with computer games
NASA Astrophysics Data System (ADS)
Sørensen, Jens Jakob W. H.; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F.
2016-04-01
Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. ‘Gamification’—the application of game elements in a non-game context—is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.
Making do with less: Must sparse data preclude informed harvest strategies for European waterbirds?
Johnson, Fred A.; Alhainen, Mikko; Fox, Anthony D.; Madsen, Jesper; Guillemain, Matthieu
2018-01-01
The demography of many European waterbirds is not well understood because most countries have conducted little monitoring and assessment, and coordination among countries on waterbird management has little precedent. Yet intergovernmental treaties now mandate the use of sustainable, adaptive harvest strategies, whose development is challenged by a paucity of demographic information. In this study, we explore how a combination of allometric relationships, fragmentary monitoring and research information, and expert judgment can be used to estimate the parameters of a theta-logistic population model, which in turn can be used in a Markov decision process to derive optimal harvesting strategies. We show how to account for considerable parametric uncertainty, as well as for different management objectives. We illustrate our methodology with a poorly understood population of taiga bean geese (Anser fabalis fabalis), which is a popular game bird in Fennoscandia. Our results for taiga bean geese suggest that they may have demographic rates similar to other, well-studied species of geese, and our model-based predictions of population size are consistent with the limited monitoring information available. Importantly, we found that by using a Markov decision process, a simple scalar population model may be sufficient to guide harvest management of this species, even if its demography is age-structured. Finally, we demonstrated how two different management objectives can lead to very different optimal harvesting strategies, and how conflicting objectives may be traded off with each other. This approach will have broad application for European waterbirds by providing preliminary estimates of key demographic parameters, by providing insights into the monitoring and research activities needed to corroborate those estimates, and by producing harvest management strategies that are optimal with respect to the managers’ objectives, options, and available demographic information.
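The theta-logistic model and the Markov decision process it feeds can be sketched with a crude value iteration (the rates, grids, reward choice, and discount are illustrative, not the taiga bean goose estimates):

```python
def theta_logistic_step(n, r, K, theta, harvest):
    """One annual transition of the theta-logistic model with harvest:
    N' = N + r*N*(1 - (N/K)**theta) - H, truncated at zero."""
    return max(0.0, n + r * n * (1.0 - (n / K) ** theta) - harvest)

def optimal_harvest(r, K, theta, n_grid, h_grid, n_iters=200, discount=0.98):
    """Value iteration on a discretized population grid; the reward is
    the harvest actually taken. Returns the policy (best harvest per
    population level)."""
    def snap(n):                      # nearest grid state
        return min(range(len(n_grid)), key=lambda i: abs(n_grid[i] - n))
    V = [0.0] * len(n_grid)
    policy = [0.0] * len(n_grid)
    for _ in range(n_iters):
        newV = V[:]
        for i, n in enumerate(n_grid):
            best, best_h = -1.0, 0.0
            for h in h_grid:
                if h > n:             # cannot harvest more than exist
                    continue
                nxt = theta_logistic_step(n, r, K, theta, h)
                val = h + discount * V[snap(nxt)]
                if val > best:
                    best, best_h = val, h
            newV[i], policy[i] = best, best_h
        V = newV
    return policy

grid = [1000.0 * i for i in range(11)]          # 0 .. 10,000 birds
policy = optimal_harvest(r=0.15, K=8000.0, theta=1.0,
                         n_grid=grid, h_grid=[0.0, 500.0, 1000.0, 2000.0])
```

The paper's point is that this scalar machinery, with parametric uncertainty layered on top and alternative objectives compared, can be enough to guide harvest decisions even for an age-structured population.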
Web malware spread modelling and optimal control strategies
NASA Astrophysics Data System (ADS)
Liu, Wanping; Zhong, Shouming
2017-02-01
The growing popularity of the Web has fueled the growth of web threats. Formulating mathematical models for accurate prediction of malicious propagation over networks is of great importance. The aim of this paper is to understand the propagation mechanisms of web malware and the impact of human intervention on the spread of malicious hyperlinks. Considering the characteristics of web malware, a new differential epidemic model, which extends the traditional SIR model by adding a delitescent (latent) compartment, is proposed to describe the spreading behavior of malicious links over networks. The spreading threshold of the model system is calculated, and the dynamics of the model are theoretically analyzed. Moreover, optimal control theory is employed to study malware immunization strategies, aiming to keep the total economic loss of security investment and infection loss as low as possible. The existence and uniqueness of the solutions of the optimality system are confirmed. Finally, numerical simulations show that the spread of malicious links can be controlled effectively with a proper control strategy and suitable parameter choices.
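A minimal sketch of an SIR model extended with a delitescent (latent) compartment, integrated with a forward-Euler step. The rate constants and initial fractions are illustrative assumptions, not the paper's values:

```python
def slir_step(s, l, i, r, beta=0.3, sigma=0.2, gamma=0.1, dt=0.1):
    """One forward-Euler step of S -> L -> I -> R dynamics, where L is the
    delitescent (latent) compartment of malicious links (rates per day)."""
    new_inf = beta * s * i              # susceptible pages acquire bad links
    ds = -new_inf
    dl = new_inf - sigma * l            # latent links become actively malicious
    di = sigma * l - gamma * i          # infectious links get cleaned/removed
    dr = gamma * i
    return s + ds * dt, l + dl * dt, i + di * dt, r + dr * dt

s, l, i, r = 0.99, 0.0, 0.01, 0.0       # fractions of the network
for _ in range(1000):                   # simulate 100 days
    s, l, i, r = slir_step(s, l, i, r)
```

In this simplified chain every latent link eventually becomes infectious, so the spreading threshold reduces to beta*S(0)/gamma > 1, analogous to the classical SIR threshold; the paper's actual threshold may differ.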
Environmental statistics and optimal regulation
NASA Astrophysics Data System (ADS)
Sivak, David; Thomson, Matt
2015-03-01
The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
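The Bayesian decision rule mentioned in (ii) can be illustrated with a two-state toy model: infer whether the nutrient is present from a noisy measurement, then express the enzyme only if the expected benefit exceeds the production cost. All numbers here are illustrative assumptions, not values from the paper:

```python
from math import exp, sqrt, pi

def posterior_nutrient(signal, prior=0.5, mu_on=1.0, mu_off=0.0, noise=0.5):
    """Posterior probability that the nutrient is present, given one
    Gaussian-noise measurement of the environment."""
    def gauss(x, mu):
        return exp(-0.5 * ((x - mu) / noise) ** 2) / (noise * sqrt(2 * pi))
    on = prior * gauss(signal, mu_on)
    off = (1 - prior) * gauss(signal, mu_off)
    return on / (on + off)

def optimal_expression(signal, benefit=2.0, cost=1.0):
    """Express the enzyme iff expected benefit exceeds production cost."""
    return posterior_nutrient(signal) * benefit > cost
```

With these numbers the rule behaves like a soft threshold: clear signals trigger expression, ambiguous ones do not, which is the intermediate-uncertainty regime the abstract describes.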
Closed-loop optimization of chromatography column sizing strategies in biopharmaceutical manufacture
Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S
2014-01-01
BACKGROUND: This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. RESULTS: An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results also demonstrate that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when selecting a sizing strategy to be implemented in a real industrial setting. CONCLUSION: This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry. PMID:25506115
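A toy version of this closed-loop setup, assuming nothing about the authors' actual cost model: an evolutionary algorithm with elitism, a small number of Monte Carlo trials per fitness evaluation, and "intelligent" repair of infeasible sizing strategies (here reduced to simple clamping). The cost surface and all parameters are hypothetical:

```python
import random

def evaluate(strategy, n_mc=5):
    """Noisy fitness: average cost over a few Monte Carlo trials, a
    stand-in for a detailed manufacturing cost model under uncertainty."""
    base = sum((g - 3) ** 2 for g in strategy)      # toy cost surface, optimum all 3s
    return sum(base + random.gauss(0, 0.1) for _ in range(n_mc)) / n_mc

def repair(strategy, lo=1, hi=8):
    """Repair infeasible column sizes by clamping them into range."""
    return [min(max(g, lo), hi) for g in strategy]

def ga(n_genes=4, pop_size=20, gens=50, elite=2):
    pop = [[random.randint(1, 8) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=evaluate)                      # lower cost first
        nxt = pop[:elite]                           # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:10], 2)       # select among the better half
            cut = random.randrange(1, n_genes)
            child = a[:cut] + b[cut:]               # one-point crossover
            if random.random() < 0.3:
                child[random.randrange(n_genes)] += random.choice([-2, 2])
            nxt.append(repair(child))
        pop = nxt
    return min(pop, key=evaluate)
```

The Monte Carlo averaging stands in for evaluating a sizing strategy under uncertain process parameters; the interplay between trial count and noise is what the paper tunes.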
Aerodynamics as a subway design parameter
NASA Technical Reports Server (NTRS)
Kurtz, D. W.
1976-01-01
A parametric sensitivity study has been performed on the system operational energy requirement in order to guide subway design strategy. Aerodynamics can play a dominant or trivial role, depending upon the system characteristics. Optimization of the aerodynamic parameters may not minimize the total operational energy. Isolation of the station box from the tunnel and reduction of the inertial power requirements pay the largest dividends in terms of the operational energy requirement.
Bifurcation analysis of eight coupled degenerate optical parametric oscillators
NASA Astrophysics Data System (ADS)
Ito, Daisuke; Ueta, Tetsushi; Aihara, Kazuyuki
2018-06-01
A degenerate optical parametric oscillator (DOPO) network realized as a coherent Ising machine can be used to solve combinatorial optimization problems. Both theoretical and experimental investigations into the performance of DOPO networks have been presented previously. However, a problem remains: the dynamics of the DOPO network itself can lower the success rate of finding globally optimal solutions to Ising problems. This paper shows that the problem is caused by pitchfork bifurcations arising from the symmetry structure of the coupled DOPOs. Two-parameter bifurcation diagrams of equilibrium points capture this performance deterioration. It is shown that the emergence of non-ground states corresponding to local minima prevents the system from reaching the ground states corresponding to the global minimum. We then describe a parametric strategy for leading the system to the ground state by actively exploiting the bifurcation phenomena. By adjusting the parameters to break particular symmetries, we find parameter sets that allow the coherent Ising machine to obtain only the globally optimal solution.
Optimal sampling strategies for detecting zoonotic disease epidemics.
Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W
2014-06-01
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
A computational approach to animal breeding.
Berger-Wolf, Tanya Y; Moore, Cristopher; Saia, Jared
2007-02-07
We propose a computational model of mating strategies for controlled animal breeding programs. A mating strategy in a controlled breeding program is a heuristic with some optimization criteria as a goal. Thus, it is appropriate to use the computational tools available for analysis of optimization heuristics. In this paper, we propose the first discrete model of the controlled animal breeding problem and analyse heuristics for two possible objectives: (1) breeding for maximum diversity and (2) breeding a target individual. These two goals are representative of conservation biology and agricultural livestock management, respectively. We evaluate several mating strategies and provide upper and lower bounds for the expected number of matings. While the population parameters may vary and can change the actual number of matings for a particular strategy, the order of magnitude of the number of expected matings and the relative competitiveness of the mating heuristics remains the same. Thus, our simple discrete model of the animal breeding problem provides a novel viable and robust approach to designing and comparing breeding strategies in captive populations.
Song, Xianzhi; Peng, Chi; Li, Gensheng; He, Zhenguo; Wang, Haizhu
2016-01-01
Sand production and blockage are common during the drilling and production of horizontal oil and gas wells as a result of formation breakdown. The use of high-pressure rotating jets and annular helical flow is an effective way to enhance horizontal wellbore cleanout. In this paper, we propose the idea of using supercritical CO2 (SC-CO2) as the washing fluid in water-sensitive formations. SC-CO2 has been shown to be effective in preventing formation damage and enhancing production rates when used as a drilling fluid, which justifies its potential in wellbore cleanout. In order to investigate the effectiveness of SC-CO2 helical flow cleanout, we perform a numerical study of the annular flow field of SC-CO2 jets in a horizontal wellbore, which significantly affects sand cleanout efficiency. Based on field data, the geometry and mathematical models were built. A numerical simulation of the annular helical flow field of SC-CO2 jets was then carried out. The influences of several key parameters were investigated, and SC-CO2 jets were compared to conventional water jets. The results show that flow rate, ambient temperature, jet temperature, and nozzle assemblies play the most important roles in the wellbore flow field. As long as the difference between ambient and jet temperatures is kept constant, the wellbore velocity distributions do not change. With increasing lateral nozzle size or decreasing rear/forward nozzle size, the suspending ability of the SC-CO2 flow improves markedly. A back-propagation artificial neural network (BP-ANN) was successfully employed to match the operation parameters and SC-CO2 flow velocities. A comprehensive model was achieved to optimize the operation parameters according to two strategies: a cost-saving strategy and a local optimal strategy. This paper helps in understanding the distinct characteristics of SC-CO2 flow, and it is the first time the BP-ANN has been introduced to analyze the flow field during wellbore cleanout in horizontal wells.
Lochmatter, Samuel; Holliger, Christof
2014-08-01
The transformation of conventional flocculent sludge to aerobic granular sludge (AGS) biologically removing carbon, nitrogen and phosphorus (COD, N, P) is still a main challenge in the startup of AGS sequencing batch reactors (AGS-SBRs). On the one hand, rapid granulation is desired; on the other hand, good biological nutrient removal capacities have to be maintained. So far, several operation parameters have been studied separately, which makes it difficult to compare their impacts. We investigated seven operation parameters in parallel by applying a Plackett-Burman experimental design approach with the aim of proposing an optimized startup strategy. Five of the seven tested parameters had a significant impact on the startup duration. The conditions identified as allowing a rapid startup of AGS-SBRs with good nutrient removal performance were (i) alternation of high and low dissolved oxygen phases during aeration, (ii) a settling strategy avoiding too much biomass washout during the first weeks of reactor operation, (iii) adaptation of the contaminant load in the early stage of the startup to ensure that all soluble COD was consumed before the beginning of the aeration phase, (iv) a temperature of 20 °C, and (v) a neutral pH. Under such conditions, it took less than 30 days to produce granular sludge with high removal performance for COD, N, and P. A control run using this optimized startup strategy again produced AGS with good nutrient removal performance within four weeks, and the system was stable during the additional operation period of more than 50 days. Copyright © 2014 Elsevier Ltd. All rights reserved.
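A Plackett-Burman design for screening seven two-level factors, as used above, needs only eight runs and can be generated from cyclic shifts of a standard generator row. This is the textbook construction, not the authors' specific run sheet:

```python
def plackett_burman_8():
    """8-run Plackett-Burman design for up to 7 two-level factors,
    built by cyclically shifting the standard generator row and
    appending an all-low closing run."""
    gen = [+1, +1, +1, -1, +1, -1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(7)]   # 7 cyclic shifts
    rows.append([-1] * 7)                            # closing all-low run
    return rows

design = plackett_burman_8()
```

Each column is balanced (four high, four low settings) and orthogonal to every other column, which is what allows seven main effects to be screened with only eight experiments.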
DAKOTA Design Analysis Kit for Optimization and Terascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.
2010-02-24
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient- and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
A Region-Based Multi-Scale Approach for Object-Based Image Analysis
NASA Astrophysics Data System (ADS)
Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.
2016-06-01
Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of individual pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although several parameters (scale, shape, compactness and band weights) must be set by the analyst, the scale parameter stands out as the most important one in the segmentation process. Estimating the optimal scale parameter is crucial for increasing classification accuracy, which depends on image resolution, image object size and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest-neighbour classifier was applied in all segmentation experiments, and equal numbers of pixels were randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images, and it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
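A simplified reading of the LV-RoC analysis: compute the percentage rate of change of local variance between consecutive scale steps and take its local maxima as candidate scales. This sketches the idea only; it is not a reimplementation of the ESP-2 tool, and the input series below is synthetic:

```python
def select_scales(local_variance):
    """Return the rate-of-change (RoC, in percent) of a local-variance
    series across scale steps, plus the indices of its local maxima,
    which serve as candidate segmentation scales."""
    roc = [100.0 * (lv - prev) / prev
           for prev, lv in zip(local_variance, local_variance[1:])]
    peaks = [i + 1 for i in range(1, len(roc) - 1)
             if roc[i] > roc[i - 1] and roc[i] > roc[i + 1]]
    return roc, peaks

# Synthetic local-variance values for increasing scale parameters.
roc, peaks = select_scales([10, 12, 13, 16, 16.5, 19, 19.2])
```

In a region-based strategy as described above, this selection would be repeated once per sub-region rather than once for the whole image.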
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2005-01-01
We demonstrate a new framework for analyzing and controlling distributed systems by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory that allows bounded-rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-SAT constraint satisfaction problem and for unconstrained minimization of NK functions.
Enteral nutrition for optimal growth in preterm infants
2016-01-01
Early, aggressive nutrition is an important contributor to long-term neurodevelopmental outcomes. To ensure optimal growth in premature infants, adequate protein intake and an optimal protein/energy ratio should be emphasized rather than overall energy intake alone. Minimal enteral nutrition should be initiated as soon as possible in the first days of life, and feeding advancement should be individualized according to the clinical course of the infant. During hospitalization, enteral nutrition with preterm formula and fortified human milk represents the best feeding practice for facilitating growth. After discharge, the enteral nutrition strategy should be individualized according to the infant's weight at discharge. Infants with suboptimal weight for their postconceptional age at discharge should receive supplementation with human milk fortifiers or nutrient-enriched feeding, and the enteral nutrition strategy should be reviewed and modified continuously to achieve the target growth parameters. PMID:28194211
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.; Pollalis, Yannis A.
2012-12-01
In this paper, optimal environmental policy for the reclamation of land unearthed in lignite mines is defined as a strategic target. The tactics for achieving this target include estimating the optimal time lag between the complete exploitation of each lignite site (a segment of the whole lignite field) and its reclamation. Subsidizing of reclamation has been determined as a function of this time lag, and an implementation is presented for parameter values valid for the Greek economy. We show that the methodology we have developed gives reasonable quantitative results within the norms imposed by legislation. Moreover, the interconnection between strategy and tactics becomes evident, since the former determines the latter by deduction and the latter revises the former by induction over the course of land reclamation.
Jovanovic, Sasa; Savic, Slobodan; Jovicic, Nebojsa; Boskovic, Goran; Djordjevic, Zorica
2016-09-01
Multi-criteria decision making (MCDM) is a relatively new tool for decision makers who deal with numerous and often contradictory factors during their decision making process. This paper presents a procedure for choosing the optimal municipal solid waste (MSW) management system for the area of the city of Kragujevac (Republic of Serbia) based on MCDM methods. Two methods of multiple attribute decision making, SAW (simple additive weighting) and TOPSIS (technique for order preference by similarity to ideal solution), were used to compare the proposed waste management strategies (WMS). Each of the created strategies was simulated using the software package IWM2. Total values for eight chosen parameters were calculated for all the strategies, and the contribution of each of the six waste treatment options was assessed. The SAW analysis was used to obtain the aggregate characteristics of all the waste management treatment strategies, which were ranked accordingly. The TOPSIS method was used to calculate the relative closeness factors to the ideal solution for all the alternatives. The proposed strategies were then ranked in the form of tables and diagrams obtained from both MCDM methods. As shown in this paper, the results were in good agreement, which additionally confirmed and facilitated the choice of the optimal MSW management strategy. © The Author(s) 2016.
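Both rankings can be reproduced in a few lines. The decision matrix and weights below are hypothetical (the paper's eight parameters and IWM2 outputs are not reproduced here), and all criteria are treated as benefit criteria for simplicity:

```python
import numpy as np

def saw(scores, weights):
    """Simple additive weighting on a max-normalized decision matrix
    (all criteria assumed to be benefit criteria)."""
    norm = scores / scores.max(axis=0)
    return norm @ weights

def topsis(scores, weights):
    """Relative closeness to the ideal solution (benefit criteria assumed)."""
    norm = scores / np.sqrt((scores ** 2).sum(axis=0))   # vector normalization
    v = norm * weights
    ideal, anti = v.max(axis=0), v.min(axis=0)
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)

# Three hypothetical strategies scored on four criteria (higher = better).
scores = np.array([[0.7, 0.5, 0.9, 0.6],
                   [0.6, 0.8, 0.7, 0.7],
                   [0.9, 0.6, 0.5, 0.8]])
weights = np.array([0.4, 0.2, 0.2, 0.2])
```

As in the paper, the two methods need not produce identical score vectors, but on well-separated alternatives they typically agree on the top-ranked strategy.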
Parameters of Institutional Change: Chicano Experience in Education.
ERIC Educational Resources Information Center
Santana, Ray; And Others
During the 1960's, the Chicano movement directed considerable attention, energy, and resources toward educational change. The predominant mood was optimism and anticipation of major institutional change; the predominant tactic used was militant confrontation. Countless confrontations occurred and numerous plans and strategies for educational…
Feature selection with harmony search.
Diao, Ren; Shen, Qiang
2012-12-01
Many search strategies have been exploited for the task of feature selection (FS), in an effort to identify more compact and better quality subsets. Such work typically involves the use of greedy hill climbing (HC), or nature-inspired heuristics, in order to discover the optimal solution without going through exhaustive search. In this paper, a novel FS approach based on harmony search (HS) is presented. It is a general approach that can be used in conjunction with many subset evaluation techniques. The simplicity of HS is exploited to reduce the overall complexity of the search process. The proposed approach is able to escape from local solutions and identify multiple solutions owing to the stochastic nature of HS. Additional parameter control schemes are introduced to reduce the effort and impact of parameter configuration. These can be further combined with the iterative refinement strategy, tailored to enforce the discovery of quality subsets. The resulting approach is compared with those that rely on HC, genetic algorithms, and particle swarm optimization, accompanied by in-depth studies of the suggested improvements.
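A minimal harmony-search feature selector, assuming only a generic subset-scoring function (higher is better). The harmony memory size, HMCR, and PAR values are typical defaults, not the paper's settings, and the toy objective at the end is invented for illustration:

```python
import random

def harmony_search_fs(evaluate, n_features, hms=10, hmcr=0.9, par=0.3, iters=200):
    """Harmony search over binary feature masks: each harmony is a subset
    of features; 'evaluate' scores a mask (higher = better)."""
    memory = [[random.random() < 0.5 for _ in range(n_features)] for _ in range(hms)]
    scores = [evaluate(h) for h in memory]
    for _ in range(iters):
        new = []
        for j in range(n_features):
            if random.random() < hmcr:              # memory consideration
                bit = random.choice(memory)[j]
                if random.random() < par:           # pitch adjustment: flip bit
                    bit = not bit
            else:                                   # random consideration
                bit = random.random() < 0.5
            new.append(bit)
        s = evaluate(new)
        worst = min(range(hms), key=lambda k: scores[k])
        if s > scores[worst]:                       # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = max(range(hms), key=lambda k: scores[k])
    return memory[best], scores[best]

# Toy objective: features 0 and 2 are relevant; penalize subset size.
def subset_score(mask):
    return sum(2.0 for i in (0, 2) if mask[i]) - 0.5 * sum(mask)

random.seed(1)
best_mask, best_score = harmony_search_fs(subset_score, n_features=8)
```

The stochastic memory consideration is what lets the search escape local optima and retain multiple good subsets, as the abstract notes.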
Inverse planning in the age of digital LINACs: station parameter optimized radiation therapy (SPORT)
NASA Astrophysics Data System (ADS)
Xing, Lei; Li, Ruijiang
2014-03-01
The last few years have seen a number of technical and clinical advances which give rise to a need for innovations in dose optimization and delivery strategies. Technically, a new generation of digital linacs has become available which offers features such as programmable motion between station parameters and high dose-rate flattening filter free (FFF) beams. Current inverse planning methods are designed for traditional machines and cannot accommodate these features of new-generation linacs without compromising dose conformality and/or delivery efficiency. Furthermore, SBRT is becoming increasingly important, which elevates the need for more efficient delivery and improved dose distributions. Here we will give an overview of our recent work in SPORT designed to harness digital linacs and highlight the essential components of SPORT. We will summarize the pros and cons of traditional beamlet-based optimization (BBO) and direct aperture optimization (DAO) and introduce a new type of algorithm, compressed sensing (CS)-based inverse planning, that is capable of automatically removing redundant segments during optimization and providing a plan with high deliverability in the presence of a large number of station control points (potentially non-coplanar, non-isocentric, and even multi-isocenter). We show that the CS approach takes the interplay between planning and delivery into account and allows us to balance dose optimality and delivery efficiency in a controlled way, providing a viable framework to address various unmet demands of the new generation of linacs. A few specific implementation strategies of SPORT in the forms of fixed-gantry and rotational arc delivery are also presented.
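The CS-based pruning idea can be illustrated with plain iterative soft-thresholding (ISTA) on an L1-regularized least-squares problem: segment weights driven to zero correspond to the "redundant" station parameters removed during optimization. The dose-influence matrix and target below are random synthetic data, not a clinical model:

```python
import numpy as np

def ista(D, target, lam=0.1, iters=500):
    """Minimize ||D @ x - target||**2 + lam * ||x||_1 subject to x >= 0
    by proximal gradient (ISTA); zeros in x mark removable segments."""
    step = 0.5 / np.linalg.norm(D, 2) ** 2          # 1 / Lipschitz constant
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = 2.0 * D.T @ (D @ x - target)
        x = np.maximum(x - step * grad - step * lam, 0.0)  # soft-threshold + nonneg
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(10, 6))                        # synthetic dose-influence matrix
x_true = np.array([1.5, 0.0, 0.0, 2.0, 0.0, 0.0])  # only 2 segments truly needed
target = D @ x_true
x = ista(D, target)
```

The regularization weight lam plays the role described in the abstract: it trades dose fidelity against the number of active station control points, and hence against delivery efficiency.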
Assessing predation risk: optimal behaviour and rules of thumb.
Welton, Nicky J; McNamara, John M; Houston, Alasdair I
2003-12-01
We look at a simple model in which an animal makes behavioural decisions over time in an environment in which all parameters are known to the animal except predation risk. In the model there is a trade-off between gaining information about predation risk and anti-predator behaviour. All predator attacks lead to death for the prey, so that the prey learns about predation risk by virtue of the fact that it is still alive. We show that it is not usually optimal to behave as if the current unbiased estimate of the predation risk is its true value. We consider two different ways to model reproduction; in the first scenario the animal reproduces throughout its life until it dies, and in the second scenario expected reproductive success depends on the level of energy reserves the animal has gained by some point in time. For both of these scenarios we find results on the form of the optimal strategy and give numerical examples which compare optimal behaviour with behaviour under simple rules of thumb. The numerical examples suggest that the value of the optimal strategy over the rules of thumb is greatest when there is little current information about predation risk, learning is not too costly in terms of predation, and it is energetically advantageous to learn about predation. We find that for the model and parameters investigated, a very simple rule of thumb such as 'use the best constant control' performs well.
Zhang, Xin; Yan, Lin-Feng; Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin
2017-01-01
Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of various machine learning methods in differentiating low-grade gliomas (LGGs) from high-grade gliomas (HGGs), as well as WHO grade II, III and IV gliomas, based on multi-parametric MRI images was performed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. In addition, the influences of parameter selection on classification performance were investigated. We found that the support vector machine (SVM) exhibited superior performance to the other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classification accuracy of 0.945 or 0.961 for LGG versus HGG or grade II, III and IV gliomas, respectively, was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classification accuracies. Moreover, the performances of the LibSVM, SMO and IBk classifiers were influenced by key parameters such as kernel type, c, gamma and K. SVM is a promising tool for developing an automated preoperative glioma grading system, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization. PMID:28599282
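The leave-one-out cross-validation strategy used above can be sketched in a few lines. As a minimal illustration only: a toy nearest-centroid classifier stands in for the SVM, and the two-feature "LGG"/"HGG" samples are hypothetical, not data from the study.

```python
import math

def nearest_centroid_fit(X, y):
    # Compute the per-class mean feature vector (centroid).
    centroids = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        centroids[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    # Assign the class whose centroid is closest in Euclidean distance.
    return min(centroids, key=lambda c: math.dist(x, centroids[c]))

def loocv_accuracy(X, y):
    # Leave-one-out: train on n-1 samples, test on the single held-out one.
    correct = 0
    for i in range(len(X)):
        model = nearest_centroid_fit(X[:i] + X[i + 1:], y[:i] + y[i + 1:])
        correct += nearest_centroid_predict(model, X[i]) == y[i]
    return correct / len(X)

# Hypothetical two-attribute feature vectors for eight cases.
X = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8], [1.1, 1.2],
     [3.0, 3.1], [2.8, 3.3], [3.2, 2.9], [3.1, 3.0]]
y = ["LGG"] * 4 + ["HGG"] * 4

acc = loocv_accuracy(X, y)
print(acc)  # well-separated toy data -> 1.0
```

The same n-fold loop applies unchanged when the toy classifier is replaced by any of the 25 classifiers compared in the study.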
NASA Astrophysics Data System (ADS)
Bean, Glenn E.; Witkin, David B.; McLouth, Tait D.; Zaldivar, Rafael J.
2018-02-01
Research on the selective laser melting (SLM) method of laser powder bed fusion additive manufacturing (AM) has shown that the surface and internal quality of AM parts is directly related to machine settings such as laser energy density, scanning strategies, and atmosphere. To optimize laser parameters for improved component quality, the energy density is typically controlled via laser power, scanning rate, and scanning strategy, but it can also be controlled by changing the spot size via a laser focal-plane shift. Present work being conducted by The Aerospace Corporation was initiated after observing inconsistent build quality of parts printed using OEM-installed settings. Initial builds of Inconel 718 witness geometries using OEM laser parameters were evaluated for surface roughness, density, and porosity while varying energy density via laser focus shift. Based on these results, hardware and laser parameter adjustments were conducted in order to improve build quality and consistency. Tensile testing was also conducted to investigate the effect of build-plate location and laser settings on SLM 718. This work has provided insight into the limitations of OEM parameters compared with optimized parameters towards the goal of manufacturing aerospace-grade parts, and has led to the development of a methodology for laser parameter tuning that can be applied to other alloy systems. Additionally, evidence was found that for 718, which derives its strength from post-manufacturing heat treatment, tensile testing may not be sensitive to defects that would reduce component performance. Ongoing research is being conducted towards identifying appropriate testing and analysis methods for screening and quality assurance.
Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel
2017-01-01
We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. SA can be very computationally demanding because it requires re-processing the input dataset several times with different parameters to assess variations in the output. In this work, we introduce strategies to speed up SA via runtime optimizations targeting distributed hybrid systems and the reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. Cooperative execution using the CPUs and the Phi available in each node, with smart task assignment strategies, resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725
Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong
2014-01-01
To build a new iris template, this paper proposes a strategy that fuses different portions of the iris based on a machine learning method for evaluating local iris quality. There are three novelties compared with previous work. First, the normalized segmented iris is divided into multiple tracks, and each track is evaluated individually to analyze its recognition accuracy rate (RAR). Second, six local quality evaluation parameters are adopted to analyze the texture information of each track, and particle swarm optimization (PSO) is employed to obtain the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, the information from all tracks is fused according to the weights of the different tracks. The experimental results, based on subsets of three public and one private iris image databases, demonstrate three contributions of this paper. (1) A partial iris image cannot completely replace the entire iris image in an iris recognition system, in several respects. (2) The proposed quality evaluation algorithm is self-adaptive and can automatically optimize its parameters according to the characteristics of the iris image samples. (3) The proposed feature information fusion strategy can effectively improve the performance of an iris recognition system.
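The role PSO plays above, searching for fusion weights that maximize a quality objective, can be illustrated with a minimal sketch. The two-weight quadratic objective below is a hypothetical stand-in for the paper's recognition-accuracy objective, and all swarm settings are illustrative.

```python
import random

random.seed(42)

def fitness(w):
    # Toy objective standing in for (negated) recognition accuracy;
    # the assumed optimal fusion weights are (0.7, 0.3).
    return (w[0] - 0.7) ** 2 + (w[1] - 0.3) ** 2

DIM, SWARM, ITERS = 2, 20, 100
W, C1, C2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients

pos = [[random.uniform(0, 1) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]          # personal best positions
gbest = min(pbest, key=fitness)      # global best position

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # Standard PSO velocity update: inertia + cognitive + social.
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=fitness)

print(gbest)  # best weights found, near the assumed optimum
```

In the paper's setting the fitness evaluation would run the recognition pipeline with the candidate track weights instead of evaluating a closed-form function.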
Optimization Control of the Color-Coating Production Process for Model Uncertainty
He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong
2016-01-01
Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
Hogiri, Tomoharu; Tamashima, Hiroshi; Nishizawa, Akitoshi; Okamoto, Masahiro
2018-02-01
To optimize monoclonal antibody (mAb) production in Chinese hamster ovary (CHO) cell cultures, culture pH should be temporally controlled with high resolution. In this study, we propose a new pH-dependent dynamic model represented by simultaneous differential equations including a minimum of six system components that depend on the pH value. All kinetic parameters in the dynamic model were estimated using an evolutionary numerical optimization method (a real-coded genetic algorithm) based on experimental time-course data obtained at pH values ranging from 6.6 to 7.2. We determined an optimal pH-shift schedule theoretically, validated it experimentally, and found that mAb production increased by approximately 40% under this schedule. This study suggests that the pH-shift optimization strategy based on a pH-dependent dynamic model is suitable for optimizing the pH-shift schedule of any CHO cell line used in mAb production. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
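The pH-shift idea can be illustrated with a deliberately simplified sketch, not the paper's six-component model: a toy culture in which cells grow fast at pH 7.2 but produce product fast at pH 6.6, with a grid search over a single shift time. All rate constants are invented.

```python
def simulate(shift_day, days=10.0, dt=0.01):
    # Euler integration of a toy two-state model: biomass x grows
    # logistically, product p accumulates proportionally to x.
    x, p = 0.1, 0.0
    t = 0.0
    while t < days:
        ph = 7.2 if t < shift_day else 6.6
        mu = 0.8 if ph > 6.9 else 0.2   # growth rate favors high pH
        qp = 0.1 if ph > 6.9 else 0.5   # productivity favors low pH
        x += mu * x * (1 - x / 5.0) * dt
        p += qp * x * dt
        t += dt
    return p  # final product titer (arbitrary units)

# Grid-search the shift time over 0, 0.5, ..., 10 days.
best_shift = max((d / 2 for d in range(21)), key=simulate)
print(best_shift, simulate(best_shift))
```

The grid search finds an interior optimum: shifting too early forgoes biomass, shifting too late forgoes the high-productivity phase. The paper's approach is analogous but optimizes the schedule against the fitted pH-dependent model.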
Fractal Profit Landscape of the Stock Market
Grönlund, Andreas; Yi, Il Gu; Kim, Beom Jun
2012-01-01
We investigate the structure of the profit landscape obtained from the most basic, fluctuation-based trading strategy applied to daily stock price data. The strategy is parameterized by only two variables, p and q: stocks are sold if the log return is bigger than p and bought if it is less than –q. Repetition of this simple strategy over a long time gives the profit defined in the underlying two-dimensional parameter space of p and q. It is revealed that the local maxima in the profit landscape are spread in the form of a fractal structure. The fractal structure implies that successful strategies are neither localized to any region of the profit landscape nor spaced evenly throughout it, which makes the optimization notoriously hard and hypersensitive to partial or limited information. The concrete implication of this property is demonstrated by showing that a strategy optimized on one stock's past values yields worse profit on future values or on other stocks than a strategy that ignores fluctuations, i.e., a long-term buy-and-hold strategy. PMID:22558079
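The two-parameter rule is simple enough to sketch directly. Here it is applied to a synthetic geometric random walk rather than real daily prices, and the (p, q) grid is arbitrary; the point is only to show how the profit landscape over the parameter space is constructed.

```python
import math
import random

random.seed(7)

# Synthetic daily price series: a geometric random walk with small
# drift (the study used actual daily stock prices).
prices = [100.0]
for _ in range(2000):
    prices.append(prices[-1] * math.exp(random.gauss(0.0005, 0.02)))

def profit(p, q):
    # Strategy from the abstract: sell when the daily log return
    # exceeds p, buy when it falls below -q, otherwise hold.
    cash, shares = 0.0, 1.0   # start fully invested in one share
    for t in range(1, len(prices)):
        r = math.log(prices[t] / prices[t - 1])
        if r > p and shares > 0:       # sell signal
            cash += shares * prices[t]
            shares = 0.0
        elif r < -q and cash > 0:      # buy signal
            shares += cash / prices[t]
            cash = 0.0
    return cash + shares * prices[-1]  # final portfolio value

# Scan the two-dimensional (p, q) parameter space.
grid = [k * 0.005 for k in range(1, 9)]
landscape = {(p, q): profit(p, q) for p in grid for q in grid}
best = max(landscape, key=landscape.get)
buy_and_hold = prices[-1]              # value of holding the share
print(best, landscape[best], buy_and_hold)
```

Repeating this scan on a much finer grid and locating the local maxima of `landscape` is exactly the construction whose fractal structure the paper analyzes.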
Impedance learning for robotic contact tasks using natural actor-critic algorithm.
Kim, Byungchan; Park, Jooyoung; Park, Shinsuk; Kang, Sungchul
2010-04-01
Compared with their robotic counterparts, humans excel at various tasks by using their ability to adaptively modulate arm impedance parameters. This ability allows us to successfully perform contact tasks even in uncertain environments. This paper considers a learning strategy of motor skill for robotic contact tasks based on a human motor control theory and machine learning schemes. Our robot learning method employs impedance control based on the equilibrium point control theory and reinforcement learning to determine the impedance parameters for contact tasks. A recursive least-square filter-based episodic natural actor-critic algorithm is used to find the optimal impedance parameters. The effectiveness of the proposed method was tested through dynamic simulations of various contact tasks. The simulation results demonstrated that the proposed method optimizes the performance of the contact tasks in uncertain conditions of the environment.
Power Allocation Based on Data Classification in Wireless Sensor Networks
Wang, Houlian; Zhou, Gongbo
2017-01-01
Limited node energy in wireless sensor networks is a crucial factor affecting the monitoring of equipment operation and working conditions in coal mines. In addition, due to heterogeneous nodes and different data acquisition rates, the number of packets arriving in a queueing network can differ, which may lead to some queue lengths reaching their maximum value earlier than others. To tackle these two problems, an optimal power allocation strategy based on classified data is proposed in this paper. Arriving data are classified into dissimilar classes depending on the number of arriving packets. The problem is formulated as a Lyapunov drift optimization with the objective of minimizing the weighted sum of average power consumption and average data class. As a result, a suboptimal distributed algorithm that requires no knowledge of system statistics is presented. The simulations, conducted for both the perfect channel state information (CSI) case and the imperfect CSI case, reveal that the utility can be pushed arbitrarily close to optimal by increasing the parameter V, at the cost of a corresponding growth in the average delay, and that the other tunable parameter W and the classification method within the utility function can trade power optimality for an increased average data class. These results show that data in a high class are processed with priority over data in a low class, and that energy consumption can be minimized with this resource allocation strategy. PMID:28498346
NASA Astrophysics Data System (ADS)
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces such as B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines a B-spline's appearance; its complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
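The use of AIC and BIC for choosing model complexity can be illustrated on a one-dimensional analogue: selecting the degree of a polynomial fit (standing in for the number of B-spline control points) for noisy synthetic samples of a cubic curve. The penalty forms below are the standard least-squares versions, AIC = n·ln(RSS/n) + 2k and BIC = n·ln(RSS/n) + k·ln(n), with k estimated coefficients.

```python
import math
import random

random.seed(1)

def polyfit(xs, ys, deg):
    # Least-squares polynomial fit via the normal equations,
    # solved by Gaussian elimination with partial pivoting.
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, n))) / A[i][i]
    return coef

def rss(xs, ys, coef):
    return sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
               for x, y in zip(xs, ys))

# Noisy samples of a cubic stand in for a scanned profile.
xs = [i / 20 for i in range(-20, 21)]
ys = [x ** 3 - x + random.gauss(0, 0.05) for x in xs]
n = len(xs)

scores = {}
for deg in range(6):
    k = deg + 1
    r = rss(xs, ys, polyfit(xs, ys, deg))
    scores[deg] = (n * math.log(r / n) + 2 * k,             # AIC
                   n * math.log(r / n) + k * math.log(n))   # BIC

best_aic = min(scores, key=lambda d: scores[d][0])
best_bic = min(scores, key=lambda d: scores[d][1])
print(best_aic, best_bic)
```

Because BIC's per-parameter penalty exceeds AIC's for n > 7, BIC never selects a more complex model than AIC, which mirrors the tendency of BIC toward sparser B-spline parameterizations.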
Dühring, Sybille; Ewald, Jan; Germerodt, Sebastian; Kaleta, Christoph; Dandekar, Thomas; Schuster, Stefan
2017-07-01
The release of fungal cells following macrophage phagocytosis, called non-lytic expulsion, is reported for several fungal pathogens. On one hand, non-lytic expulsion may benefit the fungus in escaping the microbicidal environment of the phagosome. On the other hand, the macrophage could profit in terms of avoiding its own lysis and being able to undergo proliferation. To analyse the causes of non-lytic expulsion and the relevance of macrophage proliferation in the macrophage-Candida albicans interaction, we employ evolutionary game theory and dynamic optimization in a sequential manner. We establish a game-theoretical model describing the different strategies of the two players after phagocytosis. Depending on the parameter values, we find four different Nash equilibria and determine the influence of the system's state in the host upon the game. As our Nash equilibria are a direct consequence of the model parameterization, we can depict several biological scenarios. A parameter region where the host response is robust against the fungal infection is determined. We further apply dynamic optimization to analyse whether macrophage mitosis is relevant in the host-pathogen interaction of macrophages and C. albicans. For this, we study the population dynamics of the macrophage-C. albicans interaction and the corresponding optimal controls for the macrophages, indicating the best macrophage strategy for switching from proliferation to attacking fungal cells. © 2017 The Author(s).
Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Boyle, Richard D.
2014-01-01
Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify the parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, the development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of the continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.
Fluctuation-driven price dynamics and investment strategies
Li, Yan; Zheng, Bo; Chen, Ting-Ting; Jiang, Xiong-Fei
2017-01-01
Investigation of the driving mechanism of price dynamics in complex financial systems is important and challenging. In this paper, we propose an investment strategy to study how dynamic fluctuations drive price movements. The strategy is successfully applied to different stock markets around the world, and the results indicate that the driving effect of the dynamic fluctuations is rather robust. We investigate how the strategy's performance is influenced by market states and optimize its performance by introducing two parameters. The strategy is also compared with several typical technical trading rules. Our findings not only provide an investment strategy that increases investors' profits, but also offer a useful method for looking into the dynamic properties of complex financial systems. PMID:29240783
Wirth, Stefan; Meindl, Thomas; Treitl, Marcus; Pfeifer, Klaus-Jürgen; Reiser, Maximilian
2006-08-01
The purpose of this study was to analyze different patient positioning strategies for minimizing artifacts of the shoulder girdle in head and neck CT. Standardized CT examinations of three positioning groups were compared (P: patients pushed their shoulders downwards; D: similar optimization by a pulling device; N: no particular positioning optimization). Parameters analyzed were the length of the cervical spine not being superimposed by the shoulder girdle as well as noise in the supraclavicular space. In groups P and D, the portion of the cervical spine not superimposed was significantly larger than in group N (P: 10.4 cm; D: 10.6 cm; N: 8.5 cm). At the supraclavicular space, noise decreased significantly (P: 12.5 HU; D: 12.1 HU; N: 17.7 HU). No significant differences between the two position-optimized groups (P and D) were detected. Optimized shoulder positioning by the patient increases image quality in CT head and neck imaging. The use of a pulling device offers no additional advantages.
Fredriksson, Mattias J; Petersson, Patrik; Axelsson, Bengt-Olof; Bylund, Dan
2011-10-17
A strategy for rapid optimization of liquid chromatography column temperature and gradient shape is presented. The optimization as such is based on well-established retention and peak-width models implemented in software such as DryLab and LC simulator. The novel part of the strategy is a highly automated processing algorithm for the detection and tracking of chromatographic peaks in noisy liquid chromatography-mass spectrometry (LC-MS) data. The strategy is presented and visualized through the optimization of the separation of two degradants present in ultraviolet (UV)-exposed fluocinolone acetonide. It should be stressed, however, that it can be utilized for LC-MS analysis of any sample and application where several runs are conducted on the same sample. In the application presented, 30 components that were difficult or impossible to detect in the UV data could be automatically detected and tracked in the MS data using the proposed strategy. The proportion of correctly tracked components was above 95%. Using the parameters from the reconstructed data sets in the model gave good agreement between predicted and observed retention times at optimal conditions. The area of the smallest tracked component was estimated at 0.08% of the main component, a level relevant for the characterization of impurities in the pharmaceutical industry. Copyright © 2011 Elsevier B.V. All rights reserved.
Engineering Parameters in Bioreactor's Design: A Critical Aspect in Tissue Engineering
Salehi-Nik, Nasim; Amoabediny, Ghassem; Pouran, Behdad; Tabesh, Hadi; Shokrgozar, Mohammad Ali; Haghighipour, Nooshin; Khatibi, Nahid; Anisi, Fatemeh; Mottaghy, Khosrow; Zandieh-Doulabi, Behrouz
2013-01-01
Bioreactors are an important and inevitable part of any tissue engineering (TE) strategy, as they aid the construction of three-dimensional functional tissues. Since the ultimate aim of a bioreactor is to create a biological product, the engineering parameters, for example, internal and external mass transfer, fluid velocity, shear stress, and electrical current distribution, are worth investigating thoroughly. The effects of such engineering parameters on biological cultures have been addressed in only a few preceding studies. Furthermore, it would be highly inefficient to determine the optimal engineering parameters by trial and error. A solution is provided by emerging modeling and computational tools and by analyzing the transport of oxygen, carbon dioxide, nutrients and metabolic waste materials, which can simulate and predict experimental results. Discovering the optimal engineering parameters is crucial not only to reduce the cost and time of experiments, but also to enhance the efficacy and functionality of the tissue construct. This review intends to provide an inclusive package of the engineering parameters together with their calculation procedures, in addition to the modeling techniques used in TE bioreactors. PMID:24000327
A mathematical model on the optimal timing of offspring desertion.
Seno, Hiromi; Endo, Hiromi
2007-06-07
We consider offspring desertion as the optimal strategy for the deserting parent, analyzing a mathematical model for its expected reproductive success. It is shown that the optimality of offspring desertion depends significantly on the offspring's birth timing within the mating season and on the other ecological parameters characterizing the innate nature of the animals considered. In particular, desertion is less likely to occur for offspring born in the later period of the mating season. It is also implied that offspring desertion after a period of partially biparental care would be observable only under a specific condition.
Peng, Jiansheng; Meng, Fanmei; Ai, Yuncan
2013-06-01
An artificial neural network (ANN) and a genetic algorithm (GA) were combined to optimize the fermentation process for enhanced production of marine bacteriocin 1701 in a 5-L stirred tank. Fermentation time, pH value, dissolved oxygen level, temperature and turbidity were used to construct a "5-10-1" ANN topology to identify the nonlinear relationship between fermentation parameters and the antibiotic effects (measured as inhibition diameters) of bacteriocin 1701. The values predicted by the trained ANN model coincided with the observed ones (the coefficient of determination R² was greater than 0.95). As fermentation time was included as one of the ANN input nodes, the fermentation parameters could be optimized in stages through the GA, and an optimal fermentation process control trajectory was created. The production of marine bacteriocin 1701 was significantly improved, by 26%, under the guidance of the fermentation control trajectory optimized using the combined ANN-GA method. Copyright © 2013 Elsevier Ltd. All rights reserved.
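The GA half of such an ANN-GA scheme can be sketched as follows. A toy quadratic surrogate (with an assumed optimum at pH 6.5 and 28 °C) stands in for the trained ANN's predicted inhibition diameter, and all GA settings are illustrative, not the paper's.

```python
import random

random.seed(3)

def fitness(params):
    # Hypothetical surrogate for the ANN-predicted inhibition
    # diameter (mm); assumed maximum at pH 6.5 and 28 degrees C.
    ph, temp = params
    return 30.0 - (ph - 6.5) ** 2 - 0.05 * (temp - 28.0) ** 2

BOUNDS = [(5.0, 8.0), (20.0, 37.0)]  # search ranges for pH, temperature
POP, GENS, ELITE = 30, 60, 10

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    # Real-coded blend crossover: each child gene is drawn
    # uniformly from the interval spanned by its parents.
    return [random.uniform(min(x, y), max(x, y)) for x, y in zip(a, b)]

def mutate(ind, rate=0.2):
    # Gaussian perturbation, clipped back into the bounds.
    return [min(hi, max(lo, g + random.gauss(0, 0.2)))
            if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

pop = [random_individual() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:ELITE]                 # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(POP - ELITE)]

best = max(pop, key=fitness)
print(best, fitness(best))
```

In the staged optimization described above, the same loop would be rerun per fermentation stage with fermentation time fixed at that stage's value in the ANN input.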
Process development for the mass production of Ehrlichia ruminantium.
Marcelino, Isabel; Sousa, Marcos F Q; Veríssimo, Célia; Cunha, António E; Carrondo, Manuel J T; Alves, Paula M
2006-03-06
This work describes the optimization of a cost-effective process for the production of an inactivated bacterial vaccine against heartwater, and the first attempt to produce the causative agent of this disease, the rickettsia Ehrlichia ruminantium (ER), using stirred tanks. In vitro, it is possible to produce ER using cultures of ruminant endothelial cells. Herein, mass production of these cells was optimized under stirring conditions. The effects of inoculum size, microcarrier type, serum concentration at inoculation time and agitation rate upon maximum cell concentration were evaluated. Several strategies for the scale-up of the cell inoculum were also tested. Afterwards, using the optimized parameters for cell growth, ER production in stirred tanks was validated for two ER strains (Gardel and Welgevonden). Critical parameters related to the infection strategy, such as serum concentration at infection time, multiplicity and time of infection, and medium refeed strategy, were analyzed. The results indicate that it is possible to produce ER in stirred tank bioreactors, under serum-free culture conditions, reaching a 6.5-fold increase in ER production yields. The suitability of this process was validated up to a 2-l scale, and a preliminary cost estimation has shown that stirred tanks are the least expensive culture method. Overall, these results are crucial to define a scalable and fully controlled process for the production of a heartwater vaccine and open "new avenues" for the production of vaccines against other ehrlichial species, with emerging impact in human and animal health.
Rand, Miya K; Shimansky, Yury P
2013-03-01
A quantitative model of optimal transport-aperture coordination (TAC) during reach-to-grasp movements was developed in our previous studies. Using that model for data analysis allowed, for the first time, examination of the phase dependence of the precision demand specified by the CNS for neurocomputational information processing during an ongoing movement. It was shown that the CNS utilizes a two-phase strategy for movement control. That strategy consists of reducing the precision demand for neural computations during the initial phase, which decreases the cost of information processing at the expense of a lower extent of control optimality. To successfully grasp the target object, the CNS increases the precision demand during the final phase, resulting in a higher extent of control optimality. In the present study, we generalized the model of optimal TAC to a model of optimal coordination between the X and Y components of point-to-point planar movements (XYC). We investigated whether the CNS uses the two-phase control strategy for controlling those movements, and how the strategy parameters depend on the prescribed movement speed, movement amplitude and the size of the target area. The results indeed revealed a substantial similarity between the CNS's regulation of TAC and XYC. First, the variability of XYC within individual trials was minimal, meaning that execution noise during the movement was insignificant. Second, the inter-trial variability of XYC was considerable during the majority of the movement time, meaning that the precision demand for information processing was lowered, which is characteristic of the initial phase. That variability significantly decreased, indicating a higher extent of control optimality, during the shorter final movement phase. The final phase was the longest (shortest) under the most (least) challenging combination of speed and accuracy requirements, fully consistent with the concept of the two-phase control strategy.
The paper further discusses the relationship between motor variability and XYC variability.
Optimization Under Uncertainty for Wake Steering Strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quick, Julian; Annoni, Jennifer; King, Ryan N
Offsetting turbines' yaw orientations from the incoming wind is a powerful tool that may be leveraged to reduce undesirable wake effects on downstream turbines. First, we examine a simple two-turbine case to gain intuition as to how inflow direction uncertainty affects the optimal solution. The turbines are modeled with unidirectional inflow such that one turbine directly wakes the other, using ten rotor diameter spacing. We perform optimization under uncertainty (OUU) via a parameter sweep of the front turbine. The OUU solution generally prefers less steering. We then perform this optimization for a 60-turbine wind farm with unidirectional inflow, varying the degree of inflow uncertainty and approaching the OUU problem by nesting a polynomial chaos expansion uncertainty quantification routine within an outer optimization. We examined how different levels of uncertainty in the inflow direction affect the ratio of the expected values of the deterministic and OUU solutions for steering strategies in the large wind farm, assuming the directional uncertainty used to reach the OUU solution (this ratio is defined as the value of the stochastic solution, or VSS).
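The two-turbine sweep can be sketched with a toy wake model. Every constant below (deficit amplitude, wake width, deflection gain, 5° directional spread) is an illustrative assumption; only the structure follows the abstract: expected power via quadrature over the uncertain inflow direction, and the VSS as the gap between the stochastic and deterministic optima evaluated under uncertainty.

```python
import numpy as np

# Toy stand-in for the wind-farm model: one turbine directly wakes another at
# 10 rotor diameters. All constants are illustrative, not values from the study.
SPACING = 10.0            # downstream spacing [rotor diameters]
A_DEF = 0.4               # peak velocity-deficit fraction
W_WAKE = 1.0              # Gaussian wake half-width [diameters]
K_DEFL = 6.0              # wake deflection per radian of yaw [diameters]

def farm_power(yaw, theta):
    """Normalized two-turbine power for a yaw angle and inflow-direction error (rad)."""
    front = np.cos(yaw) ** 3                        # yawed-rotor power loss
    miss = K_DEFL * yaw - SPACING * np.tan(theta)   # wake center vs. rotor offset
    deficit = A_DEF * np.exp(-miss ** 2 / (2 * W_WAKE ** 2))
    back = (1.0 - deficit) ** 3
    return front + back

# quadrature over inflow error theta ~ N(0, sigma), probabilists' Gauss-Hermite
sigma = np.deg2rad(5.0)
nodes, weights = np.polynomial.hermite_e.hermegauss(15)
thetas = sigma * nodes
wts = weights / weights.sum()

def expected_power(yaw):
    return float(np.dot(wts, farm_power(yaw, thetas)))

yaws = np.deg2rad(np.linspace(0.0, 35.0, 351))
det_powers = np.array([farm_power(y, 0.0) for y in yaws])
ouu_powers = np.array([expected_power(y) for y in yaws])

yaw_det = yaws[det_powers.argmax()]   # optimum assuming perfect inflow knowledge
yaw_ouu = yaws[ouu_powers.argmax()]   # optimum of the expected power
vss = expected_power(yaw_ouu) - expected_power(yaw_det)   # value of stochastic solution
```

By construction the VSS is non-negative: the OUU yaw can only match or beat the deterministic yaw once the expectation over inflow error is taken.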
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan
2015-03-01
With the formation of competitive electricity markets around the world, the optimization of bidding strategies has become one of the main topics in studies related to market design. Market design is challenged by multiple objectives that need to be satisfied. The solution of such multi-objective problems is often sought over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibrium in terms of minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms, namely the genetic algorithm (GA), simulated annealing (SA) and a hybrid simulated annealing genetic algorithm (HSAGA), to simulate how generators bid in the spot electricity market with a view to profit maximization given the other generators' strategies, and compares their results. Since GA and SA are both generic search methods, their hybrid HSAGA is a generic search method as well. The model, based on actual data, is applied to a peak hour of Tehran's wholesale spot market in 2012. The simulation results show that GA outperforms SA and HSAGA in computing time, number of function evaluations and computing stability, and that the Nash equilibria calculated by GA vary less from one another than those of the other algorithms.
A reduced successive quadratic programming strategy for errors-in-variables estimation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.
Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors-in-variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and the degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.
Task-driven imaging in cone-beam computed tomography.
Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H
Conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and a specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefit for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in the views contributing the most to the signal power associated with the imaging task. For example, detectability in a line pair detection task was improved at least threefold compared to conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading at the spatial frequencies of interest.
This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond that achievable with conventional imaging approaches.
A new Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin
2017-04-01
Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework, as they are typically written in different programming languages. Therefore, further progress and future applications of these methods and codes are hampered, while reproducibility and validation of results have become essentially impossible.
In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimation, we undertook the effort of producing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
Constraint Optimization Problem For The Cutting Of A Cobalt Chrome Refractory Material
NASA Astrophysics Data System (ADS)
Lebaal, Nadhir; Schlegel, Daniel; Folea, Milena
2011-05-01
This paper presents a complete approach to solving a given problem, from experimentation to the optimization of different cutting parameters. In response to an industrial problem of slotting FSX 414, a cobalt-based refractory material, we implemented a design of experiments to determine the most influential parameters on tool life, surface roughness and cutting forces. After these trials, an optimization approach was implemented to find the lowest manufacturing cost while respecting the roughness constraints and cutting force limitation constraints. The optimization approach is based on the Response Surface Method (RSM) using the Sequential Quadratic Programming (SQP) algorithm for a constrained problem. To avoid a local optimum and to obtain an accurate solution at low cost, an efficient strategy, which improves the RSM accuracy in the vicinity of the global optimum, is presented. With these models and these trials, we could apply and compare our optimization methods in order to achieve the lowest cost for the best quality, i.e. a satisfactory surface roughness and limited cutting forces.
Sin, Wai Jack; Nai, Mui Ling Sharon; Wei, Jun
2017-01-01
As one of the powder bed fusion additive manufacturing technologies, electron beam melting (EBM) is gaining more and more attention due to its near-net-shape production capacity with low residual stress and good mechanical properties. These characteristics also allow EBM-built parts to be used as produced, without post-processing. However, the as-built rough surface has a detrimental influence on the mechanical properties of metallic alloys. Understanding the effects of processing parameters on the part's surface roughness therefore becomes critical. This paper focuses on varying the processing parameters of two types of contouring scanning strategies in EBM, namely multispot and non-multispot. The results suggest that beam current and speed function are the most significant processing parameters for the non-multispot contouring scanning strategy, while for the multispot strategy, the number of spots, spot time, and spot overlap have greater effects than focus offset and beam current. Improved surface roughness was obtained with both contouring scanning strategies. Furthermore, under the optimized conditions, the non-multispot strategy gives a lower surface roughness value but poorer geometrical accuracy than its multispot counterpart. These findings could serve as a guideline for selecting the contouring type for specific industrial parts built using EBM.
Postmethod Pedagogy and Its Influence on EFL Teaching Strategies
ERIC Educational Resources Information Center
Chen, Mingyao
2014-01-01
Postmethod pedagogy was first put forward by Kumaravadivelu in 1994. It emerged in response to the demand for an optimal way of teaching English free from method-based restrictions. Kumaravadivelu views postmethod pedagogy as a three-dimensional system with three pedagogic parameters: particularity, practicality, and possibility; and he…
Sampling is the act of selecting items from a specified population in order to estimate the parameters of that population (e.g., selecting soil samples to characterize the properties at an environmental site). Sampling occurs at various levels and times throughout an environmenta...
Bunch Splitting Simulations for the JLEIC Ion Collider Ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Satogata, Todd J.; Gamage, Randika
2016-05-01
We describe the bunch splitting strategies for the proposed JLEIC ion collider ring at Jefferson Lab. This complex requires an unprecedented 9:6832 bunch splitting, performed in several stages. We outline the problem and current results, optimized with ESME including general parameterization of 1:2 bunch splitting for JLEIC parameters.
Assessment of parameter regionalization methods for modeling flash floods in China
NASA Astrophysics Data System (ADS)
Ragettli, Silvan; Zhou, Jian; Wang, Haijing
2017-04-01
Rainstorm flash floods are a common and serious phenomenon during the summer months in many hilly and mountainous regions of China. For this study, we develop a modeling strategy for simulating flood events in small river basins of four Chinese provinces (Shanxi, Henan, Beijing, Fujian). The presented research is part of preliminary investigations for the development of a national operational model for predicting and forecasting hydrological extremes in basins of size 10-2000 km2, most of which are ungauged or poorly gauged. The project is supported by the China Institute of Water Resources and Hydropower Research within the framework of the national initiative for flood prediction and early warning systems for mountainous regions in China (research project SHZH-IWHR-73). We use the USGS Precipitation-Runoff Modeling System (PRMS) as implemented in the Java modeling framework Object Modeling System (OMS). PRMS can operate at both daily and storm timescales, switching between the two using a precipitation threshold. This functionality allows the model to perform continuous simulations over several years and to switch to storm mode to simulate storm response in greater detail. The model was set up for fifteen watersheds for which hourly precipitation and runoff data were available. First, automatic calibration based on the Shuffled Complex Evolution method was applied to different hydrological response unit (HRU) configurations. The Nash-Sutcliffe efficiency (NSE) was used as the assessment criterion, with only runoff data from storm events considered. HRU configurations reflect the drainage-basin characteristics and depend on assumptions regarding drainage density and minimum HRU size. We then assessed the sensitivity of optimal parameters to different HRU configurations. Finally, the transferability to other watersheds of optimal model parameters that were not sensitive to HRU configurations was evaluated.
Model calibration for the 15 catchments resulted in good model performance (NSE > 0.5) in 10 catchments and medium performance (NSE > 0.2) in 3. Optimal model parameters proved to be relatively insensitive to different HRU configurations. This suggests that dominant controls on hydrologic parameter transfer can potentially be identified from catchment attributes describing meteorological, geological or landscape characteristics. Parameter regionalization based on a principal component analysis (PCA) nearest-neighbor search (using all available catchment attributes) resulted in a 54% success rate in transferring optimal parameter sets while still yielding acceptable model performance. Data from more catchments are required to further increase the parameter transferability success rate or to develop regionalization strategies for individual parameters.
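The PCA nearest-neighbor transfer described above can be sketched as follows. The attribute table and the calibrated parameter sets are synthetic stand-ins for the study's 15 catchments; only the mechanism (standardize attributes, project onto principal components, transfer the nearest donor's parameters) is illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical attribute table: rows are 15 gauged donor catchments, columns
# are catchment descriptors (e.g. mean precipitation, drainage density, slope).
# Both the attributes and the calibrated parameter sets are synthetic here.
attrs = rng.normal(size=(15, 6))
donor_params = rng.uniform(0.1, 1.0, size=(15, 4))   # calibrated parameter sets

def fit_pca(X, k=3):
    """Standardize attributes and project onto the first k principal components."""
    mu, sd = X.mean(0), X.std(0)
    Xc = (X - mu) / sd
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, mu, sd, Vt[:k]

scores, mu, sd, comps = fit_pca(attrs)

def regionalize(target_attrs):
    """Transfer the parameter set of the nearest donor catchment in PC space."""
    z = ((target_attrs - mu) / sd) @ comps.T
    nearest = int(np.argmin(np.linalg.norm(scores - z, axis=1)))
    return nearest, donor_params[nearest]

# an "ungauged" catchment whose attributes nearly match donor 7
idx, transferred = regionalize(attrs[7] + 0.01)
```

The 54% success rate reported above would correspond to the fraction of such transfers that still yield acceptable NSE when the donor's parameters are run in the target basin.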
Robust H∞ control of active vehicle suspension under non-stationary running
NASA Astrophysics Data System (ADS)
Guo, Li-Xin; Zhang, Li-Ping
2012-12-01
Due to the complexity of the controlled objects, the selection of control strategies and algorithms is an important task in vehicle control system design. Moreover, the control of automobile active suspensions has become an important research problem due to the constraints and parameter uncertainty of the mathematical models. In this study, after establishing a non-stationary road surface excitation model, the active suspension control problem under non-stationary running conditions was studied using robust H∞ control and linear matrix inequality optimization. The dynamic equation of a two-degree-of-freedom quarter car model with parameter uncertainty was derived. An H∞ state feedback control strategy with time-domain hard constraints was proposed and used to design the active suspension control system of the quarter car model. Time-domain analysis and parameter robustness analysis were carried out to evaluate the stability of the proposed controller. Simulation results show that the proposed control strategy provides high system stability under non-stationary running conditions and parameter uncertainty (including suspension mass, suspension stiffness and tire stiffness). The proposed control strategy achieves a promising improvement in ride comfort and satisfies the requirements on dynamic suspension deflection, dynamic tire loads and required control forces within the given constraints, even under non-stationary running conditions.
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Sunarsih; Kartono
2018-01-01
In this paper, a mathematical model in quadratic programming with fuzzy parameters is proposed to determine the optimal strategy for an integrated inventory control and supplier selection problem with fuzzy demand. To solve the corresponding optimization problem, we use expected-value-based fuzzy programming. Numerical examples are performed to evaluate the model. From the results, the optimal amount of each product to be purchased from each supplier in each time period and the optimal amount of each product to be stored in the inventory in each time period were determined with minimum total cost, while the inventory level remained sufficiently close to the reference level.
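A minimal sketch of the expected-value approach: a triangular fuzzy demand (a, m, b) is first defuzzified with the credibility-based expected value (a + 2m + b)/4, and the resulting crisp quadratic program is solved for a toy single-product, two-supplier instance. All cost coefficients and demand values are illustrative assumptions, not data from the paper.

```python
# Expected-value defuzzification of a triangular fuzzy demand (a, m, b):
# the credibility-based expected value is (a + 2m + b) / 4.
def expected_demand(a, m, b):
    return (a + 2 * m + b) / 4.0

# Toy single-product, two-supplier instance of the resulting crisp QP:
#   min  c1*x1 + q1*x1^2 + c2*x2 + q2*x2^2
#   s.t. x1 + x2 = D,  x1, x2 >= 0
# Solved by equating marginal costs (KKT stationarity), then clipping.
def split_order(D, c1, q1, c2, q2):
    x1 = (c2 - c1 + 2 * q2 * D) / (2 * (q1 + q2))
    x1 = min(max(x1, 0.0), D)
    return x1, D - x1

D = expected_demand(80, 100, 128)                          # crisp demand: 102.0
x1, x2 = split_order(D, c1=2.0, q1=0.05, c2=3.0, q2=0.02)  # order split
```

At the interior optimum the marginal costs of the two suppliers coincide, which is the one-constraint special case of the full multi-product, multi-period QP described in the abstract.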
Bouvier, Isabelle; Jusforgues-Saklani, Hélène; Lim, Annick; Lemaître, Fabrice; Lemercier, Brigitte; Auriau, Charlotte; Nicola, Marie-Anne; Leroy, Sandrine; Law, Helen K.; Bandeira, Antonio; Moon, James J.; Bousso, Philippe; Albert, Matthew L.
2011-01-01
Delivery of cell-associated antigen represents an important strategy for vaccination. While many experimental models have been developed in order to define the critical parameters for efficient cross-priming, few have utilized quantitative methods that permit the study of the endogenous repertoire. Comparing different strategies of immunization, we report that local delivery of cell-associated antigen results in delayed T cell cross-priming due to the increased time required for antigen capture and presentation. In comparison, delivery of disseminated antigen resulted in rapid T cell priming. Surprisingly, local injection of cell-associated antigen, while slower, resulted in the differentiation of a more robust, polyfunctional, effector response. We also evaluated the combination of cell-associated antigen with poly I:C delivery and observed an immunization route-specific effect regarding the optimal timing of innate immune stimulation. These studies highlight the importance of considering the timing and persistence of antigen presentation, and suggest that intradermal injection with delayed adjuvant delivery is the optimal strategy for achieving CD8+ T cell cross-priming.
Relativistic and noise effects on multiplayer Prisoners' dilemma with entangling initial states
NASA Astrophysics Data System (ADS)
Goudarzi, H.; Rashidi, S. S.
2017-11-01
The three-player Prisoners' dilemma (Alice, Bob and Colin) is studied in the presence of a single collective environment effect as a noise source. The environmental effect is coupled to the final states by a particular form of the Kraus operators K_0 and K_1 through an amplitude damping channel. We introduce the decoherence parameter 0≤p≤1 into the corresponding noise matrices in order to control the rate of environmental influence on each player's payoff. We also consider the Unruh effect on the payoff of the player located in a noninertial frame. We suppose that two players (Bob and Colin) are in Rindler region I of Minkowski space-time, and move with the same uniform acceleration (r_b=r_c) and frequency mode. The game begins with the classical strategies cooperation (C) and defection (D) accessible to each player. Furthermore, the players are allowed to access the quantum strategic space (Q and M). Quantum entanglement is coupled to the initial classical states by the parameter γ ∈ [0, π/2]. Using entangled initial states obtained by applying a unitary entangling gate Ĵ, the quantum game (a competition between prisoners, as a three-qubit system) is started by choosing strategies from the classical or quantum strategic space. The strategy chosen by each player can lead to profiles that constitute a Nash equilibrium or are Pareto optimal. It is shown that in the presence of the noise effect, choosing the quantum strategy Q results in a winning payoff against the classical strategy D and, for example, the strategy profile (Q, D, C) is Pareto optimal. We find that the unfair miracle move of Eisert from the quantum strategic space is an effective strategy for accelerated players in the decoherence mode (p=1) of the game.
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, and includes both the regularization strength, which controls overall smoothness, and directional weights, which permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d') of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP under a minimum variance criterion yielded worse task-based performance than an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to exhibit the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views.
Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with a maximum improvement in d' of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction, and that strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
Parameter estimation for chaotic systems using improved bird swarm algorithm
NASA Astrophysics Data System (ADS)
Xu, Chuangbiao; Yang, Renhuan
2017-12-01
Parameter estimation of chaotic systems is an important problem in nonlinear science that has aroused increasing interest in many research fields; it can essentially be reduced to a multidimensional optimization problem. In this paper, an improved boundary bird swarm algorithm (IBBSA) is used to estimate the parameters of chaotic systems. This algorithm combines the good global convergence and robustness of the bird swarm algorithm with the exploitation capability of an improved boundary learning strategy. Experiments are conducted on the Lorenz system and a coupled motor system. Numerical simulation results reveal the effectiveness and desirable performance of IBBSA for parameter estimation of chaotic systems.
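The reduction to a multidimensional optimization problem can be sketched as below: simulate the Lorenz system with candidate parameters, score the mismatch against observed states, and minimize. A plain (1+1) stochastic hill-climber stands in for the bird swarm algorithm, whose update rules the abstract does not detail; the bounds, step schedule, and horizon are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def lorenz_rhs(s, p):
    sig, rho, beta = p
    x, y, z = s
    return np.array([sig * (y - x), x * (rho - z) - y, x * y - beta * z])

def simulate(p, dt=0.01, n=200):
    """Fixed-step RK4 integration of the Lorenz system from a fixed start."""
    s = np.array([1.0, 1.0, 1.0])
    out = np.empty((n, 3))
    for i in range(n):
        k1 = lorenz_rhs(s, p)
        k2 = lorenz_rhs(s + 0.5 * dt * k1, p)
        k3 = lorenz_rhs(s + 0.5 * dt * k2, p)
        k4 = lorenz_rhs(s + dt * k3, p)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = s
    return out

TRUE = np.array([10.0, 28.0, 8.0 / 3.0])   # sigma, rho, beta
obs = simulate(TRUE)                        # synthetic noise-free observations

def objective(p):
    """Mean squared error between candidate and observed trajectories."""
    return float(np.mean((simulate(p) - obs) ** 2))

# A plain (1+1) stochastic hill-climber stands in for the swarm search.
LO = np.array([5.0, 20.0, 1.0])
HI = np.array([15.0, 35.0, 5.0])
best = rng.uniform(LO, HI)
best_f = init_f = objective(best)
step = (HI - LO) / 4.0
for it in range(600):
    cand = np.clip(best + step * rng.normal(size=3), LO, HI)
    f = objective(cand)
    if f < best_f:
        best, best_f = cand, f
    if (it + 1) % 100 == 0:
        step = step * 0.5                   # anneal the search radius
```

The short integration horizon keeps the objective smooth enough to search; over long horizons, chaotic divergence makes the misfit surface highly irregular, which is one reason population-based methods such as the bird swarm algorithm are attractive here.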
NASA Astrophysics Data System (ADS)
Feng, Haike; Zhang, Wei; Zhang, Jie; Chen, Xiaofei
2017-05-01
The perfectly matched layer (PML) is an efficient absorbing technique for numerical wave simulation. The complex frequency-shifted PML (CFS-PML) introduces two additional parameters in the stretching function to make the absorption frequency dependent. This can help to suppress converted evanescent waves from near-grazing incident waves, but does not efficiently absorb low-frequency waves below the cut-off frequency. To absorb both the evanescent waves and the low-frequency waves, the double-pole CFS-PML, having two poles in the coordinate stretching function, was developed in computational electromagnetism. Several studies have investigated the performance of the double-pole CFS-PML for seismic wave simulations in the case of a narrowband seismic wavelet and did not find significant differences compared to the CFS-PML. Another difficulty in applying the double-pole CFS-PML to real problems is that a practical strategy for setting optimal parameter values has not been established. In this work, we study the performance of the double-pole CFS-PML for broad-band seismic wave simulation. We find that when the maximum-to-minimum frequency ratio is larger than 16, the CFS-PML will either fail to suppress the converted evanescent waves for grazing incident waves, or produce visible low-frequency reflections, depending on the value of α. In contrast, the double-pole CFS-PML can simultaneously suppress the converted evanescent waves and avoid low-frequency reflections with proper parameter values. We analyse the different roles of the double-pole CFS-PML parameters and propose optimal selections of these parameters. Numerical tests show that the double-pole CFS-PML with the optimal parameters generates satisfactory results for broad-band seismic wave simulations.
Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.
Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir
2018-04-01
In recent years, many efforts have been made to devise optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, the therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model to empirical data, the parameters of the therapy function are estimated. Previously reported studies, however, have not presented an algorithm to determine the optimal parameters of the therapy function. In this study, a logarithmic therapy function is introduced into the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of the tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer is used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and the PDF of the tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. Numerical results are finally given to demonstrate the effectiveness of the proposed method.
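A minimal Euler-Maruyama sketch of the kind of model described, assuming an illustrative logarithmic therapy term u(x) = θ·log(1 + x) in the drift and made-up parameter values (not those identified in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, a=1.0, b=0.3, sigma=0.1, x0=0.5, dt=1e-3, n=5000):
    """Euler-Maruyama ensemble for a stochastic Gompertz tumour model
    with an assumed logarithmic therapy term in the drift."""
    x = np.full(200, x0)                       # ensemble of trajectories
    for _ in range(n):
        drift = x * (a - b * np.log(x)) - theta * np.log(1.0 + x)
        noise = sigma * x * np.sqrt(dt) * rng.standard_normal(x.size)
        x = np.maximum(x + drift * dt + noise, 1e-6)  # keep population positive
    return x

untreated = simulate(theta=0.0)
treated = simulate(theta=2.0)
# therapy shifts the whole population PDF toward smaller tumour sizes
print(untreated.mean(), treated.mean())
```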
Application of an Evolution Strategy in Planetary Ephemeris Optimization
NASA Astrophysics Data System (ADS)
Mai, E.
2016-12-01
Classical planetary ephemeris construction comprises three major steps, which are performed iteratively: simultaneous numerical integration of the coupled equations of motion of a multi-body system (propagator step), reduction of thousands of observations (reduction step), and optimization of various selected model parameters (adjustment step). This traditional approach is challenged by ongoing refinements in force modeling, e.g. the inclusion of many more significant minor bodies, and by an ever-growing number of planetary observations, e.g. the vast amount of spacecraft tracking data. To master the high computational burden and to circumvent the need for inversion of huge normal-equation matrices, we propose an alternative ephemeris construction method. The main idea is to solve the overall optimization problem by a straightforward direct evaluation of the whole set of mathematical formulas involved, rather than to solve it as an inverse problem with all its tacit mathematical assumptions and numerical difficulties. We replace the usual gradient search by a stochastic search, namely an evolution strategy, which is also well suited to exploiting parallel computing capabilities. Furthermore, this new approach enables multi-criteria optimization and time-varying optima. This will become important in the future, once ephemeris construction is just one part of even larger optimization problems, e.g. the combined and consistent determination of the physical state (orbit, size, shape, rotation, gravity,…) of celestial bodies (planets, satellites, asteroids, or comets), and if one seeks near-real-time solutions. Here we outline the general idea and discuss first results. As an example, we present a simultaneous optimization of highly correlated asteroidal ring model parameters (total mass and heliocentric radius), based on simulations.
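A bare-bones (μ+λ) evolution strategy of the kind proposed for the adjustment step might look as follows; the two unknowns stand in for the asteroidal-ring parameters (total mass and heliocentric radius, in arbitrary units), and the quadratic misfit is a toy substitute for real observation residuals:

```python
import numpy as np

rng = np.random.default_rng(1)
true = np.array([1.0, 2.8])          # hypothetical "true" ring parameters

def misfit(p):
    """Toy observation-residual surface (a quadratic bowl)."""
    return float(np.sum((p - true) ** 2))

mu, lam, step = 5, 20, 0.5
pop = rng.normal(0.0, 2.0, size=(mu, 2))       # initial parent population
for gen in range(60):
    parents = pop[rng.integers(mu, size=lam)]  # pick parents with replacement
    children = parents + rng.normal(0.0, step, (lam, 2))
    both = np.vstack([pop, children])          # elitist (mu + lambda) pool
    pop = both[np.argsort([misfit(p) for p in both])[:mu]]
    step *= 0.93                               # simple deterministic annealing
best = pop[0]
```

Each generation only requires forward evaluations of the misfit, which is what makes the approach embarrassingly parallel.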
NASA Astrophysics Data System (ADS)
Shamshiri, Redmond Ramin; Jones, James W.; Thorp, Kelly R.; Ahmad, Desa; Man, Hasfalina Che; Taheri, Sima
2018-04-01
Greenhouse technology is a flexible solution for sustainable year-round cultivation of tomato (Lycopersicon esculentum Mill), particularly in regions with adverse climate conditions or limited land and resources. Accurate knowledge of plant requirements at different growth stages, and under various light conditions, can contribute to the design of adaptive control strategies for more cost-effective and competitive production. In this context, different scientific publications have recommended different values of microclimate parameters at different tomato growth stages. This paper provides a detailed summary of optimal, marginal and failure air and root-zone temperatures, relative humidity and vapour pressure deficit for successful greenhouse cultivation of tomato. Graphical representations of the membership function model used to define the optimality degrees of these three parameters are included, with a view to determining how close the greenhouse microclimate is to the optimal condition. Several production constraints are also discussed to highlight the short- and long-term effects of adverse microclimate conditions on the quality and yield of tomato, which are associated with interactions between suboptimal parameters, the greenhouse environment and growth responses.
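The membership-function idea can be sketched as a trapezoidal optimality degree: 1 inside the optimal band, falling linearly to 0 at the failure limits. The temperature bounds below are illustrative placeholders, not the review's recommended values:

```python
def optimality(x, fail_lo, opt_lo, opt_hi, fail_hi):
    """Trapezoidal membership: degree to which x is 'optimal'."""
    if x <= fail_lo or x >= fail_hi:
        return 0.0                                # outside the failure limits
    if opt_lo <= x <= opt_hi:
        return 1.0                                # inside the optimal band
    if x < opt_lo:                                # marginal, rising edge
        return (x - fail_lo) / (opt_lo - fail_lo)
    return (fail_hi - x) / (fail_hi - opt_hi)     # marginal, falling edge

# illustrative air-temperature bands (degrees C): failure 10/35, optimal 19-26
degrees = [optimality(t, 10, 19, 26, 35) for t in (5, 15, 22, 30, 40)]
```

A controller can then score how close the current microclimate is to optimal by combining such degrees across temperature, humidity, and vapour pressure deficit.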
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distributions. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose coherence-optimal sampling: a Markov Chain Monte Carlo sampling which directly uses the basis functions under consideration to achieve statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
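The ℓ1-minimization step can be sketched as a small basis-pursuit linear program over a Legendre basis with its natural uniform sampling on [-1, 1]; the problem sizes and sparsity below are illustrative, not the paper's high-dimensional test cases:

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, m = 20, 15                          # basis size, number of samples (m < N)
c_true = np.zeros(N)
c_true[[2, 7]] = [1.5, -0.8]           # a 2-sparse "true" PC expansion

x = rng.uniform(-1.0, 1.0, m)          # natural sampling for Legendre
A = legendre.legvander(x, N - 1)       # m x N measurement matrix
b = A @ c_true

# basis pursuit:  min ||c||_1  s.t.  A c = b,
# recast as an LP with c = u - v, u >= 0, v >= 0
res = linprog(np.ones(2 * N),
              A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * N), method="highs")
c_rec = res.x[:N] - res.x[N:]
```

The coherence bounds discussed in the abstract govern how large m must be, relative to the sparsity and order, for such a program to recover the coefficients reliably.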
Dey, Pinaki; Rangarajan, Vivek
2017-10-01
Experimental investigations were carried out on Cupriavidus necator (MTCC 1472)-based improved production of poly-3-hydroxybutyrate (PHB) through induced nitrogen-limiting fed-batch cultivation strategies. Initially, a Plackett-Burman design and response surface methodology were implemented to optimize the most influential process parameters. With the optimized process parameter values, continuous feeding strategies were applied in a 5-l fermenter with a table sugar concentration of 100 g/l and a nitrogen concentration of 0.12 g/l for fed-batch fermentation at dilution rates of 0.02 and 0.046 1/h. To obtain enriched production of PHB, the sugar concentration in the feed was further increased to 150 and 200 g/l. Maximum PHB concentrations of 22.35 and 23.07 g/l were achieved at these dilution rates when the sugar concentration in the feed was maintained at 200 g/l. At the maximum PHB concentration (23.07 g/l), a productivity of 0.58 g/l h was achieved, with maximum PHB accumulation efficiency up to 64% of the dry weight of biomass. High-purity PHB, close to medical grade, was obtained by a surfactant-hypochlorite extraction method, and this was further confirmed by SEM, EDX, and XRD studies.
Searches for millisecond pulsations in low-mass X-ray binaries
NASA Technical Reports Server (NTRS)
Wood, K. S.; Hertz, P.; Norris, J. P.; Vaughan, B. A.; Michelson, P. F.; Mitsuda, K.; Lewin, W. H. G.; Van Paradijs, J.; Penninx, W.; Van Der Klis, M.
1991-01-01
High-sensitivity search techniques for millisecond periods are presented and applied to data from the Japanese satellite Ginga and from HEAO 1. The search is optimized for pulsed signals whose period, drift rate, and amplitude conform with what is expected for low-mass X-ray binary (LMXB) sources. Consideration is given to how the current understanding of LMXBs guides the search strategy and sets these parameter limits. An optimized one-parameter coherence recovery technique (CRT) developed for recovery of phase coherence is presented. This technique provides a large increase in sensitivity over the method of incoherent summation of Fourier power spectra. The range of spin periods expected from LMXB phenomenology is discussed, the necessary constraints on the application of the CRT are described in terms of integration time and orbital parameters, and the residual power unrecovered by the quadratic approximation is estimated for realistic cases.
NASA Astrophysics Data System (ADS)
Vaz, Miguel; Luersen, Marco A.; Muñoz-Rojas, Pablo A.; Trentin, Robson G.
2016-04-01
The application of optimization techniques to the identification of inelastic material parameters has increased substantially in recent years. The complex stress-strain paths and high nonlinearity typical of this class of problems require robust and efficient inverse-problem techniques able to account for an irregular topography of the fitness surface. Within this framework, this work investigates the application of the gradient-based Sequential Quadratic Programming method, the Nelder-Mead downhill simplex algorithm, Particle Swarm Optimization (PSO), and a global-local PSO-Nelder-Mead hybrid scheme to the identification of inelastic parameters based on a deep drawing operation. The hybrid technique has been shown to be the best strategy, combining PSO's good performance in approaching the basin of attraction of the global minimum with the efficiency demonstrated by the Nelder-Mead algorithm in locating the minimum itself.
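The global-local idea can be sketched in miniature: coarse random (swarm-like) sampling to locate the basin of attraction, followed by Nelder-Mead polishing. The Rosenbrock-style surface below is a stand-in for the irregular fitness landscape of inelastic parameter identification, not the paper's actual objective:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

def fitness(p):
    """Rosenbrock valley: narrow, curved, hard for pure local search."""
    a, b = p
    return (1.0 - a) ** 2 + 100.0 * (b - a ** 2) ** 2

# global stage: cheap exploration of the parameter box
candidates = rng.uniform(-3.0, 3.0, size=(200, 2))
start = min(candidates, key=fitness)            # best sampled point

# local stage: Nelder-Mead polishes the minimum from the best start
result = minimize(fitness, start, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8})
```

The division of labour mirrors the hybrid scheme: the global stage only needs to land inside the right basin; the simplex search does the precise work.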
Optimization of spectroscopic surveys for testing non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raccanelli, Alvise; Doré, Olivier; Dalal, Neal, E-mail: alvise@caltech.edu, E-mail: Olivier.P.Dore@jpl.nasa.gov, E-mail: dalaln@illinois.edu
We investigate optimization strategies for measuring primordial non-Gaussianity with future spectroscopic surveys. We forecast measurements coming from the 3D galaxy power spectrum and compute constraints on the primordial non-Gaussianity parameters fNL and nNG. After studying the dependence of those parameters upon survey specifications such as redshift range, area, and number density, we assume a reference mock survey and investigate the trade-off between number density and area surveyed. We then define the observational requirements to reach a detection of fNL of order 1. Our results show that power spectrum constraints on non-Gaussianity from future spectroscopic surveys can improve on current CMB limits, but the multi-tracer technique and higher-order correlations will be needed in order to reach an even better precision in the measurement of the non-Gaussianity parameter fNL.
Efficient extraction strategies of tea (Camellia sinensis) biomolecules.
Banerjee, Satarupa; Chatterjee, Jyotirmoy
2015-06-01
Tea is a popular daily beverage worldwide. Modulation and modification of its basic components, such as catechins, alkaloids, proteins and carbohydrates, during fermentation or extraction changes the organoleptic, gustatory and medicinal properties of tea, and these processes can evidently increase or decrease the yield of desired components. Considering the varied impact of production and storage parameters on yield, extracting tea biomolecules under optimized conditions is challenging. Implementing technological advances within green chemistry approaches can minimize such deviations while retaining maximum qualitative properties in an environmentally friendly way. Existing extraction processes for tea, together with their optimization parameters, are discussed in this paper, including their prospects and limitations. This exhaustive review of extraction parameters, tea decaffeination processes, and large-scale, cost-effective isolation of tea components with the aid of modern technology can help readers choose extraction conditions for tea according to their needs.
Multi-Objective Lake Superior Regulation
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Razavi, S.; Tolson, B.
2011-12-01
At the direction of the International Joint Commission (IJC), the International Upper Great Lakes Study (IUGLS) Board is investigating possible changes to the present method of regulating the outflows of Lake Superior (SUP) to better meet the contemporary needs of stakeholders. In this study, a new plan in the form of a rule curve that is directly interpretable for regulation of SUP is proposed. The proposed rule curve has 18 parameters to be optimized. The IUGLS Board is also interested in a regulation strategy that considers the potential effects of climate uncertainty; therefore, the quality of the rule curve is assessed simultaneously for multiple supply sequences that represent various future climate scenarios. The rule curve parameters are obtained by solving a computationally intensive bi-objective simulation-optimization problem that maximizes the total increase in navigation and hydropower benefits of the new regulation plan and minimizes the sum of all normalized constraint violations. The objective and constraint values are obtained from a Microsoft Excel-based Shared Vision Model (SVM) that compares any new SUP regulation plan with the current regulation policy. The underlying optimization problem is solved by a recently developed, highly efficient multi-objective optimization algorithm called Pareto Archived Dynamically Dimensioned Search (PA-DDS). To further improve the computational efficiency of the simulation-optimization problem, a model pre-emption strategy is used in a novel way to avoid the complete evaluation of regulation plans of low quality in both objectives. Results show that the generated rule curve is robust and typically more reliable when facing unpredictable climate conditions than other SUP regulation plans.
Pek, Han Bin; Klement, Maximilian; Ang, Kok Siong; Chung, Bevan Kai-Sheng; Ow, Dave Siak-Wei; Lee, Dong-Yup
2015-01-01
Various isoforms of invertases from prokaryotes, fungi, and higher plants have been expressed in Escherichia coli, and codon optimisation is a widely adopted strategy for improving heterologous enzyme expression. Successful synthetic gene design for recombinant protein expression can be achieved by matching the translational elongation rate to the heterologous host organism via codon optimization. Among the various design parameters considered for gene synthesis, codon context bias has been relatively overlooked compared with individual codon usage, which most codon optimization tools adopt. In addition, matching the rates of transcription and translation based on secondary structure may lead to enhanced protein folding. In this study, we evaluated codon context fitness as a design criterion for improving the expression of a thermostable invertase from Thermotoga maritima in Escherichia coli and explored the relevance of secondary structure regions for folding and expression. We designed three coding sequences using (1) a commercial vendor's gene optimization algorithm, (2) codon context for the whole gene, and (3) codon context based on the secondary structure regions. The codon-optimized sequences were then transformed and expressed in E. coli. From the resulting enzyme activities and protein yield data, the codon context design gave the highest activity compared with the wild-type control and the other criteria, while the secondary-structure-based strategy was comparable to the control. Codon context bias was thus shown to be a relevant parameter for enhancing enzyme production in Escherichia coli by codon optimization, and synthetic genes for heterologous host organisms can be effectively designed using this criterion. Copyright © 2015 Elsevier Inc. All rights reserved.
Estimation of the discharges of the multiple water level stations by multi-objective optimization
NASA Astrophysics Data System (ADS)
Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Yanami, Hitoshi; Anai, Hirokazu; Iwami, Yoichi
2016-04-01
This presentation addresses two aspects of parameter identification for estimating the discharges of multiple water level stations by multi-objective optimization: how to adjust the parameters to estimate the discharges accurately, and which optimization algorithms are suitable for the parameter identification. Among previous studies, one minimizes the weighted error of the discharges of multiple water level stations by single-objective optimization, while others minimize multiple error assessment functions of the discharge of a single water level station by multi-objective optimization. This presentation's distinctive feature is the simultaneous minimization of the errors of the discharges of multiple water level stations by multi-objective optimization. The Abe River basin in Japan is targeted. The basin area is 567.0 km2. There are thirteen rainfall stations and three water level stations. Nine flood events are investigated; they occurred from 2005 to 2012, and their maximum discharges exceed 1,000 m3/s. The discharges are calculated with the PWRI distributed hydrological model. The basin is partitioned into meshes of 500 m x 500 m, and two-layer tanks are placed on each mesh. Fourteen parameters are adjusted to estimate the discharges accurately: twelve hydrological parameters and two parameters for the initial water levels of the tanks. The three objective functions are the mean squared errors between the observed and calculated discharges at the water level stations. Latin Hypercube sampling is a uniform sampling algorithm. The discharges are calculated for the parameter values sampled by a simplified version of Latin Hypercube sampling, and the observed discharge is surrounded by the calculated discharges, which suggests that it might be possible to estimate the discharge accurately by adjusting the parameters.
Indeed, the discharge of a water level station can be accurately estimated by setting the parameter values optimized for the corresponding water level station. However, in some cases the discharge calculated with parameter values optimized for one water level station does not match the observed discharge at another water level station. It is important to estimate the discharges of all the water level stations with some degree of accuracy. It turns out to be possible to select parameter values from the Pareto optimal solutions under the condition that all errors, normalized by the minimum error of the corresponding water level station, are below 3. The optimization performance of five implementations of the algorithms and a simplified version of Latin Hypercube sampling are compared. The five implementations are NSGA2 and PAES from the optimization software inspyred, and MCO_NSGA2R, MOPSOCD and NSGA2R_NSGA2R from the statistical software R. NSGA2, PAES and MOPSOCD are, respectively, a genetic algorithm, an evolution strategy and a particle swarm optimization. The number of evaluations of the objective functions is 10,000. The two implementations of NSGA2 in R outperform the others; they are promising candidates for the parameter identification of the PWRI distributed hydrological model.
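A minimal Latin Hypercube sampler of the kind used for the parameter screening can be written in a few lines; the stratify-and-jitter construction is standard, and the sizes match the fourteen adjusted parameters only for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

def latin_hypercube(n, d):
    """n samples in [0, 1)^d: each dimension is cut into n equal strata
    and every stratum is hit exactly once per dimension."""
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (strata + rng.uniform(size=(n, d))) / n   # jitter inside strata

samples = latin_hypercube(20, 14)    # 20 candidate sets of 14 parameters
```

Each unit-cube sample would then be rescaled to the physical ranges of the hydrological parameters before running the model.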
Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G
2016-01-01
This article presents an Evolution Strategy (ES)-based algorithm designed to self-adapt its mutation operators, guiding the search through the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. Given the specific local search operators for each individual, the proposed population-based approach also fits into the context of Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability to combine move operations from distinct neighborhood structures along the optimization. The results gathered and reported in this article represent collective evidence of the performance of the method on challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability to adapt the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework to other combinatorial optimization problems.
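The self-adaptive mutation idea can be sketched on a continuous toy objective (the paper's combinatorial neighborhood operators are replaced here by Gaussian perturbations): each individual carries its own step size, which is itself mutated log-normally, so the strength of the disturbance adapts across generations:

```python
import numpy as np

rng = np.random.default_rng(5)
tau = 1.0 / np.sqrt(2.0)              # learning rate for step-size mutation

def objective(x):
    return float(np.sum(x ** 2))      # toy stand-in fitness

# population of (genes, individual step size) pairs
pop = [(rng.normal(0.0, 3.0, 4), 1.0) for _ in range(10)]
for gen in range(80):
    children = []
    for x, s in pop:
        for _ in range(3):
            s_new = s * np.exp(tau * rng.standard_normal())  # mutate the mutation
            children.append((x + s_new * rng.standard_normal(4), s_new))
    # elitist (10 + 30) selection on the toy objective
    pop = sorted(pop + children, key=lambda ind: objective(ind[0]))[:10]
best_x, best_step = pop[0]
```

Because the step size is selected along with the genes, individuals whose disturbance strength suits the current search stage survive, which is the self-adaptation the abstract describes.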
Finding an optimal strategy for measuring the quality of groundwater as a source for drinking water
NASA Astrophysics Data System (ADS)
van Driezum, Inge; Saracevic, Ernis; Scheibz, Jürgen; Zessner, Matthias; Kirschner, Alexander; Sommer, Regina; Farnleitner, Andreas; Blaschke, Alfred Paul
2015-04-01
A good chemical and microbiological water quality is of great importance in riverbank filtration systems that are used as public water supplies. Water quality is ideally monitored frequently at the drinking water well using a steady pumping rate. Monitoring source water (such as groundwater), however, can be more challenging. First, piezometers should be drilled into the correct layer of the aquifer. Second, the sampling design should include all preferred parameters (microbiological and chemical) and should also take the hydrological conditions into account. In this study, we used different geophysical techniques (ERT and FDEM) to select the optimal placement of the piezometers. We also designed a sampling strategy which can be used to sample fecal indicators, biostability parameters, standard chemical parameters and a wide range of micropollutants. Several time series experiments were carried out in the study site Porous GroundWater Aquifer (PGWA), an urban floodplain extending on the left bank of the river Danube downstream of the City of Vienna, Austria. The upper layer of the PGWA consists of silt and has a thickness of 1 to 6 m. The underlying confined aquifer consists of sand and gravel and has a thickness of between 3 and 15 m. Hydraulic conductivities range from 5 x 10^-2 m/s down to 5 x 10^-5 m/s. Underneath the aquifer are alternating sand and clay/silt layers. Escherichia coli, enterococci and aerobic spores were measured as fecal markers. Biostability was measured using leucine incorporation. Additionally, several micropollutants and standard chemical parameters were measured. Results showed that physical and chemical parameters stayed stable in all monitoring wells during extended purging. A similar trend could be observed for E. coli and enterococci. In the wells close to the river, aerobic spores and leucine incorporation decreased after 30 min of pumping, whereas the well close to the backwater showed a different pattern. Overall, purging for 45 minutes was the optimal sampling procedure for the microbiological parameters. Samples for the detection of micropollutants were taken after 15 min of purging.
Chen, Z M; Ji, S B; Shi, X L; Zhao, Y Y; Zhang, X F; Jin, H
2017-02-10
Objective: To evaluate the cost-utility of different hepatitis E vaccination strategies in women aged 15 to 49 years. Methods: A Markov decision-tree model was constructed to evaluate the cost-utility of three hepatitis E virus vaccination strategies. Model parameters were estimated on the basis of published studies and expert experience. Sensitivity and threshold analyses were used to evaluate the uncertainties of the model. Results: Compared with the non-vaccination group, the post-screening vaccination strategy with a 100% vaccination rate could save 0.10 quality-adjusted life years (QALYs) per capita among these women from the societal perspective. With the screening program implemented and the vaccination rate reaching 100%, the incremental cost-utility ratios (ICURs) of vaccination were 5 651.89 and 6 385.33 Yuan/QALY, respectively. Vaccination after implementation of a screening program showed a better benefit than universal vaccination at a 100% rate. The sensitivity analysis showed that both the cost of the hepatitis E vaccine and the inoculation compliance rate had significant effects: if the vaccine cost were lower than 191.56 Yuan (RMB) or the inoculation compliance rate lower than 0.23, the universal 100% vaccination strategy was better than the post-screening vaccination strategy; otherwise, the post-screening vaccination strategy was optimal. Conclusion: From the societal perspective, post-screening vaccination for women aged 15 to 49 appeared to be the optimal strategy, but this depends on the vaccine cost and the inoculation compliance rate.
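The incremental cost-utility ratio used in such analyses is simply the cost difference divided by the QALY difference between two strategies; the numbers below are illustrative placeholders, not the study's inputs:

```python
def icur(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-utility ratio (currency units per QALY gained)."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# e.g. a strategy costing 600 Yuan more per woman and gaining 0.10 QALY
# works out to about 6000 Yuan/QALY
print(icur(800.0, 0.30, 200.0, 0.20))
```

A strategy is then judged cost-effective when its ICUR against the comparator falls below a chosen willingness-to-pay threshold.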
Optimum allocation of test resources and comparison of breeding strategies for hybrid wheat.
Longin, C Friedrich H; Mi, Xuefei; Melchinger, Albrecht E; Reif, Jochen C; Würschum, Tobias
2014-10-01
The use of a breeding strategy combining the evaluation of line per se performance with testcross performance maximizes annual selection gain in hybrid wheat breeding. Recent experimental studies confirmed a high commercial potential for hybrid wheat, requiring the design of optimum breeding strategies. Our objectives were to (1) determine the optimum allocation of the type and number of testers, the number of test locations and the number of doubled haploid lines for different breeding strategies, (2) identify the best breeding strategy and (3) elaborate key parameters for an efficient hybrid wheat breeding program. We performed model calculations using the selection gain for grain yield as the target variable to optimize the number of lines, testers and test locations in four different breeding strategies. A breeding strategy (BS2) combining the evaluation of line per se performance and general combining ability (GCA) had a far larger annual selection gain across all considered scenarios than a breeding strategy (BS1) focusing only on GCA. In the combined strategy, producing testcross seed in parallel with the first yield trial for line per se performance (BS2rapid) resulted in a further increase in annual selection gain. For the current situation in hybrid wheat, this relative superiority of the BS2rapid strategy amounted to 67% in annual selection gain compared with BS1. Varying a large number of parameters, we identified the high cost of hybrid seed production and the low variance of GCA in hybrid wheat breeding as key parameters limiting selection gain in BS2rapid.
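The trade-off behind the rapid variant can be sketched with a toy annualized form of the breeder's equation (gain per cycle divided by cycle length); the intensity, accuracy, and cycle-length values below are illustrative assumptions, not the study's model inputs:

```python
def annual_gain(intensity, accuracy, sd_genetic, years_per_cycle):
    """Toy annualized selection gain:
    (selection intensity x selection accuracy x genetic SD) / cycle length."""
    return intensity * accuracy * sd_genetic / years_per_cycle

# shortening the breeding cycle (BS2rapid-style) can raise annual gain
# even at slightly lower selection accuracy:
slow = annual_gain(intensity=2.06, accuracy=0.80, sd_genetic=1.0, years_per_cycle=5)
rapid = annual_gain(intensity=2.06, accuracy=0.75, sd_genetic=1.0, years_per_cycle=4)
```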
Ambrosini, Emilia; Ferrante, Simona; Schauer, Thomas; Ferrigno, Giancarlo; Molteni, Franco; Pedrocchi, Alessandra
2014-01-01
Training with cycling induced by Functional Electrical Stimulation (FES) currently requires a manual setting of different parameters, which is a time-consuming and scarcely repeatable procedure. We propose an automatic procedure for setting session-specific parameters optimized for hemiparetic patients. This procedure consists of identifying the stimulation strategy as the angular ranges during which FES drives the motion, comparing the identified strategy with the physiological muscular activation strategy, and setting the pulse amplitude and duration for each stimulated muscle. Preliminary trials on 10 healthy volunteers helped define the procedure, and feasibility tests were performed on 8 hemiparetic patients (5 stroke, 3 traumatic brain injury). The procedure maximized the motor output within the tolerance constraint, identified a biomimetic strategy in 6 patients, and always lasted less than 5 minutes. Its reasonable duration and automatic nature make the procedure usable at the beginning of every training session, potentially enhancing the performance of FES-cycling training.
Optimal integration strategies for a syngas fuelled SOFC and gas turbine hybrid
NASA Astrophysics Data System (ADS)
Zhao, Yingru; Sadhukhan, Jhuma; Lanzini, Andrea; Brandon, Nigel; Shah, Nilay
This article aims to develop a thermodynamic modelling and optimization framework for a thorough understanding of the optimal integration of the fuel cell, gas turbine and other components in an ambient-pressure SOFC-GT hybrid power plant. The method is based on the coupling of a syngas-fed SOFC model and an associated irreversible GT model, with an optimization algorithm developed in MATLAB to efficiently explore the range of possible operating conditions. Energy and entropy balance analyses have been carried out for the entire system to observe the irreversibility distribution within the plant and the contributions of the different components. Based on the methodology developed, a comprehensive parametric analysis has been performed to explore the optimum system behaviour and predict the sensitivity of system performance to variations in the major design and operating parameters. The current density, operating temperature, fuel utilization and temperature gradient of the fuel cell, the isentropic efficiencies and temperature ratio of the gas turbine cycle, and three parameters related to the heat transfer between subsystems are all treated as controllable variables. Other factors affecting the hybrid efficiency have been further simulated and analysed. The model developed is able to predict the performance characteristics of a wide range of hybrid systems with power densities from 2000 to 2500 W m^-2 and efficiencies varying between 50% and 60%. The analysis enables us to identify the system design tradeoffs and therefore to determine better integration strategies for advanced SOFC-GT systems.
An Optimal Parameter Discretization Strategy for Multiple Model Adaptive Estimation and Control
1989-12-01
Zicker, W. L. MMAE-Based Control with Space-Time Point Process Observations. IEEE Transactions on Aerospace and Electronic Systems, AES-21(3):292-300, 1985. ... Transactions of the Conference of Army Mathematicians, Bethesda MD, 1982 (AD-P001 033). 65. William L. Zicker. Pointing and Tracking of Particle
Sun, Yongfu; Cheng, Hao; Gao, Shan; Liu, Qinghua; Sun, Zhihu; Xiao, Chong; Wu, Changzheng; Wei, Shiqiang; Xie, Yi
2012-12-19
Thermoelectric materials can realize significant energy savings by generating electricity from untapped waste heat. However, the coupling of the thermoelectric parameters unfortunately limits their efficiency and practical applications. Here, a single-layer-based (SLB) composite fabricated from atomically thick single layers is proposed to optimize the thermoelectric parameters fully. Freestanding five-atom-thick Bi₂Se₃ single layers were first synthesized via a scalable intercalation/exfoliation strategy. As revealed by X-ray absorption fine structure spectroscopy and first-principles calculations, surface distortion gives them excellent structural stability and a much increased density of states, resulting in a 2-fold higher electrical conductivity relative to the bulk material. The surface disorder and numerous interfaces in the Bi₂Se₃ SLB composite also allow for effective phonon scattering and decreased thermal conductivity, while the 2D electron gas and the energy filtering effect increase the Seebeck coefficient, resulting in an 8-fold higher figure of merit (ZT) relative to the bulk material. This work develops a facile strategy for synthesizing atomically thick single layers and demonstrates their superior ability to optimize thermoelectric energy harvesting.
NASA Astrophysics Data System (ADS)
Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.
2011-12-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
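The MVFSA-style calibration described above can be sketched as a very-fast-simulated-annealing loop over normalized parameters. This is a minimal illustration, not the authors' implementation: the quadratic error function is a toy stand-in for the WRF-versus-observation skill score, and the bounds and optimum are invented.

```python
import math
import random

def vfsa_minimize(error_fn, bounds, n_iter=2000, seed=0):
    """Very-fast-simulated-annealing-style search: sample new parameter
    vectors from a temperature-dependent heavy-tailed distribution and
    keep moves that lower the error, accepting occasional uphill moves
    with Boltzmann probability so the walk can escape local minima."""
    rng = random.Random(seed)
    x = [lo + rng.random() * (hi - lo) for lo, hi in bounds]
    best_x, best_e = list(x), error_fn(x)
    e = best_e
    for k in range(1, n_iter + 1):
        T = math.exp(-0.05 * k ** 0.5)  # fast annealing schedule
        cand = []
        for xi, (lo, hi) in zip(x, bounds):
            u = rng.random()
            # Heavy-tailed step: wide jumps at high T, narrow at low T
            y = math.copysign(T * ((1 + 1 / T) ** abs(2 * u - 1) - 1), u - 0.5)
            cand.append(min(hi, max(lo, xi + y * (hi - lo))))
        ec = error_fn(cand)
        if ec < e or rng.random() < math.exp(-(ec - e) / max(T, 1e-12)):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = list(x), e
    return best_x, best_e

# Toy stand-in for the model-vs-observation skill score: a quadratic
# bowl with optimum at (0.3, 0.7) for two normalized parameters.
err = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
params, score = vfsa_minimize(err, [(0.0, 1.0), (0.0, 1.0)])
```

In the study itself each error evaluation is an expensive WRF run, which is why a sampler that concentrates evaluations in low-error regions matters.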
NASA Astrophysics Data System (ADS)
Qian, Y.; Yang, B.; Lin, G.; Leung, R.; Zhang, Y.
2012-04-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
NASA Astrophysics Data System (ADS)
Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.
2012-03-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
Automatic parameter selection for feature-based multi-sensor image registration
NASA Astrophysics Data System (ADS)
DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan
2006-05-01
Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
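The Yitzhaky-Peli-style selection loop described above can be sketched as follows. This is a hedged illustration under stated assumptions: the toy detector (box blur plus threshold) and its parameters are hypothetical stand-ins for the paper's actual feature detector, and the estimated ground truth is formed by a simple majority vote over all parameter combinations.

```python
import itertools
import numpy as np

def select_parameters(image, thresholds, blur_sizes, detect):
    """Run the detector over all parameter combinations, vote the
    detections into an estimated ground truth, score each combination
    as an ROC point against that estimate, and return the combination
    closest to the ideal corner (FPR=0, TPR=1)."""
    combos = list(itertools.product(thresholds, blur_sizes))
    maps = [detect(image, t, b) for t, b in combos]
    est_truth = np.mean(maps, axis=0) >= 0.5  # majority vote across maps
    best, best_d = None, np.inf
    for combo, m in zip(combos, maps):
        tp = np.logical_and(m, est_truth).sum()
        fp = np.logical_and(m, ~est_truth).sum()
        tpr = tp / max(est_truth.sum(), 1)
        fpr = fp / max((~est_truth).sum(), 1)
        d = np.hypot(fpr, 1.0 - tpr)  # distance to the ideal ROC point
        if d < best_d:
            best, best_d = combo, d
    return best

# Hypothetical toy detector: threshold after a crude 1D box blur of width b.
def detect(img, t, b):
    k = np.ones(b) / b
    sm = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return sm > t

rng = np.random.default_rng(0)
img = rng.random((32, 32))
img[8:24, 8:24] += 1.0  # bright synthetic "feature" region
best = select_parameters(img, thresholds=[0.5, 1.0, 1.5],
                         blur_sizes=[1, 3, 5], detect=detect)
```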
NASA Astrophysics Data System (ADS)
Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David
2017-04-01
We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speed up calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control on the evolution path, suppression of departures from the truncated model subspace, as well as minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
NASA Astrophysics Data System (ADS)
Ayad, G.; Song, J.; Barriere, T.; Liu, B.; Gelin, J. C.
2007-05-01
The paper is concerned with optimization and parametric identification of the Powder Injection Molding process, which consists first of injecting a powder mixture with a polymer binder and then of sintering the resulting powder parts by solid-state diffusion. The first part describes an original methodology to optimize the injection stage based on the combination of Design of Experiments and adaptive Response Surface Modeling. The second part of the paper describes the identification strategy proposed for the sintering stage, in which sintering parameters are identified from dilatometer curves, followed by optimization of the sintering process. The proposed approaches are applied to the optimization of the manufacturing of a ceramic femoral implant, and it is demonstrated that they give satisfactory results.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
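The multimember evolution strategy at the core of the HEOA can be sketched as a (mu + lambda) scheme with self-adaptive step sizes. This is an assumption-laden sketch: the quadratic misfit below is a toy stand-in for the Mie/Fraunhofer light-scattering objective, and the locally weighted regression component of the hybrid algorithm is omitted.

```python
import math
import random

def es_minimize(f, dim, bounds, mu=5, lam=20, gens=80, seed=1):
    """(mu + lambda) evolution strategy: each offspring mutates a parent's
    step size log-normally and its parameters by Gaussian noise; the best
    mu individuals of parents plus offspring survive to the next round."""
    rng = random.Random(seed)
    lo, hi = bounds
    parents = [([rng.uniform(lo, hi) for _ in range(dim)], 0.3 * (hi - lo))
               for _ in range(mu)]
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            x, s = rng.choice(parents)
            s2 = s * math.exp(0.2 * rng.gauss(0, 1))  # mutate step size
            x2 = [min(hi, max(lo, xi + s2 * rng.gauss(0, 1))) for xi in x]
            offspring.append((x2, s2))
        pool = parents + offspring
        pool.sort(key=lambda ind: f(ind[0]))  # elitist (mu+lambda) selection
        parents = pool[:mu]
    return parents[0][0], f(parents[0][0])

# Toy stand-in for the light-scattering misfit: recover a modal size
# parameter (here 80, within the 46-150 interval) and a width parameter.
true_alpha, true_w = 80.0, 6.0
misfit = lambda p: (p[0] - true_alpha) ** 2 + 10 * (p[1] - true_w) ** 2
sol, err = es_minimize(misfit, dim=2, bounds=(1.0, 150.0))
```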
Becht, Etienne; Simoni, Yannick; Coustan-Smith, Elaine; Maximilien, Evrard; Cheng, Yang; Ng, Lai Guan; Campana, Dario; Newell, Evan
2018-06-21
Recent flow and mass cytometers generate datasets with 20 to 40 dimensions for up to a million single cells. From these, many tools facilitate the discovery of new cell populations associated with diseases or physiology. These new cell populations require the identification of new gating strategies, but gating strategies become exponentially more difficult to optimize when dimensionality increases. To facilitate this step, we developed Hypergate, an algorithm which, given a cell population of interest, identifies a gating strategy optimized for high yield and purity. Hypergate achieves higher yield and purity than human experts, Support Vector Machines and Random Forests on public datasets. We use it to revisit some established gating strategies for the identification of innate lymphoid cells, which yields concise and efficient strategies that allow gating these cells with fewer parameters but higher yield and purity than the current standards. For phenotypic description, Hypergate's outputs are consistent with the field's knowledge and sparser than those from a competing method. Hypergate is implemented in R and available on CRAN. The source code is published at http://github.com/ebecht/hypergate under an Open Source Initiative-compliant licence. Supplementary data are available at Bioinformatics online.
Nonlinear Adaptive PID Control for Greenhouse Environment Based on RBF Network
Zeng, Songwei; Hu, Haigen; Xu, Lihong; Li, Guanghui
2012-01-01
This paper presents a hybrid control strategy, combining a Radial Basis Function (RBF) network with conventional proportional, integral, and derivative (PID) controllers, for greenhouse climate control. A model based on nonlinear conservation laws of enthalpy and matter, relating the numerous system variables that affect the greenhouse climate, is formulated. The RBF network is used to tune and identify all PID gain parameters online and adaptively. The presented Neuro-PID control scheme is validated through simulations of set-point tracking and disturbance rejection. We compare the proposed adaptive online tuning method with an offline tuning scheme that employs a Genetic Algorithm (GA) to search for the optimal gain parameters. The results show that the proposed strategy has good adaptability, strong robustness and real-time performance while achieving satisfactory control performance for the complex and nonlinear greenhouse climate control system, and it may provide a valuable reference for formulating environmental control strategies for actual application in greenhouse production. PMID:22778587
Efficient Schmidt number scaling in dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Krafnick, Ryan C.; García, Angel E.
2015-12-01
Dissipative particle dynamics is a widely used mesoscale technique for the simulation of hydrodynamics (as well as immersed particles) utilizing coarse-grained molecular dynamics. While the method is capable of describing any fluid, the typical choice of the friction coefficient γ and dissipative force cutoff rc yields an unacceptably low Schmidt number Sc for the simulation of liquid water at standard temperature and pressure. There are a variety of ways to raise Sc, such as increasing γ and rc, but the relative cost of modifying each parameter (and the concomitant impact on numerical accuracy) has heretofore remained undetermined. We perform a detailed search over the parameter space, identifying the optimal strategy for the efficient and accuracy-preserving scaling of Sc, using both numerical simulations and theoretical predictions. The composite results recommend a parameter choice that leads to a speed improvement of a factor of three versus previously utilized strategies.
Cai, Y L; Zhang, S X; Yang, P C; Lin, Y
2016-06-01
Cost-benefit analysis (CBA), cost-effectiveness analysis (CEA) and quantitative optimization analysis were used to understand the economic benefit and outcomes of the strategy for preventing mother-to-child transmission (PMTCT) of hepatitis B virus. Based on a decision-analytic Markov model for hepatitis B immunization, strategies on PMTCT and universal vaccination were compared. Parameters for Shenzhen were introduced into the model, and the 2013 birth cohort was set up as the study population. The net present value (NPV), benefit-cost ratio (BCR) and incremental cost-effectiveness ratio (ICER) were calculated, and the differences between CBA and CEA were compared. A decision tree was built as the decision analysis model for hepatitis B immunization. Three kinds of Markov models were used to simulate the outcomes after the implementation of the vaccination program. The PMTCT strategy of Shenzhen showed a net gain of 38 097.51 Yuan per person in 2013, with a BCR of 14.37. The universal vaccination strategy showed a net gain of 37 083.03 Yuan per person, with a BCR of 12.07. Data showed that the PMTCT strategy was better than the universal vaccination one and would yield more economic benefit. When compared with the universal vaccination program, the PMTCT strategy would save an additional 85 100.00 Yuan on QALY gains for every person. The PMTCT strategy seemed more cost-effective compared with the universal vaccination program. In both the CBA and CEA of the hepatitis B immunization programs, the immunization coverage rate and the costs of hepatitis B-related diseases were the most important influencing factors. Outcomes of joint changes of all the parameters in the CEA showed that the PMTCT strategy was more cost-effective. The PMTCT strategy gained more economic benefit and health effects.
However, the cost of the PMTCT strategy was higher than that of the universal vaccination program; thus it is important to pay attention to the implementation of both the PMTCT strategy and the universal vaccination program. CBA seemed suitable for strategy optimization, while CEA was better for strategy evaluation. Hopefully, programs combining the two methods would facilitate the process of economic evaluation.
Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias
2016-11-01
Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim to fill this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS2 experiments is suggested for nontarget screening. MZmine 2 detected 75-100% of the peaks compared to manual peak detection at an intensity level of 10⁵ in a validation dataset on both spiked and real water samples under optimal parameter settings. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable to environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.
NASA Astrophysics Data System (ADS)
Dentoni, Marta; Deidda, Roberto; Paniconi, Claudio; Qahman, Khalid; Lecca, Giuditta
2015-03-01
Seawater intrusion is one of the major threats to freshwater resources in coastal areas, often exacerbated by groundwater overexploitation. Mitigation measures are needed to properly manage aquifers, and to restore groundwater quality. This study integrates three computational tools into a unified framework to investigate seawater intrusion in coastal areas and to assess strategies for managing groundwater resources under natural and human-induced stresses. The three components are a three-dimensional hydrogeological model for density-dependent variably saturated flow and miscible salt transport, an automatic calibration procedure that uses state variable outputs from the model to estimate selected model parameters, and an optimization module that couples a genetic algorithm with the simulation model. The computational system is used to rank alternative strategies for mitigation of seawater intrusion, taking into account conflicting objectives and problem constraints. It is applied to the Gaza Strip (Palestine) coastal aquifer to identify a feasible groundwater management strategy for the period 2011-2020. The optimized solution is able to: (1) keep overall future abstraction from municipal groundwater wells close to the user-defined maximum level, (2) increase the average groundwater heads, and (3) lower both the total mass of salt extracted and the extent of the areas affected by seawater intrusion.
Munters, W; Meyers, J
2017-04-13
Complex turbine wake interactions play an important role in overall energy extraction in large wind farms. Current control strategies optimize individual turbine power, and lead to significant energy losses in wind farms compared with lone-standing wind turbines. In recent work, an optimal coordinated control framework was introduced (Goit & Meyers 2015 J. Fluid Mech. 768, 5-50 (doi:10.1017/jfm.2015.70)). Here, we further elaborate on this framework, quantify the influence of optimization parameters and introduce new simulation results for which gains in power production of up to 21% are observed. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Authors.
Munters, W.
2017-01-01
Complex turbine wake interactions play an important role in overall energy extraction in large wind farms. Current control strategies optimize individual turbine power, and lead to significant energy losses in wind farms compared with lone-standing wind turbines. In recent work, an optimal coordinated control framework was introduced (Goit & Meyers 2015 J. Fluid Mech. 768, 5–50 (doi:10.1017/jfm.2015.70)). Here, we further elaborate on this framework, quantify the influence of optimization parameters and introduce new simulation results for which gains in power production of up to 21% are observed. This article is part of the themed issue ‘Wind energy in complex terrains’. PMID:28265024
Optimized parameter estimation in the presence of collective phase noise
NASA Astrophysics Data System (ADS)
Altenburg, Sanah; Wölk, Sabine; Tóth, Géza; Gühne, Otfried
2016-11-01
We investigate phase and frequency estimation with different measurement strategies under the effect of collective phase noise. First, we consider the standard linear estimation scheme and present an experimentally realizable optimization of the initial probe states by collective rotations. We identify the optimal rotation angle for different measurement times. Second, we show that subshot noise sensitivity—up to the Heisenberg limit—can be reached in presence of collective phase noise by using differential interferometry, where one part of the system is used to monitor the noise. For this, not only Greenberger-Horne-Zeilinger states but also symmetric Dicke states are suitable. We investigate the optimal splitting for a general symmetric Dicke state at both inputs and discuss possible experimental realizations of differential interferometry.
Expert-guided optimization for 3D printing of soft and liquid materials.
Abdollahi, Sara; Davis, Alexander; Miller, John H; Feinberg, Adam W
2018-01-01
Additive manufacturing (AM) has rapidly emerged as a disruptive technology to build mechanical parts, enabling increased design complexity, low-cost customization and an ever-increasing range of materials. Yet these capabilities have also created an immense challenge in optimizing the large number of process parameters in order to achieve a high-performance part. This is especially true for AM of soft, deformable materials and for liquid-like resins that require experimental printing methods. Here, we developed an expert-guided optimization (EGO) strategy to provide structure in exploring and improving the 3D printing of liquid polydimethylsiloxane (PDMS) elastomer resin. EGO uses three steps, starting first with expert screening to select the parameter space, factors, and factor levels. Second is a hill-climbing algorithm to search the parameter space defined by the expert for the best set of parameters. Third is expert decision making to try new factors or a new parameter space to improve on the best current solution. We applied the algorithm to two calibration objects, a hollow cylinder and a five-sided hollow cube that were evaluated based on a multi-factor scoring system. The optimum print settings were then used to print complex PDMS and epoxy 3D objects, including a twisted vase, water drop, toe, and ear, at a level of detail and fidelity previously not obtained.
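The hill-climbing step (the second stage of EGO) can be sketched as a greedy search over expert-chosen factor levels. The factors, levels, and scoring function below are hypothetical stand-ins for the paper's multi-factor print score, included only to show the mechanics.

```python
def hill_climb(score, levels, start, max_iter=100):
    """Greedy hill climbing: from the current parameter combination, try
    each single-factor change to an adjacent level, move to any better
    neighbor, and stop when no neighbor improves the score."""
    current = dict(start)
    best = score(current)
    for _ in range(max_iter):
        improved = False
        for factor, vals in levels.items():
            i = vals.index(current[factor])
            for j in (i - 1, i + 1):  # adjacent levels only
                if 0 <= j < len(vals):
                    cand = dict(current, **{factor: vals[j]})
                    s = score(cand)
                    if s > best:
                        current, best, improved = cand, s, True
        if not improved:
            break
    return current, best

# Hypothetical print factors and a toy score peaking at
# (speed=10, pressure=30, layer=0.2); higher is better.
levels = {"speed": [5, 10, 20, 40], "pressure": [10, 20, 30, 40],
          "layer": [0.1, 0.2, 0.4]}
toy = lambda p: -((p["speed"] - 10) ** 2 + (p["pressure"] - 30) ** 2
                  + 100 * (p["layer"] - 0.2) ** 2)
start = {"speed": 40, "pressure": 10, "layer": 0.4}
best_params, best_score = hill_climb(toy, levels, start)
```

In EGO proper, the expert re-enters after this step to propose new factors or a new parameter space when the climb stalls at a local optimum.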
Expert-guided optimization for 3D printing of soft and liquid materials
Abdollahi, Sara; Davis, Alexander; Miller, John H.
2018-01-01
Additive manufacturing (AM) has rapidly emerged as a disruptive technology to build mechanical parts, enabling increased design complexity, low-cost customization and an ever-increasing range of materials. Yet these capabilities have also created an immense challenge in optimizing the large number of process parameters in order to achieve a high-performance part. This is especially true for AM of soft, deformable materials and for liquid-like resins that require experimental printing methods. Here, we developed an expert-guided optimization (EGO) strategy to provide structure in exploring and improving the 3D printing of liquid polydimethylsiloxane (PDMS) elastomer resin. EGO uses three steps, starting first with expert screening to select the parameter space, factors, and factor levels. Second is a hill-climbing algorithm to search the parameter space defined by the expert for the best set of parameters. Third is expert decision making to try new factors or a new parameter space to improve on the best current solution. We applied the algorithm to two calibration objects, a hollow cylinder and a five-sided hollow cube that were evaluated based on a multi-factor scoring system. The optimum print settings were then used to print complex PDMS and epoxy 3D objects, including a twisted vase, water drop, toe, and ear, at a level of detail and fidelity previously not obtained. PMID:29621286
Experimental Design for Parameter Estimation of Gene Regulatory Networks
Timmer, Jens
2012-01-01
Systems biology aims for building quantitative models to address unresolved issues in molecular biology. In order to describe the behavior of biological cells adequately, gene regulatory networks (GRNs) are intensively investigated. As the validity of models built for GRNs depends crucially on the kinetic rates, various methods have been developed to estimate these parameters from experimental data. For this purpose, it is favorable to choose the experimental conditions yielding maximal information. However, existing experimental design principles often rely on unfulfilled mathematical assumptions or become computationally demanding with growing model complexity. To solve this problem, we combined advanced methods for parameter and uncertainty estimation with experimental design considerations. As a showcase, we optimized three simulated GRNs in one of the challenges from the Dialogue for Reverse Engineering Assessment and Methods (DREAM). This article presents our approach, which was awarded the best performing procedure at the DREAM6 Estimation of Model Parameters challenge. For fast and reliable parameter estimation, local deterministic optimization of the likelihood was applied. We analyzed identifiability and precision of the estimates by calculating the profile likelihood. Furthermore, the profiles provided a way to uncover a selection of most informative experiments, from which the optimal one was chosen using additional criteria at every step of the design process. In conclusion, we provide a strategy for optimal experimental design and show its successful application on three highly nonlinear dynamic models. Although presented in the context of the GRNs to be inferred for the DREAM6 challenge, the approach is generic and applicable to most types of quantitative models in systems biology and other disciplines. PMID:22815723
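The profile-likelihood idea used above can be illustrated on a one-exponential toy model with synthetic data (not the DREAM6 networks): fix one parameter on a grid, re-optimize the remaining parameter at each grid point, and read identifiability off the resulting curve.

```python
import numpy as np

# Synthetic data from y = A * exp(-k * t) with A=2, k=1 plus small noise.
t = np.linspace(0, 4, 20)
rng = np.random.default_rng(42)
y = 2.0 * np.exp(-1.0 * t) + 0.05 * rng.standard_normal(t.size)

def profile_rss(k):
    """Residual sum of squares with the rate k fixed and the amplitude A
    re-optimized; for this model A has a closed-form least-squares optimum."""
    basis = np.exp(-k * t)
    A = (basis @ y) / (basis @ basis)
    return float(np.sum((y - A * basis) ** 2))

k_grid = np.linspace(0.2, 3.0, 57)
profile = np.array([profile_rss(k) for k in k_grid])
k_hat = k_grid[np.argmin(profile)]  # profile-likelihood estimate of k
# A flat profile would indicate non-identifiability; a threshold above the
# minimum (e.g. a chi-square quantile) gives likelihood-based confidence
# intervals, and experiments can be chosen to sharpen the flattest profiles.
```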
Raut, Sangeeta; Raut, Smita; Sharma, Manisha; Srivastav, Chaitanya; Adhikari, Basudam; Sen, Sudip Kumar
2015-09-01
In the present study, artificial neural network (ANN) modelling coupled with a particle swarm optimization (PSO) algorithm was used to optimize the process variables for enhanced low-density polyethylene (LDPE) degradation by Curvularia lunata SG1. In the non-linear ANN model, temperature, pH, contact time and agitation were used as input variables and polyethylene bio-degradation as the output variable. Further, on application of PSO to the ANN model, the optimum values of the process parameters were as follows: pH = 7.6, temperature = 37.97 °C, agitation rate = 190.48 rpm and incubation time = 261.95 days. A comparison between the model results and experimental data gave a high correlation coefficient ([Formula: see text]). Significant enhancement of LDPE bio-degradation using C. lunata SG1, by about 48%, was achieved under optimum conditions. Thus, the novelty of the work lies in the application of the combined ANN-PSO optimization strategy to enhance the bio-degradation of LDPE.
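The PSO step can be sketched as follows; this is a minimal illustration, maximizing a smooth hypothetical surrogate in place of the trained ANN. The variable ranges and the surrogate's optimum merely echo the values reported above and are not taken from the study.

```python
import random

def pso_maximize(f, bounds, n_particles=20, iters=100, seed=3):
    """Particle swarm optimization: particles track personal and global
    bests and update velocities with inertia plus cognitive and social
    pulls toward those bests."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [list(x) for x in X]            # personal best positions
    pf = [f(x) for x in X]
    g = list(P[max(range(n_particles), key=lambda i: pf[i])])
    gf = max(pf)                        # global best value
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                lo, hi = bounds[d]
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fi = f(X[i])
            if fi > pf[i]:
                P[i], pf[i] = list(X[i]), fi
                if fi > gf:
                    g, gf = list(X[i]), fi
    return g, gf

# Hypothetical smooth surrogate standing in for the trained ANN, peaking
# near (pH 7.6, 38 degC, 190 rpm, 262 days); variables are scaled so each
# contributes comparably to the objective.
surrogate = lambda p: -((p[0] - 7.6) ** 2 + ((p[1] - 38.0) / 10) ** 2
                        + ((p[2] - 190.0) / 50) ** 2 + ((p[3] - 262.0) / 100) ** 2)
opt, val = pso_maximize(surrogate, [(5, 9), (25, 45), (50, 250), (30, 400)])
```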
Robust optimization of supersonic ORC nozzle guide vanes
NASA Astrophysics Data System (ADS)
Bufi, Elio A.; Cinnella, Paola
2017-03-01
An efficient Robust Optimization (RO) strategy is developed for the design of 2D supersonic Organic Rankine Cycle turbine expanders. Dense gas effects are non-negligible for this application and are taken into account by describing the thermodynamics by means of the Peng-Robinson-Stryjek-Vera equation of state. The design methodology combines an Uncertainty Quantification (UQ) loop based on a Bayesian kriging model of the system response to the uncertain parameters, used to approximate statistics (mean and variance) of the uncertain system output, a CFD solver, and a multi-objective non-dominated sorting algorithm (NSGA), also based on a kriging surrogate of the multi-objective fitness function, along with an adaptive infill strategy for surrogate enrichment at each generation of the NSGA. The objective functions are the average and variance of the isentropic efficiency. The blade shape is parametrized by means of a Free Form Deformation (FFD) approach. The robust optimal blades are compared to the baseline design (based on the Method of Characteristics) and to a blade obtained by means of a deterministic CFD-based optimization.
Helicopter Control Energy Reduction Using Moving Horizontal Tail
Oktay, Tugrul; Sal, Firat
2015-01-01
The helicopter moving horizontal tail (i.e., MHT) strategy is applied in order to save helicopter flight control system (i.e., FCS) energy. For this purpose, complex, physics-based, control-oriented nonlinear helicopter models are used. Equations of the MHT are integrated into these models, and together they are linearized around the straight level flight condition. A specific variance-constrained control strategy, namely output variance constrained control (i.e., OVC), is utilized for the helicopter FCS. Control energy savings due to this MHT idea with respect to a conventional helicopter are calculated. Parameters of the helicopter FCS and dimensions of the MHT are simultaneously optimized using a stochastic optimization method, namely simultaneous perturbation stochastic approximation (i.e., SPSA). In order to observe the improvement over classical controls, closed-loop analyses are performed. PMID:26180841
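SPSA, the optimizer named above, estimates a gradient from only two objective evaluations per iteration regardless of dimension. A minimal sketch follows; the quadratic objective is a placeholder, since the paper's actual cost (FCS control energy of the helicopter model) is not reproduced here, and the gain values are generic textbook choices.

```python
import random

# Placeholder objective standing in for the FCS energy cost; minimum at (1, 1, 1).
def objective(theta):
    return sum((t - 1.0) ** 2 for t in theta)

def spsa_minimize(f, theta, iters=1000, a=0.25, A=50.0, c=0.1,
                  alpha=0.602, gamma=0.101):
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / (k + A) ** alpha        # decaying step size
        ck = c / k ** gamma              # decaying perturbation size
        delta = [random.choice((-1.0, 1.0)) for _ in theta]  # Bernoulli +/-1 directions
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = f(plus) - f(minus)        # only two evaluations per iteration
        theta = [t - ak * diff / (2.0 * ck * d) for t, d in zip(theta, delta)]
    return theta

random.seed(1)
theta = spsa_minimize(objective, [0.0, 0.0, 0.0])
```

Because the gradient estimate perturbs all parameters simultaneously, the cost per iteration stays constant as the number of optimized FCS parameters and MHT dimensions grows, which is the appeal of SPSA for this kind of joint design problem.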
A Bell-Curved Based Algorithm for Mixed Continuous and Discrete Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.; Weber, Michael; Sobieszczanski-Sobieski, Jaroslaw
2001-01-01
An evolution-based strategy utilizing two normal distributions to generate children is developed to solve mixed-integer nonlinear programming problems. This Bell-Curve Based (BCB) evolutionary algorithm is similar in spirit to (mu + mu) evolutionary strategies and evolutionary programs but with fewer parameters to adjust and no mechanism for self-adaptation. First, a new version of BCB to solve purely discrete optimization problems is described and its performance tested against a tabu search code for an actuator placement problem. Next, the performance of a combined version of discrete and continuous BCB is tested on 2-dimensional shape problems and on a minimum-weight hub design problem. In the latter case the discrete portion is the choice of the underlying beam shape (I, triangular, circular, rectangular, or U).
Gahlawat, Geeta; Srivastava, Ashok K
2012-11-01
Polyhydroxybutyrate or PHB is a biodegradable and biocompatible thermoplastic with many interesting applications in medicine, food packaging, and tissue engineering materials. The present study deals with the enhanced production of PHB by Azohydromonas australica using sucrose and the estimation of the fundamental kinetic parameters of the PHB fermentation process. Preliminary culture growth inhibition studies were followed by statistical optimization of the medium recipe using response surface methodology to increase PHB production. Later on, batch cultivation in a 7-L bioreactor was attempted using the optimum concentrations of medium components (process variables) obtained from the statistical design to identify the batch growth and product kinetics parameters of PHB fermentation. A. australica exhibited maximum biomass and PHB concentrations of 8.71 and 6.24 g/L, respectively, in the bioreactor, with an overall PHB production rate of 0.75 g/h. Bioreactor cultivation studies demonstrated that the specific biomass and PHB yields on sucrose were 0.37 and 0.29 g/g, respectively. The kinetic parameters obtained in the present investigation will be used in the development of a batch kinetic mathematical model for PHB production, which will serve as a launching pad for further process optimization studies, e.g., the design of several bioreactor cultivation strategies to further enhance biopolymer production.
Wang, Xi-fen; Zhou, Huai-chun
2005-01-01
The control of the 3-D temperature distribution in a utility boiler furnace is essential for the safe, economic and clean operation of a pc-fired furnace with a multi-burner system. The development of visualization of 3-D temperature distributions in pc-fired furnaces makes possible a new combustion control strategy that takes the furnace temperature directly as its goal, improving control quality for the combustion processes. This paper studies such a strategy: the furnace is divided into several parts in the vertical direction, and the average temperature and its bias from the center in every cross section are extracted from the visualized 3-D temperature distributions. In the simulation stage, a computational fluid dynamics (CFD) code served to calculate the 3-D temperature distributions in a furnace; a linear model was then set up to relate the features of the temperature distributions to the inputs of the combustion processes, such as the flow rates of fuel and air fed into the furnace through all the burners. An adaptive genetic algorithm was adopted to find the optimal combination of input parameters that forms the optimal 3-D temperature field desired for boiler operation. Simulation results showed that the strategy could quickly identify the factors driving the temperature distribution away from the optimal state and give correct adjustment suggestions.
NASA Astrophysics Data System (ADS)
Harkouss, F.; Biwole, P. H.; Fardoun, F.
2018-05-01
Building design optimization is an effective method to inspect the available design choices, starting from passive strategies, moving to energy-efficient systems, and finally to the adequate renewable energy system to be implemented. This paper outlines the methodology and the cost-effectiveness potential for optimizing the design of a net-zero energy building in a French city, Embrun. The non-dominated sorting genetic algorithm is chosen in order to minimize thermal and electrical demands and life cycle cost while reaching the net-zero energy balance, thus obtaining the Pareto front. The Elimination and Choice Expressing the Reality (ELECTRE) decision-making method is applied to the Pareto front so as to obtain one optimal solution. A wide range of energy efficiency measures is investigated, and solar energy systems are employed to produce the required electricity and hot water for domestic purposes. The results indicate that the appropriate selection of the passive parameters is critical in reducing the building energy consumption. The optimum design parameters yield a decrease in the building's thermal loads and life cycle cost of 32.96% and 14.47%, respectively.
Towards inverse modeling of turbidity currents: The inverse lock-exchange problem
NASA Astrophysics Data System (ADS)
Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison
2011-04-01
A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for the purpose of turbidite modeling so far is hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.
Genome-Wide Comparative Gene Family Classification
Frech, Christian; Chen, Nansheng
2010-01-01
Correct classification of genes into gene families is important for understanding gene function and evolution. Although gene families of many species have been resolved both computationally and experimentally with high accuracy, gene family classification in most newly sequenced genomes has not been done with the same high standard. This project has been designed to develop a strategy to effectively and accurately classify gene families across genomes. We first examine and compare the performance of computer programs developed for automated gene family classification. We demonstrate that some programs, including the hierarchical average-linkage clustering algorithm MC-UPGMA and the popular Markov clustering algorithm TRIBE-MCL, can reconstruct manual curation of gene families accurately. However, their performance is highly sensitive to parameter setting, i.e. different gene families require different program parameters for correct resolution. To circumvent the problem of parameterization, we have developed a comparative strategy for gene family classification. This strategy takes advantage of existing curated gene families of reference species to find suitable parameters for classifying genes in related genomes. To demonstrate the effectiveness of this novel strategy, we use TRIBE-MCL to classify chemosensory and ABC transporter gene families in C. elegans and its four sister species. We conclude that fully automated programs can establish biologically accurate gene families if parameterized accordingly. Comparative gene family classification finds optimal parameters automatically, thus allowing rapid insights into gene families of newly sequenced species. PMID:20976221
Wong, Wicger K H; Leung, Lucullus H T; Kwong, Dora L W
2016-01-01
To evaluate and optimize the parameters used in multiple-atlas-based segmentation of prostate cancers in radiation therapy. A retrospective study was conducted, and the accuracy of the multiple-atlas-based segmentation was tested on 30 patients. The effects of library size (LS), the number of atlases used for contour averaging, and the contour-averaging strategy were also studied. The autogenerated contours were compared with the manually drawn contours. The Dice similarity coefficient (DSC) and Hausdorff distance were used to evaluate segmentation agreement. Mixed results were found between the simultaneous truth and performance level estimation (STAPLE) and majority vote (MV) strategies. Multiple-atlas approaches were relatively insensitive to LS. A LS of ten was adequate, and further increases in the LS showed only insignificant gains. Multiple-atlas segmentation performed better than single-atlas segmentation most of the time. Using more atlases did not guarantee better performance, with five atlases performing better than ten. With our recommended settings, the median DSC for the bladder, rectum, prostate, seminal vesicle and femurs was 0.90, 0.77, 0.84, 0.56 and 0.95, respectively. Our study shows that multiple-atlas-based strategies have better accuracy than the single-atlas approach. STAPLE is preferred, and a LS of ten is adequate for prostate cases. Using five atlases for contour averaging is recommended. The contouring accuracy for the seminal vesicle still needs improvement, and manual editing is still required for the other structures. This article provides a better understanding of the influence of the parameters used in multiple-atlas-based segmentation of prostate cancers.
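The Dice similarity coefficient used as the agreement metric above is simple to compute. In this sketch the masks are sets of voxel indices (real implementations would use image arrays, but the formula DSC = 2|A ∩ B| / (|A| + |B|) is the same), and the two contours are invented toy examples.

```python
# Dice similarity coefficient between two binary masks given as sets of voxels.
def dice(a, b):
    if not a and not b:
        return 1.0          # two empty contours agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}    # hypothetical auto-generated contour
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}  # hypothetical manually drawn contour
print(dice(auto, manual))  # → 0.75
```

A DSC of 1.0 means identical contours and 0.0 means no overlap, which is why values like the 0.56 reported for the seminal vesicle signal that manual editing is still needed.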
Optimizing antibody expression: The nuts and bolts.
Ayyar, B Vijayalakshmi; Arora, Sushrut; Ravi, Shiva Shankar
2017-03-01
Antibodies are extensively utilized entities in biomedical research, and in the development of diagnostics and therapeutics. Many of these applications require high amounts of antibodies. However, meeting this ever-increasing demand of antibodies in the global market is one of the outstanding challenges. The need to maintain a balance between demand and supply of antibodies has led the researchers to discover better means and methods for optimizing their expression. These strategies aim to increase the volumetric productivity of the antibodies along with the reduction of associated manufacturing costs. Recent years have witnessed major advances in recombinant protein technology, owing to the introduction of novel cloning strategies, gene manipulation techniques, and an array of cell and vector engineering techniques, together with the progress in fermentation technologies. These innovations were also highly beneficial for antibody expression. Antibody expression depends upon the complex interplay of multiple factors that may require fine tuning at diverse levels to achieve maximum yields. However, each antibody is unique and requires individual consideration and customization for optimizing the associated expression parameters. This review provides a comprehensive overview of several state-of-the-art approaches, such as host selection, strain engineering, codon optimization, gene optimization, vector modification and process optimization that are deemed suitable for enhancing antibody expression. Copyright © 2017 Elsevier Inc. All rights reserved.
Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka
2013-01-01
Linear decreasing inertia weight (LDIW) strategy was introduced to improve on the performance of the original particle swarm optimization (PSO). However, linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to have the shortcoming of premature convergence in solving complex (multipeak) optimization problems due to lack of enough momentum for particles to do exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants. Some of these variants have been claimed to outperform LDIW-PSO. The major goal of this paper is to experimentally establish the fact that LDIW-PSO is very much efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits to compute the particle velocity limits in LDIW-PSO based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO with the latter outperforming both in the simulation experiments conducted. PMID:24324383
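The two ingredients the abstract emphasizes, the linearly decreasing inertia weight and velocity limits taken as a fraction of the search-space range, can be sketched directly. The 0.9 → 0.4 schedule is the commonly cited LDIW setting; the 10% velocity fraction below is illustrative, not the paper's experimentally tuned value.

```python
# Linearly decreasing inertia weight: interpolates from w_start at iteration 0
# down to w_end at the terminal iteration t_max.
def ldiw(t, t_max, w_start=0.9, w_end=0.4):
    return w_start - (w_start - w_end) * t / t_max

# Velocity limit as a fraction of the search-space range for one dimension.
def velocity_limit(lo, hi, fraction=0.1):
    return fraction * (hi - lo)

print(ldiw(0, 100))    # → 0.9 at the first iteration (exploration-heavy)
print(ldiw(100, 100))  # → 0.4 at the terminal point (exploitation-heavy)
vmax = velocity_limit(-5.12, 5.12)  # e.g. for the Rastrigin benchmark domain
```

In an LDIW-PSO run, `ldiw(t, t_max)` replaces the constant `w` in the standard velocity update, and each velocity component is clipped to `[-vmax, vmax]` for its dimension.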
NASA Astrophysics Data System (ADS)
Abu, M. Y.; Norizan, N. S.; Rahman, M. S. Abd
2018-04-01
Remanufacturing is a sustainable strategy that restores end-of-life products to as-new performance, with a warranty equal to or better than that of the original product. To quantify the advantages of this strategy, every process must be optimized to reach the ultimate goal and reduce the waste generated. The aim of this work is to evaluate the criticality of parameters of an end-of-life crankshaft based on Taguchi's orthogonal array, and then to estimate the cost using traditional cost accounting, considering the critical parameters. By implementing the optimization, the remanufacturer produces at lower cost and with less waste during production, with a higher potential for profit. The Mahalanobis-Taguchi System proved to be a powerful optimization method for revealing the criticality of parameters. When the method was applied to the MAN engine model, 5 of the 6 crankpins were found critical and required grinding, while no changes were needed for the Caterpillar engine model. Accordingly, the cost per unit for the MAN engine model changed from MYR 1401.29 to MYR 1251.29, while the Caterpillar engine model showed no change because the criticality of its parameters did not change. Therefore, by integrating optimization and costing in the remanufacturing process, a better decision can be reached after observing the potential profit to be gained. The significance of this result lies in promoting sustainability by reducing the re-melting of damaged parts, ensuring a consistent benefit from returned cores.
An Evolutionary Optimization of the Refueling Simulation for a CANDU Reactor
NASA Astrophysics Data System (ADS)
Do, Q. B.; Choi, H.; Roh, G. H.
2006-10-01
This paper presents a multi-cycle and multi-objective optimization method for the refueling simulation of a 713 MWe Canada deuterium uranium (CANDU-6) reactor based on a genetic algorithm, an elitism strategy and a heuristic rule. The proposed algorithm searches for the optimal refueling patterns for a single cycle that maximize the average discharge burnup, minimize the maximum channel power and minimize the change in the zone controller unit water fills while satisfying the most important safety-related neutronic parameters of the reactor core. The heuristic rule generates an initial population of individuals very close to a feasible solution, reducing the computing time of the optimization process. The multi-cycle optimization is carried out based on a single-cycle refueling simulation. The proposed approach was verified by a refueling simulation of a natural uranium CANDU-6 reactor for an operation period of 6 months at an equilibrium state and compared with the experience-based automatic refueling simulation and the generalized perturbation theory. The comparison has shown that the simulation results are consistent with each other and that the proposed approach is a reasonable optimization method for the refueling simulation, controlling all the safety-related parameters of the reactor core during the simulation.
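A genetic algorithm with an elitism step, the mechanism this method relies on, can be sketched in a few lines. The bitstring "one-max" fitness is a placeholder, since the actual refueling objective involves core physics not shown here, and all population sizes and rates are illustrative.

```python
import random

# Bare-bones genetic algorithm with elitism: the best n_elite individuals
# survive into the next generation unchanged, so the best fitness found
# never regresses between generations.
def ga_elitist(fitness, n_bits=20, pop_size=30, gens=60, p_mut=0.02, n_elite=2):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        ranked = sorted(pop, key=fitness, reverse=True)
        next_pop = [ind[:] for ind in ranked[:n_elite]]        # elites survive unchanged
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(ranked[:pop_size // 2], 2)  # truncation selection
            cut = random.randint(1, n_bits - 1)
            child = p1[:cut] + p2[cut:]                        # one-point crossover
            child = [b ^ 1 if random.random() < p_mut else b for b in child]  # mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

random.seed(0)
best = ga_elitist(sum)  # one-max toy problem: fitness = number of 1 bits
```

In the paper's setting, the bitstring would encode a refueling pattern, the fitness would combine the burnup and channel-power objectives with penalties for violated safety constraints, and the heuristic rule would seed the initial population near feasibility instead of drawing it uniformly at random.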
Performance of Nonlinear Finite-Difference Poisson-Boltzmann Solvers
Cai, Qin; Hsieh, Meng-Juei; Wang, Jun; Luo, Ray
2014-01-01
We implemented and optimized seven finite-difference solvers for the full nonlinear Poisson-Boltzmann equation in biomolecular applications, including four relaxation methods, one conjugate gradient method, and two inexact Newton methods. The performance of the seven solvers was extensively evaluated with a large number of nucleic acids and proteins. Worth noting is the inexact Newton method in our analysis. We investigated the role of linear solvers in its performance by incorporating the incomplete Cholesky conjugate gradient and the geometric multigrid into its inner linear loop. We tailored and optimized both linear solvers for faster convergence rate. In addition, we explored strategies to optimize the successive over-relaxation method to reduce its convergence failures without too much sacrifice in its convergence rate. Specifically we attempted to adaptively change the relaxation parameter and to utilize the damping strategy from the inexact Newton method to improve the successive over-relaxation method. Our analysis shows that the nonlinear methods accompanied with a functional-assisted strategy, such as the conjugate gradient method and the inexact Newton method, can guarantee convergence in the tested molecules. Especially the inexact Newton method exhibits impressive performance when it is combined with highly efficient linear solvers that are tailored for its special requirement. PMID:24723843
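The over-relaxed update at the heart of the SOR method discussed above can be illustrated on a linear toy problem. The paper treats the full nonlinear Poisson-Boltzmann equation on biomolecules; this sketch only solves the 1-D linear Poisson problem -u'' = f with zero Dirichlet boundaries, and the relaxation parameter 1.5 is an arbitrary illustrative choice, not an optimized one.

```python
# SOR on the finite-difference discretization of -u'' = f on (0, 1),
# u(0) = u(1) = 0, with n interior grid points.
def sor_poisson(f, n, omega=1.5, tol=1e-10, max_iter=10000):
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)                  # u[0] and u[n+1] are the boundary values
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, n + 1):
            gs = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])   # Gauss-Seidel value
            new = (1.0 - omega) * u[i] + omega * gs           # over-relaxed update
            max_change = max(max_change, abs(new - u[i]))
            u[i] = new
        if max_change < tol:             # converged when the sweep barely changes u
            break
    return u

n = 49
f = [1.0] * (n + 2)                      # constant source term
u = sor_poisson(f, n)
# analytic solution of -u'' = 1, u(0) = u(1) = 0 is u(x) = x(1 - x)/2
```

Choosing `omega` between 1 and 2 accelerates plain Gauss-Seidel; the adaptive strategies studied in the paper adjust this parameter (and add damping) to avoid the convergence failures that a fixed choice can produce on the nonlinear problem.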
Coordinated Control of Cross-Flow Turbines
NASA Astrophysics Data System (ADS)
Strom, Benjamin; Brunton, Steven; Polagye, Brian
2016-11-01
Cross-flow turbines, also known as vertical-axis turbines, have several advantages over axial-flow turbines for a number of applications including urban wind power, high-density arrays, and marine or fluvial currents. By controlling the angular velocity applied to the turbine as a function of angular blade position, we have demonstrated a 79 percent increase in cross-flow turbine efficiency over constant-velocity control. This strategy uses the downhill simplex method to optimize control parameter profiles during operation of a model turbine in a recirculating water flume. This optimization method is extended to a set of two turbines, where the blade motions and position of the downstream turbine are optimized to beneficially interact with the coherent structures in the wake of the upstream turbine. This control scheme has the potential to enable high-density arrays of cross-flow turbines to operate at cost-effective efficiency. Turbine wake and force measurements are analyzed for insight into the effect of a coordinated control strategy.
Motion Correction in PROPELLER and Turboprop-MRI
Tamhane, Ashish A.; Arfanakis, Konstantinos
2009-01-01
PROPELLER and Turboprop-MRI are characterized by greatly reduced sensitivity to motion, compared to their predecessors, fast spin-echo and gradient and spin-echo, respectively. This is due to the inherent self-navigation and motion correction of PROPELLER-based techniques. However, it is unknown how various acquisition parameters that determine k-space sampling affect the accuracy of motion correction in PROPELLER and Turboprop-MRI. The goal of this work was to evaluate the accuracy of motion correction in both techniques, to identify an optimal rotation correction approach, and to determine acquisition strategies for optimal motion correction. It was demonstrated that blades with multiple lines allow more accurate estimation of motion than blades with fewer lines. Also, it was shown that Turboprop-MRI is less sensitive to motion than PROPELLER. Furthermore, it was demonstrated that the number of blades does not significantly affect motion correction. Finally, clinically appropriate acquisition strategies that optimize motion correction were discussed for PROPELLER and Turboprop-MRI. PMID:19365858
Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health
NASA Technical Reports Server (NTRS)
2004-01-01
Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault-isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information; including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information; including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. 
Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
Optimal coordination and control of posture and movements.
Johansson, Rolf; Fransson, Per-Anders; Magnusson, Måns
2009-01-01
This paper presents a theoretical model of stability and coordination of posture and locomotion, together with algorithms for continuous-time quadratic optimization of motion control. Explicit solutions to the Hamilton-Jacobi equation for optimal control of rigid-body motion are obtained by solving an algebraic matrix equation. The stability is investigated with Lyapunov function theory and it is shown that global asymptotic stability holds. It is also shown how optimal control and adaptive control may act in concert in the case of unknown or uncertain system parameters. The solution describes motion strategies of minimum effort and variance. The proposed optimal control is formulated to be suitable as a posture and movement model for experimental validation and verification. The combination of adaptive and optimal control makes this algorithm a candidate for coordination and control of functional neuromuscular stimulation as well as of prostheses. Validation examples with experimental data are provided.
Multi-step optimization strategy for fuel-optimal orbital transfer of low-thrust spacecraft
NASA Astrophysics Data System (ADS)
Rasotto, M.; Armellin, R.; Di Lizia, P.
2016-03-01
An effective method for the design of fuel-optimal transfers in two- and three-body dynamics is presented. The optimal control problem is formulated using calculus of variation and primer vector theory. This leads to a multi-point boundary value problem (MPBVP), characterized by complex inner constraints and a discontinuous thrust profile. The first issue is addressed by embedding the MPBVP in a parametric optimization problem, thus allowing a simplification of the set of transversality constraints. The second problem is solved by representing the discontinuous control function by a smooth function depending on a continuation parameter. The resulting trajectory optimization method can deal with different intermediate conditions, and no a priori knowledge of the control structure is required. Test cases in both the two- and three-body dynamics show the capability of the method in solving complex trajectory design problems.
Optimal strategies for electric energy contract decision making
NASA Astrophysics Data System (ADS)
Song, Haili
2000-10-01
The power industry restructuring in various countries in recent years has created an environment where trading of electric energy is conducted in a market environment. In such an environment, electric power companies compete for the market share through spot and bilateral markets. Being profit driven, electric power companies need to make decisions on spot market bidding, contract evaluation, and risk management. New methods and software tools are required to meet these upcoming needs. In this research, bidding strategy and contract pricing are studied from a market participant's viewpoint; new methods are developed to guide a market participant in spot and bilateral market operation. A supplier's spot market bidding decision is studied. Stochastic optimization is formulated to calculate a supplier's optimal bids in a single time period. This decision making problem is also formulated as a Markov Decision Process. All the competitors are represented by their bidding parameters with corresponding probabilities. A systematic method is developed to calculate transition probabilities and rewards. The optimal strategy is calculated to maximize the expected reward over a planning horizon. Besides the spot market, a power producer can also trade in the bilateral markets. Bidding strategies in a bilateral market are studied with game theory techniques. Necessary and sufficient conditions of Nash Equilibrium (NE) bidding strategy are derived based on the generators' cost and the loads' willingness to pay. The study shows that in any NE, market efficiency is achieved. Furthermore, all Nash equilibria are revenue equivalent for the generators. The pricing of "Flexible" contracts, which allow delivery flexibility over a period of time with a fixed total amount of electricity to be delivered, is analyzed based on the no-arbitrage pricing principle. The proposed algorithm calculates the price based on the optimality condition of the stochastic optimization formulation. 
Simulation examples illustrate the tradeoffs between prices and scheduling flexibility. Spot bidding and contract pricing are not independent decision processes. The interaction between spot bidding and contract evaluation is demonstrated with game theory equilibrium model and market simulation results. It leads to the conclusion that a market participant's contract decision making needs to be further investigated as an integrated optimization formulation.
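The Markov Decision Process formulation of spot-market bidding described above can be illustrated with a toy value iteration. The two states, transition probabilities and rewards below are invented for illustration only and are not derived from any market data or from the dissertation's model.

```python
# Value iteration for a small MDP: states are coarse market conditions,
# actions are bid levels, P[s][a][s'] are transition probabilities, and
# R[s][a] is the expected immediate reward (profit) of bidding a in state s.
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    n_states, n_actions = len(P), len(P[0])
    V = [0.0] * n_states
    while True:
        V_new = [max(R[s][a] + gamma * sum(P[s][a][s2] * V[s2]
                                           for s2 in range(n_states))
                     for a in range(n_actions))
                 for s in range(n_states)]
        if max(abs(x - y) for x, y in zip(V, V_new)) < tol:
            return V_new                 # converged value function
        V = V_new

# Two market states (0 = low price, 1 = high price), two actions (0 = low bid,
# 1 = high bid). All numbers are purely illustrative.
P = [
    [[0.8, 0.2], [0.5, 0.5]],
    [[0.3, 0.7], [0.1, 0.9]],
]
R = [
    [1.0, 0.5],
    [2.0, 3.0],
]
V = value_iteration(P, R)
```

The converged `V` ranks the long-run value of each market state; the optimal bidding policy is then recovered by taking, in each state, the action that achieves the maximum in the Bellman update, which mirrors how the dissertation's supplier maximizes expected reward over a planning horizon.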
Decoupled CFD-based optimization of efficiency and cavitation performance of a double-suction pump
NASA Astrophysics Data System (ADS)
Škerlavaj, A.; Morgut, M.; Jošt, D.; Nobile, E.
2017-04-01
In this study the impeller geometry of a double-suction pump ensuring the best performance in terms of hydraulic efficiency and resistance to cavitation is determined using an optimization strategy driven by means of the modeFRONTIER optimization platform. The different impeller shapes (designs) are modified according to the optimization parameters and tested with a computational fluid dynamics (CFD) software, namely ANSYS CFX. The simulations are performed using a decoupled approach, where only the impeller domain region is numerically investigated for computational convenience. The flow losses in the volute are estimated on the basis of the velocity distribution at the impeller outlet. The best designs are then validated considering the computationally more expensive full-geometry CFD model. The overall results show that the proposed approach is suitable for quick impeller shape optimization.
A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2001-01-01
An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.
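The response-surface traversal described in this abstract can be illustrated with a one-variable quadratic surrogate standing in for the paper's neural networks (toy objective and sample points invented):

```python
import numpy as np

# Toy objective standing in for an expensive CFD evaluation (assumed form).
def objective(x):
    return (x - 0.3) ** 2 + 1.0

# Sample a few design points, fit a quadratic response surface, and
# take its analytic minimizer as the next design candidate.
x_train = np.array([-1.0, 0.0, 0.5, 1.0])
y_train = objective(x_train)
a, b, c = np.polyfit(x_train, y_train, 2)   # surrogate y ≈ a x^2 + b x + c
x_next = -b / (2.0 * a)                     # vertex of the fitted parabola
```

In the actual method, a sequence of such surrogates (composite neural-network response surfaces) is refit around each candidate before re-evaluating the simulation code.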
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Zeng, Ziqiang; Han, Bernard; Lei, Xiao
2013-07-01
This article presents a dynamic programming-based particle swarm optimization (DP-based PSO) algorithm for solving an inventory management problem for large-scale construction projects under a fuzzy random environment. By taking into account purchasing behaviour and strategy under rules of international bidding, a multi-objective fuzzy random dynamic programming model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform fuzzy random parameters into fuzzy variables that are subsequently defuzzified by using an expected value operator with an optimistic-pessimistic index. The iterative nature of the authors' model motivates them to develop a DP-based PSO algorithm. More specifically, their approach treats the state variables as hidden parameters. This in turn eliminates many redundant feasibility checks during initialization and particle updates at each iteration. Results and sensitivity analysis are presented to highlight the performance of the authors' optimization method, which is very effective compared to the standard PSO algorithm.
Vaisali, C; Belur, Prasanna D; Regupathi, Iyyaswami
2017-10-01
Lipophilization of antioxidants is recognized as an effective strategy to enhance solubility and thus effectiveness in lipid-based food. In this study, an effort was made to optimize rutin fatty ester synthesis in two different solvent systems to understand the influence of reaction system hydrophobicity on the optimum conditions using immobilised Candida antarctica lipase. Under unoptimized conditions, 52.14% and 13.02% conversion was achieved in the acetone and tert-butanol solvent systems, respectively. Among all the process parameters, the water activity of the system was found to show the highest influence on conversion in each reaction system. In the presence of molecular sieves, ester production increased to 62.9% in the tert-butanol system, unlike the acetone system. Under optimal conditions, conversion increased to 60.74% and 65.73% in the acetone and tert-butanol systems, respectively. This study shows that maintaining optimal water activity is crucial in reaction systems with polar solvents compared to more non-polar solvents.
Robust Design Optimization via Failure Domain Bounding
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2007-01-01
This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.
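Bounding the constraint-violation region with a hypersphere amounts to finding the distance from the nominal design to the nearest violating point. A sketch using a generic constrained optimizer, with an invented linear violation condition (not the paper's formulation):

```python
import numpy as np
from scipy.optimize import minimize

# Nominal design at the origin; the constraint is assumed violated
# when p1 + p2 >= 1 (an invented toy violation region).
p_nominal = np.zeros(2)

# Radius of the largest hypersphere around the nominal design that
# excludes the violation region = distance to the nearest violating point.
res = minimize(
    lambda p: np.sum((p - p_nominal) ** 2),   # squared distance to nominal
    x0=np.array([1.0, 1.0]),                  # start inside the violation region
    method="SLSQP",
    constraints=[{"type": "ineq", "fun": lambda p: p[0] + p[1] - 1.0}],
)
radius = np.sqrt(res.fun)
```

Any hypersphere of this radius centered at the nominal design is guaranteed violation-free, which is the feasibility certificate the paper's strategies build on.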
Calvet, Christophe Y; Thalmensi, Jessie; Liard, Christelle; Pliquet, Elodie; Bestetti, Thomas; Huet, Thierry; Langlade-Demoyen, Pierre; Mir, Lluis M
2014-01-01
DNA vaccination consists in administering an antigen-encoding plasmid in order to trigger a specific immune response. This specific vaccine strategy is of particular interest to fight against various infectious diseases and cancer. Gene electrotransfer is the most efficient and safest non-viral gene transfer procedure and specific electrical parameters have been developed for several target tissues. Here, a gene electrotransfer protocol into the skin has been optimized in mice for efficient intradermal immunization against the well-known telomerase tumor antigen. First, the luciferase reporter gene was used to evaluate gene electrotransfer efficiency into the skin as a function of the electrical parameters and electrodes, either non-invasive or invasive. In a second time, these parameters were tested for their potency to generate specific cellular CD8 immune responses against telomerase epitopes. These CD8 T-cells were fully functional as they secreted IFNγ and were endowed with specific cytotoxic activity towards target cells. This simple and optimized procedure for efficient gene electrotransfer into the skin using the telomerase antigen is to be used in cancer patients for the phase 1 clinical evaluation of a therapeutic cancer DNA vaccine called INVAC-1. PMID:26015983
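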
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
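The estimator described here, which picks the parameter realization whose smallest requirement-compliance margin is largest, can be sketched for a one-parameter toy model (data and admissible error limits invented):

```python
import numpy as np
from scipy.optimize import minimize

# Toy model y = a*x with a per-point admissible error eps (invented data).
x_obs = np.array([1.0, 2.0])
y_obs = np.array([1.0, 2.2])
eps = 0.3

def neg_smallest_margin(theta):
    # Margin of compliance for each requirement |a*x - y| <= eps.
    margins = eps - np.abs(theta[0] * x_obs - y_obs)
    return -np.min(margins)          # negate: optimizer minimizes

# Parameter estimate: the value whose smallest margin is as large as possible.
res = minimize(neg_smallest_margin, x0=[0.5], method="Nelder-Mead")
a_hat = res.x[0]
smallest_margin = -res.fun
```

A positive smallest margin means all validation requirements are met with room to spare; the uncertainty set studied in the paper is the set of parameters for which this margin stays nonnegative.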
A link-adding strategy for transport efficiency of complex networks
NASA Astrophysics Data System (ADS)
Ma, Jinlong; Han, Weizhan; Guo, Qing; Wang, Zhenyong; Zhang, Shuai
2016-12-01
The transport efficiency is one of the critical parameters to evaluate the performance of a network. In this paper, we propose an improved efficient (IE) strategy to enhance the network transport efficiency of complex networks by adding a fraction of links to an existing network based on the node’s local degree centrality and the shortest path length. Simulation results show that the proposed strategy can bring better traffic capacity and shorter average shortest path length than the low-degree-first (LDF) strategy under the shortest path routing protocol. It is found that the proposed strategy is beneficial to the improvement of overall traffic handling and delivering ability of the network. This study can alleviate the congestion in networks, and is helpful to design and optimize realistic networks.
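A hedged sketch of a link-adding heuristic in this spirit, scoring candidate pairs by shortest-path length over the product of endpoint degrees (an assumed proxy for the paper's IE criterion, not its exact formula):

```python
import networkx as nx

def add_links(G, k):
    """Add k links between non-adjacent pairs, preferring distant,
    low-degree node pairs."""
    G = G.copy()
    for _ in range(k):
        spl = dict(nx.all_pairs_shortest_path_length(G))
        best = max(
            ((u, v) for u, v in nx.non_edges(G)),
            key=lambda e: spl[e[0]][e[1]] / (G.degree[e[0]] * G.degree[e[1]]),
        )
        G.add_edge(*best)
    return G

G0 = nx.barabasi_albert_graph(30, 2, seed=1)   # toy scale-free network
G1 = add_links(G0, 5)
```

Adding such shortcuts can only shorten shortest paths, which is the mechanism behind the improved traffic capacity reported under shortest-path routing.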
A model for HIV/AIDS pandemic with optimal control
NASA Astrophysics Data System (ADS)
Sule, Amiru; Abdullah, Farah Aini
2015-05-01
Human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) is pandemic. It has affected nearly 60 million people since the detection of the disease in 1981. In this paper a basic deterministic HIV/AIDS model with a mass action incidence function is developed. Stability analysis is carried out, and the disease-free equilibrium of the basic model is found to be locally asymptotically stable whenever the threshold parameter (R0) is less than one, and unstable otherwise. The model is extended by introducing two optimal control strategies, namely CD4 counts and treatment for the infective, using optimal control theory. Numerical simulation was carried out in order to illustrate the analytic results.
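The threshold behaviour around R0 can be illustrated with a minimal susceptible-infective model (all parameter values invented; the paper's full model and control terms are not reproduced):

```python
import numpy as np

# Minimal S-I model with recruitment Lam, natural death mu, and
# disease-induced death delta (toy values for illustration).
Lam, mu, delta, beta = 1.0, 0.1, 0.1, 0.01
R0 = beta * Lam / (mu * (mu + delta))   # basic reproduction number

S, I, dt = Lam / mu, 1.0, 0.01          # start at disease-free S*, one infective
for _ in range(int(200 / dt)):          # forward-Euler integration
    dS = Lam - beta * S * I - mu * S
    dI = beta * S * I - (mu + delta) * I
    S += dS * dt
    I += dI * dt
```

With these values R0 = 0.5 < 1, so the infective compartment decays to zero, consistent with local asymptotic stability of the disease-free equilibrium.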
Sensor-Based Optimized Control of the Full Load Instability in Large Hydraulic Turbines
Presas, Alexandre; Valero, Carme; Egusquiza, Eduard
2018-01-01
Hydropower plants are of paramount importance for the integration of intermittent renewable energy sources in the power grid. In order to match the energy generated and consumed, large hydraulic turbines have to work under off-design conditions, which may lead to dangerous unstable operating points involving the hydraulic, mechanical and electrical system. Under these conditions, the stability of the grid and the safety of the power plant itself can be compromised. For many Francis turbines one of these critical points, which usually limits the maximum output power, is the full load instability. Therefore, these machines usually work far away from this unstable point, reducing the effective operating range of the unit. In order to extend the operating range of the machine, working closer to this point with a reasonable safety margin, it is of paramount importance to monitor and control relevant parameters of the unit, which have to be obtained with an accurate sensor acquisition strategy. Within the framework of a large EU project, field tests in a large Francis turbine located in Canada (rated power of 444 MW) have been performed. Many different sensors were used to monitor several working parameters of the unit over all its operating range. Particularly for these tests, more than 80 signals, from ten different types of sensors together with several operating signals that define the operating point of the unit, were simultaneously acquired. The present study focuses on the optimization of the acquisition strategy, which includes the type, number, location and acquisition frequency of the sensors and the corresponding signal analysis to detect the full load instability and to prevent the unit from reaching this point. A systematic approach to determine this strategy has been followed. It has been found that some indicators obtained with different types of sensors are linearly correlated with the oscillating power. 
The optimized strategy has been determined based on the correlation characteristics (linearity, sensitivity and reactivity), the simplicity of the installation and the necessary acquisition frequency. Finally, an economical and easily implementable protection system based on the resulting optimized acquisition strategy is proposed. This system, which can be used in a generic Francis turbine with a similar full load instability, permits one to extend the operating range of the unit by working close to the instability with a reasonable safety margin. PMID:29601512
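The linear-correlation screening of indicators against the oscillating power can be sketched with synthetic signals (sensor names and coefficients invented):

```python
import numpy as np

rng = np.random.default_rng(0)
power_oscillation = rng.normal(size=200)   # stand-in for the measured power swing

# Synthetic readings from three hypothetical sensors: one tracks the
# oscillating power almost linearly, the others are noise-dominated.
sensors = {
    "proximity_probe": 2.0 * power_oscillation + rng.normal(0.0, 0.1, 200),
    "accelerometer": 0.2 * power_oscillation + rng.normal(0.0, 1.0, 200),
    "microphone": rng.normal(0.0, 1.0, 200),
}

# Rank sensors by |Pearson r| against the oscillating power.
ranked = sorted(
    sensors,
    key=lambda s: abs(np.corrcoef(sensors[s], power_oscillation)[0, 1]),
    reverse=True,
)
```

The top-ranked indicator would be a candidate for the protection system; the paper additionally weighs sensitivity, reactivity and installation simplicity, which this sketch omits.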
On a New Optimization Approach for the Hydroforming of Defects-Free Tubular Metallic Parts
NASA Astrophysics Data System (ADS)
Caseiro, J. F.; Valente, R. A. F.; Andrade-Campos, A.; Jorge, R. M. Natal
2011-05-01
In the hydroforming of tubular metallic components, process parameters (internal pressure, axial feed and counter-punch position) must be carefully set in order to avoid defects in the final part. If, on one hand, excessive pressure may lead to thinning and bursting during forming, on the other hand insufficient pressure may lead to an inadequate filling of the die. Similarly, excessive axial feeding may lead to the formation of wrinkles, whilst inadequate feeding may cause thinning and, consequently, bursting. These apparently contradictory targets are virtually impossible to achieve without trial-and-error procedures in industry, unless optimization approaches are formulated and implemented for complex parts. In this sense, an optimization algorithm based on differential evolutionary techniques is presented here, capable of being applied in the determination of adequate process parameters for the hydroforming of metallic tubular components of complex geometries. The Hybrid Differential Evolution Particle Swarm Optimization (HDEPSO) algorithm, combining the advantages of a number of well-known distinct optimization strategies, acts along with a general-purpose implicit finite element software, and is based on the definition of wrinkling and thinning indicators. If defects are detected, the algorithm automatically corrects the process parameters and new numerical simulations are performed in real time. In the end, the algorithm proved to be robust and computationally cost-effective, thus providing a valid design tool for the forming of defect-free components in industry [1].
Milando, Chad W.; Martenies, Sheena E.; Batterman, Stuart A.
2017-01-01
In air quality management, reducing emissions from pollutant sources often forms the primary response to attaining air quality standards and guidelines. Despite the broad success of air quality management in the US, challenges remain. As examples: allocating emissions reductions among multiple sources is complex and can require many rounds of negotiation; health impacts associated with emissions, the ultimate driver for the standards, are not explicitly assessed; and long dispersion model run-times, which result from the increasing size and complexity of model inputs, limit the number of scenarios that can be evaluated, thus increasing the likelihood of missing an optimal strategy. A new modeling framework, called the "Framework for Rapid Emissions Scenario and Health impact ESTimation" (FRESH-EST), is presented to respond to these challenges. FRESH-EST estimates concentrations and health impacts of alternative emissions scenarios at the urban scale, providing efficient computations from emissions to health impacts at the Census block or other desired spatial scale. In addition, FRESH-EST can optimize emission reductions to meet specified environmental and health constraints, and a convenient user interface and graphical displays are provided to facilitate scenario evaluation. The new framework is demonstrated in an SO2 non-attainment area in southeast Michigan with two optimization strategies: the first minimizes emission reductions needed to achieve a target concentration; the second minimizes concentrations while holding constant the cumulative emissions across local sources (e.g., an emissions floor). The optimized strategies match outcomes in the proposed SO2 State Implementation Plan without the proposed stack parameter modifications or shutdowns. In addition, the lower health impacts estimated for these strategies suggest the potential for FRESH-EST to identify pollution control alternatives for air quality management planning. PMID:27318620
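Minimizing emission reductions to achieve a target concentration, as in the first optimization strategy, is a linear program when source-receptor relationships are linear. A sketch with invented coefficients (not FRESH-EST's actual formulation):

```python
import numpy as np
from scipy.optimize import linprog

# Invented source-receptor coefficients: c[i] is the concentration
# contribution (ug/m3 per ton) of source i at the critical receptor.
c = np.array([0.8, 0.5, 0.2])
emissions = np.array([100.0, 150.0, 200.0])   # current emissions (tons)
current = c @ emissions                        # current concentration
target = 150.0                                 # required concentration

# Minimize total reductions r subject to c . (e - r) <= target, 0 <= r_i <= e_i.
res = linprog(
    c=np.ones(3),                 # objective: sum of reductions
    A_ub=[-c],                    # -c . r <= target - current  <=>  c . r >= gap
    b_ub=[target - current],
    bounds=list(zip(np.zeros(3), emissions)),
)
reductions = res.x
```

The solver concentrates reductions on the source with the largest receptor impact per ton, which mirrors how an optimal allocation can differ from negotiated pro-rata cuts.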
NASA Astrophysics Data System (ADS)
Braun, Robert Joseph
The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales, thereby simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below near-future regulatory standards. However, before the advantages of fuel cells can be realized, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life cycle costs. The models are used to evaluate optimal cell-stack power output and the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found by minimizing life cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in the stack periphery is found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis), 85% cogenerative, and simple economic paybacks of 5--8 years are feasible for 1--2 kW SOFC systems in residential-scale applications. 
Design guidelines that offer additional suggestions related to fuel cell-stack sizing and operating strategy (base-load or load-following and cogeneration or electric-only) are also presented.
Carrara, Mauro; Cusumano, Davide; Giandini, Tommaso; Tenconi, Chiara; Mazzarella, Ester; Grisotto, Simone; Massari, Eleonora; Mazzeo, Davide; Cerrotta, Annamaria; Pappalardi, Brigida; Fallai, Carlo; Pignoli, Emanuele
2017-12-01
A direct planning approach with multi-channel vaginal cylinders (MVCs) used for HDR brachytherapy of vaginal cancers is particularly challenging. The purpose of this study was to compare the dosimetric performance of different forward and inverse methods used for the optimization of MVC-based vaginal treatments for endometrial cancer, with particular attention to the definition of strategies useful for limiting high doses to the vaginal mucosa. Twelve postoperative vaginal HDR brachytherapy treatments performed with MVCs were considered. Plans were retrospectively optimized with three different methods: Dose Point Optimization followed by Graphical Optimization (DPO + GrO), Inverse Planning Simulated Annealing with two different class solutions as starting conditions (surfIPSA and homogIPSA) and Hybrid Inverse Planning Optimization (HIPO). Several dosimetric parameters related to target coverage, hot spot extensions and sparing of organs at risk were analyzed to evaluate the quality of the achieved treatment plans. The dose homogeneity index (DHI), conformal index (COIN) and a further parameter quantifying the proportion of the central catheter loading with respect to the overall loading (i.e., the central catheter loading index: CCLI) were also quantified. The achieved PTV coverage parameters were highly correlated with each other but uncorrelated with the hot spot quantifiers. HomogIPSA and HIPO achieved higher DHIs and CCLIs and lower volumes of high doses than DPO + GrO and surfIPSA. Within the investigated optimization methods, HIPO and homogIPSA showed the highest dose homogeneity to the target. In particular, homogIPSA was also the most effective at reducing hot spots to the vaginal mucosa.
Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad; Roman, Monica
2015-01-01
Monoclonal antibodies (mAbs) are at present one of the fastest growing products of pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAbs production processes is predominantly based on empirical knowledge, the improvements being achieved by using trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. By using a dynamical model of such kind of processes, an optimization-based technique for estimation of kinetic parameters in the model of mammalian cell culture process is developed. The estimation is achieved as a result of minimizing an error function by a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed in this work by using a particular model of mammalian cell culture, as a case study, but is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
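A minimal PSO parameter-estimation loop in the spirit described, fitting an invented Monod-type rate law to noise-free synthetic data (the paper's mammalian cell culture model is not reproduced):

```python
import numpy as np

# Synthetic substrate levels and "observed" rates from invented true
# parameters mu_max = 0.5, Ks = 2.0.
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
rate = lambda p: p[0] * S / (p[1] + S)
y = rate([0.5, 2.0])

def sse(p):                                # error function to minimize
    return np.sum((rate(p) - y) ** 2)

rng = np.random.default_rng(0)
lo, hi = np.array([0.0, 0.0]), np.array([2.0, 10.0])
x = rng.uniform(lo, hi, (40, 2))           # particle positions
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([sse(p) for p in x])
g = pbest[pbest_f.argmin()].copy()         # global best

for _ in range(300):
    r1, r2 = rng.random((40, 2)), rng.random((40, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([sse(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[pbest_f.argmin()].copy()
```

The global best converges to the true parameter pair; with real, noisy bioprocess data the same loop minimizes the model-versus-measurement error function instead of an exact residual.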
A back-fitting algorithm to improve real-time flood forecasting
NASA Astrophysics Data System (ADS)
Zhang, Xiaojing; Liu, Pan; Cheng, Lei; Liu, Zhangjun; Zhao, Yan
2018-07-01
Real-time flood forecasting is important for decision-making with regard to flood control and disaster reduction. The conventional approach involves a postprocessor calibration strategy that first calibrates the hydrological model and then estimates errors. This procedure can simulate streamflow consistent with observations, but the obtained parameters are not optimal. Joint calibration strategies address this issue by refining hydrological model parameters jointly with the autoregressive (AR) model. In this study, five alternative schemes are used to forecast floods. Scheme I uses only the hydrological model, while scheme II includes an AR model for error correction. In scheme III, differencing is used to remove non-stationarity in the error series. A joint inference strategy employed in scheme IV calibrates the hydrological and AR models simultaneously. The back-fitting algorithm, a basic approach for training an additive model, is adopted in scheme V to alternately recalibrate hydrological and AR model parameters. The performance of the five schemes is compared with a case study of 15 recorded flood events from China's Baiyunshan reservoir basin. Our results show that (1) schemes IV and V outperform scheme III during the calibration and validation periods and (2) scheme V is inferior to scheme IV in the calibration period, but provides better results in the validation period. Joint calibration strategies can therefore improve the accuracy of flood forecasting. Additionally, the back-fitting recalibration strategy produces weaker overcorrection and a more robust performance compared with the joint inference strategy.
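The back-fitting of scheme V, alternating between the hydrological parameter and the AR(1) error model, can be sketched with a one-parameter linear "hydrological" model and synthetic data (all values invented):

```python
import numpy as np

# Synthetic data: "streamflow" y = a*x plus AR(1) errors (a = 2, rho = 0.6).
rng = np.random.default_rng(42)
n, a_true, rho_true = 500, 2.0, 0.6
x = rng.uniform(1.0, 5.0, n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho_true * e[t - 1] + rng.normal(0.0, 0.1)
y = a_true * x + e

# Back-fitting: alternately re-estimate the model parameter a and the
# AR(1) coefficient rho until both stabilize.
a, rho = np.sum(x * y) / np.sum(x * x), 0.0   # OLS start, no AR correction
for _ in range(10):
    r = y - a * x                              # current model residuals
    rho = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)   # lag-1 fit
    u = x[1:] - rho * x[:-1]                   # quasi-differenced regressors
    z = y[1:] - rho * y[:-1]
    a = np.sum(u * z) / np.sum(u * u)          # refit a given rho
```

Each pass recovers both parameters from the other's residuals, the same alternating structure used to jointly calibrate the hydrological and AR models.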
NASA Astrophysics Data System (ADS)
Miharja, M.; Priadi, Y. N.
2018-05-01
Promoting better public transport is a key strategy for coping with urban transport problems, which are mostly caused by heavy private vehicle usage. Good public transport service quality depends not only on one type of public transport mode, but also on service integration between modes. Fragmented inter-mode public transport service leads to a longer trip chain and average travel time, which would result in its failure to compete with private vehicles. This paper examines the optimisation of operation system integration between the Trans Jakarta Bus as the main public transport mode and the Kopaja Bus as a feeder public transport service in Jakarta. Using a scoring-interview method combined with standard parameters of operation system integration, this paper identifies the key factors that determine the success of the integration of the two public transport operation systems. The study found that some key integration parameters, such as the cancellation of the "setoran" system (a daily driver revenue-deposit scheme), passengers boarding and alighting at official stop points, and systematic payment, positively contribute to better service integration. However, some parameters such as the fine system, time and changing-point reliability, and information system reliability are among those which need improvement. These findings are very useful for the authority to set the right strategy to improve operation system integration between the Trans Jakarta and Kopaja Bus services.
Optimal pacing strategy: from theoretical modelling to reality in 1500-m speed skating.
Hettinga, F J; De Koning, J J; Schmidt, L J I; Wind, N A C; Macintosh, B R; Foster, C
2011-01-01
Athletes are trained to choose the pace which is perceived to be correct during a specific effort, such as the 1500-m speed skating competition. The purpose of the present study was to "override" self-paced (SP) performance by instructing athletes to execute a theoretically optimal pacing profile. Seven national-level speed-skaters performed a SP 1500-m which was analysed by obtaining velocity (every 100 m) and body position (every 200 m) with video to calculate total mechanical power output. Together with gross efficiency and aerobic kinetics, obtained in separate trials, data were used to calculate aerobic and anaerobic power output profiles. An energy flow model was applied to SP, simulating a range of pacing strategies, and a theoretically optimal pacing profile was imposed in a second race (IM). Final time for IM was ∼2 s slower than SP. Total power distribution per lap differed, with a higher power over the first 300 m for IM (637.0 (49.4) vs 612.5 (50.0) W). Anaerobic parameters did not differ. The faster first lap resulted in a higher aerodynamic drag coefficient and perhaps a less effective push-off. Experienced athletes have a well-developed performance template, and changing pacing strategy towards a theoretically optimal fast start protocol had negative consequences on speed-skating technique and did not result in better performance.
Optimization of robustness of interdependent network controllability by redundant design
2018-01-01
Controllability of complex networks has been a hot topic in recent years. Real networks, regarded as interdependent networks, are always coupled together by multiple networks. The cascading process of interdependent networks, including interdependent failure and overload failure, will destroy the robustness of controllability for the whole network. Therefore, the optimization of the robustness of interdependent network controllability is of great importance in the research area of complex networks. In this paper, based on the model of interdependent networks constructed first, we determine the cascading process under different proportions of node attacks. Then, the structural controllability of interdependent networks is measured by the minimum driver nodes. Furthermore, we propose a parameter which can be obtained from the structure and minimum driver set of interdependent networks under different proportions of node attacks, and analyze the robustness of interdependent network controllability. Finally, we optimize the robustness of interdependent network controllability by redundant design, including node backup and redundancy edge backup, and improve the redundant design by proposing different strategies according to their cost. Comparative experiments on redundant design strategies are conducted to find the best strategy. Results show that node backup and redundancy edge backup can indeed reduce the number of nodes suffering from failure and improve the robustness of controllability. Considering the cost of redundant design, we should choose BBS (betweenness-based strategy) or DBS (degree-based strategy) for node backup and HDF (high-degree-first) for redundancy edge backup. Above all, our proposed strategies are feasible and effective at improving the robustness of interdependent network controllability. PMID:29438426
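Measuring structural controllability by minimum driver nodes is commonly done with the maximum-matching construction of Liu et al.; a sketch of that standard computation (used here as a generic stand-in, not necessarily the paper's exact procedure):

```python
import networkx as nx
from networkx.algorithms import bipartite

def min_driver_nodes(dg):
    """Minimum number of driver nodes for structural controllability of a
    directed network, via maximum matching on its bipartite representation."""
    B = nx.Graph()
    out_nodes = [("out", u) for u in dg.nodes]        # tail copies
    B.add_nodes_from(out_nodes)
    B.add_nodes_from(("in", v) for v in dg.nodes)     # head copies
    B.add_edges_from((("out", u), ("in", v)) for u, v in dg.edges)
    matching = bipartite.hopcroft_karp_matching(B, top_nodes=out_nodes)
    matched = len(matching) // 2                      # dict stores both directions
    return max(dg.number_of_nodes() - matched, 1)

path = nx.DiGraph([(0, 1), (1, 2), (2, 3)])   # fully matchable chain
star = nx.DiGraph([(0, 1), (0, 2), (0, 3)])   # hub can drive only one leaf
```

A chain needs a single driver at its head, while a star hub leaves its unmatched leaves as additional drivers, illustrating why structure alone fixes the driver count.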
Anderson, D.R.
1974-01-01
Optimal exploitation strategies were studied for an animal population in a stochastic, serially correlated environment. This is a general case and encompasses a number of important cases as simplifications. Data on the mallard (Anas platyrhynchos) were used to explore the exploitation strategies and test several hypotheses, because relatively much is known concerning the life history and general ecology of this species and extensive empirical data are available for analysis. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. Desirable properties of an optimal exploitation strategy were defined. A mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. Both the literature and the analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, alternative hypotheses were formulated: (1) exploitation mortality represents a largely additive form of mortality, or (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. Assuming that exploitation is largely an additive force of mortality, optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions.
Optimal exploitation under this hypothesis tends to reduce the variance of the size of the population. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the breeding population. Environmental variables may be somewhat more important than the size of the breeding population to the production of young mallards. In contrast, the size of the breeding population appears to be more important in the exploitation process than is the state of the environment. The form of the exploitation strategy appears to be relatively insensitive to small changes in the production rate. In general, the relative importance of the size of the breeding population may decrease as fecundity increases. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, harvest rate, or designed to maintain a constant breeding population size is inefficient.
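The core construction above — stochastic dynamic programming with feedback on both population size and a Markov environment — can be sketched in a few lines. The growth rates, transition probabilities, and grids below are illustrative placeholders, not the mallard parameters estimated in the study; the harvest is treated as purely additive mortality.

```python
# Minimal sketch of the stochastic dynamic programming idea: choose a
# harvest each year to maximize expected cumulative harvest, with
# population growth depending on a random (Markov) environment.
# All numbers are illustrative assumptions.

def solve_dp(T=10, n_max=100, growth=(1.3, 0.9), p_stay=0.7):
    states = range(n_max + 1)          # discretized population size
    envs = (0, 1)                      # 0 = wet (good), 1 = dry (poor)
    V = {(n, e): 0.0 for n in states for e in envs}
    policy = {}
    for t in reversed(range(T)):
        V_next = {}
        for n in states:
            for e in envs:
                best, best_h = -1.0, 0
                for h in range(n + 1):             # harvest decision
                    survivors = n - h              # additive mortality
                    value = float(h)
                    for e2 in envs:
                        p = p_stay if e2 == e else 1 - p_stay
                        n2 = min(int(survivors * growth[e2]), n_max)
                        value += p * V[(n2, e2)]
                    if value > best:
                        best, best_h = value, h
                V_next[(n, e)] = best
                policy[(t, n, e)] = best_h
        V = V_next
    return V, policy

V, policy = solve_dp(T=5, n_max=40)
# Feedback control: the optimal harvest depends on both the observed
# population size and the current environmental state.
print(policy[(0, 40, 0)], policy[(0, 40, 1)])
```

The resulting policy is a lookup table in (year, population, environment), which is exactly the direct feedback control structure the abstract contrasts with constant-harvest rules.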
Optimized Production of Xylitol from Xylose Using a Hyper-Acidophilic Candida tropicalis.
Tamburini, Elena; Costa, Stefania; Marchetti, Maria Gabriella; Pedrini, Paola
2015-08-19
The yeast Candida tropicalis DSM 7524 produces xylitol, a natural, low-calorie sweetener, by fermentation of xylose. In order to increase the xylitol production rate during the submerged fermentation process, several parameters, namely substrate (xylose) concentration, pH, aeration rate, temperature and fermentation strategy, have been optimized. The maximum xylitol yield, reached at 60-80 g/L initial xylose concentration and pH 5.5 at 37 °C, was 83.66% (w/w) on consumed xylose in microaerophilic conditions (kLa = 2 h−1). Scaling up in a 3 L fermenter with a fed-batch strategy, the best xylitol yield was 86.84% (w/w), against a theoretical yield of 90%. The hyper-acidophilic behaviour of C. tropicalis makes this strain particularly promising for industrial application, owing to the possibility of working in non-sterile conditions.
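The yield figures quoted above are on a consumed-substrate basis, i.e. grams of xylitol produced per gram of xylose consumed, times 100. A trivial helper makes the arithmetic explicit (the numbers are those reported in the abstract):

```python
# Yield on consumed substrate: (g product / g substrate consumed) * 100.

def yield_percent(product_g, substrate_consumed_g):
    return 100.0 * product_g / substrate_consumed_g

# e.g. consuming 70 g/L xylose at the reported 83.66 % (w/w) yield
produced = 70 * 0.8366
assert abs(yield_percent(produced, 70) - 83.66) < 1e-9

# Fed-batch: 86.84 % (w/w) achieved against a 90 % theoretical maximum,
# i.e. the fraction of the theoretical yield actually attained:
print(86.84 / 90)
```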
An implementation of differential evolution algorithm for inversion of geoelectrical data
NASA Astrophysics Data System (ADS)
Balkaya, Çağlayan
2013-11-01
Differential evolution (DE), a population-based evolutionary algorithm (EA), has been implemented to invert self-potential (SP) and vertical electrical sounding (VES) data sets. The algorithm uses three operators, namely mutation, crossover and selection, similar to a genetic algorithm (GA). Mutation is the most important operator for the success of DE. Three commonly used mutation strategies, DE/best/1 (strategy 1), DE/rand/1 (strategy 2) and DE/rand-to-best/1 (strategy 3), were applied together with a binomial-type crossover. The evolution cycle of DE was realized without boundary constraints. For the test studies performed with SP data, in addition to both noise-free and noisy synthetic data sets, two field data sets observed over the sulfide ore body in the Malachite mine (Colorado) and over the ore bodies in the Neem-Ka Thana copper belt (India) were considered. VES test studies were carried out using synthetically produced resistivity data representing a three-layered earth model and a field data set from Gökçeada (Turkey), which displays a seawater infiltration problem. The mutation strategies mentioned above were also extensively tested on both the synthetic and field data sets under consideration. Of these, strategy 1 was found to be the most effective for parameter estimation, providing lower computational cost together with good accuracy. The solutions obtained by DE for the synthetic SP cases were quite consistent with those of particle swarm optimization (PSO), which is a more widely used population-based optimization algorithm than DE in geophysics. Estimated parameters of the SP and VES data were also compared with those obtained from the Metropolis-Hastings (M-H) sampling algorithm, based on simulated annealing (SA) without cooling, to clarify uncertainties in the solutions. Comparison to the M-H algorithm shows that DE performs fast approximate posterior sampling for low-dimensional inverse geophysical problems.
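The three mutation strategies named in the abstract follow standard DE formulas (e.g. DE/best/1: v = x_best + F·(x_r1 − x_r2)). The sketch below implements them with binomial crossover and greedy selection, without boundary constraints, as described; the toy sphere objective stands in for the paper's geophysical forward models.

```python
# Sketch of the three DE mutation strategies with binomial crossover.
# Toy objective and settings (F, CR, population size) are illustrative.
import random

def mutate(pop, best, i, F=0.8, strategy="DE/best/1"):
    """Return a donor vector for population member i."""
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = random.sample(idx, 3)
    if strategy == "DE/best/1":            # strategy 1 in the paper
        base, d1, d2 = best, pop[r1], pop[r2]
    elif strategy == "DE/rand/1":          # strategy 2
        base, d1, d2 = pop[r1], pop[r2], pop[r3]
    elif strategy == "DE/rand-to-best/1":  # strategy 3
        return [xk + F * (bk - xk) + F * (ak - ck)
                for xk, bk, ak, ck in zip(pop[i], best, pop[r1], pop[r2])]
    else:
        raise ValueError(strategy)
    return [b + F * (a - c) for b, a, c in zip(base, d1, d2)]

def crossover(target, donor, CR=0.9):
    """Binomial crossover, forcing at least one donor component."""
    jrand = random.randrange(len(target))
    return [d if (random.random() < CR or j == jrand) else t
            for j, (t, d) in enumerate(zip(target, donor))]

def de_minimize(f, bounds, np_=20, gens=100, strategy="DE/best/1"):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    for _ in range(gens):
        best = min(pop, key=f)
        for i in range(np_):
            trial = crossover(pop[i], mutate(pop, best, i, strategy=strategy))
            if f(trial) <= f(pop[i]):      # greedy selection
                pop[i] = trial
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
x = de_minimize(sphere, [(-5, 5)] * 3)
print(sphere(x))   # approaches 0
```

DE/best/1 exploits the current best member aggressively, which is consistent with the paper's finding that strategy 1 converges at the lowest computational cost on these low-dimensional problems.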
Kamble, Prajakta P; Kore, Maheshkumar V; Patil, Sushama A; Jadhav, Jyoti P; Attar, Yasmin C
2018-06-01
Tithonia rotundifolia is an easily available and abundant inulin-rich weed reported to be competitive and allelopathic. The inulin in this weed is hydrolyzed by inulinase into fructose. Response surface methodology was employed to optimize culture conditions for inulinase production from Arthrobacter mysorens strain no. 1, isolated from the rhizospheric area of the Tithonia weed. Initially, a Plackett-Burman design was used to screen 11 nutritional parameters for inulinase production, including inulin-containing weeds as cost-effective substrates. The experiments showed that, amongst the 11 parameters studied, K2HPO4, inulin, Agave sisalana extract and Tithonia rotundifolia extract were the most significant variables for inulinase production. Quantitative effects of these 4 factors were further investigated using a Box-Behnken design. A medium containing 0.27% K2HPO4, 2.54% inulin, 6.57% Agave sisalana extract and 7.27% Tithonia rotundifolia extract was found to be optimal for maximum inulinase production. The optimization strategies used showed a 2.12-fold increase in inulinase yield (1669.45 EU/ml) compared to the non-optimized medium (787 EU/ml). Fructose produced by the action of inulinase was further confirmed by spectrophotometric, osazone, HPTLC and FTIR methods. Thus, Tithonia rotundifolia can be used as an eco-friendly, economically feasible and promising alternative substrate for commercial inulinase production, yielding fructose, from Arthrobacter mysorens strain no. 1.
A strategy to determine operating parameters in tissue engineering hollow fiber bioreactors
Shipley, RJ; Davidson, AJ; Chan, K; Chaudhuri, JB; Waters, SL; Ellis, MJ
2011-01-01
The development of tissue engineering hollow fiber bioreactors (HFB) requires the optimal design of the geometry and operating parameters of the system. This article provides a strategy for specifying operating conditions for the system based on mathematical models of oxygen delivery to the cell population. Analytical and numerical solutions of these models are developed based on Michaelis–Menten kinetics. Depending on the minimum oxygen concentration cmin required to culture a functional cell population, together with the oxygen uptake kinetics, the strategy dictates the model needed to describe mass transport so that the operating conditions can be defined. If cmin ≫ Km, we capture oxygen uptake using zero-order kinetics and proceed analytically. This enables operating equations to be developed that allow the user to choose the medium flow rate, lumen length, and ECS depth to provide a prescribed value of cmin. When cmin is comparable to Km, we use numerical techniques to solve the full Michaelis–Menten kinetics and present operating data for the bioreactor. The strategy presented utilizes both analytical and numerical approaches and can be applied to any cell type with known oxygen transport properties and uptake kinetics. PMID:21370228
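The kinetic limit that drives the modeling choice above can be checked numerically: when concentration stays well above Km, Michaelis–Menten uptake R = Vmax·c/(Km + c) is effectively zero order (R ≈ Vmax). The sketch below compares the two on a simple depletion problem; all parameter values are illustrative, not from the paper.

```python
# Worked sketch: Michaelis-Menten depletion vs. its zero-order limit.
# dc/dt = -vmax * c / (km + c); when c >> km this is dc/dt ≈ -vmax.

def deplete(c0, vmax, km, t_end, dt=1e-3):
    """Explicit Euler integration of dc/dt = -vmax*c/(km + c)."""
    c, t = c0, 0.0
    while t < t_end:
        c -= dt * vmax * c / (km + c)
        t += dt
    return c

c0, vmax, km, t = 0.2, 0.01, 1e-3, 10.0   # c0 >> km: near zero order
mm = deplete(c0, vmax, km, t)
zero_order = c0 - vmax * t                 # analytical zero-order result
print(mm, zero_order)                      # close agreement
```

When c0 is instead chosen close to km, the two curves diverge and the full Michaelis–Menten model must be solved numerically, which is the regime the paper handles with operating data rather than closed-form equations.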
Strategy of restraining ripple error on surface for optical fabrication.
Wang, Tan; Cheng, Haobo; Feng, Yunpeng; Tam, Honyuen
2014-09-10
The influence of ripple error on imaging quality can be effectively reduced by restraining the ripple height. A method based on the process parameters and the surface error distribution is designed to suppress the ripple height in this paper. The generating mechanism of the ripple error is analyzed by polishing theory with a uniform removal character. The relation between the processing parameters (removal functions, pitch of path, and dwell time) and the ripple error is discussed through simulations. With these, a strategy for diminishing the error is presented. A final process is designed and demonstrated on K9 workpieces using the optimizing strategy with magnetorheological jet polishing. The form error on the surface is decreased from 0.216λ PV (λ=632.8 nm) and 0.039λ RMS to 0.03λ PV and 0.004λ RMS, while the ripple error is restrained at the same time: the ripple height is less than 6 nm on the final surface. Results indicate that these strategies are suitable for high-precision optical manufacturing.
Acoustic and elastic waveform inversion best practices
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We identify best practices instead using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Besides these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lamé parameters performing well with amplitude-based objective functions.
Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because the rotation angle parameters describing the fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing the experimental results are given.
Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen
2016-03-31
In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is with ensuring economic and efficient utilization of DERs, while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time-scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.
A Robust Design Methodology for Optimal Microscale Secondary Flow Control in Compact Inlet Diffusers
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Keller, Dennis J.
2001-01-01
It is the purpose of this study to develop an economical Robust Design methodology for microscale secondary flow control in compact inlet diffusers. To illustrate the potential of economical Robust Design methodology, two different mission strategies were considered for the subject inlet, namely Maximum Performance and Maximum HCF Life Expectancy. The Maximum Performance mission maximized total pressure recovery while the Maximum HCF Life Expectancy mission minimized the mean of the first five Fourier harmonic amplitudes, i.e., 'collectively' reduced all the harmonic 1/2 amplitudes of engine face distortion. Each of the mission strategies was subject to a low engine face distortion constraint, i.e., DC60<0.10, which is a level acceptable for commercial engines. For each of these mission strategies, an 'Optimal Robust' (open loop control) and an 'Optimal Adaptive' (closed loop control) installation was designed over a twenty degree angle-of-incidence range. The Optimal Robust installation used economical Robust Design methodology to arrive at a single design which operated over the entire angle-of-incidence range (open loop control). The Optimal Adaptive installation optimized all the design parameters at each angle-of-incidence. Thus, the Optimal Adaptive installation would require a closed loop control system to sense a proper signal for each effector and modify that effector device, whether mechanical or fluidic, for optimal inlet performance. In general, the performance differences between the Optimal Adaptive and Optimal Robust installation designs were found to be marginal. This suggests that Optimal Robust open loop installation designs can be very competitive with Optimal Adaptive closed loop designs. Secondary flow control in inlets is inherently robust, provided it is optimally designed.
Therefore, the new methodology presented in this paper, combined array 'Lower Order' approach to Robust DOE, offers the aerodynamicist a very viable and economical way of exploring the concept of Robust inlet design, where the mission variables are brought directly into the inlet design process and insensitivity or robustness to the mission variables becomes a design objective.
Energy Expenditure of Trotting Gait Under Different Gait Parameters
NASA Astrophysics Data System (ADS)
Chen, Xian-Bao; Gao, Feng
2017-07-01
Robots driven by batteries are clean, quiet, and can work indoors or in space. However, battery endurance is a major problem. A new energy-saving gait-parameter design strategy is proposed to extend the working hours of a quadruped robot. A dynamic model of the robot is established to estimate and analyze the energy expenditure during trotting. Given a trotting speed, optimal stride frequency and stride length can minimize the energy expenditure. However, the relationship between the speed and the optimal gait parameters is nonlinear, which is difficult for practical application. Therefore, a simplified gait-parameter design method for energy saving is proposed. A critical trotting speed of the quadruped robot is found and can be used to decide the gait parameters. When the robot is travelling below this speed, it is better to keep a constant stride length and change the cycle period. When the robot is travelling above this speed, it is better to keep a constant cycle period and change the stride length. Simulations and experiments on the quadruped robot show that, by using the proposed gait-parameter design approach, the energy expenditure can be reduced by about 54% compared with a 100 mm stride length at a speed of 500 mm/s. In general, an energy expenditure model based on the gait parameters of the quadruped robot is built, and a trotting gait-parameter design approach for energy saving is proposed.
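The simplified rule described above — hold stride length below the critical speed, hold cycle period above it — can be sketched as a piecewise function. The critical speed and fixed values below are hypothetical placeholders, not the robot's identified parameters; they are chosen so the rule is continuous at the critical speed.

```python
# Sketch of the simplified gait-parameter rule: below a critical speed,
# keep stride length constant and vary the cycle period; above it, keep
# the period constant and vary stride length. Numbers are assumptions.

V_CRIT = 0.5          # critical trotting speed, m/s (assumed)
L_FIX = 0.10          # fixed stride length below V_CRIT, m (assumed)
T_FIX = 0.20          # fixed cycle period above V_CRIT, s (assumed)

def gait_parameters(v):
    """Return (stride_length_m, cycle_period_s) such that v = L / T."""
    if v <= V_CRIT:
        return L_FIX, L_FIX / v        # vary period, keep stride length
    return v * T_FIX, T_FIX            # vary stride length, keep period

for v in (0.3, 0.5, 0.8):
    L, T = gait_parameters(v)
    assert abs(L / T - v) < 1e-12      # parameters reproduce the speed
```

Note L_FIX / V_CRIT = T_FIX here, so both branches agree at the critical speed; in practice these three values would come from the robot's energy model.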
Parker, Maximilian G; Tyson, Sarah F; Weightman, Andrew P; Abbott, Bruce; Emsley, Richard; Mansell, Warren
2017-11-01
Computational models that simulate individuals' movements in pursuit-tracking tasks have been used to elucidate mechanisms of human motor control. Whilst there is evidence that individuals demonstrate idiosyncratic control-tracking strategies, it remains unclear whether models can be sensitive to these idiosyncrasies. Perceptual control theory (PCT) provides a unique model architecture with an internally set reference value parameter, and can be optimized to fit an individual's tracking behavior. The current study investigated whether PCT models could show temporal stability and individual specificity over time. Twenty adults completed three blocks of 15 1-min, pursuit-tracking trials. Two blocks (training and post-training) were completed in one session and the third was completed after 1 week (follow-up). The target moved in a one-dimensional, pseudorandom pattern. PCT models were optimized to the training data using a least-mean-squares algorithm, and validated with data from post-training and follow-up. We found significant inter-individual variability (partial η²: .464-.697) and intra-individual consistency (Cronbach's α: .880-.976) in parameter estimates. Polynomial regression revealed that all model parameters, including the reference value parameter, contribute to simulation accuracy. Participants' tracking performances were significantly more accurately simulated by models developed from their own tracking data than by models developed from other participants' data. We conclude that PCT models can be optimized to simulate the performance of an individual and that the test-retest reliability of individual models is a necessary criterion for evaluating computational models of human performance.
A study of optimization techniques in HDR brachytherapy for the prostate
NASA Astrophysics Data System (ADS)
Pokharel, Ghana Shyam
Several studies carried out thus far favor dose escalation to the prostate gland for better local control of the disease. However, the optimal way to deliver higher doses of radiation therapy to the prostate without harming neighboring critical structures is still debated. In this study, we proposed that real-time high dose rate (HDR) brachytherapy with highly efficient and effective optimization could be an alternative means of precise delivery of such higher doses. This approach of delivery eliminates critical issues such as treatment setup uncertainties and target localization, as in external beam radiation therapy. Likewise, dosimetry in HDR brachytherapy is not influenced by organ edema and potential source migration, as in permanent interstitial implants. Moreover, recently reported radiobiological parameters further strengthen the argument for using hypofractionated HDR brachytherapy for the management of prostate cancer. Firstly, we studied the essential features and requirements of a real-time HDR brachytherapy treatment planning system. Automated catheter reconstruction with fast editing tools, a fast yet accurate dose engine, and a robust, fast optimization and evaluation engine are some of the essential requirements for such procedures. Moreover, in most of the cases we performed, treatment plan optimization took a significant amount of the overall procedure time. So, making treatment plan optimization automatic or semi-automatic, with sufficient speed and accuracy, was the goal of the remaining part of the project. Secondly, we studied the role of the optimization function and constraints in the overall quality of the optimized plan. We studied a gradient-based deterministic algorithm with dose volume histogram (DVH) and more conventional variance-based objective functions for optimization. In this optimization strategy, the relative weight of a particular objective in the aggregate objective function signifies its importance with respect to the other objectives.
Based on our study, the DVH-based objective function performed better than the traditional variance-based objective function in creating a clinically acceptable plan when executed under identical conditions. Thirdly, we studied a multiobjective optimization strategy using both DVH- and variance-based objective functions. The optimization strategy was to create several Pareto optimal solutions by scanning the clinically relevant part of the Pareto front. This strategy was adopted to decouple optimization from decision making, such that the user could select the final solution from a pool of alternative solutions based on his or her clinical goals. The overall quality of the treatment plan improved using this approach compared to the traditional class solution approach. In fact, the final optimized plan selected using the decision engine with the DVH-based objective was comparable to a typical clinical plan created by an experienced physicist. Next, we studied a hybrid technique comprising both stochastic and deterministic algorithms to optimize both dwell positions and dwell times. The simulated annealing algorithm was used to find an optimal catheter distribution, and the DVH-based algorithm was used to optimize the 3D dose distribution for a given catheter distribution. This unique treatment planning and optimization tool was capable of producing clinically acceptable, highly reproducible treatment plans in a clinically reasonable time. As this algorithm was able to create clinically acceptable plans within a clinically reasonable time automatically, it is appealing for real-time procedures. Next, we studied the feasibility of multiobjective optimization using an evolutionary algorithm for real-time HDR brachytherapy for the prostate. The algorithm, with properly tuned algorithm-specific parameters, was able to create clinically acceptable plans within a clinically reasonable time. However, the algorithm was allowed to run for only a limited number of generations, which is generally not considered optimal for such algorithms.
This was done to keep the time window acceptable for real-time procedures. Further study under improved conditions is therefore required to realize the full potential of the algorithm.
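The DVH-based versus variance-based distinction discussed above can be made concrete: a DVH objective penalizes only voxels violating a dose-volume goal, and only when the allowed volume fraction is exceeded, whereas a variance objective penalizes every deviation from prescription. The functional forms and numbers below are a common textbook formulation, assumed here, not the planning system's exact objectives.

```python
# Minimal sketch contrasting a DVH-style objective with a variance-type
# objective on a toy dose distribution (doses in % of prescription).

def dvh_fraction_above(doses, threshold):
    """Fraction of voxels receiving at least `threshold` dose."""
    return sum(d >= threshold for d in doses) / len(doses)

def dvh_objective(doses, threshold, max_fraction):
    """Quadratic penalty on voxels above threshold, active only when
    the allowed volume fraction is exceeded (a common DVH form)."""
    if dvh_fraction_above(doses, threshold) <= max_fraction:
        return 0.0
    return sum((d - threshold) ** 2 for d in doses if d > threshold)

def variance_objective(doses, prescription):
    """Conventional variance-type objective: penalizes all deviations."""
    return sum((d - prescription) ** 2 for d in doses) / len(doses)

doses = [100, 102, 98, 120, 125, 101, 99, 100, 97, 103]
print(dvh_objective(doses, threshold=115, max_fraction=0.10))
print(variance_objective(doses, prescription=100))
```

Because the DVH objective vanishes whenever the clinical goal is met, its gradient pushes the optimizer only where goals are violated, which is one reason DVH objectives tend to produce more clinically acceptable plans than all-voxel variance penalties.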
NASA Astrophysics Data System (ADS)
Zhu, Zhengfan; Gan, Qingbo; Yang, Xin; Gao, Yang
2017-08-01
We have developed a novel continuation technique to solve optimal bang-bang control for low-thrust orbital transfers considering the first-order necessary optimality conditions derived from Lawden's primer vector theory. Continuation on the thrust amplitude is mainly described in this paper. Firstly, a finite-thrust transfer with an "On-Off-On" thrusting sequence is modeled using a two-impulse transfer as an initial solution, and then the thrust amplitude is decreased gradually to find an optimal solution with minimum thrust. Secondly, the thrust amplitude is continued from its minimum value to positive infinity to find the optimal bang-bang control, and a thrust switching principle is employed to determine the control structure by monitoring the variation of the switching function. In the continuation process, a bifurcation of bang-bang control is revealed, and the concept of critical thrust is proposed to illustrate this phenomenon. The same thrust switching principle is also applicable to continuation on other parameters, such as transfer time, orbital phase angle, etc. By this continuation technique, fuel-optimal orbital transfers with variable mission parameters can be found via an automated algorithm, and there is no need to provide an initial guess for the costate variables. Moreover, continuation is implemented in the solution space of bang-bang controls that are either optimal or non-optimal, which shows that a desired solution of bang-bang control is obtained via continuation on a single parameter starting from an existing solution of bang-bang control. Finally, numerical examples are presented to demonstrate the effectiveness of the proposed continuation technique.
Specifically, this continuation technique provides an approach to find multiple solutions satisfying the first-order necessary optimality conditions to the same orbital transfer problem, and a continuation strategy is presented as a preliminary approach for solving the bang-bang control of many-revolution orbital transfers.
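The switching principle underlying the control structure above is that the thrust is bang-bang in the sign of a switching function S(t) derived from the primer vector: full thrust on one sign, coasting on the other (the sign convention varies by formulation; the one below is assumed). The sample S(t) is a made-up illustration producing an "On-Off-On" arc, not a solution of the optimality conditions.

```python
# Sketch of the bang-bang thrust switching principle: thrust is at its
# maximum where the switching function is negative, zero elsewhere.
import math

def thrust_profile(switching, t_grid, t_max):
    """Map a switching function onto a bang-bang thrust history."""
    return [t_max if switching(t) < 0 else 0.0 for t in t_grid]

S = lambda t: -math.cos(2 * math.pi * t)   # negative near t=0 and t=1
grid = [i / 10 for i in range(11)]
profile = thrust_profile(S, grid, t_max=1.0)
print(profile)   # full thrust at the ends, coasting in the middle
```

Monitoring where S(t) changes sign as the continuation parameter varies is what lets the algorithm detect when burn arcs appear, merge, or vanish (the bifurcation at the critical thrust).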
Sorzano, Carlos Oscars S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar
2015-06-01
Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is often performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher's information matrix associated to the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated to the system parameters (a minimax criterion). The minimization is performed employing a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, that it can accommodate any dosing regimen, and that it allows flexible therapeutic strategies.
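The design loop described above can be sketched end to end on a toy model: draw parameters from a population prior, accumulate the Fisher information matrix over candidate sampling times, and pick the times minimizing the worst parameter variance (Cramér-Rao bound). A one-compartment bolus model and exhaustive search over time pairs stand in for the paper's model and genetic algorithm; all numbers are illustrative assumptions.

```python
# Sketch: minimax sampling-time design via Monte Carlo-averaged Fisher
# information, for C(t) = (D/V) * exp(-k*t) with unknown (V, k).
import itertools, math, random

D = 100.0                      # dose (assumed known)

def sensitivities(t, V, k):
    """Partial derivatives of C(t) with respect to V and k."""
    c = (D / V) * math.exp(-k * t)
    return (-c / V, -c * t)    # dC/dV, dC/dk

def max_param_variance(times, V, k, sigma=0.1):
    """Largest diagonal entry of the inverse FIM (Cramer-Rao bound)."""
    a = b = c2 = 0.0           # FIM entries [[a, b], [b, c2]]
    for t in times:
        sV, sk = sensitivities(t, V, k)
        a += sV * sV / sigma ** 2
        b += sV * sk / sigma ** 2
        c2 += sk * sk / sigma ** 2
    det = a * c2 - b * b
    if det <= 1e-12:
        return float("inf")
    return max(c2 / det, a / det)   # diagonal of the 2x2 inverse

def design(candidate_times, n_samples=2, n_mc=200, seed=0):
    rng = random.Random(seed)
    pop = [(rng.lognormvariate(math.log(10), 0.2),   # V draws
            rng.lognormvariate(math.log(0.3), 0.2))  # k draws
           for _ in range(n_mc)]
    best, best_cost = None, float("inf")
    for times in itertools.combinations(candidate_times, n_samples):
        cost = sum(max_param_variance(ts, V, k)
                   for (V, k) in pop for ts in (times,)) / n_mc
        if cost < best_cost:
            best, best_cost = times, cost
    return best

grid = [0.5, 1, 2, 4, 8, 12, 24]   # candidate sampling times, h
print(design(grid))
```

For larger grids and more samples per patient, the exhaustive search becomes infeasible, which is where the paper's genetic algorithm takes over.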
Leclercq, Amélie; Nonell, Anthony; Todolí Torró, José Luis; Bresson, Carole; Vio, Laurent; Vercouter, Thomas; Chartier, Frédéric
2015-07-23
Inductively coupled plasma optical emission spectrometry (ICP-OES) and mass spectrometry (ICP-MS) are increasingly used to carry out analyses in organic/hydro-organic matrices. The introduction of such matrices into ICP sources is particularly challenging and can be the cause of numerous drawbacks. This tutorial review, divided in two parts, explores the rich literature related to the introduction of organic/hydro-organic matrices in ICP sources. Part I provided theoretical considerations associated with the physico-chemical properties of such matrices, in an attempt to understand the induced phenomena. Part II of this tutorial review is dedicated to more practical considerations on instrumentation, instrumental and operating parameters, as well as analytical strategies for elemental quantification in such matrices. Two important issues are addressed in this part. The first concerns the instrumentation and optimization of instrumental and operating parameters, pointing out (i) the description, benefits and drawbacks of different kinds of nebulization and desolvation devices, and the impact of more specific instrumental parameters such as the injector characteristics and the material used for the cone; and (ii) the optimization of operating parameters, for both ICP-OES and ICP-MS. Although at the margin of this tutorial review, electrothermal vaporization and laser ablation are also briefly described. The second issue is devoted to analytical strategies for elemental quantification in such matrices, with particular insight into the isotope dilution technique, particularly used in speciation analysis by ICP-coupled separation techniques.
Panaceas, uncertainty, and the robust control framework in sustainability science
Anderies, John M.; Rodriguez, Armando A.; Janssen, Marco A.; Cifdaloz, Oguzhan
2007-01-01
A critical challenge faced by sustainability science is to develop strategies to cope with highly uncertain social and ecological dynamics. This article explores the use of the robust control framework toward this end. After briefly outlining the robust control framework, we apply it to the traditional Gordon–Schaefer fishery model to explore fundamental performance–robustness and robustness–vulnerability trade-offs in natural resource management. We find that the classic optimal control policy can be very sensitive to parametric uncertainty. By exploring a large class of alternative strategies, we show that there are no panaceas: even mild robustness properties are difficult to achieve, and increasing robustness to some parameters (e.g., biological parameters) results in decreased robustness with respect to others (e.g., economic parameters). On the basis of this example, we extract some broader themes for better management of resources under uncertainty and for sustainability science in general. Specifically, we focus attention on the importance of a continual learning process and the use of robust control to inform this process. PMID:17881574
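The fragility of the "classic optimal control policy" discussed above can be illustrated numerically with the Gordon–Schaefer model dx/dt = r·x·(1 − x/K) − q·E·x: a constant-effort policy tuned to an assumed growth rate r performs badly when the true r is lower. All parameter values below are illustrative, and the MSY effort rule E* = r/(2q) is the standard textbook result, not a quantity from the article.

```python
# Sketch: sensitivity of the constant-effort MSY policy in the
# Gordon-Schaefer fishery model to misjudging the growth rate r.

def simulate(r, K, q, E, x0, years=200, dt=0.1):
    """Euler integration of dx/dt = r*x*(1 - x/K) - q*E*x."""
    x = x0
    for _ in range(int(years / dt)):
        x += dt * (r * x * (1 - x / K) - q * E * x)
        x = max(x, 0.0)
    return x

r_assumed, K, q = 1.0, 1.0, 1.0
E_star = r_assumed / (2 * q)           # MSY effort for the assumed r
x_nominal = simulate(r_assumed, K, q, E_star, x0=0.5)
x_true = simulate(0.55, K, q, E_star, x0=0.5)   # true r 45% lower
print(x_nominal, x_true)   # stock settles far below the intended level
```

The equilibrium stock under effort E is K·(1 − q·E/r), so the same "optimal" effort that holds the stock at K/2 under the assumed r drives it to under a tenth of K when r is overestimated by this margin, which is the performance-robustness trade-off the article formalizes.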
Bi-objective optimization of a multiple-target active debris removal mission
NASA Astrophysics Data System (ADS)
Bérend, Nicolas; Olive, Xavier
2016-05-01
The increasing number of space debris in Low-Earth Orbit (LEO) raises the question of future Active Debris Removal (ADR) operations. Typical ADR scenarios rely on an Orbital Transfer Vehicle (OTV) using one of the two following disposal strategies: the first one consists in attaching a deorbiting kit, such as a solid rocket booster, to the debris after rendezvous; with the second one, the OTV captures the debris and moves it to a low-perigee disposal orbit. For multiple-target ADR scenarios, the design of such a mission is very complex, as it involves two optimization levels: one for the space debris sequence, and a second one for the "elementary" orbit transfer strategy from a released debris to the next one in the sequence. This problem can be seen as a Time-Dependent Traveling Salesman Problem (TDTSP) with two objective functions to minimize: the total mission duration and the total propellant consumption. In order to efficiently solve this problem, ONERA has designed, under CNES contract, TOPAS (Tool for Optimal Planning of ADR Sequence), a tool that implements a Branch & Bound method developed in previous work together with a dedicated algorithm for optimizing the "elementary" orbit transfer. A single run of this tool yields an estimation of the Pareto front of the problem, which exhibits the trade-off between mission duration and propellant consumption. We first detail our solution to cope with the combinatorial explosion of complex ADR scenarios with 10 debris. The key point of this approach is to define the orbit transfer strategy through a small set of parameters, allowing an acceptable compromise between the quality of the optimum solution and the calculation cost. Then we present optimization results obtained for various 10-debris removal scenarios involving a 15-ton OTV, using either the deorbiting kit or the disposal orbit strategy.
We show that the advantage of one strategy over the other depends on the propellant margin, the maximum duration allowed for the mission and the orbit inclination domain. For high-inclination orbits near 98 deg, the disposal orbit strategy is more appropriate for short-duration missions, while the deorbiting kit strategy ensures a better propellant margin. Conversely, for lower-inclination orbits near 65 deg, the deorbiting kit strategy appears to be the only possible option with a 10-debris set. We eventually explain the consistency of these results with regard to astrodynamics.
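The bi-objective trade-off between mission duration and propellant consumption can be illustrated with a minimal Pareto filter over hypothetical (duration, propellant) candidates; this is only a sketch of the dominance test, not of the Branch & Bound machinery in TOPAS:

```python
# Minimal non-dominated (Pareto) filter for bi-objective ADR candidates, where
# each candidate is a (mission_duration, propellant_mass) pair and both
# objectives are minimized. The numbers below are hypothetical placeholders.

def pareto_front(candidates):
    """Return the candidates not dominated by any other candidate."""
    def dominates(a, b):
        # a dominates b if it is no worse in both objectives and differs
        return all(x <= y for x, y in zip(a, b)) and a != b
    return sorted(c for c in candidates
                  if not any(dominates(other, c) for other in candidates))

missions = [(10.0, 5.0), (8.0, 7.0), (12.0, 4.0), (9.0, 9.0)]
front = pareto_front(missions)   # (9.0, 9.0) is dominated by (8.0, 7.0)
```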
2017-01-01
Computational scientists have designed many useful algorithms by exploring a biological process or imitating natural evolution. These algorithms can be used to solve engineering optimization problems. Inspired by the change of matter state, we propose a novel optimization algorithm, the differential cloud particles evolution algorithm based on a data-driven mechanism (CPDD). In the proposed algorithm, the optimization process is divided into two stages, namely, a fluid stage and a solid stage. The algorithm carries out a strategy of integrating global exploration with local exploitation in the fluid stage, while local exploitation is carried out mainly in the solid stage. The quality of the solution and the efficiency of the search are influenced greatly by the control parameters. Therefore, a data-driven mechanism is designed for obtaining better control parameters to ensure good performance on numerical benchmark problems. In order to verify the effectiveness of CPDD, numerical experiments are carried out on all the CEC2014 contest benchmark functions. Finally, two application problems of artificial neural networks are examined. The experimental results show that CPDD is competitive with respect to eight other state-of-the-art intelligent optimization algorithms. PMID:28761438
Coach simplified structure modeling and optimization study based on the PBM method
NASA Astrophysics Data System (ADS)
Zhang, Miaoli; Ren, Jindong; Yin, Ying; Du, Jian
2016-09-01
For the coach industry, rapid and efficient methods for modeling and optimizing simplified structures are desirable, especially methods usable early in the concept phase that accurately express the mechanical properties of the structure and allow flexible section forms. However, the present dimension-based methods cannot easily meet these requirements. To achieve these goals, the property-based modeling (PBM) beam modeling method is studied based on PBM theory, in conjunction with the fact that beams are the main components of a coach structure. For a beam component of given length, the mechanical characteristics are primarily determined by the section properties. Four section parameters are adopted to describe the mechanical properties of a beam: the section area, the principal moments of inertia about the two principal axes, and the torsion constant of the section. Based on the equivalent stiffness strategy, expressions for the above section parameters are derived, and the PBM beam element is implemented in HyperMesh software. A case is realized using this method, in which the structure of a passenger coach is simplified. The model precision is validated by comparing the basic performance of the total structure with that of the original structure, including the bending and torsion stiffness and the first-order bending and torsional modal frequencies. Sensitivity analysis is conducted to choose design variables. The optimal Latin hypercube experiment design is adopted to sample the test points, and polynomial response surfaces are used to fit these points. To improve the bending and torsion stiffness and the first-order torsional frequency, and taking the allowable maximum stresses of the braking and left-turning conditions as constraints, the multi-objective optimization of the structure is conducted using the NSGA-II genetic algorithm on the ISIGHT platform.
The Pareto solution set is acquired, and the selection strategy for the final solution is discussed. The case study demonstrates that the mechanical performance of the structure can be well modeled and simulated by PBM beams. Because of the merits of fewer parameters and convenience of use, this method is suitable for application in the concept stage. Another merit is that the optimization results are requirements on the mechanical performance of the beam sections rather than on their shapes and dimensions, bringing flexibility to the succeeding design.
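As a sketch of the kind of section parameters PBM works with, the following computes the four quantities named above for a thin-walled rectangular box section, using standard closed-form expressions (Bredt's approximation for the torsion constant of a closed thin-walled section); the dimensions are hypothetical and not taken from the paper:

```python
# The four PBM section parameters for a thin-walled rectangular box section:
# area, the two principal second moments of area, and an approximate torsion
# constant (Bredt's formula for closed thin-walled sections). Dimensions are
# hypothetical; units are meters.

def box_section_properties(B, H, t):
    """B: outer width, H: outer height, t: uniform wall thickness."""
    b, h = B - 2 * t, H - 2 * t
    area = B * H - b * h
    iy = (B * H**3 - b * h**3) / 12.0          # bending about horizontal axis
    iz = (H * B**3 - h * b**3) / 12.0          # bending about vertical axis
    a_enc = (B - t) * (H - t)                  # area enclosed by wall midline
    j = 4.0 * a_enc**2 * t / (2 * ((B - t) + (H - t)))  # Bredt torsion constant
    return area, iy, iz, j

area, iy, iz, j = box_section_properties(B=0.05, H=0.05, t=0.004)
```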
Anderson, D.R.
1975-01-01
Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of the general theory. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If we assume that exploitation is largely an additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population.
Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, or harvest rate, or designed to maintain a constant breeding population size is inefficient.
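A toy version of the stochastic dynamic programming formulation may help fix ideas: discrete population states, a two-state environment, and backward induction over a finite horizon. All rates and rewards below are hypothetical, not the Mallard estimates used in the study:

```python
# Finite-horizon stochastic dynamic program for an exploitation policy, in the
# spirit of the Markovian-environment formulation described above. Population
# states, the two environment states ("wet"/"dry"), and all growth rates are
# hypothetical; the reward is the number of animals harvested.

def optimal_harvest(max_pop=10, horizon=20, p_wet=0.5):
    growth = {"wet": 1.6, "dry": 1.1}       # hypothetical growth multipliers
    states = range(max_pop + 1)
    V = {s: 0.0 for s in states}            # terminal value
    policy = {}
    for _ in range(horizon):                # backward induction
        V_new, pol = {}, {}
        for s in states:
            best = (-1.0, 0)
            for h in range(s + 1):          # harvest h animals this year
                exp_future = 0.0
                for env, p in (("wet", p_wet), ("dry", 1 - p_wet)):
                    nxt = min(max_pop, int((s - h) * growth[env]))
                    exp_future += p * V[nxt]
                value = h + exp_future
                if value > best[0]:
                    best = (value, h)
            V_new[s], pol[s] = best
        V, policy = V_new, pol
    return V, policy

V, policy = optimal_harvest()
```

The resulting policy is a feedback rule: the prescribed harvest depends on the observed population state each year, echoing the paper's point that direct feedback control is integral to the optimal decision process.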
Least-Squares Self-Calibration of Imaging Array Data
NASA Technical Reports Server (NTRS)
Arendt, R. G.; Moseley, S. H.; Fixsen, D. J.
2004-01-01
When arrays are used to collect multiple appropriately-dithered images of the same region of sky, the resulting data set can be calibrated using a least-squares minimization procedure that determines the optimal fit between the data and a model of that data. The model parameters include the desired sky intensities as well as instrument parameters such as pixel-to-pixel gains and offsets. The least-squares solution simultaneously provides the formal error estimates for the model parameters. With a suitable observing strategy, the need for separate calibration observations is reduced or eliminated. We show examples of this calibration technique applied to HST NICMOS observations of the Hubble Deep Fields and simulated SIRTF IRAC observations.
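A toy version of the self-calibration idea can be sketched with alternating least squares. The paper's model also solves for pixel-to-pixel gains; for brevity this sketch keeps gains fixed at unity and solves only for additive offsets and sky intensities, with synthetic data:

```python
# Toy dithered-array self-calibration: each pixel p has an unknown additive
# offset o_p, and dithering places pixel p on sky position p + shift, so each
# measurement is d = s_k + o_p. Alternating least-squares updates solve the
# normal equations; the constant degeneracy (sky + c, offsets - c) is removed
# by forcing the mean offset to zero. All values are synthetic.

true_sky = [10.0, 20.0, 30.0, 40.0]
true_off = [0.5, -0.2, -0.3]                      # mean zero
data = {(p, p + shift): true_sky[p + shift] + true_off[p]
        for p in range(3) for shift in (0, 1)}    # (pixel, sky) -> measurement

sky = [0.0] * 4
off = [0.0] * 3
for _ in range(200):                              # alternating LS sweeps
    for k in range(4):                            # sky given offsets
        obs = [d - off[p] for (p, s), d in data.items() if s == k]
        if obs:
            sky[k] = sum(obs) / len(obs)
    for p in range(3):                            # offsets given sky
        obs = [d - sky[s] for (q, s), d in data.items() if q == p]
        off[p] = sum(obs) / len(obs)
    mean_off = sum(off) / len(off)                # fix the gauge freedom
    off = [o - mean_off for o in off]
    sky = [s + mean_off for s in sky]
```

The dithers connect every pixel to every sky position through the measurement graph, which is what makes the joint solution well determined up to the single additive constant.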
Stochastic optimization of GeantV code by use of genetic algorithms
NASA Astrophysics Data System (ADS)
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.
2017-10-01
GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
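As a sketch of the evolution-strategy tuning loop described above, the following (1+1)-ES with the classic 1/5th success rule minimizes a stand-in objective; in the GeantV setting the objective would instead be a costly point-wise evaluation of simulation throughput:

```python
# Bare-bones (1+1) evolution strategy with the 1/5th success rule, the kind of
# black-box tuner described in the abstract, applied here to a stand-in sphere
# objective rather than a real simulation-throughput measurement.
import random

def sphere(x):
    return sum(v * v for v in x)

def one_plus_one_es(f, x0, sigma=0.3, iters=500, seed=42):
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    successes = 0
    for i in range(1, iters + 1):
        cand = [v + rng.gauss(0.0, sigma) for v in x]   # mutate parent
        fc = f(cand)
        if fc < fx:                                     # keep improvements
            x, fx = cand, fc
            successes += 1
        if i % 20 == 0:                                 # 1/5th success rule
            sigma *= 1.5 if successes > 4 else 0.66
            successes = 0
    return x, fx

best_x, best_f = one_plus_one_es(sphere, [2.0, 2.0, 2.0])
```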
An Elitist Multiobjective Tabu Search for Optimal Design of Groundwater Remediation Systems.
Yang, Yun; Wu, Jianfeng; Wang, Jinguo; Zhou, Zhifang
2017-11-01
This study presents a new multiobjective evolutionary algorithm (MOEA), the elitist multiobjective tabu search (EMOTS), and couples it with MODFLOW/MT3DMS to develop a groundwater simulation-optimization (SO) framework based on modular design for optimal design of groundwater remediation systems using the pump-and-treat (PAT) technique. The most notable improvements of EMOTS over the original multiple objective tabu search (MOTS) lie in the elitist strategy, the selection strategy, and the neighborhood move rule. The elitist strategy maintains all nondominated solutions throughout the later search process for better convergence to the true Pareto front. The elitism-based selection operator is modified to choose the two most remote solutions from the current candidate list as seed solutions, increasing the diversity of the search space. Moreover, neighborhood solutions are uniformly generated using Latin hypercube sampling (LHS) in the bounded neighborhood space around each seed solution. To demonstrate the performance of EMOTS, we consider a synthetic groundwater remediation example. Problem formulations consist of two objective functions with continuous decision variables of pumping rates while meeting water quality requirements. In particular, sensitivity analysis is carried out on the synthetic case to determine the optimal combination of the heuristic parameters. Furthermore, EMOTS is successfully applied to evaluate remediation options at the field site of the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. For both the hypothetical and the large-scale field remediation sites, the EMOTS-based SO framework is demonstrated to outperform the original MOTS in achieving the performance metrics of optimality and diversity of nondominated frontiers with desirable stability and robustness. © 2017, National Ground Water Association.
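The LHS neighborhood generation can be sketched in a few lines: each dimension is split into equal strata and each stratum receives exactly one sample. Bounds and sample count here are arbitrary placeholders:

```python
# Latin hypercube sampling of a bounded neighborhood, as used by EMOTS to
# spread candidate moves around a seed solution. Pure-Python sketch; the
# bounds and sample count are arbitrary.
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """bounds: list of (low, high) per dimension; one sample per stratum."""
    rng = random.Random(seed)
    dims = []
    for low, high in bounds:
        strata = list(range(n_samples))
        rng.shuffle(strata)                 # one stratum per sample, per dim
        width = (high - low) / n_samples
        dims.append([low + (s + rng.random()) * width for s in strata])
    return list(zip(*dims))                 # n_samples points

points = latin_hypercube(5, [(0.0, 1.0), (-2.0, 2.0)])
```

The stratification guarantees that every marginal interval is sampled exactly once, unlike plain uniform sampling, which can leave whole strata empty.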
Optimization of vascular-targeting drugs in a computational model of tumor growth
NASA Astrophysics Data System (ADS)
Gevertz, Jana
2012-04-01
A biophysical tool is introduced that seeks to provide a theoretical basis for helping drug design teams assess the most promising drug targets and design optimal treatment strategies. The tool is grounded in a previously validated computational model of the feedback that occurs between a growing tumor and the evolving vasculature. In this paper, the model is used to explore the therapeutic effectiveness of two drugs that target the tumor vasculature: angiogenesis inhibitors (AIs) and vascular disrupting agents (VDAs). Using sensitivity analyses, the impact of VDA dosing parameters is explored, as are the effects of administering a VDA together with an AI. Further, a stochastic optimization scheme is utilized to identify an optimal dosing schedule for treatment with an AI and a chemotherapeutic. The treatment regimen identified can successfully halt simulated tumor growth, even after the cessation of therapy.
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.
Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters, so they lack the degrees of freedom to perform Lack of Fit tests. Moreover, the majority of the design points in D-optimal minimal designs lie on the boundary: the vertices, edges, or faces of the design simplex. Here, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.
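A small numeric illustration of the D-criterion for the three-component Scheffé linear model: the minimally supported vertex design maximizes det(X'X) relative to, for example, an edge-midpoint design. This is a generic illustration, not a computation from the paper:

```python
# D-criterion det(X'X) for a three-component Scheffe linear mixture model,
# comparing the vertex (minimally supported) design with an edge-midpoint
# design. The model matrix rows are the component proportions (x1, x2, x3).

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def d_criterion(design):
    xtx = [[sum(row[i] * row[j] for row in design) for j in range(3)]
           for i in range(3)]
    return det3(xtx)

vertices = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]          # boundary design
midpoints = [(0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
```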
A Sensitivity Analysis of Tsunami Inversions on the Number of Stations
NASA Astrophysics Data System (ADS)
An, Chao; Liu, Philip L.-F.; Meng, Lingsen
2018-05-01
Current finite-fault inversions of tsunami recordings generally adopt as many tsunami stations as possible to better constrain earthquake source parameters. In this study, inversions are evaluated by the waveform residual that measures the difference between model predictions and recordings, and the dependence of the quality of inversions on the number of tsunami stations is derived. Results for the 2011 Tohoku event show that, if the tsunami stations are optimally located, the waveform residual decreases significantly with the number of stations when the number is 1 ˜ 4 and remains almost constant when the number is larger than 4, indicating that 2 ˜ 4 stations are able to recover the main characteristics of the earthquake source. The optimal location of tsunami stations is explained in the text. Similar analysis is applied to the Manila Trench in the South China Sea using artificially generated earthquakes and virtual tsunami stations. Results confirm that 2 ˜ 4 stations are necessary and sufficient to constrain the earthquake source parameters, and the optimal sites of stations are recommended in the text. The conclusion is useful for the design of new tsunami warning systems. Current strategies of tsunameter network design mainly focus on the early detection of tsunami waves from potential sources to coastal regions. We therefore recommend that, in addition to the current strategies, the waveform residual could also be taken into consideration so as to minimize the error of tsunami wave prediction for warning purposes.
Chang, Herng-Hua; Chang, Yu-Ning
2017-04-01
Bilateral filters have been substantially exploited in numerous magnetic resonance (MR) image restoration applications for decades. Due to the lack of a theoretical basis for the filter parameter setting, empirical manipulation with fixed values and noise variance-related adjustments has generally been employed. The outcome of these strategies is usually sensitive to the variation of the brain structures, and not all three parameter values are optimal. This article investigates the optimal setting of the bilateral filter, from which an accelerated and automated restoration framework is developed. To reduce the computational burden of the bilateral filter, parallel computing with the graphics processing unit (GPU) architecture is first introduced. The NVIDIA Tesla K40c GPU with the compute unified device architecture (CUDA) functionality is specifically utilized to exploit thread usage and memory resources. To correlate the filter parameters with image characteristics for automation, optimal image texture features are subsequently acquired based on the sequential forward floating selection (SFFS) scheme. The selected features are then introduced into the back propagation network (BPN) model for filter parameter estimation. Finally, the k-fold cross validation method is adopted to evaluate the accuracy of the proposed filter parameter prediction framework. A wide variety of T1-weighted brain MR images with various scenarios of noise levels and anatomic structures were utilized to train and validate this new parameter decision system with CUDA-based bilateral filtering. For a common brain MR image volume of 256 × 256 × 256 pixels, the speed-up gain reached 284. Six optimal texture features were acquired and associated with the BPN to establish a "high accuracy" parameter prediction system, which achieved a mean absolute percentage error (MAPE) of 5.6%.
Automatic restoration results on 2460 brain MR images yielded an average relative error in terms of peak signal-to-noise ratio (PSNR) of less than 0.1%. In comparison with many state-of-the-art filters, the proposed automation framework with CUDA-based bilateral filtering provided more favorable results both quantitatively and qualitatively. Possessing unique characteristics and demonstrating exceptional performance, the proposed CUDA-based bilateral filter adequately removed random noise in multifarious brain MR images for further study in neurosciences and radiological sciences. It requires no prior knowledge of the noise variance and automatically restores MR images while preserving fine details. The strategy of exploiting CUDA to accelerate the computation and incorporating texture features into the BPN to completely automate the bilateral filtering process is achievable and validated, from which the best performance is reached. © 2017 American Association of Physicists in Medicine.
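The role of the parameters being predicted can be seen in a one-dimensional sketch of the bilateral filter (the paper works with 3-D volumes on the GPU; this pure-Python version is only illustrative):

```python
# One-dimensional bilateral filter sketch showing the roles of the spatial
# (sigma_s) and range (sigma_r) parameters that the paper's framework tunes
# automatically for brain MR volumes.
import math

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.1, radius=3):
    out = []
    for i, vi in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            # weight = spatial Gaussian * range (intensity) Gaussian
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((signal[j] - vi) ** 2) / (2 * sigma_r ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

step = [0.0] * 5 + [1.0] * 5
filtered = bilateral_1d(step)          # small sigma_r preserves the edge
```

With a small range parameter the cross-edge weights vanish and the step is preserved; as sigma_r grows, the filter degenerates toward a purely spatial Gaussian blur, which smears the edge.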
Machine Learning Force Field Parameters from Ab Initio Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ying; Li, Hui; Pickard, Frank C.
Machine learning (ML) techniques with the genetic algorithm (GA) have been applied to determine polarizable force field parameters using only ab initio data from quantum mechanics (QM) calculations of molecular clusters at the MP2/6-31G(d,p), DFMP2(fc)/jul-cc-pVDZ, and DFMP2(fc)/jul-cc-pVTZ levels to predict experimental condensed phase properties (i.e., density and heat of vaporization). The performance of this ML/GA approach is demonstrated on 4943 dimer electrostatic potentials and 1250 cluster interaction energies for methanol. Excellent agreement between the training data set from QM calculations and the optimized force field model was achieved. The results were further improved by introducing an offset factor during the machine learning process to compensate for the discrepancy between the QM calculated energy and the energy reproduced by the optimized force field, while maintaining the local "shape" of the QM energy surface. Throughout the machine learning process, experimental observables were not involved in the objective function, but were only used for model validation. The best model, optimized from the QM data at the DFMP2(fc)/jul-cc-pVTZ level, appears to perform even better than the original AMOEBA force field (amoeba09.prm), which was optimized empirically to match liquid properties. The present effort shows the possibility of using machine learning techniques to develop a descriptive polarizable force field using only QM data. The ML/GA strategy to optimize force field parameters described here could easily be extended to other molecular systems.
Prakash, Punit; Salgaonkar, Vasant A.; Diederich, Chris J.
2014-01-01
Endoluminal and catheter-based ultrasound applicators are currently under development and are in clinical use for minimally invasive hyperthermia and thermal ablation of various tissue targets. Computational models play a critical role in device design and optimization, assessment of therapeutic feasibility and safety, devising treatment monitoring and feedback control strategies, and performing patient-specific treatment planning with this technology. The critical aspects of theoretical modeling, applied specifically to endoluminal and interstitial ultrasound thermotherapy, are reviewed. Principles and practical techniques for modeling acoustic energy deposition, bioheat transfer, thermal tissue damage, and dynamic changes in the physical and physiological state of tissue are reviewed. The integration of these models and applications of simulation techniques in identification of device design parameters, development of real-time feedback-control platforms, assessing the quality and safety of treatment delivery strategies, and optimization of inverse treatment plans are presented. PMID:23738697
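The bioheat-transfer component reviewed above is commonly modeled with the Pennes equation; a minimal 1-D explicit finite-difference sketch is given below. The tissue constants and heat source are rough, generic values, not taken from the review:

```python
# Explicit finite-difference sketch of the 1-D Pennes bioheat equation,
#   rho*c * dT/dt = k * d2T/dx2 + w_b*c_b * (T_a - T) + Q,
# with a localized heat source standing in for the focal ultrasound
# deposition. All tissue properties here are rough, generic values.

def pennes_1d(n=21, dx=1e-3, dt=0.1, steps=600,
              k=0.5, rho_c=4.0e6, w_cb=2000.0, T_a=37.0, Q=5.0e4):
    T = [T_a] * n
    src = n // 2                         # heated node (acoustic focus)
    for _ in range(steps):
        T_new = T[:]
        for i in range(1, n - 1):
            lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / dx**2
            q = Q if i == src else 0.0   # volumetric power deposition, W/m^3
            T_new[i] = T[i] + dt * (k * lap + w_cb * (T_a - T[i]) + q) / rho_c
        T = T_new                        # boundaries held at body temperature
    return T

profile = pennes_1d()
```

The explicit scheme is stable here because dt*k/(rho_c*dx^2) = 0.0125 is far below the 0.5 limit; the perfusion term acts as a heat sink that pulls tissue back toward arterial temperature.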
Scanning laser ophthalmoscopy: optimized testing strategies for psychophysics
NASA Astrophysics Data System (ADS)
Van de Velde, Frans J.
1996-12-01
Retinal function can be evaluated with the scanning laser ophthalmoscope (SLO). The main advantage is a precise localization of the psychophysical stimulus on the retina. Four-alternative forced choice (4AFC) and parameter estimation by sequential testing (PEST) are classic adaptive algorithms that have been optimized for use with the SLO and combined with strategies to correct for small eye movements. Efficient calibration procedures are essential for quantitative microperimetry. These techniques precisely measure visual acuity and retinal sensitivity at distinct locations on the retina. A combined 632 nm and IR Maxwellian view illumination provides maximal transmittance through the ocular media and has minimal interference with xanthophyll or hemoglobin. Future modifications of the instrument include the possibility of binocular evaluation, Maxwellian view control, fundus tracking using normalized gray-scale correlation, and microphotocoagulation. The techniques are useful in low vision rehabilitation and the application of laser treatment to the retina.
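A much-simplified fixed-step staircase conveys the flavor of such adaptive procedures (PEST itself adapts its step size by rule; that logic is omitted here). The simulated observer, guess rate, and step sizes are hypothetical:

```python
# A much-simplified adaptive staircase in the spirit of PEST: stimulus
# intensity moves down after a correct response and up after an error,
# converging toward the observer's threshold. The simulated observer and
# all step sizes are hypothetical.
import random

def run_staircase(threshold=0.5, start=1.0, step=0.05, trials=400, seed=1):
    rng = random.Random(seed)
    level, reversals = start, []
    last_dir = None
    for _ in range(trials):
        # Step-function observer: always correct above threshold,
        # otherwise at the 4AFC guess rate.
        p_correct = 1.0 if level >= threshold else 0.25
        correct = rng.random() < p_correct
        direction = -1 if correct else +1
        if last_dir is not None and direction != last_dir:
            reversals.append(level)      # record level at each reversal
        last_dir = direction
        level = max(0.0, level + direction * step)
    return sum(reversals[-10:]) / 10     # threshold = mean of last reversals

estimate = run_staircase()
```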
Fan, Jiawei; Wang, Jiazhou; Zhang, Zhen; Hu, Weigang
2017-06-01
To develop a new automated treatment planning solution for breast and rectal cancer radiotherapy. The automated treatment planning solution developed in this study includes selection of the iteratively optimized training dataset, dose volume histogram (DVH) prediction for the organs at risk (OARs), and automatic generation of clinically acceptable treatment plans. The iteratively optimized training dataset is selected by an iterative optimization from 40 treatment plans for left-breast and rectal cancer patients who received radiation therapy. A two-dimensional kernel density estimation algorithm (denoted two-parameter KDE), which incorporates two predictive features, was implemented to produce the predicted DVHs. Finally, 10 additional left-breast treatment plans are re-planned using the Pinnacle 3 Auto-Planning (AP) module (version 9.10, Philips Medical Systems) with the objective functions derived from the predicted DVH curves. The automatically generated re-optimized treatment plans are compared with the original manually optimized plans. By combining the iteratively optimized training dataset methodology and the two-parameter KDE prediction algorithm, our proposed automated planning strategy improves the accuracy of the DVH prediction. The automatically generated treatment plans using the dose objectives derived from the predicted DVHs can achieve better dose sparing for some OARs without compromising other metrics of plan quality. The proposed new automated treatment planning solution can be used to efficiently evaluate and improve the quality and consistency of the treatment plans for intensity-modulated breast and rectal cancer radiation therapy. © 2017 American Association of Physicists in Medicine.
Huang, X N; Ren, H P
2016-05-13
Robust adaptation is a critical ability of a gene regulatory network (GRN) to survive in a fluctuating environment, in which the system responds to an input stimulus rapidly and then returns to its pre-stimulus steady state in a timely manner. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. Robust adaptation is quantitatively described by two conflicting indices. Identifying the parameter sets that confer robust adaptation on the GRN is a multi-variable, multi-objective, and multi-peak optimization problem, for which it is difficult to acquire satisfactory solutions, especially high-quality ones. A new best-neighbor particle swarm optimization algorithm is proposed to implement this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population. A particle crossover operation and an elitist preservation strategy are also used in the proposed algorithm. The simulation results revealed that the proposed algorithm could identify multiple solutions in a single run. Moreover, it demonstrated superior performance compared to previous methods in the sense of detecting more high-quality solutions within an acceptable time. The proposed methodology, owing to its universality and simplicity, is useful for providing guidance for designing GRNs with superior robust adaptation.
Trajectory Design Strategies for the NGST L2 Libration Point Mission
NASA Technical Reports Server (NTRS)
Folta, David; Cooley, Steven; Howell, Kathleen; Bauer, Frank H.
2001-01-01
The Origins' Next Generation Space Telescope (NGST) trajectory design is addressed in light of improved methods for attaining constrained orbit parameters and their control at the exterior collinear libration point, L2. The use of a dynamical systems approach, of state-space equations for initial libration orbit control, and of optimization to achieve constrained orbit parameters is emphasized. The NGST trajectory design encompasses a direct transfer and orbit maintenance under a constant acceleration. A dynamical systems approach can be used to provide a biased orbit and a stationkeeping maintenance method that incorporates the constraint of a single-axis correction scheme.
Chang, Yao-Jen; Chu, Chien-Wei; Lin, Min-Der
2012-05-01
Municipal solid waste management (MSWM) is an important environmental challenge and subject in urban planning. For sustainable MSWM strategies, the critical management factors to be considered include not only the economic efficiency of MSW treatment but also a life-cycle assessment of the environmental impact. This paper employed a linear programming technique to establish optimal MSWM strategies considering economic efficiency and the air pollutant emissions during the life cycle of an MSWM system, and investigated the correlations between economic optimization and pollutant emissions. A case study based on real-world MSW operating parameters in Taichung City is also presented. The results showed that the costs, benefits, streams of MSW, and throughputs of incinerators and landfills will be affected if pollution emission reductions are implemented in the MSWM strategies. In addition, the quantity of particulate matter is the best pollutant indicator for the emission-reduction performance of the MSWM system. In particular, this model will assist the decision maker in drawing up an environmentally friendly MSWM strategy for Taichung City in Taiwan. Recently, life-cycle assessments of municipal solid waste management (MSWM) strategies have been given more consideration. However, what seems to be lacking is the simultaneous consideration of economic factors and environmental impacts. This work analyzed real-world data to establish optimal MSWM strategies considering economic efficiency and the air pollutant emissions during the life cycle of the MSWM system. The results indicated that the consideration of environmental impacts will affect the costs, benefits, streams of MSW, and throughputs of incinerators and landfills. This work is relevant to public discussion and may establish useful guidelines for MSWM policies.
Tamhane, Ashish A; Arfanakis, Konstantinos
2009-07-01
Periodically-rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) and Turboprop MRI are characterized by greatly reduced sensitivity to motion, compared to their predecessors, fast spin-echo (FSE) and gradient and spin-echo (GRASE), respectively. This is due to the inherent self-navigation and motion correction of PROPELLER-based techniques. However, it is unknown how various acquisition parameters that determine k-space sampling affect the accuracy of motion correction in PROPELLER and Turboprop MRI. The goal of this work was to evaluate the accuracy of motion correction in both techniques, to identify an optimal rotation correction approach, and determine acquisition strategies for optimal motion correction. It was demonstrated that blades with multiple lines allow more accurate estimation of motion than blades with fewer lines. Also, it was shown that Turboprop MRI is less sensitive to motion than PROPELLER. Furthermore, it was demonstrated that the number of blades does not significantly affect motion correction. Finally, clinically appropriate acquisition strategies that optimize motion correction are discussed for PROPELLER and Turboprop MRI. (c) 2009 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Otake, Y.; Murphy, R. J.; Grupp, R. B.; Sato, Y.; Taylor, R. H.; Armand, M.
2015-03-01
A robust atlas-to-subject registration using a statistical deformation model (SDM) is presented. The SDM uses statistics of voxel-wise displacement learned from pre-computed deformation vectors of a training dataset. This allows an atlas instance to be directly translated into an intensity volume and compared with a patient's intensity volume. Rigid and nonrigid transformation parameters were simultaneously optimized via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), with image similarity used as the objective function. The algorithm was tested on CT volumes of the pelvis from 55 female subjects. A performance comparison of the CMA-ES and Nelder-Mead downhill simplex optimization algorithms with the mutual information and normalized cross-correlation similarity metrics was conducted. Simulation studies using synthetic subjects were performed, as well as leave-one-out cross-validation studies. Both studies suggested that mutual information and CMA-ES achieved the best performance. The leave-one-out test demonstrated 4.13 mm error with respect to the true displacement field, and 26,102 function evaluations in 180 seconds, on average.
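CMA-ES adapts a full covariance matrix and is considerably more involved; as a stdlib-only stand-in that captures the derivative-free, rank-based flavor, the sketch below implements a (1+1) evolution strategy with 1/5th-success-rule step-size control on a toy dissimilarity objective (the target point and all constants are assumptions, not from the paper).

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=400, seed=1):
    """(1+1)-ES with 1/5th-success-rule step-size control (minimization)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fc = f(cand)
        if fc <= fx:            # success: accept and widen the search
            x, fx = cand, fc
            sigma *= 1.22
        else:                   # failure: shrink the step size
            sigma *= 0.83
    return x, fx

# toy "image dissimilarity": squared distance to a hidden optimum
target = [3.0, -1.5, 0.5]
f = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
best, val = one_plus_one_es(f, [0.0, 0.0, 0.0])
```

A real registration would replace `f` with an image-similarity evaluation (e.g. negated mutual information) over the rigid-plus-SDM parameters, which is where CMA-ES's covariance adaptation pays off.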
An improved CS-LSSVM algorithm-based fault pattern recognition of ship power equipments.
Yang, Yifei; Tan, Minjia; Dai, Yuewei
2017-01-01
In practical situations, fault monitoring signals from ship power equipment usually provide few samples, and the data features are nonlinear. This paper adopts the least squares support vector machine (LSSVM) method to deal with the problem of fault pattern identification in the case of small-sample data. Meanwhile, in order to avoid the local extrema and poor convergence precision induced by optimizing the kernel function parameter and penalty factor of the LSSVM, an improved Cuckoo Search (CS) algorithm is proposed for parameter optimization. Based on a dynamic adaptive strategy, the newly proposed algorithm adjusts the discovery probability and the search step length, which can effectively solve the problems of slow search speed and low calculation accuracy of the CS algorithm. A benchmark example demonstrates that the CS-LSSVM algorithm can accurately and effectively identify the fault pattern types of ship power equipment.
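A minimal cuckoo search loop can be sketched as follows; this is a generic, simplified version (a Cauchy draw stands in for a Mantegna Lévy flight), and the quadratic objective is an assumed stand-in for the LSSVM cross-validation error over the kernel parameter and penalty factor.

```python
import math, random

def cuckoo_search(f, bounds, n_nests=15, pa=0.25, iters=200, seed=3):
    """Minimize f over a box via Levy-style flights and abandonment of poor nests."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(v, d):
        return min(max(v, bounds[d][0]), bounds[d][1])

    nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_nests)]
    fit = [f(n) for n in nests]
    for _ in range(iters):
        i = rng.randrange(n_nests)
        # heavy-tailed step: a Cauchy draw scaled to each dimension's range
        cand = [clip(nests[i][d]
                     + math.tan(math.pi * (rng.random() - 0.5))
                     * 0.05 * (bounds[d][1] - bounds[d][0]), d)
                for d in range(dim)]
        j = rng.randrange(n_nests)
        if f(cand) < fit[j]:                  # replace a random nest if better
            nests[j], fit[j] = cand, f(cand)
        if rng.random() < pa:                 # abandon the worst nest
            w = max(range(n_nests), key=fit.__getitem__)
            nests[w] = [rng.uniform(lo, hi) for lo, hi in bounds]
            fit[w] = f(nests[w])
    b = min(range(n_nests), key=fit.__getitem__)
    return nests[b], fit[b]

# assumed stand-in for the LSSVM validation error over (gamma, C); a real run
# would train/validate the LSSVM at each candidate instead
err = lambda p: (p[0] - 0.5) ** 2 + ((p[1] - 10.0) / 10.0) ** 2
best_params, best_err = cuckoo_search(err, [(0.01, 2.0), (0.1, 100.0)])
```

The paper's dynamic adaptive strategy would additionally vary `pa` and the step scale during the run; both are held fixed here for brevity.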
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2003-01-01
A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.
NASA Astrophysics Data System (ADS)
Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei
2018-03-01
Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results, using more effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary remains a scientific question deserving of study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions.
The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, showing that the ASMO method is highly efficient for optimizing WRF model parameters.
Kaur, Guneet; Srivastava, Ashok K; Chand, Subhash
2012-09-01
1,3-propanediol (1,3-PD) is a chemical compound of immense importance, primarily used as a raw material in the fiber and textile industry. It can be produced by the fermentation of glycerol, which is abundantly available as a by-product of biodiesel plants. The present study was aimed at determining the key kinetic parameters of 1,3-PD fermentation by Clostridium diolis. Initial experiments on microbial growth inhibition were followed by statistical optimization of the nutrient medium recipe. Batch kinetic data from bioreactor studies using the optimum concentrations of variables obtained from the statistical medium design were used for estimation of the kinetic parameters of 1,3-PD production. Direct use of raw glycerol from a biodiesel plant, without any pre-treatment, for 1,3-PD production by this strain was investigated for the first time in this work and gave results comparable to those obtained with commercial glycerol. The parameter values obtained in this study will be used to develop a mathematical model of 1,3-PD production to serve as a guide for designing various reactor operating strategies to further improve 1,3-PD production. An outline of the protocol for model development is discussed in the present work.
Shi, Chenguang; Wang, Fei; Salous, Sana; Zhou, Jianjiang
2017-11-25
In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme. PMID:29186850
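The bisection step can be illustrated generically: for any monotone feasibility test, bisection brackets the minimal feasible value. The sketch below uses an assumed toy channel model, rate R = log2(1 + P·g/N), not the paper's detection/false-alarm constraints.

```python
import math

def min_power_bisection(rate_req, gain, noise, p_hi=1000.0, tol=1e-6):
    """Smallest power with log2(1 + P*gain/noise) >= rate_req, by bisection."""
    feasible = lambda p: math.log2(1.0 + p * gain / noise) >= rate_req
    lo, hi = 0.0, p_hi
    assert feasible(hi), "bracket must contain a feasible point"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid          # mid works: tighten from above
        else:
            lo = mid          # mid fails: raise the lower bracket
    return hi

p = min_power_bisection(rate_req=2.0, gain=0.5, noise=1.0)
# closed form for this toy model: P = (2**R - 1) * noise / gain = 6.0
```

Monotonicity of the feasibility test in P is what makes bisection applicable; the paper's joint detection/rate constraints share that structure.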
Complex motion measurement using genetic algorithm
NASA Astrophysics Data System (ADS)
Shen, Jianjun; Tu, Dan; Shen, Zhenkang
1997-12-01
Genetic algorithm (GA) is an optimization technique that provides a nontraditional approach to many nonlinear, complicated problems. The notion of motion measurement using a genetic algorithm arises from the fact that motion measurement is essentially an optimization process based on some criterion. In this paper, we propose a complex motion measurement method using a genetic algorithm based on a block-matching criterion. The following three problems are discussed and solved in the paper: (1) an adaptive method is applied to modify the control parameters of the GA, which are critical to its performance, and an elitism strategy is incorporated; (2) an evaluation function for motion measurement is derived based on the block-matching technique; (3) a hill-climbing (HC) method is employed in a hybrid fashion to assist the GA's search for the globally optimal solution. Some other related problems are also discussed. At the end of the paper, experimental results are presented. We employ six motion parameters in our experiments. The results show that the performance of our GA is good: it can find the object motion accurately and rapidly.
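A toy version of the GA-with-elitism loop applied to block matching (estimating a 2D integer translation by sum of absolute differences on a synthetic image) might look as follows; the adaptive parameter control and hill-climbing hybrid from the paper are omitted, and all constants are illustrative.

```python
import random

def sad(image, template, dx, dy):
    """Sum of absolute differences of the template vs the patch at (dx, dy)."""
    return sum(abs(image[y + dy][x + dx] - template[y][x])
               for y in range(len(template)) for x in range(len(template[0])))

def ga_match(image, template, span, pop_size=20, gens=30, seed=7):
    rng = random.Random(seed)
    pop = [(rng.randrange(span), rng.randrange(span)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: sad(image, template, *g))
        elite = pop[:2]                        # elitism: the best survive unchanged
        children = []
        while len(children) < pop_size - 2:
            a, b = rng.sample(pop[:10], 2)     # mate within the fitter half
            child = (a[0] if rng.random() < 0.5 else b[0],
                     a[1] if rng.random() < 0.5 else b[1])
            if rng.random() < 0.3:             # mutation: random reset of one gene
                if rng.random() < 0.5:
                    child = (rng.randrange(span), child[1])
                else:
                    child = (child[0], rng.randrange(span))
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda g: sad(image, template, *g))

# synthetic 16x16 image with a bright 3x3 block at offset (dx, dy) = (5, 4)
image = [[0] * 16 for _ in range(16)]
for y in range(3):
    for x in range(3):
        image[4 + y][5 + x] = 9
template = [[9] * 3 for _ in range(3)]
best = ga_match(image, template, span=13)
```

With six motion parameters, as in the paper, the chromosome simply grows to six genes; the elitism step guarantees the best match found is never lost between generations.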
Minimization of energy and surface roughness of the products machined by milling
NASA Astrophysics Data System (ADS)
Belloufi, A.; Abdelkrim, M.; Bouakba, M.; Rezgui, I.
2017-08-01
Metal cutting represents a large portion of the manufacturing industries, which makes this process one of the largest consumers of energy. Energy consumption is an indirect source of carbon footprint, since CO2 emissions arise from the production of energy. High energy consumption therefore implies high cost and a large amount of CO2 emissions. To date, much research has been done on metal cutting, but the environmental problems of these processes are rarely discussed. The right selection of cutting parameters is an effective way to reduce energy consumption because of the direct relationship between energy consumption and cutting parameters in machining processes. Therefore, one of the objectives of this research is to propose an optimization strategy suitable for machining processes (milling) to achieve the optimum cutting conditions based on the criterion of the energy consumed during milling. In this paper, the problem of the energy consumed in milling is solved by a chosen optimization method. The optimization is performed according to the different requirements of the roughing and finishing processes under various technological constraints.
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
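The branch-and-bound-with-interval-arithmetic idea can be demonstrated on a 1D toy objective rather than a BRDF: naive interval evaluation of f(x) = (x² − 1)² yields a guaranteed lower bound on each box, and boxes whose bound exceeds the incumbent are pruned. This is a sketch of the generic mechanism only, not the paper's L1-norm BRDF formulation.

```python
def isqr(lo, hi):
    """Interval square: enclosure of {x*x : lo <= x <= hi}."""
    if lo <= 0.0 <= hi:
        return 0.0, max(lo * lo, hi * hi)
    return min(lo * lo, hi * hi), max(lo * lo, hi * hi)

def f(x):                                  # toy objective, global minima at x = +/-1
    return (x * x - 1.0) ** 2

def f_lower(lo, hi):
    """Interval lower bound of f on [lo, hi] via the composition (x^2 - 1)^2."""
    slo, shi = isqr(lo, hi)
    return isqr(slo - 1.0, shi - 1.0)[0]

def branch_and_bound(lo, hi, tol=1e-4):
    best_x, best_val = lo, f(lo)
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        if f_lower(a, b) > best_val:       # prune: box cannot beat the incumbent
            continue
        m = 0.5 * (a + b)
        if f(m) < best_val:                # update incumbent at the midpoint
            best_x, best_val = m, f(m)
        if b - a > tol:                    # otherwise split the box
            boxes += [(a, m), (m, b)]
    return best_x, best_val

x, v = branch_and_bound(-2.0, 2.0)
```

Unlike a local fit, the interval bound certifies that pruned boxes contain no better solution, which is the property that removes the dependence on initial guesses noted in the abstract.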
Tajabadi, Naser; Ebrahimpour, Afshin; Baradaran, Ali; Rahim, Raha Abdul; Mahyudin, Nor Ainy; Manap, Mohd Yazid Abdul; Bakar, Fatimah Abu; Saari, Nazamid
2015-04-15
Dominant strains of lactic acid bacteria (LAB) isolated from honey bees were evaluated for their γ-aminobutyric acid (GABA)-producing ability. Out of 24 strains, strain Taj-Apis362 showed the highest GABA-producing ability (1.76 mM) in MRS broth containing 50 mM initial glutamic acid cultured for 60 h. The effects of fermentation parameters, including initial glutamic acid level, culture temperature, initial pH and incubation time, on GABA production were investigated via a single-parameter optimization strategy. The optimal fermentation condition for GABA production was modeled using response surface methodology (RSM). The results showed that the culture temperature was the most significant factor for GABA production. The optimum conditions for maximum GABA production by Lactobacillus plantarum Taj-Apis362 were an initial glutamic acid concentration of 497.97 mM, culture temperature of 36 °C, initial pH of 5.31 and incubation time of 60 h, which produced 7.15 mM of GABA. The value is comparable with the predicted value of 7.21 mM.
Probabilistic Description of the Hydrologic Risk in Agriculture
NASA Astrophysics Data System (ADS)
Vico, G.; Porporato, A. M.
2011-12-01
Supplemental irrigation represents one of the main strategies to mitigate the effects of climatic variability on agroecosystem productivity and profitability, at the expense of increased water requirements for irrigation purposes. Optimizing water allocation for crop yield preservation and sustainable development needs to account for hydro-climatic variability, which is by far the main source of uncertainty affecting crop yields and irrigation water requirements. In this contribution, a widely applicable probabilistic framework is proposed to quantitatively define the hydrologic risk of yield reduction for both rainfed and irrigated agriculture. The occurrence of rainfall events and irrigation applications are linked probabilistically to crop development during the growing season. Based on these linkages, long-term and real-time yield reduction risk indices are defined as functions of climate, soil, and crop parameters, as well as the irrigation strategy. The former risk index is suitable for long-term irrigation strategy assessment and investment planning, while the latter provides a rigorous probabilistic quantification of the emergence of drought conditions during a single growing season. This probabilistic framework also allows assessing the impact of limited water availability on crop yield, thus guiding the optimal allocation of water resources for human and environmental needs. Our approach employs relatively few parameters and is thus easily and broadly applicable to different crops and sites, under current and future climate scenarios, facilitating the assessment of the impact of increasingly frequent water shortages on agricultural productivity, profitability, and sustainability.
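A stripped-down analogue of such a risk index can be computed by Monte Carlo with a bucket soil-moisture model (all parameter values below are illustrative, not from the paper): rainfall arrives on random days with exponentially distributed depths, moisture decays by evapotranspiration, and the risk is the fraction of simulated seasons in which moisture ever drops below a stress threshold.

```python
import random

def yield_risk(rain_freq, mean_depth, et_loss=0.04, threshold=0.2,
               days=120, n_seasons=2000, seed=11):
    """P(soil moisture < threshold at least once in a season), by Monte Carlo."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_seasons):
        s = 0.6                                   # initial relative soil moisture
        stressed = False
        for _ in range(days):
            if rng.random() < rain_freq:          # a rain day
                s = min(1.0, s + rng.expovariate(1.0 / mean_depth))
            s = max(0.0, s - et_loss)             # daily evapotranspiration loss
            if s < threshold:
                stressed = True
                break
        failures += stressed
    return failures / n_seasons

risk_dry = yield_risk(rain_freq=0.1, mean_depth=0.15)   # arid climate
risk_wet = yield_risk(rain_freq=0.4, mean_depth=0.15)   # humid climate
```

Adding an irrigation rule inside the daily loop (e.g. top up whenever `s` falls below an intervention level) would turn the same machinery into a comparison of irrigation strategies, in the spirit of the long-term index described above.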
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, computational efficiency remains the main problem in producing 3D, high-resolution images from real large-scale seismic data. In this paper, we propose a division method for large-scale 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging of large-scale seismic data.
Optimising reef-scale CO2 removal by seaweed to buffer ocean acidification
NASA Astrophysics Data System (ADS)
Mongin, Mathieu; Baird, Mark E.; Hadley, Scott; Lenton, Andrew
2016-03-01
The equilibration of rising atmospheric CO2 with the ocean is lowering pH in tropical waters by about 0.01 every decade. Coral reefs and the ecosystems they support are regarded as among the ecosystems most vulnerable to ocean acidification, threatening their long-term viability. In response to this threat, different strategies for buffering the impact of ocean acidification have been proposed. As the pH experienced by individual corals on a natural reef system depends on many processes over different time scales, the efficacy of these buffering strategies remains largely unknown. Here we assess the feasibility and potential efficacy of a reef-scale (a few kilometers) carbon removal strategy, through the addition of seaweed (fleshy multicellular algae) farms within the Great Barrier Reef at the Heron Island reef. First, using diagnostic time-dependent age tracers in a hydrodynamic model, we determine the optimal location and size of the seaweed farm. Secondly, we analytically calculate the optimal density of the seaweed and the harvesting strategy, finding, for the seaweed growth parameters used, that a biomass of 42 g N m-2 with a harvesting rate of up to 3.2 g N m-2 d-1 maximises the carbon sequestration and removal. Numerical experiments show that an optimally located 1.9 km2 farm with optimally harvested seaweed (removing biomass above 42 g N m-2 every 7 d) increased aragonite saturation by 0.1 over 24 km2 of the Heron Island reef. Thus, even the most effective seaweed farm can only delay the impacts of global ocean acidification at the reef scale by 7-21 years, depending on future global carbon emissions. Our results highlight that only a kilometer-scale farm can partially mitigate global ocean acidification for a particular reef.
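The "optimal standing biomass" logic parallels the classic sustainable-harvest result for logistic growth: if dB/dt = rB(1 − B/K) − H, the largest steady harvest is obtained at B = K/2. A quick numeric check with illustrative r and K (these are not the paper's seaweed growth parameters):

```python
def sustainable_harvest(biomass, r, K):
    """Steady-state harvest rate holding biomass constant under logistic growth."""
    return r * biomass * (1.0 - biomass / K)

r, K = 0.08, 84.0   # illustrative growth rate (1/d) and carrying capacity (g N m^-2)
grid = [K * i / 1000.0 for i in range(1001)]
best_B = max(grid, key=lambda b: sustainable_harvest(b, r, K))
# the maximum sustainable harvest occurs at half the carrying capacity, B = K/2
```

Harvesting down to half the carrying capacity keeps the stand at its fastest-growing density, which is why a "remove biomass above a fixed level" rule maximizes long-run carbon removal in such models.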
Exploratory High-Fidelity Aerostructural Optimization Using an Efficient Monolithic Solution Method
NASA Astrophysics Data System (ADS)
Zhang, Jenmy Zimi
This thesis is motivated by the desire to discover fuel efficient aircraft concepts through exploratory design. An optimization methodology based on tightly integrated high-fidelity aerostructural analysis is proposed, which has the flexibility, robustness, and efficiency to contribute to this goal. The present aerostructural optimization methodology uses an integrated geometry parameterization and mesh movement strategy, which was initially proposed for aerodynamic shape optimization. This integrated approach provides the optimizer with a large amount of geometric freedom for conducting exploratory design, while allowing for efficient and robust mesh movement in the presence of substantial shape changes. In extending this approach to aerostructural optimization, this thesis has addressed a number of important challenges. A structural mesh deformation strategy has been introduced to translate consistently the shape changes described by the geometry parameterization to the structural model. A three-field formulation of the discrete steady aerostructural residual couples the mesh movement equations with the three-dimensional Euler equations and a linear structural analysis. Gradients needed for optimization are computed with a three-field coupled adjoint approach. A number of investigations have been conducted to demonstrate the suitability and accuracy of the present methodology for use in aerostructural optimization involving substantial shape changes. Robustness and efficiency in the coupled solution algorithms is crucial to the success of an exploratory optimization. This thesis therefore also focuses on the design of an effective monolithic solution algorithm for the proposed methodology. This involves using a Newton-Krylov method for the aerostructural analysis and a preconditioned Krylov subspace method for the coupled adjoint solution. Several aspects of the monolithic solution method have been investigated. 
These include appropriate strategies for scaling and matrix-vector product evaluation, as well as block preconditioning techniques that preserve the modularity between subproblems. The monolithic solution method is applied to problems with varying degrees of fluid-structural coupling, as well as a wing span optimization study. The monolithic solution algorithm typically requires 20%-70% less computing time than its partitioned counterpart. This advantage increases with increasing wing flexibility. The performance of the monolithic solution method is also much less sensitive to the choice of the solution parameter.
NASA Astrophysics Data System (ADS)
Tang, Jiafu; Liu, Yang; Fung, Richard; Luo, Xinggang
2008-12-01
Manufacturers have a legal accountability to deal with industrial waste generated from their production processes in order to avoid pollution. Along with advances in waste recovery techniques, manufacturers may adopt various recycling strategies in dealing with industrial waste. With reuse strategies and technologies, byproducts or wastes will be returned to production processes in the iron and steel industry, and some waste can be recycled back to base material for reuse in other industries. This article focuses on a recovery strategies optimization problem for a typical class of industrial waste recycling process in order to maximize profit. There are multiple strategies for waste recycling available to generate multiple byproducts; these byproducts are then further transformed into several types of chemical products via different production patterns. A mixed integer programming model is developed to determine which recycling strategy and which production pattern should be selected with what quantity of chemical products corresponding to this strategy and pattern in order to yield maximum marginal profits. The sales profits of chemical products and the set-up costs of these strategies, patterns and operation costs of production are considered. A simulated annealing (SA) based heuristic algorithm is developed to solve the problem. Finally, an experiment is designed to verify the effectiveness and feasibility of the proposed method. By comparing a single strategy to multiple strategies in an example, it is shown that the total sales profit of chemical products can be increased by around 25% through the simultaneous use of multiple strategies. This illustrates the superiority of combinatorial multiple strategies. Furthermore, the effects of the model parameters on profit are discussed to help manufacturers organize their waste recycling network.
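A compact SA sketch for a knapsack-style strategy-selection toy (illustrative numbers; the paper's model additionally couples strategies to production patterns and product quantities, which is omitted here): each bit switches a recycling strategy on or off, profit is sales margin minus set-up cost, and a throughput capacity constraint is enforced by rejection.

```python
import math, random

# illustrative data: (sales margin, set-up cost, throughput) per recycling strategy
STRATS = [(120, 40, 3), (90, 25, 2), (60, 35, 2), (150, 70, 4), (80, 20, 3)]
CAPACITY = 8          # total waste throughput the plant can process

def profit(x):
    used = sum(s[2] for s, on in zip(STRATS, x) if on)
    if used > CAPACITY:
        return float("-inf")              # infeasible selection: reject outright
    return sum(s[0] - s[1] for s, on in zip(STRATS, x) if on)

def anneal(n, seed=5, t0=50.0, cooling=0.95, iters=400):
    rng = random.Random(seed)
    x = [False] * n
    p = profit(x)
    best, best_p = list(x), p
    t = t0
    for _ in range(iters):
        y = list(x)
        y[rng.randrange(n)] = not y[rng.randrange(n) if False else rng.randrange(n)]
        y = list(x)
        k = rng.randrange(n)
        y[k] = not y[k]                   # neighbor: flip one strategy on/off
        q = profit(y)
        if q >= p or rng.random() < math.exp((q - p) / t):
            x, p = y, q                   # accept improvements, and worse moves
        if p > best_p:                    # with a temperature-dependent probability
            best, best_p = list(x), p
        t *= cooling
    return best, best_p

sel, total = anneal(len(STRATS))
```

Tracking `best` separately from the current state means the cooling schedule can wander without losing the best selection found, mirroring how the paper's SA heuristic retains its incumbent solution.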
Disturbance observer based model predictive control for accurate atmospheric entry of spacecraft
NASA Astrophysics Data System (ADS)
Wu, Chao; Yang, Jun; Li, Shihua; Li, Qi; Guo, Lei
2018-05-01
Facing the complex aerodynamic environment of the Mars atmosphere, a composite atmospheric entry trajectory tracking strategy is investigated in this paper. External disturbances, initial state uncertainties and aerodynamic parameter uncertainties are the main problems. The composite strategy is designed to solve these problems and improve the accuracy of Mars atmospheric entry. This strategy includes a model predictive control for optimized trajectory tracking performance, as well as a disturbance observer based feedforward compensation for attenuation of external disturbances and uncertainties. 500-run Monte Carlo simulations show that the proposed composite control scheme achieves more precise Mars atmospheric entry (3.8 km parachute deployment point distribution error) than the baseline control scheme (8.4 km) and the integral control scheme (5.8 km).
NASA Astrophysics Data System (ADS)
Mebrahitom, A.; Rizuan, D.; Azmir, M.; Nassif, M.
2016-02-01
High speed milling (HSM) is one of the recent technologies used to produce mould inserts, driven by the need for a high surface finish. It is a faster machining process that uses a small side step and a small down step combined with a very high spindle speed and feed rate. In order to use the HSM capabilities effectively, optimizing the tool path strategies and machining parameters is an important issue. In this paper, six different tool path strategies are investigated with respect to the surface finish and machining time of rectangular cavities in ESR Stavax material. The CATIA V5 machining module was used as the CAD/CAM application for process planning of the pocket milling of the cavities.
Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions
NASA Astrophysics Data System (ADS)
Ilgen, Marc R.
This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. 
This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value of the flight path angle. A summary of performance results for all these guidance laws is presented in the fifth part of this thesis along with recommendations for further research.
Hussain, Lal
2018-06-01
Epilepsy is a neurological disorder caused by abnormal excitability of neurons in the brain. To detect epileptic seizures, brain activity is monitored through the electroencephalogram (EEG) of patients suffering from seizures. The performance of EEG-based seizure detection depends on the feature extraction strategy. In this research, we extracted features using several strategies based on time- and frequency-domain characteristics, nonlinear measures, wavelet-based entropy, and a few statistical features. A deeper study was undertaken using machine learning classifiers while considering multiple factors. Support vector machine (SVM) kernels were evaluated with respect to the multiclass kernel and the box constraint level. Likewise, for K-nearest neighbors (KNN) we varied the distance metric, the neighbor weights, and the number of neighbors. Similarly, for decision trees we tuned the parameters based on the maximum number of splits and the split criterion, and ensemble classifiers were evaluated with different ensemble methods and learning rates. Tenfold cross-validation was employed for training/testing, and performance was evaluated in terms of TPR, NPV, PPV, accuracy, and AUC. The SVM linear kernel and KNN with the city-block distance metric gave the overall highest accuracy of 99.5%, higher than that obtained with the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM, and KNN with inverse squared distance weighting gave higher performance at different numbers of neighbors. Finally, in distinguishing postictal heart rate oscillations from those of epileptic ictal subjects, the highest performance of 100% was obtained using different machine learning classifiers.
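The reported top performer, KNN with the city-block (Manhattan) distance, can be sketched in a few lines of pure Python; this is a generic illustration on hypothetical two-feature data, not the authors' EEG feature pipeline:

```python
from collections import Counter

def cityblock(a, b):
    # Manhattan (city-block) distance between two feature vectors
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label); majority vote among the k nearest
    neighbors = sorted(train, key=lambda t: cityblock(t[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# toy two-class data standing in for extracted EEG features (hypothetical)
train = [((0.1, 0.2), "normal"), ((0.2, 0.1), "normal"),
         ((0.9, 1.0), "seizure"), ((1.0, 0.8), "seizure")]
print(knn_predict(train, (0.15, 0.15)))  # → normal
```

In practice one would also tune k and the distance weighting, as the abstract describes.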
Vyska, Martin; Cunniffe, Nik; Gilligan, Christopher
2016-10-01
The deployment of crop varieties that are partially resistant to plant pathogens is an important method of disease control. However, a trade-off may occur between the benefits of planting the resistant variety and a yield penalty, whereby the standard susceptible variety outyields the resistant one in the absence of disease. This presents a dilemma: deploying the resistant variety is advisable only if disease occurs and is sufficiently severe for the resistant variety to outyield the infected standard variety. Additionally, planting the resistant variety carries a further advantage: it reduces the probability of disease invading. Therefore, viewed from the perspective of a grower community, there is likely to be an optimal trade-off and thus an optimal cropping density for the resistant variety. We introduce a simple stochastic, epidemiological model to investigate the trade-off and the consequences for crop yield. Focusing on susceptible-infected-removed epidemic dynamics, we use the final size equation to calculate the surviving host population in order to analyse the yield, an approach suitable for rapid epidemics in agricultural crops. We identify a single compound parameter, which we call the efficacy of resistance and which incorporates the changes in susceptibility, infectivity and durability of the resistant variety. We use the compound parameter to inform policy plots that can be used to identify the optimal strategy for given parameter values when an outbreak is certain. When the outbreak is uncertain, we show that for some parameter values planting the resistant variety is optimal even when it would not be during an outbreak. This is because the resistant variety reduces the probability of an outbreak occurring. © 2016 The Author(s).
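The final-size calculation underlying the yield analysis can be sketched with the standard SIR final-size relation z = 1 - exp(-R0*z), where z is the fraction of hosts ever infected; the fixed-point solver below is a minimal sketch and omits the paper's efficacy-of-resistance parameterization:

```python
import math

def final_size(R0, tol=1e-10):
    # Solve z = 1 - exp(-R0*z): the fraction ever infected in an SIR epidemic
    z = 0.999 if R0 > 1 else 0.0   # start near 1 so iteration reaches the nonzero root
    for _ in range(1000):
        z_new = 1.0 - math.exp(-R0 * z)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z

# surviving host fraction (a proxy for yield of an unprotected crop)
for R0 in (1.5, 3.0):
    print(R0, round(1.0 - final_size(R0), 3))
```

A partially resistant variety lowers the effective R0, increasing the surviving fraction 1 - z.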
Analysis and design of a genetic circuit for dynamic metabolic engineering.
Anesiadis, Nikolaos; Kobayashi, Hideki; Cluett, William R; Mahadevan, Radhakrishnan
2013-08-16
Recent advances in synthetic biology have equipped us with new tools for bioprocess optimization at the genetic level. Previously, we have presented an integrated in silico design for the dynamic control of gene expression based on a density-sensing unit and a genetic toggle switch. In the present paper, analysis of a serine-producing Escherichia coli mutant shows that an instantaneous ON-OFF switch leads to a maximum theoretical productivity improvement of 29.6% compared to the mutant. To further the design, global sensitivity analysis is applied here to a mathematical model of serine production in E. coli coupled with a genetic circuit. The model of the quorum sensing and the toggle switch involves 13 parameters of which 3 are identified as having a significant effect on serine concentration. Simulations conducted in this reduced parameter space further identified the optimal ranges for these 3 key parameters to achieve productivity values close to the maximum theoretical values. This analysis can now be used to guide the experimental implementation of a dynamic metabolic engineering strategy and reduce the time required to design the genetic circuit components.
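As a toy counterpart to the genetic circuit described above, the classic two-repressor toggle switch can be integrated with forward Euler; the equations and parameters below are illustrative and are not those of the serine-production model in the paper:

```python
def toggle_switch(u0, v0, alpha=4.0, beta=2.0, dt=0.01, steps=5000):
    # Forward-Euler integration of the classic two-repressor toggle switch:
    #   du/dt = alpha/(1 + v^beta) - u
    #   dv/dt = alpha/(1 + u^beta) - v
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v**beta) - u
        dv = alpha / (1.0 + u**beta) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

# bistability: the steady state reached depends on the initial condition
print(toggle_switch(2.0, 0.1))   # settles with u high, v low
print(toggle_switch(0.1, 2.0))   # settles with v high, u low
```

The ON-OFF switching exploited for dynamic metabolic engineering corresponds to moving the circuit between these two stable states.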
Cyber-Physical Attacks With Control Objectives
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2017-08-18
This study considers attackers with control objectives against cyber-physical systems (CPSs). The goal of the attacker is to counteract the CPS's controller and move the system to a target state while evading detection. We formulate a cost function that reflects the attacker's goals, and, using dynamic programming, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. By changing the parameters of the cost function, we show how an attacker can design optimal attacks to balance the control objective and the detection avoidance objective. In conclusion, we provide a numerical illustration based on a remotely controlled helicopter under attack.
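The reduction of an optimal quadratic-cost strategy to linear state feedback via dynamic programming can be illustrated with a scalar finite-horizon LQR backward recursion; this is a generic sketch of the principle, not the paper's attacker model:

```python
def lqr_gains(a, b, q, r, qT, T):
    # Scalar finite-horizon LQR by backward dynamic programming:
    #   x_{t+1} = a*x_t + b*u_t, cost = sum(q*x^2 + r*u^2) + qT*x_T^2.
    # Returns feedback gains K_t such that the optimal input is u_t = -K_t * x_t.
    P = qT
    gains = []
    for _ in range(T):
        K = (b * P * a) / (r + b * P * b)   # optimal gain at this stage
        P = q + a * P * (a - b * K)         # Riccati cost-to-go update
        gains.append(K)
    return list(reversed(gains))            # gains[0] is the earliest stage

gains = lqr_gains(1.0, 1.0, 1.0, 1.0, 1.0, 20)
print(gains[0])  # ≈ 0.618, the steady-state gain for a=b=q=r=1
```

The dynamic-programming argument in the abstract is the multivariate, estimate-based analogue of this recursion.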
Núñez, Eutimio Gustavo Fernández; Faintuch, Bluma Linkowski; Teodoro, Rodrigo; Wiecek, Danielle Pereira; da Silva, Natanael Gomes; Papadopoulos, Minas; Pelecanou, Maria; Pirmettis, Ioannis; de Oliveira Filho, Renato Santos; Duatti, Adriano; Pasqualini, Roberto
2011-04-01
The objective of this study was the development of a statistical approach for radiolabeling optimization of cysteine-dextran conjugates with the Tc-99m tricarbonyl core. This strategy was applied to the labeling of 2-propylene-S-cysteine-dextran in an attempt to prepare a new class of tracers for sentinel lymph node detection, and can be extended to other radiopharmaceuticals for different targets. The statistical routine was based on a three-level factorial design. The best labeling conditions were achieved, with a specific activity of 5 MBq/μg. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A
2014-09-22
We present a numerical strategy to design fiber based dual pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented in a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
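A genetic algorithm of the kind used here to tune the input pulse parameters can be sketched as follows; the fitness is a simple stand-in (a sphere function) rather than a supercontinuum propagation model, and all parameter choices are illustrative:

```python
import random

def ga_optimize(fitness, bounds, pop=30, gens=60, seed=1):
    # Minimal real-coded GA: keep the fitter half as elites, breed children
    # by blend crossover (midpoint) plus Gaussian mutation (minimization).
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    P = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=fitness)[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.05 * (hi - lo))
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            child = [min(max(c, lo), hi) for c, (lo, hi) in zip(child, bounds)]
            children.append(child)
        P = elite + children
    return min(P, key=fitness)

# three decision variables standing in for wavelength, temporal width, peak power
best = ga_optimize(lambda p: sum((x - 0.5) ** 2 for x in p), [(0.0, 1.0)] * 3)
```

In the paper's setting, each fitness evaluation would instead require simulating pulse propagation, which is why the authors distribute evaluations over a Grid platform.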
NASA Astrophysics Data System (ADS)
Tavakoli, A.; Naeini, H. Moslemi; Roohi, Amir H.; Gollo, M. Hoseinpour; Shahabad, Sh. Imani
2018-01-01
In the 3D laser forming process, developing an appropriate laser scan pattern for producing specimens with high quality and uniformity is critical. This study presents certain principles for developing scan paths. Seven scan path parameters are considered, including: (1) combined linear or curved path; (2) type of combined linear path; (3) order of scan sequences; (4) position of the start point in each scan; (5) continuous or discontinuous scan path; (6) direction of scan path; and (7) angular arrangement of combined linear scan paths. Based on these path parameters, ten combined linear scan patterns are presented. Numerical simulations show that the continuous hexagonal scan pattern, scanned from the outer to the inner path, is the optimal one. In addition, the position of the start point and the angular arrangement of scan paths are observed to be the most influential path parameters. Further experiments show that using four scan sequences, which creates symmetric conditions, enhances the edge height and uniformity of the bowl-shaped products. Finally, the optimized hexagonal pattern was compared with a similar circular one. With the hexagonal scan path, the distortion value and the standard deviation relative to the edge height of the formed specimen are very low, and the edge height increases significantly compared to the circular scan path despite the shorter total scan length. As a result, the four-sequence hexagonal scan pattern is proposed as the optimal perimeter scan path for producing bowl-shaped products.
Naghibi Beidokhti, Hamid; Janssen, Dennis; van de Groes, Sebastiaan; Hazrati, Javad; Van den Boogaard, Ton; Verdonschot, Nico
2017-12-08
In finite element (FE) models, knee ligaments can be represented either by a group of one-dimensional springs or by three-dimensional continuum elements based on segmentations. Continuum models more closely approximate the anatomy and facilitate ligament wrapping, while spring models are computationally less expensive. The mechanical properties of ligaments can be based on literature or adjusted specifically for the subject. In the current study we investigated the effect of ligament modelling strategy on the predictive capability of FE models of the human knee joint. The effect of literature-based versus specimen-specific optimized material parameters was evaluated. Experiments were performed on three human cadaver knees, which were modelled in FE models with ligaments represented either using springs or using continuum representations. In the spring representation, collateral ligaments were each modelled with three single-element bundles and cruciate ligaments with two. Stiffness parameters and pre-strains were optimized based on laxity tests for both approaches. Validation experiments were conducted to evaluate the outcomes of the FE models. Models (both spring and continuum) with subject-specific properties improved the predicted kinematics and contact outcome parameters. Models incorporating literature-based parameters, and particularly the spring models (with the representations implemented in this study), led to relatively high errors in kinematics and contact pressures. Using a continuum modelling approach resulted in more accurate contact outcome variables than the spring representation with two (cruciate ligaments) and three (collateral ligaments) single-element-bundle representations. However, when the prediction of joint kinematics is of main interest, spring ligament models provide a faster option with acceptable outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ahrari, Ali; Deb, Kalyanmoy; Preuss, Mike
2017-01-01
During recent decades, many niching methods have been proposed and empirically verified on some available test problems. They often rely on particular assumptions about the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, normalized Mahalanobis distance, and Ursem's hill-valley function, in order to develop a new tool for multimodal optimization which does not make any of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance to the center of fitter subpopulations and the previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resultant method, called covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared to several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed, which assumes a rough estimate of the desired/expected number of minima is available. Performance sensitivity to the accuracy of this estimation is also studied by introducing the concept of robust mean peak ratio.
Based on the numerical results using the available and the introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered at the same time.
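The taboo-point mechanism can be illustrated as a normalized-distance rejection test: candidates falling within a given Mahalanobis radius of a taboo point are repelled. This is a minimal sketch of the idea, not the RS-CMSA implementation:

```python
def mahalanobis_sq(x, mean, inv_cov):
    # Squared Mahalanobis distance of x from mean, given an inverse covariance
    d = [xi - mi for xi, mi in zip(x, mean)]
    n = len(d)
    return sum(d[i] * inv_cov[i][j] * d[j] for i in range(n) for j in range(n))

def is_taboo(x, taboo_points, radius):
    # Reject candidates within the normalized radius of any taboo point;
    # each taboo point carries its own (mean, inverse covariance) pair
    return any(mahalanobis_sq(x, mean, inv_cov) < radius**2
               for mean, inv_cov in taboo_points)

inv_eye = [[1.0, 0.0], [0.0, 1.0]]          # identity covariance for the sketch
print(is_taboo((0.5, 0.5), [((0.0, 0.0), inv_eye)], 1.0))  # → True
```

Using the subpopulation's own covariance as the metric is what lets the test adapt to the local shape of each basin.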
Design Methods and Optimization for Morphing Aircraft
NASA Technical Reports Server (NTRS)
Crossley, William A.
2005-01-01
This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size and performance of several fixed-geometry aircraft that are Pareto-optimal based upon two competing aircraft performance objectives. The second area, titled "morphing as an independent variable", formulates the sizing of a morphing aircraft as an optimization problem in which the amounts of geometric morphing for various aircraft parameters are included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.
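The Pareto-optimality screening used in the first research area can be sketched as a non-dominance filter over candidate designs; the objective values below are hypothetical and both objectives are minimized:

```python
def pareto_front(designs):
    # designs: list of objective tuples, all objectives to be minimized.
    # Keep a design unless some other design is at least as good in every
    # objective and strictly better in at least one.
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [d for d in designs if not any(dominates(o, d) for o in designs)]

# hypothetical (objective 1, objective 2) pairs for fixed-geometry aircraft
print(pareto_front([(1, 5), (2, 3), (3, 4), (4, 1)]))  # → [(1, 5), (2, 3), (4, 1)]
```

Designs surviving the filter trace out the trade-off curve between the two competing performance objectives.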
Optimal growth trajectories with finite carrying capacity.
Caravelli, F; Sindoni, L; Caccioli, F; Ududec, C
2016-08-01
We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.
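For geometric Brownian motion with drift mu and volatility sigma, the Kelly criterion gives the closed-form optimal fraction f* = mu/sigma^2, which maximizes the expected log-growth rate g(f) = f*mu - (f*sigma)^2/2. A quick numerical check of the closed form against a grid search:

```python
def kelly_fraction(mu, sigma):
    # Closed-form Kelly fraction for geometric Brownian motion
    return mu / sigma**2

def growth_rate(f, mu, sigma):
    # Expected log-growth rate when a constant fraction f is invested
    return f * mu - 0.5 * (f * sigma) ** 2

# grid search over f in [0, 2] recovers the closed-form optimum (1.25 here)
best = max((growth_rate(f / 1000, 0.05, 0.2), f / 1000) for f in range(2001))
print(best[1], kelly_fraction(0.05, 0.2))
```

The finite-carrying-capacity case studied in the paper replaces this constant optimal fraction with an optimal trajectory of the control parameter.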
Optimal pricing policies for services with consideration of facility maintenance costs
NASA Astrophysics Data System (ADS)
Yeh, Ruey Huei; Lin, Yi-Fang
2012-06-01
For survival and success, pricing is an essential issue for service firms. This article deals with the pricing strategies for services with substantial facility maintenance costs. For this purpose, a mathematical framework that incorporates service demand and facility deterioration is proposed to address the problem. The facility and customers constitute a service system driven by Poisson arrivals and exponential service times. A service demand with increasing price elasticity and a facility lifetime with strictly increasing failure rate are also adopted in modelling. By examining the bidirectional relationship between customer demand and facility deterioration in the profit model, the pricing policies of the service are investigated. Then analytical conditions of customer demand and facility lifetime are derived to achieve a unique optimal pricing policy. The comparative statics properties of the optimal policy are also explored. Finally, numerical examples are presented to illustrate the effects of parameter variations on the optimal pricing policy.
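The pricing trade-off can be sketched numerically with a hypothetical exponential demand curve and a per-service maintenance cost; this is a toy stand-in, not the article's Poisson-arrival queueing model:

```python
import math

def profit(p, a=10.0, b=0.5, m=2.0):
    # Hypothetical profit: demand rate a*exp(-b*p) (increasing price
    # elasticity) times margin (p - m), where m is a per-service
    # facility maintenance cost
    lam = a * math.exp(-b * p)
    return (p - m) * lam

# grid search over prices; the analytic optimum is p* = m + 1/b = 4.0 here
best_p = max((profit(p / 100), p / 100) for p in range(1, 2001))[1]
print(best_p)
```

Setting the derivative of (p - m)*exp(-b*p) to zero gives p* = m + 1/b, matching the grid search; the article derives analogous optimality conditions for its bidirectional demand-deterioration model.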
The optimization of total laboratory automation by simulation of a pull-strategy.
Yang, Taho; Wang, Teng-Kuan; Li, Vincent C; Su, Chia-Lo
2015-01-01
Laboratory results are essential for physicians to diagnose medical conditions. Because of the critical role of medical laboratories, an increasing number of hospitals use total laboratory automation (TLA) to improve laboratory performance. Although the benefits of TLA are well documented, systems occasionally become congested, particularly when hospitals face peak demand. This study optimizes TLA operations. Firstly, value stream mapping (VSM) is used to identify the non-value-added time. Subsequently, batch processing control and parallel scheduling rules are devised and a pull mechanism that comprises a constant work-in-process (CONWIP) is proposed. Simulation optimization is then used to optimize the design parameters and to ensure a small inventory and a shorter average cycle time (CT). For empirical illustration, this approach is applied to a real case. The proposed methodology significantly improves the efficiency of laboratory work and leads to a reduction in patient waiting times and increased service level.
Design of a family of ring-core fibers for OAM transmission studies.
Brunet, Charles; Ung, Bora; Wang, Lixian; Messaddeq, Younès; LaRochelle, Sophie; Rusch, Leslie A
2015-04-20
We propose a family of ring-core fibers, designed for the transmission of OAM modes, that can be fabricated by drawing five different fibers from a single preform. This novel technique allows us to experimentally sweep design parameters and speed up the fiber design optimization process. Such a family of fibers could be used to examine system performance, but also facilitate understanding of parameter impact in the transition from design to fabrication. We present design parameters characterizing our fiber, and enumerate criteria to be satisfied. We determine targeted fiber dimensions and explain our strategy for examining a design family rather than a single fiber design. We simulate modal properties of the designed fibers, and compare the results with measurements performed on fabricated fibers.
Determining A Purely Symbolic Transfer Function from Symbol Streams: Theory and Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffin, Christopher H
Transfer function modeling is a standard technique in classical Linear Time Invariant control and Statistical Process Control. The work of Box and Jenkins was seminal in developing methods for identifying parameters associated with classical (r, s, k) transfer functions. Discrete event systems are often used for modeling hybrid control structures and high-level decision problems; examples include discrete time, discrete strategy repeated games. For these games, a discrete transfer function in the form of an accurate hidden Markov model of input-output relations could be used to derive optimal response strategies. In this paper, we develop an algorithm for creating probabilistic Mealy machines that act as transfer function models for discrete event dynamic systems (DEDS). Our models are defined by three parameters, (l1, l2, k), just as the Box-Jenkins transfer function models. Here l1 is the maximal input history length to consider, l2 is the maximal output history length to consider, and k is the response lag. Using related results, we show that our Mealy machine transfer functions are optimal in the sense that they maximize the mutual information between the current known state of the DEDS and the next observed input/output pair.
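The role of the (l1, l2, k) parameters can be illustrated by naive frequency counting over symbol streams: condition each output symbol on the last l1 inputs (lagged by k) and the last l2 outputs. This is a simplistic sketch of the idea, not the authors' algorithm:

```python
from collections import defaultdict

def estimate_transfer(inputs, outputs, l1=1, l2=1, k=1):
    # Estimate P(output_t | last l1 inputs lagged by k, last l2 outputs)
    # by counting co-occurrences in the observed symbol streams.
    counts = defaultdict(lambda: defaultdict(int))
    start = max(l1, l2) + k - 1
    for t in range(start, len(outputs)):
        state = (tuple(inputs[t - k - l1 + 1: t - k + 1]),
                 tuple(outputs[t - l2: t]))
        counts[state][outputs[t]] += 1
    return {s: {o: c / sum(d.values()) for o, c in d.items()}
            for s, d in counts.items()}

# a system whose output is simply the input delayed by one step
inputs = [0, 1, 1, 0] * 10
outputs = [0] + inputs[:-1]
model = estimate_transfer(inputs, outputs)
```

For this deterministic toy system, every conditional distribution collapses to probability 1 on the lagged input symbol.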
Sparsely sampling the sky: a Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Paykari, P.; Jaffe, A. H.
2013-08-01
The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. Making use of the principles of Bayesian experimental design, we investigate the advantages and disadvantages of sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.
Self-Learning Variable Structure Control for a Class of Sensor-Actuator Systems
Chen, Sanfeng; Li, Shuai; Liu, Bo; Lou, Yuesheng; Liang, Yongsheng
2012-01-01
Variable structure strategy is widely used for the control of sensor-actuator systems modeled by Euler-Lagrange equations. However, accurate knowledge of the model structure and model parameters is often required for the control design. In this paper, we consider model-free variable structure control of a class of sensor-actuator systems, where only the online input and output of the system are available while the mathematical model of the system is unknown. The problem is formulated from an optimal control perspective and the implicit form of the control law is analytically obtained by using the principle of optimality. The control law and the optimal cost function are then solved explicitly and iteratively. Simulations demonstrate the effectiveness and the efficiency of the proposed method. PMID:22778633
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, PGATS, based on the toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS). Several improvement strategies are also adopted: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a multi-extremum, multi-parameter global optimization problem; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local and global search, which overcomes the shortcomings of a single algorithm and gives full play to the advantages of each. The method is validated on the universal standard Fibonacci sequences and on real protein sequences. Experiments show that the proposed method outperforms single algorithms on the accuracy of calculating the protein sequence energy value, which proves it to be an effective way to predict the structure of proteins.
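The PSO component of such a hybrid can be sketched in its standard form (inertia plus cognitive and social terms); the objective below is a simple sphere test function, not the off-lattice protein energy, and all coefficients are conventional illustrative choices:

```python
import random

def pso(f, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Standard particle swarm optimization (minimization): velocities mix
    # inertia (w), attraction to each particle's best (c1), and attraction
    # to the swarm's best (c2).
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
                if f(X[i]) < f(gbest):
                    gbest = X[i][:]
    return gbest

best = pso(lambda x: sum(xi * xi for xi in x), [(-5.0, 5.0)] * 2)
```

In the hybrid, the GA and TS steps would perturb and refine the swarm to escape the many local minima of the protein energy landscape.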
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiau, Huai-Suen; Zenyuk, Iryna V.; Weber, Adam Z.
Water management is a serious concern for alkaline-exchange-membrane fuel cells (AEMFCs) because water is a reactant in the alkaline oxygen-reduction reaction and hydroxide conduction in alkaline-exchange membranes is highly hydration dependent. In this article, we develop and use a multiphysics, multiphase model to explore water management in AEMFCs. We demonstrate that the low performance is mostly caused by an extremely non-uniform distribution of water in the ionomer phase. A sensitivity analysis of design parameters including humidification strategies, membrane properties, and water transport resistance was undertaken to explore possible optimization strategies. Furthermore, the strategy for, and issues of, reducing bicarbonate/carbonate buildup in the membrane-electrode assembly with CO2 from air is demonstrated based on the model prediction. Overall, mathematical modeling is used to explore trends and strategies to overcome performance bottlenecks and help enable AEMFC commercialization.
Computer-intensive simulation of solid-state NMR experiments using SIMPSON.
Tošner, Zdeněk; Andersen, Rasmus; Stevensson, Baltzar; Edén, Mattias; Nielsen, Niels Chr; Vosegaard, Thomas
2014-09-01
Conducting large-scale solid-state NMR simulations requires fast computer software, potentially in combination with efficient computational resources, to complete within a reasonable time frame. Such simulations may involve large spin systems, multiple-parameter fitting of experimental spectra, or multiple-pulse experiment design using parameter scans, non-linear optimization, or optimal control procedures. To efficiently accommodate such simulations, we here present an improved version of the widely distributed open-source SIMPSON NMR simulation software package adapted to contemporary high-performance hardware setups. The software is optimized for fast performance on standard stand-alone computers, multi-core processors, and large clusters of identical nodes. We describe the novel features for fast computation, including internal matrix manipulations, propagator setups and acquisition strategies. For efficient calculation of powder averages, we implemented the interpolation method of Alderman, Solum, and Grant, as well as the recently introduced fast Wigner transform interpolation technique. The potential of the optimal control toolbox is greatly enhanced by higher-precision gradients in combination with the efficient optimization algorithm known as limited-memory Broyden-Fletcher-Goldfarb-Shanno. In addition, advanced parallelization can be used in all types of calculations, providing significant time reductions. SIMPSON thus reflects current knowledge in the field of numerical simulations of solid-state NMR experiments. The efficiency and novel features are demonstrated on representative simulations. Copyright © 2014 Elsevier Inc. All rights reserved.
Adaptive Multi-Agent Systems for Constrained Optimization
NASA Technical Reports Server (NTRS)
Macready, William; Bieniawski, Stefan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing that focuses the MAS more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a Multi-Agent System (MAS). We present computer experiments involving both the Queens problem and K-SAT, validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.
Milando, Chad W; Martenies, Sheena E; Batterman, Stuart A
2016-09-01
In air quality management, reducing emissions from pollutant sources often forms the primary response to attaining air quality standards and guidelines. Despite the broad success of air quality management in the US, challenges remain. As examples: allocating emissions reductions among multiple sources is complex and can require many rounds of negotiation; health impacts associated with emissions, the ultimate driver for the standards, are not explicitly assessed; and long dispersion model run-times, which result from the increasing size and complexity of model inputs, limit the number of scenarios that can be evaluated, thus increasing the likelihood of missing an optimal strategy. A new modeling framework, called the "Framework for Rapid Emissions Scenario and Health impact ESTimation" (FRESH-EST), is presented to respond to these challenges. FRESH-EST estimates concentrations and health impacts of alternative emissions scenarios at the urban scale, providing efficient computations from emissions to health impacts at the Census block or other desired spatial scale. In addition, FRESH-EST can optimize emission reductions to meet specified environmental and health constraints, and a convenient user interface and graphical displays are provided to facilitate scenario evaluation. The new framework is demonstrated in an SO2 non-attainment area in southeast Michigan with two optimization strategies: the first minimizes emission reductions needed to achieve a target concentration; the second minimizes concentrations while holding constant the cumulative emissions across local sources (e.g., an emissions floor). The optimized strategies match outcomes in the proposed SO2 State Implementation Plan without the proposed stack parameter modifications or shutdowns. In addition, the lower health impacts estimated for these strategies suggest that FRESH-EST could be used to identify potentially more desirable pollution control alternatives in air quality management planning. 
Copyright © 2016 Elsevier Ltd. All rights reserved.
Li, Xiaohong; Zhang, Yuyan
2018-01-01
The ultraviolet spectrophotometric method is often used for determining the content of glycyrrhizic acid from Chinese herbal medicine Glycyrrhiza glabra. Based on the traditional single variable approach, four extraction parameters of ammonia concentration, ethanol concentration, circumfluence time, and liquid-solid ratio are adopted as the independent extraction variables. In the present work, central composite design of four factors and five levels is applied to design the extraction experiments. Subsequently, the prediction models of response surface methodology, artificial neural networks, and genetic algorithm-artificial neural networks are developed to analyze the obtained experimental data, while the genetic algorithm is utilized to find the optimal extraction parameters for the above well-established models. It is found that the optimization of extraction technology is presented as ammonia concentration 0.595%, ethanol concentration 58.45%, return time 2.5 h, and liquid-solid ratio 11.065 : 1. Under these conditions, the model predictive value is 381.24 mg, the experimental average value is 376.46 mg, and the expectation discrepancy is 4.78 mg. For the first time, a comparative study of these three approaches is conducted for the evaluation and optimization of the effects of the extraction independent variables. Furthermore, it is demonstrated that the combinational method of genetic algorithm and artificial neural networks provides a more reliable and more accurate strategy for design and optimization of glycyrrhizic acid extraction from Glycyrrhiza glabra. PMID:29887907
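The genetic-algorithm search over a fitted prediction model described above can be sketched as follows. This is a minimal illustration only: a toy concave response surface stands in for the trained ANN surrogate, and the bounds, population size, and operators are hypothetical choices, not the study's settings (only the reported optimum and predicted yield are taken from the abstract).

```python
import random

# Hypothetical surrogate standing in for the fitted GA-ANN model: predicted
# yield (mg) peaks near the optimum reported in the abstract
# (ammonia 0.595%, ethanol 58.45%, time 2.5 h, liquid-solid ratio 11.065).
OPTIMUM = (0.595, 58.45, 2.5, 11.065)
BOUNDS = [(0.2, 1.0), (40.0, 80.0), (1.0, 4.0), (8.0, 14.0)]

def surrogate(x):
    # toy concave response surface; the real model would be an ANN
    return 381.24 - sum(((xi - oi) / (hi - lo)) ** 2
                        for xi, oi, (lo, hi) in zip(x, OPTIMUM, BOUNDS))

def genetic_search(fitness, bounds, pop=40, gens=60, seed=0):
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)   # elitist selection
        parents = population[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # crossover
            j = rng.randrange(len(bounds))                   # mutate one gene
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.05 * (hi - lo))))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = genetic_search(surrogate, BOUNDS)
```

The same loop applies unchanged if `surrogate` is replaced by a trained network's prediction function.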
NASA Astrophysics Data System (ADS)
Zheng, Yingying
Growing energy demand and the need to reduce carbon emissions are drawing increasing attention to the development of renewable energy technologies and management strategies. Microgrids have been developed around the world as a means to address high penetration levels of renewable generation and to reduce greenhouse gas emissions while balancing supply and demand at a more local level. This dissertation presents a model developed to optimize the design of a biomass-integrated renewable energy microgrid employing combined heat and power with energy storage. A receding horizon optimization with Monte Carlo simulation was used to evaluate optimal microgrid design and dispatch under uncertainties in the renewable energy and utility grid energy supplies, the energy demands, and the economic assumptions, so as to generate a probability density function for the cost of energy. Case studies were examined for a conceptual utility grid-connected microgrid application in Davis, California. The results provide the most cost-effective design based on the assumed energy load profile, local climate data, utility tariff structure, and the technical and financial performance of the various components of the microgrid. Sensitivity and uncertainty analyses are carried out to illuminate the key parameters that influence energy costs. The model application provides a means to determine the major risk factors associated with alternative design integration and operating strategies.
NASA Astrophysics Data System (ADS)
Cazzulani, Gabriele; Resta, Ferruccio; Ripamonti, Francesco
2012-04-01
In recent years, more and more mechanical applications have seen the introduction of active control strategies. In particular, the need to improve performance and/or system health is very often associated with vibration suppression. This goal can be achieved with both passive and active solutions. In this sense, many active control strategies have been developed, such as Independent Modal Space Control (IMSC) or resonant controllers (PPF, IRC, ...). In all these cases, knowledge of the system's dynamic behaviour is very important for tuning and optimizing the control strategy, and it can be obtained either from a numerical model of the system or through an experimental identification process. However, when dealing with non-linear or time-varying systems, a tool able to identify the system parameters online becomes a key point for control logic synthesis. The aim of the present work is the definition of a real-time technique, based on ARMAX models, that estimates the system parameters from the measurements of piezoelectric sensors. These parameters are returned to the control logic, which automatically adapts itself to the system dynamics. The problem is numerically investigated considering a carbon-fiber plate model forced through a piezoelectric patch.
NASA Astrophysics Data System (ADS)
Gaillot, P.; Bardaine, T.; Lyon-Caen, H.
2004-12-01
In recent years, various automatic phase pickers based on the wavelet transform have been developed. The main motivations for using wavelet transforms are that they are excellent at finding the characteristics of transient signals, they have good time resolution at all periods, and they are easy to program for fast execution. The time-scale properties and flexibility of wavelets thus allow detection of P and S phases in a broad frequency range, making their use possible in various contexts. However, directly applying an automatic picking program in a context or network different from the one for which it was initially developed quickly becomes tedious. In fact, independently of the strategy involved in automatic picking algorithms (window average, autoregressive, beamforming, optimization filtering, neural network), all developed algorithms use different parameters that depend on the objective of the seismological study, the region, and the seismological network. Classically, these parameters are defined manually by trial and error or calibrated in a learning stage. In order to facilitate this laborious process, we have developed an automated method that provides optimal parameters for the picking programs. The set of parameters can be explored using simulated annealing, which is a generic name for a family of optimization algorithms based on the principle of stochastic relaxation. The optimization process amounts to systematically modifying an initial realization so as to decrease the value of the objective function, getting the realization acceptably close to the target statistics.
Different formulations of the optimization problem (objective function) are discussed using (1) world seismicity data recorded by the French national seismic monitoring network (ReNass), (2) regional seismicity data recorded in the framework of the Corinth Rift Laboratory (CRL) experiment, (3) induced seismicity data from the gas field of Lacq (Western Pyrenees), and (4) micro-seismicity data from glacier monitoring. The developed method is discussed and tested using our wavelet version of the standard STA-LTA algorithm.
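The simulated-annealing exploration of picker parameters can be sketched as below. The objective here is a made-up smooth stand-in for the real picking misfit (which would count mis-picked phases against a reference catalog), and the parameter names (`sta` window length, trigger threshold `thr`), starting point, and cooling schedule are all illustrative assumptions.

```python
import math
import random

# Toy objective standing in for the picking misfit, as a function of an
# STA window length and a trigger threshold; the (hypothetical) optimum
# is placed at sta=0.5 s, thr=3.0.
def misfit(sta, thr):
    return (sta - 0.5) ** 2 + 0.1 * (thr - 3.0) ** 2

def anneal(objective, x0, steps=2000, t0=1.0, seed=1):
    rng = random.Random(seed)
    x, fx = list(x0), objective(*x0)
    best, fbest = list(x), fx
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9      # linear cooling schedule
        cand = [xi + rng.gauss(0, 0.1) for xi in x]
        fc = objective(*cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

params, value = anneal(misfit, [2.0, 6.0])
```

Swapping `misfit` for a function that runs the picker and scores its output against analyst picks gives the calibration loop the abstract describes.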
Optimal control analysis of Ebola disease with control strategies of quarantine and vaccination.
Ahmad, Muhammad Dure; Usman, Muhammad; Khan, Adnan; Imran, Mudassar
2016-07-13
The 2014 Ebola epidemic is the largest in history, affecting multiple countries in West Africa. Some isolated cases were also observed in other regions of the world. In this paper, we introduce a deterministic SEIR type model with additional hospitalization, quarantine and vaccination components in order to understand the disease dynamics. Optimal control strategies, both in the case of hospitalization (with and without quarantine) and vaccination, are used to predict the possible future outcome in terms of resource utilization for disease control and the effectiveness of vaccination on sick populations. Further, with the help of uncertainty and sensitivity analysis, we have also identified the most sensitive parameters, i.e., those which most effectively change the disease dynamics. We applied dynamical systems tools, numerical simulations, and optimal control strategies to our Ebola virus models. The original model, which allowed transmission of Ebola virus via human contact, was extended to include imperfect vaccination and quarantine. After the qualitative analysis of all three forms of the Ebola model, numerical techniques, using MATLAB as a platform, were formulated and analyzed in detail. Our simulation results support the claims made in the qualitative section. Our model incorporates an important component of individuals at high risk of exposure to the disease, such as front-line health care workers, family members of EVD patients and individuals involved in burial of deceased EVD patients, rather than the general population in the affected areas. Our analysis suggests that in order for R0 (i.e., the basic reproduction number) to be less than one, which is the basic requirement for disease elimination, the transmission rate of isolated individuals should be less than one-fourth of that for non-isolated ones.
Our analysis also predicts, we need high levels of medication and hospitalization at the beginning of an epidemic. Further, optimal control analysis of the model suggests the control strategies that may be adopted by public health authorities in order to reduce the impact of epidemics like Ebola.
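The qualitative effect of cutting transmission, as in the quarantine scenario above, can be seen in a bare-bones SEIR simulation. This sketch uses a plain closed-population SEIR model with Euler integration; the rate values are illustrative assumptions, not the parameters fitted in the paper, and the hospitalization/vaccination compartments are omitted.

```python
# Minimal SEIR sketch: Euler integration of a closed-population model,
# comparing an uncontrolled transmission rate with one reduced by
# quarantine. All parameter values are hypothetical.
def seir_final_size(beta, sigma=1 / 9.0, gamma=1 / 6.0, days=600, dt=0.1):
    s, e, i, r = 0.999, 0.0, 0.001, 0.0   # fractions of the population
    for _ in range(int(days / dt)):
        new_inf = beta * s * i            # force of infection
        s -= dt * new_inf
        e += dt * (new_inf - sigma * e)   # incubation at rate sigma
        i += dt * (sigma * e - gamma * i) # recovery/removal at rate gamma
        r += dt * gamma * i
    return r                              # fraction ever infected

high = seir_final_size(beta=0.3)   # no control: R0 = beta/gamma = 1.8
low = seir_final_size(beta=0.12)   # quarantine cuts beta: R0 = 0.72 < 1
```

With R0 above one the epidemic infects a large fraction of the population; pushing R0 below one, as the abstract's condition on isolated-individual transmission aims to do, confines it to a minor outbreak.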
The importance of functional form in optimal control solutions of problems in population dynamics
Runge, M.C.; Johnson, F.A.
2002-01-01
Optimal control theory is finding increased application in both theoretical and applied ecology, and it is a central element of adaptive resource management. One of the steps in an adaptive management process is to develop alternative models of system dynamics, models that are all reasonable in light of available data, but that differ substantially in their implications for optimal control of the resource. We explored how the form of the recruitment and survival functions in a general population model for ducks affected the patterns in the optimal harvest strategy, using a combination of analytical, numerical, and simulation techniques. We compared three relationships between recruitment and population density (linear, exponential, and hyperbolic) and three relationships between survival during the nonharvest season and population density (constant, logistic, and one related to the compensatory harvest mortality hypothesis). We found that the form of the component functions had a dramatic influence on the optimal harvest strategy and the ultimate equilibrium state of the system. For instance, while it is commonly assumed that a compensatory hypothesis leads to higher optimal harvest rates than an additive hypothesis, we found this to depend on the form of the recruitment function, in part because of differences in the optimal steady-state population density. This work has strong direct consequences for those developing alternative models to describe harvested systems, but it is relevant to a larger class of problems applying optimal control at the population level. Often, different functional forms will not be statistically distinguishable in the range of the data. Nevertheless, differences between the functions outside the range of the data can have an important impact on the optimal harvest strategy. 
Thus, development of alternative models by identifying a single functional form, then choosing different parameter combinations from extremes on the likelihood profile may end up producing alternatives that do not differ as importantly as if different functional forms had been used. We recommend that biological knowledge be used to bracket a range of possible functional forms, and robustness of conclusions be checked over this range.
Performance Analysis of Hybrid Electric Vehicle over Different Driving Cycles
NASA Astrophysics Data System (ADS)
Panday, Aishwarya; Bansal, Hari Om
2017-02-01
This article aims to find the nature and response of a hybrid vehicle on various standard driving cycles. Road profile parameters play an important role in determining fuel efficiency. The typical parameters of a road profile can be reduced to a useful smaller set using principal component analysis and independent component analysis. The resulting reduced data set may yield a more appropriate and important parameter cluster. With the reduced parameter set, fuel economies over various driving cycles are ranked using the TOPSIS and VIKOR multi-criteria decision making methods. The ranking trend is then compared with the fuel economies achieved after driving the vehicle over the respective roads. The control strategy responsible for the power split is optimized using a genetic algorithm. A 1RC battery model and a modified SOC estimation method are considered for the simulation, and improved results compared with the default are obtained.
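The TOPSIS ranking used above scores each alternative by its closeness to an ideal solution and distance from the worst one. A minimal sketch follows; the decision matrix (three driving cycles, three criteria), the weights, and the benefit/cost flags are made-up illustrations, not the paper's data.

```python
import math

# Minimal TOPSIS: rank alternatives by relative closeness to the ideal point.
def topsis(matrix, weights, benefit):
    # vector-normalize each column, then apply criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(len(weights))]
    v = [[w * row[j] / norms[j] for j, w in enumerate(weights)]
         for row in matrix]
    # ideal takes the best value per criterion, anti-ideal the worst
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, worst)
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# rows: driving cycles; columns (hypothetical): mean speed (benefit),
# stops per km (cost), fuel use l/100km (cost)
cycles = [[45, 3, 5.2], [30, 8, 6.8], [60, 1, 4.9]]
scores = topsis(cycles, weights=[0.3, 0.3, 0.4], benefit=[True, False, False])
```

The cycle with the highest score is ranked best; VIKOR would aggregate the same normalized matrix with a different compromise measure.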
The value of compressed air energy storage in energy and reserve markets
Drury, Easan; Denholm, Paul; Sioshansi, Ramteen
2011-06-28
Storage devices can provide several grid services, however it is challenging to quantify the value of providing several services and to optimally allocate storage resources to maximize value. We develop a co-optimized Compressed Air Energy Storage (CAES) dispatch model to characterize the value of providing operating reserves in addition to energy arbitrage in several U.S. markets. We use the model to: (1) quantify the added value of providing operating reserves in addition to energy arbitrage; (2) evaluate the dynamic nature of optimally allocating storage resources into energy and reserve markets; and (3) quantify the sensitivity of CAES net revenues to several design and performance parameters. We find that conventional CAES systems could earn an additional 23 ± 10/kW-yr by providing operating reserves, and adiabatic CAES systems could earn an additional 28 ± 13/kW-yr. We find that arbitrage-only revenues are unlikely to support a CAES investment in most market locations, but the addition of reserve revenues could support a conventional CAES investment in several markets. Adiabatic CAES revenues are not likely to support an investment in most regions studied. As a result, modifying CAES design and performance parameters primarily impacts arbitrage revenues, and optimizing CAES design will be nearly independent of dispatch strategy.
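The energy-arbitrage component of such a dispatch model reduces, in its simplest form, to buying energy in cheap hours and selling in expensive ones, net of a round-trip efficiency penalty. The sketch below is a greedy one-day toy with hypothetical prices and a hypothetical efficiency figure; the paper's co-optimization over energy and reserve markets is a substantially richer problem (typically an LP/MIP with storage-state constraints).

```python
# Greedy arbitrage-only dispatch sketch on hypothetical hourly prices:
# compress (charge) in the cheapest hours, generate (discharge) in the
# dearest, discounting revenue by a round-trip efficiency factor.
def arbitrage_profit(prices, hours_each=3, efficiency=0.75):
    order = sorted(range(len(prices)), key=prices.__getitem__)
    buy = order[:hours_each]            # cheapest hours: charge
    sell = order[-hours_each:]          # dearest hours: discharge
    revenue = sum(prices[h] for h in sell) * efficiency
    cost = sum(prices[h] for h in buy)
    return revenue - cost

# one illustrative day of hourly prices ($/MWh, made up)
day = [22, 20, 19, 21, 25, 35, 48, 60, 55, 40, 30, 26,
       24, 23, 27, 33, 45, 62, 70, 58, 42, 31, 25, 23]
profit = arbitrage_profit(day)
```

Adding reserve markets means each hour also chooses how much capacity to withhold from arbitrage and offer as reserves, which is where the co-optimization in the paper comes in.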
Weak-value amplification and optimal parameter estimation in the presence of correlated noise
NASA Astrophysics Data System (ADS)
Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.
2017-11-01
We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. 
We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.
Determination of the optimal mesh parameters for Iguassu centrifuge flow and separation calculations
NASA Astrophysics Data System (ADS)
Romanihin, S. M.; Tronin, I. V.
2016-09-01
We present the method and results of the determination of optimal computational mesh parameters for axisymmetric modeling of flow and separation in the Iguassu gas centrifuge. The aim of this work was to determine mesh parameters which provide relatively low computational cost without loss of accuracy. We use a direct search optimization algorithm to calculate the optimal mesh parameters. The obtained parameters were tested by calculating the optimal working regime of the Iguassu GC. The separative power calculated using the optimal mesh parameters differs by less than 0.5% from the result obtained on the detailed mesh. The presented method can be used to determine optimal mesh parameters of the Iguassu GC with different rotor speeds.
An improved grey wolf optimizer algorithm for the inversion of geoelectrical data
NASA Astrophysics Data System (ADS)
Li, Si-Yu; Wang, Shu-Ming; Wang, Peng-Fei; Su, Xiao-Lu; Zhang, Xin-Song; Dong, Zhi-Hui
2018-05-01
The grey wolf optimizer (GWO) is a novel bionics algorithm inspired by the social rank and prey-seeking behaviors of grey wolves. The GWO algorithm is easy to implement because of its basic concept, simple formulas, and small number of parameters. This paper develops a GWO algorithm with a nonlinear convergence factor and an adaptive location updating strategy, and applies this improved grey wolf optimizer (IGWO) algorithm to geophysical inversion problems using magnetotelluric (MT), DC resistivity and induced polarization (IP) methods. Numerical tests in MATLAB 2010b on forward modeling data and observed data show that the IGWO algorithm can find the global minimum and rarely sinks to local minima. For further study, results inverted using the IGWO are contrasted with those from particle swarm optimization (PSO) and the simulated annealing (SA) algorithm. The comparison reveals that the IGWO and PSO perform similarly, and both balance exploration and exploitation better than the SA for a given number of iterations.
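The core GWO update moves each wolf toward the three best solutions (alpha, beta, delta), with a convergence factor a that shrinks over the iterations. The sketch below runs it on a 2-D sphere function and uses a quadratic decay a(t) = 2(1 - (t/T)^2) as one plausible form of "nonlinear convergence factor"; the paper's actual factor and its adaptive location-updating strategy are not specified here, so this is an assumption-laden toy, not the IGWO.

```python
import random

# Bare-bones GWO on a 2-D sphere function with a (hypothetical) nonlinear
# convergence factor in place of the standard linear decay a = 2*(1 - t/T).
def gwo(f, dim=2, wolves=20, iters=200, seed=3):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        X.sort(key=f)                          # best three lead the pack
        alpha, beta, delta = X[0], X[1], X[2]
        a = 2 * (1 - (t / iters) ** 2)         # nonlinear convergence factor
        for i in range(3, wolves):
            new = []
            for d in range(dim):
                parts = []
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)   # exploration when |A|>1
                    C = 2 * rng.random()
                    parts.append(leader[d] - A * abs(C * leader[d] - X[i][d]))
                new.append(sum(parts) / 3)     # average pull of the leaders
            X[i] = new
    return min(X, key=f)

sphere = lambda x: sum(v * v for v in x)
best = gwo(sphere)
```

For the geophysical inversions in the paper, `sphere` would be replaced by the data misfit between observed and forward-modeled MT/DC/IP responses.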
A Simplified Model of Choice Behavior under Uncertainty
Lin, Ching-Hung; Lin, Yu-Kai; Song, Tzu-Jiun; Huang, Jong-Tsun; Chiu, Yao-Chu
2016-01-01
The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. In recent years, however, studies have demonstrated that models with the prospect utility (PU) function are more effective than the EU models in the IGT (Ahn et al., 2008). Nevertheless, after some preliminary tests based on our behavioral dataset and modeling, it was determined that the Ahn et al. (2008) PU model is not optimal due to some incompatible results. This study aims to modify the Ahn et al. (2008) PU model into a simplified model, using the IGT performance of 145 subjects as the benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was mostly found as the value of α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the influence of the parameters α, λ, and A has a hierarchical power structure in terms of manipulating the goodness-of-fit in the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted a gain-stay, loss-shift strategy rather than foreseeing the long-term outcome. However, there are other behavioral variables that are not well revealed under these dynamic-uncertainty situations. Therefore, the optimal behavioral models may not have been found yet. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated. PMID:27582715
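The key observation above — that λ and A lose influence as α approaches zero — follows directly from the shape of the prospect utility function. A minimal sketch (parameter values illustrative, not fitted):

```python
# Prospect-utility shape used in such IGT models:
#   u(x) = x**alpha           for gains (x >= 0)
#   u(x) = -lam * (-x)**alpha for losses
# As alpha -> 0, x**alpha -> 1 for any positive magnitude, so utility
# depends mainly on the SIGN of the outcome: gain-stay / loss-shift.
def prospect_utility(x, alpha, lam):
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# with alpha near zero, a $100 gain and a $10 gain have nearly equal utility
u_big = prospect_utility(100, alpha=0.01, lam=2.25)
u_small = prospect_utility(10, alpha=0.01, lam=2.25)
```

This is why, in the simplified model, magnitude-sensitive parameters become nearly unidentifiable once α is small: the utilities collapse toward ±1 (scaled by λ on the loss side only).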
Nonlinear robust controller design for multi-robot systems with unknown payloads
NASA Technical Reports Server (NTRS)
Song, Y. D.; Anderson, J. N.; Homaifar, A.; Lai, H. Y.
1992-01-01
This work is concerned with the control problem of a multi-robot system handling a payload with unknown mass properties. Force constraints at the grasp points are considered. Robust control schemes are proposed that cope with the model uncertainty and achieve asymptotic path tracking. To deal with the force constraints, a strategy for optimally sharing the task is suggested. This strategy basically consists of two steps. The first detects the robots that need help and the second arranges that help. It is shown that the overall system is not only robust to uncertain payload parameters, but also satisfies the force constraints.
NASA Astrophysics Data System (ADS)
Wright, Jason T.
The discovery of exoplanets has both focused and expanded the search for extraterrestrial intelligence. The consideration of Earth as an exoplanet, the knowledge of the orbital parameters of individual exoplanets, and our new understanding of the prevalence of exoplanets throughout the galaxy have all altered the search strategies of communication SETI efforts, by inspiring new "Schelling points" (i.e. optimal search strategies for beacons). Future efforts to characterize individual planets photometrically and spectroscopically, with imaging and via transit, will also allow for searches for a variety of technosignatures on their surfaces, in their atmospheres, and in orbit around them. In the near-term, searches for new planetary systems might even turn up free-floating megastructures.
More efficient evolutionary strategies for model calibration with watershed model for demonstration
NASA Astrophysics Data System (ADS)
Baggett, J. S.; Skahill, B. E.
2008-12-01
Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but which when combined have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability, and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but which, when selected near a smooth local minimum, can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergetic effect is greater than the sum of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades, at worst, to an ordinary evolutionary strategy if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems.
Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.), Towards a New Evolutionary Computation: Advances in Estimation of Distribution Algorithms, pp. 75-102. Springer.
Kern, S., Hansen, N. and Koumoutsakos, P. (2006). Local Meta-Models for Optimization Using Evolution Strategies. In Ninth International Conference on Parallel Problem Solving from Nature (PPSN IX), Proceedings, pp. 939-948. Berlin: Springer.
Tahk, M., Woo, H. and Park, M. (2007). A hybrid optimization of evolutionary and gradient search. Engineering Optimization, 39, 87-104.
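The surrogate pre-screening idea credited to Kern et al. (2006) above can be illustrated with a stripped-down (mu, lambda) evolution strategy: a cheap surrogate ranks all offspring, and only the most promising fraction receives an "expensive" objective call. Everything here is a toy — a quadratic stands in for the watershed model, a slightly mis-scaled quadratic for its surrogate, and the population sizes, screening fraction, and step-size schedule are arbitrary assumptions (CMA-ES itself adapts a full covariance matrix, which this sketch omits).

```python
import random

def expensive(x):                      # stands in for a watershed model run
    return (x[0] - 1.0) ** 2 + 10 * (x[1] + 2.0) ** 2

def surrogate(x):                      # cheap, imperfect approximation
    return (x[0] - 1.0) ** 2 + 8 * (x[1] + 2.0) ** 2

def es_with_surrogate(mu=5, lam=20, screen=0.5, iters=80, seed=7):
    rng = random.Random(seed)
    parents = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(mu)]
    calls = 0
    sigma = 1.0
    for _ in range(iters):
        offspring = []
        for _ in range(lam):
            p = rng.choice(parents)
            offspring.append([v + rng.gauss(0, sigma) for v in p])
        offspring.sort(key=surrogate)               # cheap ranking of all
        short = offspring[:max(mu, int(screen * lam))]
        short.sort(key=expensive)                   # expensive calls only here
        calls += len(short)
        parents = short[:mu]
        sigma *= 0.95                               # crude step-size decay
    return parents[0], calls

best, model_runs = es_with_surrogate()
```

Here only half the offspring per generation cost a model run, in the spirit of the 10%-25% run-count reduction reported for the full hybrid.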
IDH mutation assessment of glioma using texture features of multimodal MR images
NASA Astrophysics Data System (ADS)
Zhang, Xi; Tian, Qiang; Wu, Yu-Xia; Xu, Xiao-Pan; Li, Bao-Juan; Liu, Yi-Xiong; Liu, Yang; Lu, Hong-Bing
2017-03-01
Purpose: To 1) find effective texture features from multimodal MRI that can distinguish IDH-mutant from IDH-wild-type status, and 2) propose a radiomic strategy for preoperatively detecting IDH mutation in patients with glioma. Materials and Methods: 152 patients with glioma were retrospectively included from the Cancer Genome Atlas. The corresponding T1-weighted images before and after contrast, T2-weighted images and fluid-attenuation inversion recovery images from the Cancer Imaging Archive were analyzed. Specific statistical tests were applied to analyze the different kinds of baseline information of LrGG patients. Finally, 168 texture features were derived from the multimodal MRI of each patient. Support vector machine-based recursive feature elimination (SVM-RFE) and a classification strategy were then adopted to find the optimal feature subset and build the identification models for detecting IDH mutation. Results: Among the 152 patients, 92 and 60 were confirmed to be IDH-wild-type and IDH-mutant, respectively. Statistical analysis showed that the patients without IDH mutation were significantly older than patients with IDH mutation (p<0.01), and the distribution of some histological subtypes was significantly different between the IDH wild-type and mutant groups (p<0.01). After SVM-RFE, 15 optimal features were determined for IDH mutation detection. The accuracy, sensitivity, specificity, and AUC after SVM-RFE and parameter optimization were 82.2%, 85.0%, 78.3%, and 0.841, respectively. Conclusion: This study presented a radiomic strategy for noninvasively discriminating the IDH mutation status of patients with glioma. It effectively incorporated various texture features from multimodal MRI and an SVM-based classification strategy. The results suggest that the features selected by SVM-RFE have greater potential for identifying IDH mutation. The proposed radiomics strategy could facilitate clinical decision making in patients with glioma.
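The recursive-feature-elimination loop behind SVM-RFE repeatedly fits a linear model and drops the feature with the smallest absolute weight. To keep the sketch dependency-free, a correlation score stands in for the SVM weight vector; the study itself used an SVM, and the toy data below (feature 1 is noise) is entirely made up.

```python
# RFE sketch: iteratively drop the weakest feature by a linear relevance
# score. A Pearson correlation stands in for SVM weights here.
def correlation_score(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def rfe(X, y, n_keep):
    active = list(range(len(X[0])))
    while len(active) > n_keep:
        scores = {j: abs(correlation_score([row[j] for row in X], y))
                  for j in active}
        active.remove(min(active, key=scores.get))  # drop weakest feature
    return active

# toy data: feature 0 tracks the target, feature 1 is noise, feature 2
# anti-tracks it (both 0 and 2 are informative, 1 is not)
X = [[1, 5, -1], [2, 1, -2], [3, 4, -3], [4, 2, -4], [5, 5, -5], [6, 1, -6]]
y = [1, 2, 3, 4, 5, 6]
kept = rfe(X, y, n_keep=2)
```

In the study's setting, `X` would hold the 168 texture features per patient, the scorer would be the weight magnitude of a fitted linear SVM, and the loop would stop at the cross-validated optimum (15 features).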
A systematic review on the composting of green waste: Feedstock quality and optimization strategies.
Reyes-Torres, M; Oviedo-Ocaña, E R; Dominguez, I; Komilis, D; Sánchez, A
2018-04-27
Green waste (GW) is an important fraction of municipal solid waste (MSW). The composting of lignocellulosic GW is challenging due to its low decomposition rate. Recently, an increasing number of studies that include strategies to optimize GW composting have appeared in the literature. This literature review focuses on the physicochemical quality of GW and on the effect of strategies used to improve the process and product quality. A systematic search was carried out, using keywords, and 447 papers published between 2002 and 2018 were identified. After a screening process, 41 papers addressing feedstock quality and 32 papers on optimization strategies were selected to be reviewed and analyzed in detail. The GW composition is highly variable due to the diversity of the source materials, the type of vegetation, and climatic conditions. This variability limits a strict categorization of the GW physicochemical characteristics. However, this research established that the predominant features of GW are a C/N ratio higher than 25, a deficit in important nutrients, namely nitrogen (0.5-1.5% db), phosphorus (0.1-0.2% db) and potassium (0.4-0.8% db), and a high content of recalcitrant organic compounds (e.g. lignin). The promising strategies to improve composting of GW were: i) GW particle size reduction (e.g. shredding and separation of GW fractions); ii) addition of energy amendments (e.g. non-refined sugar, phosphate rock, food waste, volatile ashes), bulking materials (e.g. biocarbon, wood chips), or microbial inoculum (e.g. fungal consortia); and iii) variations in operating parameters (aeration, temperature, and two-phase composting). These alternatives have successfully led to the reduction of process length and have managed to transform recalcitrant substances into a high-quality end-product. Copyright © 2018 Elsevier Ltd. All rights reserved.
Stabentheiner, Anton; Kovac, Helmut
2014-01-01
Heterothermic insects like honeybees, foraging in a variable environment, face the challenge of keeping their body temperature high to enable immediate flight and to promote fast exploitation of resources. Because of their small size they have to cope with an enormous heat loss and, therefore, high costs of thermoregulation. This calls for energetic optimisation, which may be achieved by different strategies. An ‘economizing’ strategy would be to reduce energetic investment whenever possible, for example by using external heat from the sun for thermoregulation. An ‘investment-guided’ strategy, by contrast, would be to invest additional heat production or external heat gain to optimize physiological parameters like body temperature which promise increased energetic returns. Here we show how honeybees balance these strategies in response to changes of their local microclimate. In a novel approach of simultaneous measurement of respiration and body temperature, foragers displayed a flexible strategy of thermoregulatory and energetic management. While foraging in shade on an artificial flower they did not save energy with increasing ambient temperature as expected but acted according to an ‘investment-guided’ strategy, keeping the energy turnover at a high level (∼56–69 mW). This increased thorax temperature and speeded up foraging as ambient temperature increased. Solar heat was invested to increase thorax temperature at low ambient temperature (‘investment-guided’ strategy) but to save energy at high temperature (‘economizing’ strategy), leading to energy savings per stay of ∼18–76% in sunshine. This flexible economic strategy minimized costs of foraging, and optimized energetic efficiency in response to broad variation of environmental conditions. PMID:25162211
COMSATCOM service technical baseline strategy development approach using PPBW concept
NASA Astrophysics Data System (ADS)
Nguyen, Tien M.; Guillen, Andy T.
2016-05-01
This paper presents an innovative approach to developing a Commercial Satellite Communications (COMSATCOM) service Technical Baseline (TB) and associated Program Baseline (PB) strategy using the Portable Pool Bandwidth (PPBW) concept. The concept involves trading the purchased commercial transponders' bandwidths (BWs) with existing commercial satellites' bandwidths participating in a "designated pool bandwidth" according to agreed terms and conditions. Space and Missile Systems Center (SMC) has been implementing the Better Buying Power (BBP 3.0) directive and recommending that the System Program Offices (SPO) own the Program and Technical Baseline (PTB) [1, 2] for the development of a flexible acquisition strategy, achieving affordability and increased competition. This paper defines and describes the critical PTB parameters and associated requirements that are important to the government SPO for "owning" an affordable COMSATCOM services contract using the PPBW trading concept. The paper describes a step-by-step approach to optimally performing the PPBW trading to meet DoD and its stakeholders' (i) affordability requirement, and (ii) fixed and variable bandwidth requirements by optimizing communications performance, cost and PPBW accessibility in terms of Quality of Service (QoS), Bandwidth Sharing Ratio (BSR), Committed Information Rate (CIR), Burstable Information Rate (BIR), transponder equivalent bandwidth (TPE) and transponder Net Present Value (NPV). The affordable optimal solution that meets variable bandwidth requirements will consider the operating and trading terms and conditions described in the Fair Access Policy (FAP).
Tommasino, Paolo; Campolo, Domenico
2017-02-03
In this work, we address human-like motor planning in redundant manipulators. Specifically, we want to capture postural synergies such as Donders' law, experimentally observed in humans during kinematically redundant tasks, and infer a minimal set of parameters to implement similar postural synergies in a kinematic model. For the model itself, although the focus of this paper is to solve redundancy by implementing postural strategies derived from experimental data, we also want to ensure that such postural control strategies do not interfere with other possible forms of motion control (in the task space), i.e. solving the posture/movement problem. The redundancy problem is framed as a constrained optimization problem, traditionally solved via the method of Lagrange multipliers. The posture/movement problem can be tackled via the separation principle which, derived from experimental evidence, posits that the brain processes static torques (i.e. posture-dependent, such as gravitational torques) separately from dynamic torques (i.e. velocity-dependent). The separation principle has traditionally been applied at the joint-torque level. Our main contribution is to apply the separation principle to the Lagrange multipliers, which act as task-space force fields, leading to a task-space separation principle. In this way, we can separate postural control (implementing Donders' law) from various types of task-space movement planners. As an example, the proposed framework is applied to the (redundant) task of pointing with the human wrist. Nonlinear inverse optimization (NIO) is used to fit the model parameters and to capture motor strategies displayed by six human subjects during pointing tasks. The novelty of our NIO approach is that (i) the fitted motor strategy, rather than raw data, is used to filter and down-sample human behaviours; (ii) our framework is used to efficiently simulate model behaviour iteratively, until it converges towards the experimental human strategies.
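The constrained-optimization view of redundancy used above can be made concrete with a toy example. The sketch below solves a redundant reaching task for a planar 3-link arm: among all postures whose fingertip hits the target, pick the one minimizing a quadratic "comfort" cost. The link lengths, cost function, and target are illustrative choices, not the paper's wrist model.

```python
import numpy as np
from scipy.optimize import minimize

L = np.array([0.3, 0.25, 0.2])          # link lengths (m), hypothetical

def fk(q):
    """Planar 3-link forward kinematics: joint angles -> fingertip (x, y)."""
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def posture_cost(q, q0=np.zeros(3)):
    """Quadratic 'comfort' cost around a preferred posture q0 (a stand-in
    for the experimentally fitted postural cost)."""
    return 0.5 * float(np.sum((q - q0) ** 2))

def solve_redundancy(target):
    """Among all postures reaching `target`, return the one minimizing the
    posture cost; the solver's Lagrange multipliers on the equality
    constraint play the role of task-space forces."""
    cons = {"type": "eq", "fun": lambda q: fk(q) - target}
    res = minimize(posture_cost, x0=np.array([0.5, 0.5, 0.5]),
                   constraints=[cons])
    return res.x

q_star = solve_redundancy(np.array([0.4, 0.3]))
```

With a 2-D task and 3 joints there is a one-parameter family of solutions; the cost selects a unique posture, which is exactly how a Donders-like synergy resolves the redundancy.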
NASA Astrophysics Data System (ADS)
Rodríguez-Escales, Paula; Folch, Albert; van Breukelen, Boris M.; Vidal-Gavilan, Georgina; Sanchez-Vila, Xavier
2016-07-01
Enhanced In situ Biodenitrification (EIB) is a promising technology for nitrate removal from subsurface water resources. Optimizing the performance of EIB implies devising an appropriate feeding strategy involving two design parameters: the carbon injection frequency and the C:N ratio of the organic substrate-nitrate mixture. Here we model data on the spatial and temporal evolution of nitrate (up to 1.2 mM), organic carbon (ethanol), and biomass measured during a 342-day-long laboratory column experiment (published in Vidal-Gavilan et al., 2014). At the end of the experiment, effective porosity was 3% lower and dispersivity was sevenfold higher than at the beginning. These changes in transport parameters were attributed to the development of a biofilm. A reactive transport model explored the EIB performance in response to daily and weekly feeding strategies. The latter resulted in significant temporal variation in nitrate and ethanol concentrations at the outlet of the column. On the contrary, a daily feeding strategy resulted in quite stable and low concentrations at the outlet and complete denitrification. At intermediate times (six months into the experiment), it was possible to reduce the carbon load and consequently the C:N ratio (from 2.5 to 1), partly because biomass decay acted as an endogenous carbon source for respiration, maintaining the denitrification rates, and partly due to the increased dispersivity caused by the well-developed biofilm, which enhanced mixing between the ethanol and nitrate and correspondingly improved the denitrification rates. The inclusion of a dual-domain model improved the fit in the last days of the experiment as well as in the tracer test performed at day 342, suggesting a transition to anomalous transport possibly caused by the development of biofilm.
This modeling work is a step forward to devising optimal injection conditions and substrate rates to enhance EIB performance by minimizing the overall supply of electron donor, and thus the cost of the remediation strategy.
Retrieval of Winter Wheat Leaf Area Index from Chinese GF-1 Satellite Data Using the PROSAIL Model.
Li, He; Liu, Gaohuan; Liu, Qingsheng; Chen, Zhongxin; Huang, Chong
2018-04-06
Leaf area index (LAI) is one of the key biophysical parameters of crop structure. Accurate quantitative estimation of crop LAI is essential for monitoring crop growth and health. The PROSAIL radiative transfer model (RTM) is one of the most established methods for estimating crop LAI. In this study, a look-up table (LUT) based on the PROSAIL RTM was first used to estimate winter wheat LAI from GF-1 data, incorporating available prior knowledge of the distribution of winter wheat characteristics. Next, the effects of 15 LAI-LUT strategies based on reflectance bands and 10 LAI-LUT strategies based on vegetation indexes on the accuracy of the winter wheat LAI retrieval at different phenological stages were evaluated against in situ LAI measurements. The results showed that the LAI-GNDVI LUT strategy was optimal during the elongation stages, with the highest accuracy (root mean squared error, RMSE, of 0.34 and coefficient of determination, R², of 0.61), while the LAI-Green LUT strategy was optimal during the grain-filling stages (RMSE of 0.74, R² of 0.20). The results demonstrated that the PROSAIL RTM has great potential for winter wheat LAI inversion with GF-1 satellite data and that performance can be improved by selecting the appropriate LUT inversion strategy for each growth period.
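The LUT inversion described above boils down to two steps: tabulate the forward model over a plausible parameter range, then pick the table entry whose simulated reflectance best matches the observation. A minimal sketch follows; the forward model here is a made-up Beer-law-style toy, standing in for PROSAIL, and all coefficients are invented.

```python
import numpy as np

def toy_canopy_reflectance(lai):
    """Stand-in forward model (NOT PROSAIL): red reflectance decays and
    NIR reflectance saturates with increasing LAI, Beer-law style."""
    red = 0.30 * np.exp(-0.6 * lai) + 0.02
    nir = 0.50 * (1.0 - np.exp(-0.45 * lai)) + 0.05
    return np.array([red, nir])

# 1) Build the look-up table over a plausible LAI range (prior knowledge
#    about the crop would narrow this range, as in the paper).
lai_grid = np.linspace(0.1, 7.0, 500)
lut = np.array([toy_canopy_reflectance(l) for l in lai_grid])

def invert_lai(observed):
    """2) Invert: return the LUT entry minimizing the RMSE between
    simulated and observed reflectances."""
    rmse = np.sqrt(np.mean((lut - observed) ** 2, axis=1))
    return lai_grid[np.argmin(rmse)]

obs = toy_canopy_reflectance(3.2)        # pretend satellite measurement
lai_hat = invert_lai(obs)
```

Swapping the cost from band RMSE to a vegetation-index mismatch (e.g. GNDVI or Green) is how the different LUT strategies compared in the paper differ.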
A cooperative strategy for parameter estimation in large scale systems biology models.
Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R
2012-06-22
Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that makes it possible to derive experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation in large-scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models of the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated on benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases.
The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
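The cooperation idea above can be caricatured in a few lines: several searchers run independently, and every so often the global best solution is broadcast to replace the worst incumbent. This is a deliberately simplified sketch, not the eSS metaheuristic; the objective, step sizes, and schedule are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):                       # toy calibration objective (Rosenbrock)
    return float((1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2)

def cooperative_search(n_threads=4, iters=400, share_every=20):
    """Minimal caricature of cooperative search: each 'thread' runs its
    own accept-if-better stochastic local search; every `share_every`
    iterations the global best is broadcast, replacing the worst thread's
    incumbent (the information-sharing step)."""
    xs = rng.uniform(-2, 2, size=(n_threads, 2))    # one incumbent per thread
    for it in range(1, iters + 1):
        for i in range(n_threads):
            cand = xs[i] + rng.normal(0, 0.1, 2)    # local perturbation
            if cost(cand) < cost(xs[i]):
                xs[i] = cand
        if it % share_every == 0:                   # cooperation step
            best = min(xs, key=cost)
            worst = max(range(n_threads), key=lambda i: cost(xs[i]))
            xs[worst] = best.copy()
    return min(xs, key=cost)

x_best = cooperative_search()
```

In the real method each thread is a full eSS instance with its own settings, and sharing changes the systemic behaviour rather than just pooling restarts.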
Vaas, Lea A I; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter
2012-01-01
The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed '-omics' techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM data.
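The spline-based parameter extraction favoured above can be sketched as follows, in Python rather than R and on a synthetic logistic curve: fit a smoothing spline, then read off the plateau, the maximal slope, and the tangent-intercept lag, in analogy to conventional growth-curve parameters. The curve shape, noise level, and smoothing factor are stand-ins, not OmniLog data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)

# Synthetic respiration curve: Zwietering-style logistic rise plus noise.
t = np.linspace(0, 48, 97)                      # hours
A, mu, lam = 200.0, 15.0, 8.0                   # asymptote, max slope, lag
y_true = A / (1 + np.exp(4 * mu / A * (lam - t) + 2))
y = y_true + rng.normal(0, 2.0, t.size)

# Model-free smoothing-spline fit; s is chosen to match the noise level.
spl = UnivariateSpline(t, y, s=len(t) * 4.0)

# Curve parameters read off the fitted spline:
yy = spl(t)
A_hat = float(yy.max())                          # plateau height
slope = spl.derivative()(t)
i = int(np.argmax(slope))
mu_hat = float(slope[i])                         # maximal slope
lam_hat = float(t[i] - yy[i] / mu_hat)           # lag: tangent hits baseline
```

Confidence intervals for these point estimates would come from repeating the extraction over bootstrap resamples or experimental replicates, as the paper's framework does.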
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse- and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reductions in computation time with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
Scattina, Alessandro; Mo, Fuhao; Masson, Catherine; Avalle, Massimiliano; Arnoux, Pierre Jean
2018-01-30
This work aims at investigating the influence of some front-end design parameters of a passenger vehicle on the behavior of, and damage occurring in, the human lower limbs when impacted in an accident. The analysis is carried out by means of finite element analysis using a generic car model for the vehicle and the lower limbs model for safety (LLMS) for the purpose of pedestrian safety. Considering the pedestrian standardized impact procedure (as in the 2003/12/EC Directive), a parametric analysis, through a design-of-experiments plan, was performed. Various material properties, the bumper thickness, the positions of the higher and lower bumper beams, and the position of the pedestrian were varied in order to identify how they influence injury occurrence. The injury prediction was evaluated from the knee lateral flexion, ligament elongation, and state of stress in the bone structure. The results highlighted that the offset between the higher and lower bumper beams is the most influential parameter affecting the knee ligament response. The influence is smaller or absent for the other responses and the other considered parameters. The stiffness characteristics of the bumper are, instead, more notable on the tibia. Even if an optimal value of the variables could not be identified, trends were detected, with the potential of indicating strategies for improvement. The behavior of a vehicle front end in an impact against a pedestrian can thus be improved by optimizing its design. In this work, each parameter was changed independently, one at a time; in future works, the interaction between the design parameters could also be investigated. Moreover, a similar parametric analysis can be carried out using a standard mechanical legform model in order to understand potential diversities or correlations between standard tools and human models.
Clemen, Christof B; Benderoth, Günther E K; Schmidt, Andreas; Hübner, Frank; Vogl, Thomas J; Silber, Gerhard
2017-01-01
In this study, useful methods for active human skeletal muscle material parameter determination are provided. First, a straightforward approach to the implementation of a transversely isotropic hyperelastic continuum mechanical material model in an invariant formulation is presented. This procedure is found to be feasible even if the strain energy is formulated in terms of invariants other than those predetermined by the software's requirements. Next, an appropriate experimental setup for the observation of activation-dependent material behavior, corresponding data acquisition, and evaluation is given. Geometry reconstruction based on magnetic resonance imaging of different deformation states is used to generate realistic, subject-specific finite element models of the upper arm. Using the deterministic SIMPLEX optimization strategy, a convenient quasi-static passive-elastic material characterization is pursued; the results of this approach used to characterize the behavior of human biceps in vivo indicate the feasibility of the illustrated methods to identify active material parameters comprising multiple loading modes. A comparison of a contact simulation incorporating the optimized parameters to a reconstructed deformed geometry of an indented upper arm shows the validity of the obtained results regarding deformation scenarios perpendicular to the effective direction of the nonactivated biceps. However, for a valid, activatable, general-purpose material characterization, the material model needs some modifications as well as a multicriteria optimization of the force-displacement data for different loading modes.
Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan
2017-03-16
Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable for the fabrication of metal parts into electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized by a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system and the system included paraffin wax, low density polyethylene, and stearic acid (PW-LDPE-SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence factors in regard to the ultimate tensile strength of the green samples can be described as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor on hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper material. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has some advantages over traditional approaches for cost, efficiency, and simplicity.
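The orthogonal-design range analysis behind influence rankings like "infill degree > raster angle > layer thickness" can be sketched in a few lines: run the L9 array, average the response at each level of each factor, and rank factors by the spread of those level means. The L9 array below is the standard one, but the strength values are invented so that the first factor dominates, not the paper's measurements.

```python
import numpy as np

# L9 orthogonal array: 9 runs, 3 factors at 3 levels (coded 0, 1, 2),
# e.g. infill degree, raster angle, layer thickness.
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])

# Hypothetical tensile strengths (MPa) for the 9 runs, constructed so
# that factor 0 has by far the largest effect.
strength = np.array([5.0, 5.4, 5.8, 7.1, 7.5, 7.6, 9.2, 9.3, 9.7])

def main_effect_range(runs, response, factor):
    """Range analysis: spread of the level-averaged responses for one
    factor; orthogonality makes the level means directly comparable."""
    means = [response[runs[:, factor] == lv].mean() for lv in range(3)]
    return max(means) - min(means)

ranges = [main_effect_range(L9, strength, f) for f in range(3)]
ranking = np.argsort(ranges)[::-1]     # most to least influential factor
```

The same range analysis applied to hardness, with sintering temperature, holding time, and heating rate as factors, yields the sintering-side ranking reported above.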
Optimal resource diffusion for suppressing disease spreading in multiplex networks
NASA Astrophysics Data System (ADS)
Chen, Xiaolong; Wang, Wei; Cai, Shimin; Stanley, H. Eugene; Braunstein, Lidia A.
2018-05-01
Resource diffusion is a ubiquitous phenomenon, but how it impacts epidemic spreading has received little study. We propose a model that couples epidemic spreading and resource diffusion in multiplex networks. The spread of disease in a physical contact layer and the recovery of the infected nodes are both strongly dependent upon resources supplied by their counterparts in the social layer. The generation and diffusion of resources in the social layer are in turn strongly dependent upon the state of the nodes in the physical contact layer. Resources diffuse preferentially or randomly in this model. To quantify the degree of preferential diffusion, a bias parameter that controls the resource diffusion is proposed. We conduct extensive simulations and find that the preferential resource diffusion can change phase transition type of the fraction of infected nodes. When the degree of interlayer correlation is below a critical value, increasing the bias parameter changes the phase transition from double continuous to single continuous. When the degree of interlayer correlation is above a critical value, the phase transition changes from multiple continuous to first discontinuous and then to hybrid. We find hysteresis loops in the phase transition. We also find that there is an optimal resource strategy at each fixed degree of interlayer correlation under which the threshold reaches a maximum and the disease can be maximally suppressed. In addition, the optimal controlling parameter increases as the degree of inter-layer correlation increases.
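The bias parameter controlling where a unit of resource moves can be illustrated with a simple weighting kernel. The degree-based form below is one common way to interpolate between random and preferential diffusion; the paper's exact kernel may differ, so treat this purely as a sketch.

```python
import numpy as np

def diffusion_probs(neighbor_degrees, alpha):
    """Probability that a unit of resource moves to each neighbor.
    alpha = 0 recovers purely random diffusion (uniform over neighbors);
    alpha > 0 biases the flow toward high-degree neighbors. This is one
    illustrative choice of bias kernel, not necessarily the paper's."""
    k = np.asarray(neighbor_degrees, dtype=float)
    w = k ** alpha
    return w / w.sum()

p_random = diffusion_probs([1, 4, 10], alpha=0.0)   # uniform: 1/3 each
p_biased = diffusion_probs([1, 4, 10], alpha=2.0)   # mass on the hub
```

Sweeping `alpha` at a fixed interlayer correlation is how one would numerically locate the optimal resource strategy that maximizes the epidemic threshold.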
Experimental validation of a new heterogeneous mechanical test design
NASA Astrophysics Data System (ADS)
Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.
2018-05-01
Standard material parameter identification strategies generally use an extensive number of classical tests to collect the required experimental data. However, a great effort has been made recently by the scientific and industrial communities to base this experimental database on heterogeneous tests. These tests can provide richer information on the material behavior, allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurement techniques, like digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure maximizing an indicator that evaluates the heterogeneity and richness of the strain information. However, no experimental validation has yet been performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test within the parameter identification framework. To this end, the DIC technique and a Finite Element Model Updating inverse strategy are used together for the parameter identification of a DC04 steel, as well as for the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system to provide the strain states on the specimen surface. The identification strategy is accomplished with the data obtained from the experimental tests, and the results are compared to a reference numerical solution.
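The model-updating loop at the heart of such an identification can be sketched generically: iterate the model parameters until the simulated full-field response matches the measured one in a least-squares sense. The "model" below is a smooth toy function standing in for a finite element simulation, and the two parameters are placeholders, not DC04 material constants.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 50)                       # surface points (DIC grid)

def simulated_strain(params):
    """Toy stand-in for the FE model: a strain field over the specimen
    surface as a smooth function of two material parameters."""
    a, b = params
    return a * np.sin(np.pi * x) + b * x**2

true_params = np.array([1.8, 0.7])
measured = simulated_strain(true_params) + rng.normal(0, 0.01, x.size)

# Model updating: adjust parameters until simulated and measured
# full-field strains agree in the least-squares sense.
fit = least_squares(lambda p: simulated_strain(p) - measured, x0=[1.0, 0.0])
```

In the real workflow each residual evaluation is a full FE solve, so the richness of a single heterogeneous test directly reduces the number of experiments needed for a well-conditioned fit.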
Modrykamien, Ariel M; Daoud, Yahya
2018-01-01
Optimal mechanical ventilation management in patients with the acute respiratory distress syndrome (ARDS) involves the use of low tidal volumes and limited plateau pressure. Refractory hypoxemia may not respond to this strategy, requiring other interventions. The use of prone positioning in severe ARDS resulted in improvement in 28-day survival. To determine whether mechanical ventilation strategies or other parameters affected survival in patients undergoing prone positioning, a retrospective analysis was conducted of a consecutive series of patients with severe ARDS treated with prone positioning. Demographic and clinical information involving mechanical ventilation strategies, as well as other variables associated with prone positioning, was collected. The rate of in-hospital mortality was obtained, and previously described parameters were compared between survivors and nonsurvivors. Forty-three patients with severe ARDS were treated with prone positioning, and 27 (63%) died in the intensive care unit. Only three parameters were significant predictors of survival: APACHE II score (P = 0.03), plateau pressure (P = 0.02), and driving pressure (P = 0.04). The ability of each of these parameters to predict mortality was assessed with receiver operating characteristic curves. The area under the curve values for APACHE II, plateau pressure, and driving pressure were 0.74, 0.69, and 0.67, respectively. In conclusion, in a group of patients with severe ARDS treated with prone positioning, only APACHE II, plateau pressure, and driving pressure were associated with mortality in the intensive care unit.
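The area-under-the-curve values quoted above have a simple rank interpretation: the AUC equals the probability that a randomly chosen nonsurvivor has a higher score than a randomly chosen survivor (Mann-Whitney statistic, ties counting one half). A sketch with hypothetical scores, not the study's data:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs ranked correctly, with ties counting 1/2."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            wins += 1.0 if sp > sn else (0.5 if sp == sn else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical APACHE II scores: nonsurvivors vs survivors.
auc = roc_auc([28, 25, 31, 22], [18, 20, 25, 15])   # -> 0.90625
```

An AUC of 0.74, as reported for APACHE II, therefore means that in about three out of four such pairs the nonsurvivor scored higher.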
Francis, Tittu; Washington, Travis; Srivastava, Karan; Moutzouros, Vasilios; Makhni, Eric C; Hakeos, William
2017-11-01
Tension band wiring (TBW) and locked plating are common treatment options for Mayo IIA olecranon fractures. Clinical trials have shown excellent functional outcomes with both techniques. Although TBW implants are significantly less expensive than a locked olecranon plate, TBW often requires an additional operation for implant removal. To choose the most cost-effective treatment strategy, surgeons must understand how implant costs and return to the operating room influence the most cost-effective strategy. This cost-effectiveness analysis explored the optimal treatment strategies using decision analysis tools. An expected-value decision tree was constructed to estimate costs based on the 2 implant choices. Values for critical variables, such as the implant removal rate, were obtained from the literature. A Monte Carlo simulation consisting of 100,000 trials was used to incorporate variability in medical costs and implant removal rates. Sensitivity analysis and strategy tables were used to show how different variables influence the most cost-effective strategy. TBW was the most cost-effective strategy, with a cost savings of approximately $1300. TBW was also the dominant strategy, being the most cost-effective solution in 63% of the Monte Carlo trials. Sensitivity analysis identified the implant costs for plate fixation and the surgical costs for implant removal as the parameters that most influence the cost-effective strategy. Strategy tables showed the most cost-effective solution as 2 parameters vary simultaneously. TBW is the most cost-effective strategy for treating Mayo IIA olecranon fractures despite a higher rate of return to the operating room.
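The expected-value decision tree with Monte Carlo sampling described above can be sketched as follows. Every cost figure and removal-rate distribution below is an invented placeholder, not the study's inputs; only the structure (implant cost plus removal probability times reoperation cost, sampled many times) mirrors the method.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000                                     # Monte Carlo trials

# Hypothetical inputs (the paper's actual figures differ):
plate_implant = rng.normal(2200, 300, N)        # locked plate implant ($)
tbw_implant = rng.normal(300, 50, N)            # tension-band wire implant ($)
removal_cost = rng.normal(3500, 500, N)         # return-to-OR for removal ($)
p_removal_tbw = rng.uniform(0.45, 0.75, N)      # removal rate after TBW
p_removal_plate = rng.uniform(0.05, 0.20, N)    # removal rate after plating

# Expected cost of each branch of the decision tree, per trial:
cost_tbw = tbw_implant + p_removal_tbw * removal_cost
cost_plate = plate_implant + p_removal_plate * removal_cost

savings = float(cost_plate.mean() - cost_tbw.mean())
frac_tbw_cheaper = float((cost_tbw < cost_plate).mean())
```

The fraction of trials in which one branch wins is exactly the "dominant in 63% of trials" style of statistic reported, and one-way sweeps of each input reproduce the sensitivity analysis.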
NASA Astrophysics Data System (ADS)
Remund, Stefan M.; Jaeggi, Beat; Kramer, Thorsten; Neuenschwander, Beat
2017-03-01
The resulting surface roughness and waviness after processing with ultra-short pulsed laser radiation depend on the laser parameters as well as on the machining strategy and the scanning system. However, the results also depend on the material and its initial surface quality and finish. Improving the surface finish takes effort and adds cost, so for industrial applications it is important to minimize workpiece preparation for laser micro-machining in order to optimize quality and reduce costs. The effects of the ablation process and the influence of the machining strategy and scanning system on the surface roughness and waviness can be distinguished because they act separately. Using optimal laser parameters on an initially perfect surface, the ablation process mainly raises the roughness to a certain value for most metallic materials, whereas imperfections in the scanning system that cause slight variations in the scanning speed raise the waviness of the sample surface. For a basic understanding of the influence of grinding marks, the sample surfaces were initially furnished with regular grooves of different depths and spatial frequencies to obtain homogeneous, well-defined original surfaces. On these surfaces, the effects of different beam waists and machining strategies are investigated and the results are compared with a simulation of the process. Furthermore, the behavior of common surface finishes used in industrial laser micro-machining applications is studied and its relation to the resulting surface roughness and waviness is presented.
NASA Astrophysics Data System (ADS)
Quan, Ji; Yang, Xiukang; Wang, Xianjia
2018-07-01
How cooperative behavior emerges and evolves in human society remains a puzzle. It has been observed that a sense of guilt rooted in free-riding and a sense of justice that drives the punishment of free-riders are prevalent in the real world. Inspired by this observation, two punishment mechanisms, called self-punishment and peer punishment in this paper, are introduced into the spatial public goods game. For each mechanism, a corresponding parameter describes the level of individual or social tolerance: whether an individual punishes itself or is punished by others depends on this tolerance parameter. We focus on the effects of the two kinds of tolerance parameters on the cooperation of the population. A particle swarm optimization (PSO)-based learning rule is used to describe the strategy-updating process of individuals, and both memory and imitation are considered in the model. Via simulation experiments, we find that both punishment mechanisms can facilitate the promotion of cooperation to a large extent. For self-punishment, and for most parameters under peer punishment, the smaller the tolerance parameter, the more conducive it is to promoting cooperation. These results can help us better understand the prevailing phenomenon of cooperation in the real world.
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
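For hyper-rectangular uncertainty sets, the hard-constraint idea above can be illustrated with a toy case: when the constraint function is linear in the uncertain parameters, its worst case over the box occurs at a vertex, so vertex enumeration certifies feasibility for all parameter realizations without sampling the interior. The constraint and bounds below are invented examples, not from the paper.

```python
# Illustrative hard-constraint check over a hyper-rectangular uncertainty
# set. For a linear constraint g(p) <= limit, the maximum over the box is
# attained at a vertex, so enumerating vertices gives a verifiable answer.
# Coefficients and bounds are invented examples.
from itertools import product

def worst_case(coeffs, bounds):
    """Max of g(p) = sum(c_i * p_i) over the box defined by bounds."""
    return max(sum(c * v for c, v in zip(coeffs, vertex))
               for vertex in product(*bounds))

# Hard constraint: g(p) = 2*p1 - 3*p2 + p3 <= 5 for ALL p in the box
coeffs = [2.0, -3.0, 1.0]
bounds = [(-1.0, 1.0), (0.5, 2.0), (-0.5, 0.5)]
g_max = worst_case(coeffs, bounds)
feasible = g_max <= 5.0
```

For nonlinear constraints the worst case need not sit at a vertex, which is why the paper develops transformation-of-variables and infinity-norm formulations rather than simple enumeration.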
Duffull, Stephen B; Graham, Gordon; Mengersen, Kerrie; Eccleston, John
2012-01-01
Information theoretic methods are often used to design studies that aim to learn about pharmacokinetic and linked pharmacokinetic-pharmacodynamic systems. These design techniques, such as D-optimality, provide the optimum experimental conditions. The performance of the optimum design will depend on the ability of the investigator to comply with the proposed study conditions. However, in clinical settings it is not possible to comply exactly with the optimum design and hence some degree of unplanned suboptimality occurs due to error in the execution of the study. In addition, due to the nonlinear relationship of the parameters of these models to the data, the designs are also locally dependent on an arbitrary choice of a nominal set of parameter values. A design that is robust to both study conditions and uncertainty in the nominal set of parameter values is likely to be of use clinically. We propose an adaptive design strategy to account for both execution error and uncertainty in the parameter values. In this study we investigate designs for a one-compartment first-order pharmacokinetic model. We do this in a Bayesian framework using Markov-chain Monte Carlo (MCMC) methods. We consider log-normal prior distributions on the parameters and investigate several prior distributions on the sampling times. An adaptive design was used to find the sampling window for the current sampling time conditional on the actual times of all previous samples.
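The local D-optimality notion used above can be sketched for a one-compartment IV-bolus model C(t) = (D/V)·exp(-k·t): choose sampling times that maximize the determinant of the Fisher information matrix evaluated at nominal parameter values. The nominal values, additive unit-variance error, and candidate time grid below are illustrative assumptions, not the paper's design problem.

```python
# Minimal sketch of a local D-optimal design for a one-compartment model
# C(t) = (D/V) * exp(-k*t): pick two sampling times maximizing det(FIM)
# at nominal V, k. Nominal values and error model are illustrative.
import math
from itertools import combinations

def sensitivities(t, D=100.0, V=10.0, k=0.2):
    """Partial derivatives of C(t) with respect to V and k."""
    e = math.exp(-k * t)
    dC_dV = -D / V**2 * e
    dC_dk = -(D / V) * t * e
    return dC_dV, dC_dk

def d_criterion(times):
    """det(FIM) for unit-variance additive error; FIM = [[a, b], [b, c]]."""
    a = b = c = 0.0
    for t in times:
        gV, gk = sensitivities(t)
        a += gV * gV
        b += gV * gk
        c += gk * gk
    return a * c - b * b

grid = [0.25 * i for i in range(1, 81)]            # candidate times (h)
best = max(combinations(grid, 2), key=d_criterion)  # exhaustive 2-point design
```

Because the criterion depends on the nominal (V, k), the design is only locally optimal, which is exactly the sensitivity to "an arbitrary choice of a nominal set of parameter values" that motivates the Bayesian adaptive strategy above.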
An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
NASA Astrophysics Data System (ADS)
Gao, Hua; Ho, Luis C.
2017-08-01
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties for them, which will benefit subsequent decomposition of a larger galaxy sample.
Robust input design for nonlinear dynamic modeling of AUV.
Nouri, Nowrouz Mohammad; Valadi, Mehrdad
2017-09-01
Input design has a dominant role in developing dynamic models of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in the model parameters is used: the Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used to design the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
An EGO-like optimization framework for sensor placement optimization in modal analysis
NASA Astrophysics Data System (ADS)
Morlier, Joseph; Basile, Aniello; Chiplunkar, Ankit; Charlotte, Miguel
2018-07-01
In aircraft design, ground/flight vibration tests are conducted to extract an aircraft's modal parameters (natural frequencies, damping ratios and mode shapes), also known as the modal basis. The main problem in aircraft modal identification is the large number of sensors needed, which increases operational time and costs. The goal of this paper is to minimize the number of sensors by optimizing their locations in order to reconstruct a truncated modal basis of N mode shapes with a high level of accuracy. There are several methods to solve sensor placement optimization (SPO) problems, but here an original approach is established, based on an iterative process for mode-shape reconstruction through an adaptive Kriging metamodeling approach, called efficient global optimization (EGO)-SPO. The main idea is to solve an optimization problem in which the sensor locations are the variables and the objective function is defined by maximizing the trace of the so-called AutoMAC criterion. The results on a 2D wing demonstrate a 30% reduction in the number of sensors using the EGO-SPO strategy.
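The AutoMAC matrix underlying the objective above compares each retained mode shape against every other, restricted to the selected sensor rows. The toy sketch below picks the sensor subset whose largest off-diagonal AutoMAC term is smallest, a common variant that keeps the truncated modes distinguishable; the mode-shape matrix is invented, and the paper's EGO/Kriging loop is not reproduced.

```python
# Illustrative sensor-subset selection on toy mode shapes: compute the
# AutoMAC matrix over candidate sensor rows and keep the subset with the
# smallest worst off-diagonal term. PHI is invented data.
from itertools import combinations

# Rows = 5 candidate sensor locations, columns = 3 mode shapes (toy data)
PHI = [
    [1.0,  1.0,  1.0],
    [0.8,  0.3, -0.5],
    [0.6, -0.4,  0.2],
    [0.4, -0.9,  0.9],
    [0.2, -0.6, -1.0],
]

def automac(rows):
    """AutoMAC over the selected sensor rows: MAC of every mode pair."""
    modes = list(zip(*[PHI[r] for r in rows]))   # modes as reduced vectors
    def mac(u, v):
        num = sum(a * b for a, b in zip(u, v)) ** 2
        den = sum(a * a for a in u) * sum(b * b for b in v)
        return num / den
    n = len(modes)
    return [[mac(modes[i], modes[j]) for j in range(n)] for i in range(n)]

def worst_offdiag(rows):
    m = automac(rows)
    return max(m[i][j] for i in range(3) for j in range(3) if i != j)

# Exhaustive search over 3-sensor subsets (feasible for this tiny problem)
best = min(combinations(range(5), 3), key=worst_offdiag)
```

For realistic sensor counts exhaustive search is intractable, which is why the paper wraps the criterion in an EGO-style surrogate optimization instead.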
NASA Astrophysics Data System (ADS)
Chamindu Deepagoda, T. K. K.; Chen Lopez, Jose Choc; Møldrup, Per; de Jonge, Lis Wollesen; Tuller, Markus
2013-10-01
Over the last decade there has been a significant shift in global agricultural practice. Because the rapid increase of human population poses unprecedented challenges to production of an adequate and economically feasible food supply for undernourished populations, soilless greenhouse production systems are regaining increased worldwide attention. The optimal control of water availability and aeration is an essential prerequisite to successfully operate plant growth systems with soilless substrates such as aggregated foamed glass, perlite, rockwool, coconut coir, or mixtures thereof. While there are considerable empirical and theoretical efforts devoted to characterize water retention and aeration substrate properties, a holistic, physically-based approach considering water retention and aeration concurrently is lacking. In this study, the previously developed concept of integral water storage and energy was expanded to dual-porosity substrates and an analog integral oxygen diffusivity parameter was introduced to simultaneously characterize aeration properties of four common soilless greenhouse growth media. Integral parameters were derived for greenhouse crops in general, as well as for tomatoes. The integral approach provided important insights for irrigation management and for potential optimization of substrate properties. Furthermore, an observed relationship between the integral parameters for water availability and oxygen diffusivity can be potentially applied for the design of advanced irrigation and management strategies to ensure stress-free growth conditions, while conserving water resources.
Optimism, coping and long-term recovery from coronary artery surgery in women.
King, K B; Rowe, M A; Kimble, L P; Zerwic, J J
1998-02-01
Optimism, coping strategies, and psychological and functional outcomes were measured in 55 women undergoing coronary artery surgery. Data were collected in-hospital and at 1, 6, and 12 months after surgery. Optimism was related to positive moods and life satisfaction, and inversely related to negative moods. Few relationships were found between optimism and functional ability. Cognitive coping strategies accounted for a mediating effect between optimism and negative mood. Optimists were more likely to accept their situation, and less likely to use escapism. In turn, these coping strategies were inversely related to negative mood and mediated the relationship between optimism and this outcome. Optimism was not related to problem-focused coping strategies; thus, these coping strategies cannot explain the relationship between optimism and outcomes.
Muthukkumaran, A; Aravamudan, K
2017-12-15
Adsorption, a popular technique for removing azo dyes from aqueous streams, is influenced by several factors such as pH, initial dye concentration, temperature and adsorbent dosage. Any strategy that seeks to identify optimal conditions involving these factors should take into account both kinetic and equilibrium aspects, since they influence the rate and extent of removal by adsorption. Hence, rigorous kinetic and accurate equilibrium models are required. In this work, experimental investigations of the adsorption of acid orange 10 dye (AO10) on activated carbon were carried out using a Central Composite Design (CCD) strategy. The significant factors affecting adsorption were identified to be solution temperature, solution pH, adsorbent dosage and initial solution concentration. Thermodynamic analysis showed the endothermic nature of the dye adsorption process. The kinetics of adsorption were rigorously modeled using the Homogeneous Surface Diffusion Model (HSDM) after incorporating the nonlinear Freundlich adsorption isotherm. Optimization was performed for the kinetic parameters (color removal time and surface diffusion coefficient) as well as for the equilibrium-affected response, viz., the percentage removal. Finally, the predicted optimum conditions were experimentally validated. Copyright © 2017 Elsevier Ltd. All rights reserved.
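A central composite design for the four factors named above (temperature, pH, dosage, initial concentration) combines a two-level factorial core, axial points, and center replicates. The sketch below generates such a design in coded units; the axial distance and number of center replicates are illustrative choices, not necessarily those used in the study.

```python
# Sketch: generating a central composite design (CCD) in coded units.
# alpha and the number of center replicates are illustrative assumptions.
from itertools import product

def ccd(k, alpha=2.0, n_center=6):
    """Full-factorial corners, 2k axial points at +/-alpha, center replicates."""
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for j in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[j] = a
            axial.append(pt)
    centers = [[0.0] * k for _ in range(n_center)]
    return corners + axial + centers

# Four factors: temperature, pH, adsorbent dosage, initial concentration
design = ccd(4)   # 16 corners + 8 axial + 6 center = 30 runs
```

Each coded row is then mapped back to physical units before the runs are executed; the axial points let the design estimate the quadratic terms of the response surface.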
GALAXY: A new hybrid MOEA for the optimal design of Water Distribution Systems
NASA Astrophysics Data System (ADS)
Wang, Q.; Savić, D. A.; Kapelan, Z.
2017-03-01
A new hybrid optimizer, called the genetically adaptive leaping algorithm for approximation and diversity (GALAXY), is proposed for the discrete, combinatorial, multiobjective design of Water Distribution Systems (WDSs), which is NP-hard and computationally intensive. The merit of GALAXY is its ability to alleviate, to a great extent, the parameterization issue and the high computational overhead. It follows the generational framework of Multiobjective Evolutionary Algorithms (MOEAs) and includes six search operators and several important strategies. The operators are selected based on their leaping ability in the objective space from the global and local search perspectives, while the strategies steer the optimization and balance exploration and exploitation simultaneously. A highlighted feature of GALAXY is that it eliminates the majority of parameters, making it robust and easy to use. Comparative studies between GALAXY and three representative MOEAs on five benchmark WDS design problems confirm its competitiveness: GALAXY identifies better converged and distributed boundary solutions efficiently and consistently, indicating a much more balanced capability between global and local search. Moreover, its advantages over the other MOEAs become more substantial as the complexity of the design problem increases.
Design and control of a novel two-speed Uninterrupted Mechanical Transmission for electric vehicles
NASA Astrophysics Data System (ADS)
Fang, Shengnan; Song, Jian; Song, Haijun; Tai, Yuzhuo; Li, Fei; Sinh Nguyen, Truong
2016-06-01
Conventional all-electric vehicles (EVs) adopt single-speed transmissions due to their low cost and simple construction. However, with this type of driveline, the development of EV technology leads to growing performance requirements on the drive motor. Introducing a multi-speed or two-speed transmission to an EV offers the possibility of improving the efficiency of the whole powertrain. This paper presents an innovative two-speed Uninterrupted Mechanical Transmission (UMT), consisting of an epicyclic gearing system, a centrifugal clutch and a brake band, which allows seamless shifting between the two gears. In addition, the driver's intention is recognized by a control system based on a fuzzy logic controller (FLC) that uses the vehicle velocity and accelerator pedal position signals. The novel UMT shows better dynamic and comfort performance compared with an optimized AMT with the same gear ratios. A comparison between the control strategy with driver-intention recognition and the conventional two-parameter gear-shifting strategy is presented, and the simulation and analysis of the middle layer of the optimal gearshift control algorithm are detailed. The results indicate that a UMT adopting FLC and optimal control provides a significant improvement in energy efficiency, dynamic performance and shifting comfort for EVs.
Mammalian cell culture monitoring using in situ spectroscopy: Is your method really optimised?
André, Silvère; Lagresle, Sylvain; Hannas, Zahia; Calvosa, Éric; Duponchel, Ludovic
2017-03-01
In recent years, as a result of the process analytical technology initiative of the US Food and Drug Administration, many different works have been carried out on direct and in situ monitoring of critical parameters for mammalian cell cultures by Raman spectroscopy and multivariate regression techniques. However, despite interesting results, it cannot be said that the proposed monitoring strategies, which will reduce errors of the regression models and thus confidence limits of the predictions, are really optimized. Hence, the aim of this article is to optimize some critical steps of spectroscopic acquisition and data treatment in order to reach a higher level of accuracy and robustness of bioprocess monitoring. In this way, we propose first an original strategy to assess the most suited Raman acquisition time for the processes involved. In a second part, we demonstrate the importance of the interbatch variability on the accuracy of the predictive models with a particular focus on the optical probes adjustment. Finally, we propose a methodology for the optimization of the spectral variables selection in order to decrease prediction errors of multivariate regressions. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:308-316, 2017. © 2017 American Institute of Chemical Engineers.
Optimal control of hydroelectric facilities
NASA Astrophysics Data System (ADS)
Zhao, Guangzhi
This thesis considers a simple yet realistic model of pump-assisted hydroelectric facilities operating in a market with time-varying but deterministic power prices. Both deterministic and stochastic water inflows are considered. The fluid mechanical and engineering details of the facility are described by a model containing several parameters. We present a dynamic programming algorithm for optimizing either the total energy produced or the total cash generated by these plants. The algorithm allows us to give the optimal control strategy as a function of time and to see how this strategy, and the associated plant value, varies with water inflow and electricity price. We investigate various cases. For a single pumped storage facility experiencing deterministic power prices and water inflows, we investigate the varying behaviour for an oversimplified constant turbine- and pump-efficiency model with simple reservoir geometries. We then generalize this simple model to include more realistic turbine efficiencies, situations with more complicated reservoir geometry, and the introduction of dissipative switching costs between various control states. We find many results which reinforce our physical intuition about this complicated system as well as results which initially challenge, though later deepen, this intuition. One major lesson of this work is that the optimal control strategy does not differ much between two differing objectives of maximizing energy production and maximizing its cash value. We then turn our attention to the case of stochastic water inflows. We present a stochastic dynamic programming algorithm which can find an on-average optimal control in the face of this randomness. As the operator of a facility must be more cautious when inflows are random, the randomness destroys facility value. Following this insight we quantify exactly how much a perfect hydrological inflow forecast would be worth to a dam operator. 
In our final chapter we discuss the challenging problem of optimizing a sequence of two hydro dams sharing the same river system. The complexity of this problem is magnified and we just scratch its surface here. The thesis concludes with suggestions for future work in this fertile area. Keywords: dynamic programming, hydroelectric facility, optimization, optimal control, switching cost, turbine efficiency.
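The deterministic-price dynamic program described above can be sketched on a toy scale: discretize the reservoir level, step backward in time, and at each state compare pumping, idling, and generating. Prices, efficiencies and the unit step size below are invented placeholders, not the thesis's model.

```python
# Toy dynamic program in the spirit of the thesis: a pumped-storage plant
# chooses pump / idle / generate each hour to maximize cash against a known
# price path. All numbers are invented placeholders.

PRICES = [20, 15, 40, 60, 30, 55]        # $/MWh over 6 hours (toy path)
LEVELS = range(0, 5)                     # discretized reservoir levels
ETA_T, ETA_P = 0.9, 0.8                  # turbine / pump efficiency

def solve():
    """Backward induction: value[t][l] = best cash-to-go from level l at t."""
    T = len(PRICES)
    value = [[0.0] * len(LEVELS) for _ in range(T + 1)]
    for t in range(T - 1, -1, -1):
        p = PRICES[t]
        for l in LEVELS:
            options = [value[t + 1][l]]                      # idle
            if l > 0:                                        # generate 1 unit
                options.append(p * ETA_T + value[t + 1][l - 1])
            if l < len(LEVELS) - 1:                          # pump 1 unit
                options.append(-p / ETA_P + value[t + 1][l + 1])
            value[t][l] = max(options)
    return value

value = solve()   # value[0][l]: plant value starting at level l
```

The optimal policy falls out of the same recursion (record the argmax at each state); the thesis additionally layers in head-dependent efficiencies, switching costs, and, later, stochastic inflows via stochastic dynamic programming.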
Elucidating Performance Limitations in Alkaline-Exchange- Membrane Fuel Cells
Shiau, Huai-Suen; Zenyuk, Iryna V.; Weber, Adam Z.
2017-07-15
Water management is a serious concern for alkaline-exchange-membrane fuel cells (AEMFCs) because water is a reactant in the alkaline oxygen-reduction reaction and hydroxide conduction in alkaline-exchange membranes is highly hydration dependent. In this article, we develop and use a multiphysics, multiphase model to explore water management in AEMFCs. We demonstrate that the low performance is mostly caused by an extremely non-uniform distribution of water in the ionomer phase. A sensitivity analysis of design parameters including humidification strategies, membrane properties, and water transport resistance was undertaken to explore possible optimization strategies. Furthermore, the strategy for, and issues with, reducing bicarbonate/carbonate buildup in the membrane-electrode assembly with CO2 from air are demonstrated based on the model predictions. Overall, mathematical modeling is used to explore trends and strategies to overcome performance bottlenecks and help enable AEMFC commercialization.
Biomaterial-mediated strategies targeting vascularization for bone repair.
García, José R; García, Andrés J
2016-04-01
Repair of non-healing bone defects through tissue engineering strategies remains a challenging feat in the clinic due to the aversive microenvironment surrounding the injured tissue. The vascular damage that occurs following a bone injury causes extreme ischemia and a loss of circulating cells that contribute to regeneration. Tissue-engineered constructs aimed at regenerating the injured bone suffer from complications based on the slow progression of endogenous vascular repair and often fail at bridging the bone defect. To that end, various strategies have been explored to increase blood vessel regeneration within defects to facilitate both tissue-engineered and natural repair processes. Developments that induce robust vascularization will need to consolidate various parameters including optimization of embedded therapeutics, scaffold characteristics, and successful integration between the construct and the biological tissue. This review provides an overview of current strategies as well as new developments in engineering biomaterials to induce reparation of a functional vascular supply in the context of bone repair.
Optimal control strategy for a novel computer virus propagation model on scale-free networks
NASA Astrophysics Data System (ADS)
Zhang, Chunming; Huang, Haitao
2016-06-01
This paper aims to study the combined impact of reinstalling the system and of network topology on the spread of computer viruses over the Internet. Based on a scale-free network, this paper proposes a novel computer virus propagation model, the SLBOS model. A systematic analysis of this new model shows that the virus-free equilibrium is globally asymptotically stable when the spreading threshold is less than one, whereas the viral equilibrium is proved to be permanent if the spreading threshold is greater than one. The impacts of the different model parameters on the spreading threshold are then analyzed. Next, an optimally controlled SLBOS epidemic model on complex networks is also studied, and we prove that an optimal control exists for the control problem. Some numerical simulations are finally given to illustrate the main results.
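The spreading-threshold behavior described above (die-out below one, persistence above one) can be illustrated with a generic mean-field SIS model rather than the SLBOS equations themselves, which are defined in the paper. In this simplification di/dt = b·i·(1-i) - g·i, and the threshold is R0 = b/g.

```python
# Generic illustration of a spreading threshold (NOT the SLBOS equations):
# mean-field SIS dynamics di/dt = b*i*(1-i) - g*i die out for R0 = b/g < 1
# and settle at the endemic level 1 - 1/R0 for R0 > 1.

def simulate(b, g, i0=0.01, dt=0.01, steps=100_000):
    """Forward-Euler integration of the infected fraction i(t)."""
    i = i0
    for _ in range(steps):
        i += dt * (b * i * (1 - i) - g * i)
    return i

low  = simulate(b=0.5, g=1.0)   # R0 = 0.5 -> extinction
high = simulate(b=2.0, g=1.0)   # R0 = 2.0 -> endemic level 1 - 1/R0 = 0.5
```

On scale-free networks the threshold additionally depends on the degree distribution's moments, which is why the paper analyzes topology and model parameters together.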
NASA Astrophysics Data System (ADS)
Taleb, M.; Cherkaoui, M.; Hbib, M.
2018-05-01
Recently, renewable energy sources have been seriously impacting the power quality of grids in terms of frequency and voltage stability, owing to their intermittency and limited forecasting accuracy. Among these sources, wind energy conversion systems (WECS) have received great interest, especially the configuration with a Doubly Fed Induction Generator. However, WECS are strongly nonlinear, which makes their control difficult with classical approaches such as a PI controller. In this paper, we deepen the study of the PI controller used in the active and reactive power control of this kind of WECS. Particle Swarm Optimization (PSO) is suggested to improve its dynamic performance and its robustness against parameter variations. This work highlights the performance of PSO-optimized PI control against a classical PI tuned with a pole-compensation strategy. Simulations are carried out in MATLAB-SIMULINK.
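The PSO-based PI tuning above can be sketched on a toy plant: each particle is a (Kp, Ki) pair, its fitness is the integral of absolute tracking error for a unit step, and the swarm converges toward low-cost gains. The first-order plant, gain bounds and PSO coefficients below are invented placeholders; the paper tunes the DFIG power loops, not this toy system.

```python
# Hedged sketch: PSO tuning of PI gains on a toy first-order plant
# x' = (-x + u)/tau (NOT the paper's DFIG model). Cost = IAE for a unit step.
import random

def cost(kp, ki, dt=0.01, T=5.0, tau=0.5):
    """Simulate the closed loop and accumulate |setpoint - x| * dt."""
    x = integ = J = 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - x
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        x += dt * (-x + u) / tau         # forward-Euler plant update
        J += abs(e) * dt
    return J

def pso(n=12, iters=40, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [cost(*p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(10.0, max(0.0, pos[i][d] + vel[i][d]))
            c = cost(*pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

(best_kp, best_ki), best_cost = pso()
```

The same loop structure carries over to the real problem by replacing `cost` with a simulation of the WECS power loops.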
Inventory-transportation integrated optimization for maintenance spare parts of high-speed trains
Wang, Jiaxi; Wang, Huasheng; Wang, Zhongkai; Li, Jian; Lin, Ruixi; Xiao, Jie; Wu, Jianping
2017-01-01
This paper presents a 0–1 programming model aimed at obtaining the optimal inventory policy and transportation mode for maintenance spare parts of high-speed trains. To obtain the model parameters for occasionally-replaced spare parts, a demand estimation method based on the maintenance strategies of China’s high-speed railway system is proposed. In addition, we analyse the shortage time using PERT, and then calculate the unit time shortage cost from the viewpoint of train operation revenue. Finally, a real-world case study from Shanghai Depot is conducted to demonstrate our method. Computational results offer an effective and efficient decision support for inventory managers. PMID:28472097
A hybrid linear/nonlinear training algorithm for feedforward neural networks.
McLoone, S; Brown, M D; Irwin, G; Lightbody, A
1998-01-01
This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
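The hybrid idea above can be sketched for an RBF network: at every step the linear output weights are computed exactly by SVD-based least squares, while only the nonlinear parameters (here, the centers) are updated by gradient descent. This is a toy 1-D example with finite-difference gradients, not the paper's implementation; `numpy.linalg.lstsq` stands in for the SVD computation.

```python
# Sketch of hybrid linear/nonlinear training for an RBF network: linear
# output weights solved exactly (SVD-based lstsq), nonlinear centers moved
# by gradient descent. Toy 1-D regression; finite-difference gradients.
import numpy as np

def design(x, centers, width=0.5):
    """RBF design matrix: one Gaussian basis function per center."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

def fit_linear(x, y, centers):
    """Output-layer weights by SVD-based least squares."""
    A = design(x, centers)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w, float(np.mean((A @ w - y) ** 2))

def train(x, y, centers, lr=0.05, iters=150, h=1e-4):
    """Gradient steps on the centers; keep the best iterate seen."""
    centers = centers.astype(float).copy()
    best_c, best_mse = centers.copy(), fit_linear(x, y, centers)[1]
    for _ in range(iters):
        base = fit_linear(x, y, centers)[1]
        grad = np.zeros_like(centers)
        for j in range(len(centers)):          # finite-difference gradient
            c = centers.copy()
            c[j] += h
            grad[j] = (fit_linear(x, y, c)[1] - base) / h
        centers -= lr * grad
        cur = fit_linear(x, y, centers)[1]
        if cur < best_mse:
            best_c, best_mse = centers.copy(), cur
    w, mse = fit_linear(x, y, best_c)
    return best_c, w, mse

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 40)
y = np.sin(2.0 * x)
c0 = rng.uniform(-2.0, 2.0, 4)
centers, w, mse = train(x, y, c0)
```

Solving the linear weights exactly at each step is what shrinks the effective search space to the nonlinear parameters, which is the source of the speedup the paper reports over plain second-order gradient training.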
Yuan, Wenjia; Shen, Weidong; Zhang, Yueguang; Liu, Xu
2014-05-05
A dielectric multilayer beam splitter with a differential phase shift between transmission and reflection for a division-of-amplitude photopolarimeter (DOAP) is presented for the first time to our knowledge. The optimal parameters for the beam splitter are Tp = 78.9%, Ts = 21.1% and Δr - Δt = π/2 at 532 nm at an angle of incidence of 45°. A multilayer anti-reflection coating with low phase shift was applied to reduce the backside reflection. Different design strategies that can achieve all optimal targets at this wavelength were tested, and two design methods were presented to optimize the differential phase shift. The samples were prepared by ion beam sputtering (IBS), and the experimental results show good agreement with the design. The ellipsometric parameters of the samples were measured in reflection as (ψr, Δr) = (26.5°, 135.1°) and (28.2°, 133.5°), as well as in transmission as (ψt, Δt) = (62.5°, 46.1°) and (63.5°, 46°) at 532.6 nm. The normalized determinant of the instrument matrix, used to evaluate the performance of the samples, is 0.998 and 0.991, respectively, at 532.6 nm.
Development of a pump-turbine runner based on multiobjective optimization
NASA Astrophysics Data System (ADS)
Xuhe, W.; Baoshan, Z.; Lei, T.; Jie, Z.; Shuliang, C.
2014-03-01
As a key component of a reversible pump-turbine unit, the pump-turbine runner rotates in the pump or turbine direction according to the demand of the power grid, so higher efficiencies under both operating modes are of great importance for energy saving. In the present paper, a multiobjective optimization design strategy, which includes a 3D inverse design method, CFD calculations, the response surface method (RSM) and a multiobjective genetic algorithm (MOGA), is introduced to develop a model pump-turbine runner for a middle-high head pumped storage plant. Parameters controlling the blade shape, such as the blade loading and the blade lean angle at the high-pressure side, are chosen as input parameters, while the runner efficiencies under both pump and turbine modes are selected as objective functions. In order to validate the availability of the optimization design system, one runner configuration from the Pareto front is manufactured for experimental research. Test results show that the highest unit efficiency is 91.0% under turbine mode and 90.8% under pump mode for the designed runner, of which the prototype efficiencies are 93.88% and 93.27%, respectively. Viscous CFD calculations for the full passage model are also conducted, aimed at identifying the hydraulic improvements through internal flow analyses.
Study of the Polarization Strategy for Electron Cyclotron Heating Systems on HL-2M
NASA Astrophysics Data System (ADS)
Zhang, F.; Huang, M.; Xia, D. H.; Song, S. D.; Wang, J. Q.; Huang, B.; Wang, H.
2016-06-01
As important components integrated in the transmission lines of electron cyclotron heating systems, polarizers are mainly used to obtain the desired polarization for highly efficient coupling between electron cyclotron waves and plasma. The polarization strategy for the 105-GHz electron cyclotron heating systems of the HL-2M tokamak is studied in this paper. Considering that the polarizers need high efficiency, stability, and low loss to realize any polarization state, two sinusoidal-grooved polarizers, a linear polarizer and an elliptical polarizer, are designed with the coordinate transformation method. The parameters of the two sinusoidal-grooved polarizers, the period p and the depth d, are optimized by a phase-difference analysis method to achieve an almost arbitrary polarization. Finally, the optimized polarizers are manufactured and their polarization characteristics are tested on a low-power test platform. The experimental results agree well with the numerical calculations, indicating that the designed polarizers can meet the polarization requirements of the electron cyclotron heating systems of the HL-2M tokamak.
NASA Astrophysics Data System (ADS)
Singh, Ranjan Kumar; Rinawa, Moti Lal
2018-04-01
The residual stresses arising in fiber-reinforced laminates during curing in closed molds lead to dimensional changes in the composites after their removal from the molds and cooling. One such dimensional change of angle sections is called springback. Parameters such as lay-up, stacking sequence, material system, cure temperature and thickness play an important role in it. In the present work, an attempt is made to optimize the lay-up and stacking sequence to maximize flexural stiffness and minimize the springback angle. Search algorithms are employed to obtain the best sequence through a repair strategy such as swapping. A new search algorithm, termed the lay-up search algorithm (LSA), is also proposed as an extension of the permutation search algorithm (PSA). The efficacy of the PSA and LSA is tested on laminates with a range of lay-ups, and a computer code implementing the above schemes is developed in MATLAB. Strategies for multiobjective optimization using search algorithms are also suggested and tested.
Parameter Optimization and Operating Strategy of a TEG System for Railway Vehicles
NASA Astrophysics Data System (ADS)
Heghmanns, A.; Wilbrecht, S.; Beitelschmidt, M.; Geradts, K.
2016-03-01
A thermoelectric generator (TEG) system demonstrator for diesel-electric locomotives, designed with the objective of reducing the mechanical load on the thermoelectric modules (TEM), is developed and constructed to validate a one-dimensional thermo-fluid flow simulation model. The model is in good agreement with the measurements and serves as the basis for optimizing the TEG's geometry with a multi-objective genetic algorithm. The best solution has a maximum power output of approximately 2.7 kW and exceeds neither the maximum back pressure of the diesel engine nor the maximum TEM hot-side temperature. To maximize the reduction in fuel consumption, an operating strategy governing the system power output of the TEG system is developed. Finally, the potential consumption reduction in passenger and freight traffic operating modes is estimated under realistic driving conditions by means of a power train and lateral dynamics model. The fuel savings are between 0.5% and 0.7%, depending on the driving style.
NASA Astrophysics Data System (ADS)
Sivandran, Gajan; Bras, Rafael L.
2012-12-01
In semiarid regions, the rooting strategies employed by vegetation can be critical to its survival. Arid regions are characterized by high variability in the arrival of rainfall, and species found in these areas have adapted mechanisms to ensure the capture of this scarce resource. Vegetation roots have strong control over this partitioning and, assuming a static root profile, predetermine the manner in which it is undertaken. A coupled, dynamic vegetation and hydrologic model, tRIBS + VEGGIE, was used to explore the role of vertical root distribution in hydrologic fluxes. Point-scale simulations were carried out using two spatially and temporally invariant rooting schemes: uniform (a one-parameter model) and logistic (a two-parameter model). The simulations were forced with a stochastic climate generator calibrated to weather stations and rain gauges in the semiarid Walnut Gulch Experimental Watershed (WGEW) in Arizona. A series of simulations explored the parameter space of both rooting schemes, and the optimal root distribution, defined as the root distribution with the maximum mean transpiration over a 100-yr period, was identified. This optimal root profile was determined for five generic soil textures and two plant-functional types (PFTs) to illustrate the role of soil texture in the partitioning of moisture at the land surface. The simulation results illustrate the strong control soil texture has on the partitioning of rainfall and consequently on the depth of the optimal rooting profile. High-conductivity soils resulted in the deepest optimal rooting profile, with land surface moisture fluxes dominated by transpiration. Toward the lower-conductivity end of the soil spectrum, a shallowing of the optimal rooting profile is observed and evaporation gradually becomes the dominant flux from the land surface.
This study offers a methodology through which local plant, soil, and climate can be accounted for in the parameterization of rooting profiles in semiarid regions.
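The uniform and logistic rooting schemes described above can be sketched as cumulative root-fraction curves. The abstract does not give the exact parameterizations used in tRIBS + VEGGIE, so the forms below (a linear-with-depth uniform profile and a common logistic dose-response form with parameters `d50` and `c`) are illustrative assumptions only:

```python
def uniform_root_fraction(z, zr):
    """One-parameter uniform scheme: fraction of roots above depth z
    for a profile of total rooting depth zr."""
    return min(z / zr, 1.0)

def logistic_root_fraction(z, d50, c=-2.5):
    """Two-parameter logistic scheme (a common dose-response form):
    d50 is the depth above which half the roots lie; c < 0 sets how
    sharply the profile tapers with depth. Illustrative form only."""
    if z <= 0.0:
        return 0.0
    return 1.0 / (1.0 + (z / d50) ** c)
```

Sweeping `d50` and `c` over a grid and scoring each profile by mean simulated transpiration reproduces, in miniature, the parameter-space search the study describes.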
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
NASA Astrophysics Data System (ADS)
Wang, Geng; Zhou, Kexin; Zhang, Yeming
2018-04-01
The widely used Bouc-Wen hysteresis model can accurately simulate the voltage-displacement curves of piezoelectric actuators. To identify the unknown parameters of the Bouc-Wen model, an improved artificial bee colony (IABC) algorithm is proposed in this paper. A guiding strategy for searching the current optimal position of the food source is introduced, which helps balance local search ability and global exploitation capability, and the formula by which the scout bees search for a food source is modified to increase the convergence speed. Experiments were conducted to verify the effectiveness of the IABC algorithm, and the results show that the identified hysteresis model agrees well with the actual actuator response. Moreover, the identification results were compared with the standard particle swarm optimization (PSO) method, showing that the IABC algorithm converges faster than standard PSO.
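For context, the Bouc-Wen voltage-displacement relation that the IABC identifies can be simulated directly with forward Euler. The parameter values and the specific model variant below are illustrative placeholders, not the identified values from the paper:

```python
import math

def bouc_wen_displacement(voltage, dt, d=1.0, alpha=0.4, beta=0.3, gamma=0.1):
    """Forward-Euler simulation of a Bouc-Wen hysteresis model for a piezo actuator:
    x = d*u - h, with dh/dt = alpha*d*du - beta*|du|*h - gamma*du*|h|.
    All parameters here are illustrative, not fitted values."""
    xs, h = [], 0.0
    for i, u in enumerate(voltage):
        du = (voltage[i] - voltage[i - 1]) / dt if i > 0 else 0.0
        h += dt * (alpha * d * du - beta * abs(du) * h - gamma * du * abs(h))
        xs.append(d * u - h)
    return xs

# Drive with one period of a sine voltage; identification would fit
# (d, alpha, beta, gamma) so that xs matches measured displacement.
u = [math.sin(2 * math.pi * t / 100.0) for t in range(200)]
xs = bouc_wen_displacement(u, dt=0.01)
```

An IABC (or PSO) identification run would treat the sum of squared errors between `xs` and measured displacement as the fitness of a candidate parameter vector.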
Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.
Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming
2016-08-01
In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, hybridizing the well-known NSGA-II algorithm with an adaptive population-based simulated annealing (APBSA) method, is developed to solve systems reliability optimization problems. In the first step, a coevolutionary strategy is used to construct a strong algorithm. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate appropriate parameters for it. Moreover, to examine the performance of the proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies show that the proposed algorithm is an effective approach for systems reliability and risk management.
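The ranking step shared by NSGA-II and the compared algorithms is non-dominated sorting, which partitions candidate solutions into successive Pareto fronts. A minimal (unoptimized) sketch for minimization problems:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Group objective vectors into successive Pareto fronts, as in NSGA-II's ranking step."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Two objectives, e.g. (1 - reliability, cost): the first two points trade off, the third is dominated.
fronts = non_dominated_sort([(1, 2), (2, 1), (3, 3)])
```

NSGA-II's fast non-dominated sort achieves the same partition with lower complexity by bookkeeping domination counts; the quadratic version above is just the clearest statement of the idea.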
On polynomial preconditioning for indefinite Hermitian matrices
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1989-01-01
The minimal residual method combined with polynomial preconditioning is studied for solving large linear systems Ax = b with indefinite Hermitian coefficient matrices A. The standard approach for choosing polynomial preconditioners leads to preconditioned systems that are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive eigenvalues of A around 1 and the negative eigenvalues around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems, and a Remez-type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner optimally speeds up the convergence of the minimal residual method is also addressed, with an approach proposed based on the concept of asymptotic convergence factors. Finally, some numerical examples of indefinite polynomial preconditioners are given.
Optimal Resting-Growth Strategies of Microbial Populations in Fluctuating Environments
Geisel, Nico; Vilar, Jose M. G.; Rubi, J. Miguel
2011-01-01
Bacteria spend most of their lifetime in non-growing states which allow them to survive extended periods of stress and starvation. When environments improve, they must quickly resume growth to maximize their share of limited nutrients. Cells with higher stress resistance often survive longer stress durations at the cost of needing more time to resume growth, a strong disadvantage in competitive environments. Here we analyze the basis of optimal strategies that microorganisms can use to cope with this tradeoff. We explicitly show that the prototypical inverse relation between stress resistance and growth rate can explain much of the different types of behavior observed in stressed microbial populations. Using analytical mathematical methods, we determine the environmental parameters that decide whether cells should remain vegetative upon stress exposure, downregulate their metabolism to an intermediate optimum level, or become dormant. We find that cell-cell variability, or intercellular noise, is consistently beneficial in the presence of extreme environmental fluctuations, and that it provides an efficient population-level mechanism for adaptation in a deteriorating environment. Our results reveal key novel aspects of responsive phenotype switching and its role as an adaptive strategy in changing environments. PMID:21525975
NASA Astrophysics Data System (ADS)
Fan, Xiao-Ning; Zhi, Bo
2017-07-01
Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered with conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed method realizes a design with the best tradeoff between economy and safety, at about one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective approach for the design of metallic structures of cranes.
Optimization of the genotyping-by-sequencing strategy for population genomic analysis in conifers.
Pan, Jin; Wang, Baosheng; Pei, Zhi-Yong; Zhao, Wei; Gao, Jie; Mao, Jian-Feng; Wang, Xiao-Ru
2015-07-01
Flexibility and low cost make genotyping-by-sequencing (GBS) an ideal tool for population genomic studies of nonmodel species. However, to fully utilize the potential of the method, many parameters affecting library quality and single nucleotide polymorphism (SNP) discovery require optimization, especially for conifer genomes with a high repetitive DNA content. In this study, we explored strategies for effective GBS analysis in pine species. We constructed GBS libraries using HpaII, PstI and EcoRI-MseI digestions with different multiplexing levels and examined the effect of restriction enzymes on library complexity, as well as the impact of sequencing depth and of size selection of restriction fragments on sequence coverage bias. We tested and compared the UNEAK, Stacks and GATK pipelines for the GBS data, and then developed a reference-free SNP calling strategy for haploid pine genomes. Our GBS procedure proved effective in SNP discovery, producing 7,000-11,000 and 14,751 SNPs within and among three pine species, respectively, from a PstI library. This investigation provides guidance for the design and analysis of GBS experiments, particularly for organisms for which genomic information is lacking. © 2014 John Wiley & Sons Ltd.
Chen, Yantian; Bloemen, Veerle; Impens, Saartje; Moesen, Maarten; Luyten, Frank P; Schrooten, Jan
2011-12-01
Cell seeding into scaffolds plays a crucial role in the development of efficient bone tissue engineering constructs. Hence, it becomes imperative to identify the key factors that quantitatively predict reproducible and efficient seeding protocols. In this study, the optimization of a cell seeding process was investigated using design of experiments (DOE) statistical methods. Five seeding factors (cell type, scaffold type, seeding volume, seeding density, and seeding time) were selected and investigated by means of two response parameters, critically related to the cell seeding process: cell seeding efficiency (CSE) and cell-specific viability (CSV). In addition, cell spatial distribution (CSD) was analyzed by Live/Dead staining assays. Analysis identified a number of statistically significant main factor effects and interactions. Among the five seeding factors, only seeding volume and seeding time significantly affected CSE and CSV. Also, cell and scaffold type were involved in the interactions with other seeding factors. Within the investigated ranges, optimal conditions in terms of CSV and CSD were obtained when seeding cells in a regular scaffold with an excess of medium. The results of this case study contribute to a better understanding and definition of optimal process parameters for cell seeding. A DOE strategy can identify and optimize critical process variables to reduce the variability and assists in determining which variables should be carefully controlled during good manufacturing practice production to enable a clinically relevant implant.
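The DOE analysis described above estimates, among other things, main effects of each seeding factor on the responses. This can be illustrated on a toy two-level factorial; the factor assignments and response values below are invented for illustration and are not the study's measurements:

```python
def main_effect(runs, factor):
    """Main effect of a factor in a two-level factorial design:
    mean response at the factor's high level minus mean response at its low level.
    `runs` maps a tuple of +/-1 factor levels to a measured response."""
    hi = [y for levels, y in runs.items() if levels[factor] == 1]
    lo = [y for levels, y in runs.items() if levels[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Toy 2^2 design: factor 0 = seeding volume, factor 1 = seeding time
# (the two factors the study found significant); responses are hypothetical CSE values (%).
runs = {(-1, -1): 40.0, (1, -1): 55.0, (-1, 1): 50.0, (1, 1): 67.0}
volume_effect = main_effect(runs, 0)
time_effect = main_effect(runs, 1)
```

In a real DOE workflow these effect estimates would be accompanied by interaction terms and significance tests before declaring a factor influential.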
Algorithmic Mechanism Design of Evolutionary Computation.
Pei, Yan
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals, or several groups of individuals, can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by the evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired, preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and establish its foundations from this perspective. This paper is the first step towards that objective, implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
A CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function; nonetheless, industry still uses traditional techniques to obtain those values, largely for lack of knowledge of optimization techniques. Therefore, a simple, easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers understand and determine the best cutting parameters for their turning operations. The system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied; this modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can narrow the gap between academia and industry by introducing an optimization technique that is simple and easy to implement, yet fast and accurate.
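A minimal particle swarm optimizer of the kind used in the optimization stage can be sketched as follows. The surrogate cost function stands in for the ELM-based process model; its form, the parameter names (cutting speed, feed) and all coefficients are invented for illustration:

```python
import random

def pso(cost, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer; `bounds` is a list of (lo, hi) per parameter."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [cost(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (g[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            f = cost(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < cost(g):
                    g = pos[i][:]
    return g, cost(g)

# Hypothetical roughness surrogate in (cutting speed, feed); a real model would come from the ELM.
surrogate = lambda p: (p[0] - 150) ** 2 / 1e4 + (p[1] - 0.2) ** 2 * 100
best, fbest = pso(surrogate, [(50, 300), (0.05, 0.5)])
```

In the described system the same loop would minimize the trained ELM's prediction of the preferred performance function instead of this hand-written quadratic.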
Behavior learning in differential games and reorientation maneuvers
NASA Astrophysics Data System (ADS)
Satak, Neha
The purpose of this dissertation is to apply behavior learning concepts to incomplete-information continuous-time games. Realistic game scenarios are often incomplete-information games in which the players withhold information. A player may not know its opponent's objectives and strategies prior to the start of the game, and this lack of information can limit the player's ability to play optimally. If the player can observe the opponent's actions, it can better optimize its achievements by taking corrective actions. In this research, a framework to learn an opponent's behavior and take corrective actions is developed. The framework allows a player to observe the opponent's actions and formulate behavior models, which can then be used to find the actions that best optimize the player's objective function. In addition, the framework proposes that the player play a safe strategy at the beginning of the game. A safe strategy is defined in this research as a strategy that guarantees a minimum payoff to the player independent of the other players' actions; the player plays it during the initial part of the game until it has learned the opponent's behavior. Two methods to develop behavior models, differing in the formulation of the model, are proposed. The first is the Cost-Strategy Recognition (CSR) method, in which the player formulates an objective function and a strategy for the opponent. The opponent is presumed to be rational and therefore to play to optimize its objective function. The strategy of the opponent depends on the information available to the opponent about the other players in the game, and a strategy formulation presumes a certain level of information available to the opponent. Previous observations of the opponent's actions are used to estimate the parameters of the formulated behavior model, which then predicts the opponent's future actions.
The second is the Direct Approximation of Value Function (DAVF) method. In this method, unlike the CSR method, the player formulates an objective function for the opponent but does not formulate a strategy directly; rather, the player assumes that the opponent is playing optimally, so that a value function satisfying the HJB equation corresponding to the opponent's cost function exists. The DAVF method finds an approximate solution for the value function based on previous observations of the opponent's control, and this approximate solution is then used to predict the opponent's future behavior. Game examples in which only a single player is learning its opponent's behavior are simulated, followed by examples in which both players in a two-player game are learning each other's behavior. In the second part of this research, a reorientation control maneuver for a spinning spacecraft is developed. This aids the application of behavior learning and differential game concepts to the specific scenario involving multiple spinning spacecraft. An impulsive reorientation maneuver with coasting is analytically designed to reorient the spin axis of the spacecraft using a single body-fixed thruster. Cooperative maneuvers of multiple spacecraft optimizing fuel and relative orientation are designed, and Pareto optimality concepts are used to arrive at mutually agreeable reorientation maneuvers for the cooperating spinning spacecraft.
A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
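The cascade idea, chaining dissimilar optimizers with a pseudorandom perturbation of the design variables between stages, can be sketched with two simple stage optimizers. These stand in for the production-grade optimizers the project actually used; the stage algorithms, step sizes and iteration counts below are illustrative choices:

```python
import random

def random_search(f, x, step, iters, rng):
    """Stage 1: greedy random local search."""
    fx = f(x)
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x

def coordinate_descent(f, x, step, iters):
    """Stage 2: greedy per-coordinate moves with a shrinking step."""
    for _ in range(iters):
        for d in range(len(x)):
            for delta in (step, -step):
                cand = x[:]
                cand[d] += delta
                if f(cand) < f(x):
                    x = cand
        step *= 0.8
    return x

def cascade(f, x0, seed=0):
    """Run the stage optimizers in sequence, pseudorandomly perturbing
    the design variables between stages, as in the cascade strategy."""
    rng = random.Random(seed)
    x = random_search(f, x0, step=1.0, iters=200, rng=rng)
    x = [xi + rng.uniform(-0.1, 0.1) for xi in x]  # perturbation between stages
    return coordinate_descent(f, x, step=0.5, iters=50)

# Toy design problem: a smooth cost with optimum at (3, -1).
cost = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2
x_opt = cascade(cost, [0.0, 0.0])
```

The point of the cascade is that a later stage, started from a perturbed copy of the earlier stage's answer, can escape the region where the first optimizer stalled.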
Emergency strategy optimization for the environmental control system in manned spacecraft
NASA Astrophysics Data System (ADS)
Li, Guoxiang; Pang, Liping; Liu, Meng; Fang, Yufeng; Zhang, Helin
2018-02-01
It is very important for a manned environmental control system (ECS) to be able to reconfigure its operation strategy in emergency conditions. In this article, a multi-objective optimization is established to design the optimal emergency strategy for an ECS in an insufficient power supply condition. The maximum ECS lifetime and the minimum power consumption are chosen as the optimization objectives. Some adjustable key variables are chosen as the optimization variables, which finally represent the reconfigured emergency strategy. The non-dominated sorting genetic algorithm-II is adopted to solve this multi-objective optimization problem. Optimization processes are conducted at four different carbon dioxide partial pressure control levels. The study results show that the Pareto-optimal frontiers obtained from this multi-objective optimization can represent the relationship between the lifetime and the power consumption of the ECS. Hence, the preferred emergency operation strategy can be recommended for situations when there is suddenly insufficient power.
NASA Astrophysics Data System (ADS)
Sturdevant-Rees, P. L.; Bourdeau, D.; Baker, R.; Long, S. C.; Barten, P. K.
2004-05-01
Microbial and water-quality measurements are collected during storm events under a variety of meteorological and land-use conditions in order to 1) identify risk of Cryptosporidium oocysts, Giardia cysts and other constituents, including microbial indicator organisms, entering surface waters from various land uses during periods of surface runoff; 2) optimize storm sampling procedures for these parameters; and 3) optimize strategies for accurate determination of constituent loads. The investigation is focused on four isolated land uses: forested with free ranging wildlife, beaver influenced forested with free ranging wildlife, residential/commercial, and dairy farm grazing/pastureland using an upstream and downstream sampling strategy. Traditional water-quality analyses include pH, temperature, turbidity, conductivity, total suspended solids, total phosphorus, total Kjeldahl-nitrogen, and ammonia nitrogen, Giardia cysts and Cryptosporidium oocysts. Total coliforms and fecal coliforms are measured as industry standard microbial analyses. Sorbitol-fermenting Bifidobacteria, Rhodococcus coprophilus, Clostridium perfringens spores, and Somatic and F-specific coliphages are measured at select sites as potential alternative source-specific indicator organisms. Upon completion of the project, the final database will consist of wet weather transport data for a set of parameters during twenty-four distinct storm-events in addition to monthly baseline data. A subset of the results to date will be presented, with focus placed on demonstrating the impact of beaver on constituent loadings over a variety of hydrologic and meteorological conditions.
A Fuzzy Approach of the Competition on the Air Transport Market
NASA Technical Reports Server (NTRS)
Charfeddine, Souhir; DeColigny, Marc; Camino, Felix Mora; Cosenza, Carlos Alberto Nunes
2003-01-01
The aim of this communication is to study, from a new perspective, the conditions of equilibrium in an air transport market where two competing airlines operate. Each airline is assumed to adopt a strategy that maximizes its profit while its estimation of demand is fuzzy in nature. This leads each company to optimize a program of its proposed services (flight frequency and ticket prices) characterized by fuzzy parameters; the case of monopoly is taken as a benchmark. Classical convex optimization can be used to solve this decision problem, providing the airline with a new decision tool in which uncertainty is taken into account explicitly. The confrontation of the companies' strategies, in the case of duopoly, leads to the definition of a fuzzy equilibrium. This concept of fuzzy equilibrium is more general and can be applied to several other domains. The formulation of the optimization problem and the methodological considerations adopted for its resolution are presented in their general theoretical aspect. In air transportation, where the conditions of operations management are critical, this approach should offer the manager the elements needed to consolidate decisions according to circumstances (ordinary or exceptional events) and to be prepared to face all possibilities. Keywords: air transportation, competition equilibrium, convex optimization, fuzzy modeling.
Chowdhary, A G; Challis, J H
2001-07-07
A series of overarm throws, constrained to the parasagittal plane, were simulated using a muscle-model-actuated two-segment model representing the forearm and hand plus projectile. The parameters defining the modeled muscles and the anthropometry of the two-segment models were specific to the two young male subjects. All simulations commenced from a position of full elbow flexion and full wrist extension. The study was designed to elucidate the optimal inter-muscular coordination strategies for throwing projectiles to achieve maximum range, as well as maximum projectile kinetic energy, for a variety of projectile masses. A proximal-to-distal (PD) sequence of muscle activations was seen in many of the simulated throws, but not all. Under certain conditions moment reversal produced a longer throw and greater projectile energy, and deactivation of the muscles resulted in increased projectile energy. Therefore, simple timing of muscle activation does not fully describe the patterns of muscle recruitment which can produce optimal throws. The models of the two subjects required different timings of muscle activations, and for some of the tasks used different coordination patterns. Optimal strategies were found to vary with the mass of the projectile, the anthropometry and the muscle characteristics of the subjects modeled. The tasks examined were relatively simple, but basic rules for coordinating these tasks were not evident. Copyright 2001 Academic Press.
Using Chemical Reaction Kinetics to Predict Optimal Antibiotic Treatment Strategies.
Abel Zur Wiesch, Pia; Clarelli, Fabrizio; Cohen, Ted
2017-01-01
Identifying optimal dosing of antibiotics has proven challenging: some antibiotics are most effective when they are administered periodically at high doses, while others work best when minimizing concentration fluctuations. Mechanistic explanations for why antibiotics differ in their optimal dosing are lacking, limiting our ability to predict optimal therapy and leading to long and costly experiments. We use mathematical models that describe both bacterial growth and intracellular antibiotic-target binding to investigate the effects of fluctuating antibiotic concentrations on individual bacterial cells and bacterial populations. We show that physicochemical parameters, e.g., the rate of drug transmembrane diffusion and the antibiotic-target complex half-life, are sufficient to explain which treatment strategy is most effective. If the drug-target complex dissociates rapidly, the antibiotic must be kept constantly at a concentration that prevents bacterial replication. If antibiotics cross bacterial cell envelopes slowly to reach their target, there is a delay in the onset of action that may be reduced by increasing the initial antibiotic concentration. Finally, slow drug-target dissociation and slow diffusion out of cells act to prolong antibiotic effects, thereby allowing for less frequent dosing. Our model can be used as a tool in the rational design of treatment for bacterial infections. It is easily adaptable to other biological systems, e.g., HIV, malaria and cancer, where the effects of physiological fluctuations of drug concentration are also poorly understood.
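The core mechanism described above, growth that depends on the bound fraction of intracellular targets, can be sketched as a toy pair of rate equations integrated with Euler steps. All values below (binding rate, growth and kill rates, the single 4-hour dosing pulse) are hypothetical placeholders, not the paper's fitted parameters:

```python
def simulate(k_off, conc, t_end=24.0, dt=0.01):
    """Euler integration of a toy model: f is the bound fraction of
    intracellular targets, B the bacterial population size (all rates
    are hypothetical, in per-hour units)."""
    k_on = 1.0    # drug-target binding rate per unit concentration
    grow = 0.7    # net growth rate when targets are free
    kill = 1.0    # net death rate when targets are bound
    f, B, t = 0.0, 1.0, 0.0
    while t < t_end:
        c = conc if t < 4.0 else 0.0   # a single 4-hour dosing pulse
        df = k_on * c * (1.0 - f) - k_off * f
        dB = (grow * (1.0 - f) - kill * f) * B
        f += df * dt
        B += dB * dt
        t += dt
    return B
```

With these placeholder rates, a slowly dissociating drug (small k_off) keeps targets occupied long after the pulse ends and the population keeps falling, while a rapidly dissociating drug lets the population rebound, mirroring the abstract's qualitative conclusion.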
NASA Astrophysics Data System (ADS)
Schöttl, Peter; Bern, Gregor; van Rooyen, De Wet; Heimsath, Anna; Fluri, Thomas; Nitz, Peter
2017-06-01
A transient simulation methodology for cavity receivers in Solar Tower Central Receiver Systems with molten salt as the heat transfer fluid is described. Absorbed solar radiation is modeled with ray tracing and a sky discretization approach to reduce computational effort. Solar radiation re-distribution in the cavity as well as thermal radiation exchange are modeled based on view factors, which are also calculated with ray tracing. An analytical approach is used to represent convective heat transfer in the cavity. Heat transfer fluid flow is simulated with a discrete tube model, where the boundary conditions at the outer tube surface mainly depend on inputs from the previously mentioned modeling aspects. A specific focus is placed on the integration of the optical and thermo-hydraulic models. Furthermore, aiming point and control strategies are described, which are used during the transient performance assessment. Finally, the developed simulation methodology is used for the optimization of the aperture opening size of a PS10-like reference scenario with a cavity receiver and heliostat field. The objective function is based on the cumulative gain of one representative day. Results include the optimized aperture opening size, transient receiver characteristics and the benefits of the implemented aiming point strategy compared to a single-aiming-point approach. Future work will include annual simulations, cost assessment and optimization of a larger range of receiver parameters.
Optimal Reservoir Operation using Stochastic Model Predictive Control
NASA Astrophysics Data System (ADS)
Sahu, R.; McLaughlin, D.
2016-12-01
Hydropower operations are typically designed to fulfill contracts negotiated with consumers who need reliable energy supplies, despite uncertainties in reservoir inflows. In addition to providing reliable power, the reservoir operator needs to take into account environmental factors such as downstream flooding or compliance with minimum flow requirements. From a dynamical systems perspective, the reservoir operating strategy must cope with conflicting objectives in the presence of random disturbances. In order to achieve optimal performance, the reservoir system needs to continually adapt to disturbances in real time. Model Predictive Control (MPC) is a real-time control technique that adapts by deriving the reservoir release at each decision time from the current state of the system. Here an ensemble-based version of MPC (SMPC) is applied to a generic reservoir to determine both the optimal power contract, considering future inflow uncertainty, and a real-time operating strategy that attempts to satisfy the contract. Contract selection and real-time operation are coupled in an optimization framework that also defines a Pareto trade-off between the revenue generated from energy production and the environmental damage resulting from uncontrolled reservoir spills. Further insight is provided by a sensitivity analysis of key parameters specified in the SMPC technique. The results demonstrate that SMPC is suitable for multi-objective planning and associated real-time operation of a wide range of hydropower reservoir systems.
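The flavor of an ensemble-based release rule can be sketched with a one-step lookahead: evaluate candidate releases against every sampled inflow and pick the one with the lowest expected cost. The cost weights, candidate grid, and penalty structure below are hypothetical; a real SMPC implementation optimizes over a full horizon:

```python
def smpc_release(storage, inflow_ensemble, contract, capacity):
    """One-step-lookahead ensemble rule: score each candidate release
    against every sampled inflow; penalize uncontrolled spill and unmet
    contract (hypothetical weights)."""
    best_r, best_cost = 0.0, float("inf")
    candidates = [contract * k / 10.0 for k in range(21)]  # 0 .. 2x contract
    for r in candidates:
        if r > storage:
            continue  # cannot release more water than is stored
        cost = 0.0
        for q in inflow_ensemble:
            s_next = storage - r + q
            spill = max(0.0, s_next - capacity)   # uncontrolled spill
            shortfall = max(0.0, contract - r)    # unmet power contract
            cost += spill + 2.0 * shortfall
        cost /= len(inflow_ensemble)
        if cost < best_cost:
            best_r, best_cost = r, cost
    return best_r
```

When the reservoir is nearly full and the inflow ensemble is large, the rule releases beyond the contract to pre-empt spill; when storage is scarce it releases what it can toward the contract, which is the multi-objective trade-off in miniature.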
Using Chemical Reaction Kinetics to Predict Optimal Antibiotic Treatment Strategies
Abel zur Wiesch, Pia; Cohen, Ted
2017-01-01
Identifying optimal dosing of antibiotics has proven challenging: some antibiotics are most effective when they are administered periodically at high doses, while others work best when minimizing concentration fluctuations. Mechanistic explanations for why antibiotics differ in their optimal dosing are lacking, limiting our ability to predict optimal therapy and leading to long and costly experiments. We use mathematical models that describe both bacterial growth and intracellular antibiotic-target binding to investigate the effects of fluctuating antibiotic concentrations on individual bacterial cells and bacterial populations. We show that physicochemical parameters, e.g., the rate of drug transmembrane diffusion and the antibiotic-target complex half-life, are sufficient to explain which treatment strategy is most effective. If the drug-target complex dissociates rapidly, the antibiotic must be kept constantly at a concentration that prevents bacterial replication. If antibiotics cross bacterial cell envelopes slowly to reach their target, there is a delay in the onset of action that may be reduced by increasing the initial antibiotic concentration. Finally, slow drug-target dissociation and slow diffusion out of cells act to prolong antibiotic effects, thereby allowing for less frequent dosing. Our model can be used as a tool in the rational design of treatment for bacterial infections. It is easily adaptable to other biological systems, e.g., HIV, malaria and cancer, where the effects of physiological fluctuations of drug concentration are also poorly understood. PMID:28060813
Game-theoretic perspective of Ping-Pong protocol
NASA Astrophysics Data System (ADS)
Kaur, Hargeet; Kumar, Atul
2018-01-01
We analyse the Ping-Pong protocol from the point of view of a game. The analysis helps us in understanding the different strategies of a sender and an eavesdropper to gain the maximum payoff in the game. The study presented here characterizes strategies that lead to different Nash equilibria. We further demonstrate the condition for Pareto optimality depending on the parameters used in the game. Moreover, we also analyse the LM05 protocol and compare it with the PP protocol from the point of view of a generic two-way QKD game with or without entanglement. Our results provide a deeper understanding of general two-way QKD protocols in terms of the security and payoffs of the different stakeholders in the protocol.
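Finding the pure-strategy Nash equilibria of a finite two-player game, of the kind used above to model sender and eavesdropper, reduces to checking every strategy pair for unilateral deviations. The payoff matrix in the usage example is an illustrative stand-in, not the protocol's actual payoffs:

```python
def pure_nash(payoffs):
    """Enumerate pure-strategy Nash equilibria of a two-player game.
    payoffs[i][j] = (row player's payoff, column player's payoff)."""
    n_rows, n_cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for i in range(n_rows):
        for j in range(n_cols):
            # (i, j) is an equilibrium if neither player gains by deviating
            row_best = all(payoffs[i][j][0] >= payoffs[k][j][0]
                           for k in range(n_rows))
            col_best = all(payoffs[i][j][1] >= payoffs[i][k][1]
                           for k in range(n_cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria
```

For a 2x2 game with a dominant-strategy structure the enumeration yields a single equilibrium; checking Pareto optimality of each equilibrium would then be one further comparison over the payoff pairs.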
Farré, Maria José; Döderer, Katrin; Hearn, Laurence; Poussade, Yvan; Keller, Jurg; Gernjak, Wolfgang
2011-01-30
N-nitrosodimethylamine (NDMA) can be formed when secondary effluents are disinfected with chloramines. By means of bench-scale experiments, this paper investigates operational parameters that can help Advanced Water Treatment Plants (AWTPs) to reduce the formation of NDMA during the production of high-quality recycled water. The formation of NDMA was monitored during a contact time of 24 h using dimethylamine as an NDMA model precursor and secondary effluent from wastewater treatment plants. The three chloramine disinfection strategies tested were pre-formed and in-line formed monochloramine, and pre-formed dichloramine. Although the latter is not employed on purpose in full-scale applications, it has been suggested as the main contributing chemical generating NDMA during chloramination. After 24 h, the NDMA formation decreased in both matrices tested in the order: pre-formed dichloramine > in-line formed monochloramine ≫ pre-formed monochloramine. The most important parameter to consider for the inhibition of NDMA formation was the length of contact time between disinfectant and wastewater. Formation of NDMA was initially inhibited for up to 6 h, with concentrations consistently <10 ng/L during these early stages of disinfection, regardless of the disinfection strategy. The reduction of the contact time was implemented at Bundamba AWTP (Queensland, Australia), where NDMA concentrations were reduced by a factor of 20 by optimizing the disinfection strategy. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Bhattacharjya, Rajib Kumar
2018-05-01
The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem. In the first stage, the infiltration parameters are obtained, and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during subsequent generations of the genetic algorithm, required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior and simple in concept, and also has potential for field application.
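The idea of folding a constraint into the objective via a penalty parameter can be illustrated on a toy one-dimensional problem. Plain random search stands in for the genetic algorithm here, and the paper's reduction-factor mechanics are not reproduced; the problem, bounds, and penalty weight are all hypothetical:

```python
import random

def penalized_obj(x, penalty):
    """Toy constrained problem: minimize (x - 3)^2 subject to x <= 2,
    with the constraint folded into the objective as a penalty term."""
    violation = max(0.0, x - 2.0)
    return (x - 3.0) ** 2 + penalty * violation ** 2

def random_search(penalty, trials=5000, seed=1):
    """Plain random search standing in for the genetic algorithm."""
    rng = random.Random(seed)
    best_x, best_f = 0.0, penalized_obj(0.0, penalty)
    for _ in range(trials):
        x = rng.uniform(-5.0, 5.0)
        f = penalized_obj(x, penalty)
        if f < best_f:
            best_x, best_f = x, f
    return best_x
```

With a large penalty the search settles at the constrained optimum (x near 2); with the penalty switched off it drifts to the unconstrained minimum (x near 3), which is the mechanism the penalty-parameter conversion relies on.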
Oxide vapor distribution from a high-frequency sweep e-beam system
NASA Astrophysics Data System (ADS)
Chow, R.; Tassano, P. L.; Tsujimoto, N.
1995-03-01
Oxide vapor distributions have been determined as a function of the operating parameters of a high-frequency sweep e-beam source combined with a programmable sweep controller. We will show which parameters are significant, the parameters that yield the broadest oxide deposition distribution, and the procedure used to arrive at these conclusions. A design-of-experiments strategy was used with five operating parameters: evaporation rate, sweep speed, sweep pattern (pre-programmed), phase speed (azimuthal rotation of the pattern), and profile (dwell time as a function of radial position). A design was chosen that would show which of the parameters and parameter pairs have a statistically significant effect on the vapor distribution. Witness flats were placed symmetrically across a 25-inch-diameter platen. The stationary platen was centered 24 inches above the e-gun crucible. An oxide material was evaporated under 27 different conditions. Thickness measurements were made with a stylus profilometer. The information will enable users of high-frequency e-gun systems to optimally locate the source in a vacuum system and understand which parameters have a major effect on the vapor distribution.
An Optimal Current Observer for Predictive Current Controlled Buck DC-DC Converters
Min, Run; Chen, Chen; Zhang, Xiaodong; Zou, Xuecheng; Tong, Qiaoling; Zhang, Qiao
2014-01-01
In digital current-mode-controlled DC-DC converters, conventional current sensors might not provide isolation at minimal price, power loss and size. Therefore, a current observer, which can be realized based on the digital circuit itself, is a possible substitute. However, the observed current may diverge due to the parasitic resistors and the forward conduction voltage of the diode. Moreover, the divergence of the observed current will cause steady-state errors in the output voltage. In this paper, an optimal current observer is proposed. It achieves the highest observation accuracy by compensating for all the known parasitic parameters. By employing the optimal current observer-based predictive current controller, a buck converter is implemented. The converter has a convergently and accurately observed inductor current, and shows a better transient response than the conventional voltage-mode-controlled converter. Besides, cost, power loss and size are minimized since the strategy requires no additional hardware for current sensing. The effectiveness of the proposed optimal current observer is demonstrated experimentally. PMID:24854061
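A minimal sketch of a parasitic-compensating inductor-current observer for a buck converter: the update averages the inductor voltage over the switch-on and diode-on sub-intervals, with the parasitic terms (inductor resistance, switch on-resistance, diode drop) included so the prediction does not drift. All component values are hypothetical, not the paper's hardware:

```python
def observe_current(i_hat, duty, v_in, v_out, params, ts):
    """One-step prediction of buck-converter inductor current over a
    switching period ts, compensating known parasitics: inductor
    resistance r_L, switch on-resistance r_on, diode forward drop v_d."""
    L = params["L"]
    v_on = v_in - i_hat * (params["r_on"] + params["r_L"]) - v_out
    v_off = -params["v_d"] - i_hat * params["r_L"] - v_out
    v_L = duty * v_on + (1.0 - duty) * v_off  # duty-averaged inductor voltage
    return i_hat + v_L * ts / L
```

Because the parasitic terms feed back on the estimate itself, repeated updates converge to the steady-state current instead of diverging, which is the failure mode the abstract describes for an uncompensated observer.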
Gebraad, P. M. O.; Teeuwisse, F. W.; van Wingerden, J. W.; ...
2016-01-01
This article presents a wind plant control strategy that optimizes the yaw settings of wind turbines for improved energy production of the whole wind plant by taking into account wake effects. The optimization controller is based on a novel internal parametric model for wake effects, called the FLOw Redirection and Induction in Steady-state (FLORIS) model. The FLORIS model predicts the steady-state wake locations and the effective flow velocities at each turbine, and the resulting turbine electrical energy production levels, as a function of the axial induction and the yaw angle of the different rotors. The FLORIS model has a limited number of parameters that are estimated based on turbine electrical power production data. In high-fidelity computational fluid dynamics simulations of a small wind plant, we demonstrate that the optimization control based on the FLORIS model increases the energy production of the wind plant, with a reduction of loads on the turbines as an additional effect.
Awad, Ghada E A; Amer, Hassan; El-Gammal, Eman W; Helmy, Wafaa A; Esawy, Mona A; Elnashar, Magdy M M
2013-04-02
A sequential optimization strategy, based on statistical experimental designs, was employed to enhance the production of invertase by Lactobacillus brevis Mm-6 isolated from breast milk. First, a 2-level Plackett-Burman design was applied to screen the bioprocess parameters that significantly influence invertase production. The second optimization step was performed using a fractional factorial design in order to optimize the amounts of the variables with the highest positive significant effect on invertase production. A maximal enzyme activity of 1399 U/ml was more than fivefold the activity obtained using the basal medium. Invertase was immobilized onto grafted alginate beads to improve the enzyme's stability. The immobilization process increased the operational temperature from 30 to 60°C compared to the free enzyme. The reusability test proved the durability of the grafted alginate beads for 15 cycles with retention of 100% of the immobilized enzyme activity, making it more convenient for industrial uses. Copyright © 2013 Elsevier Ltd. All rights reserved.
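Two-level screening designs like the Plackett-Burman step above estimate each factor's main effect as the difference between the mean response at its high (+1) and low (-1) settings. The sketch below uses a full two-level factorial as a stand-in for a Plackett-Burman matrix, with a synthetic response function:

```python
from itertools import product

def screen_effects(response_fn, n_factors):
    """Run a full two-level factorial (a stand-in for a Plackett-Burman
    screen) and return each factor's main effect: mean response at +1
    minus mean response at -1."""
    runs = list(product([-1, 1], repeat=n_factors))
    y = [response_fn(r) for r in runs]
    half = len(runs) / 2.0
    effects = []
    for j in range(n_factors):
        hi = sum(yi for r, yi in zip(runs, y) if r[j] == 1)
        lo = sum(yi for r, yi in zip(runs, y) if r[j] == -1)
        effects.append((hi - lo) / half)
    return effects
```

Factors with large estimated effects are carried forward to the second, finer optimization step; the rest are fixed at convenient levels, which is the screening logic of the sequential strategy.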
NASA Astrophysics Data System (ADS)
Mozaffari, Ahmad; Vajedi, Mahyar; Chehresaz, Maryyeh; Azad, Nasser L.
2016-03-01
The urgent need to meet increasingly tight environmental regulations and new fuel economy requirements has motivated system science researchers and automotive engineers to take advantage of emerging computational techniques to further advance hybrid electric vehicle and plug-in hybrid electric vehicle (PHEV) designs. In particular, research has focused on vehicle powertrain system design optimization, to reduce the fuel consumption and total energy cost while improving the vehicle's driving performance. In this work, two different natural optimization machines, namely the synchronous self-learning Pareto strategy and the elitism non-dominated sorting genetic algorithm, are implemented for component sizing of a specific power-split PHEV platform with a Toyota plug-in Prius as the baseline vehicle. To do this, a high-fidelity model of the Toyota plug-in Prius is employed for the numerical experiments using the Autonomie simulation software. Based on the simulation results, it is demonstrated that Pareto-based algorithms can successfully optimize the design parameters of the vehicle powertrain.
Economic analysis of pilot-scale production of B-phycoerythrin.
Torres-Acosta, Mario A; Ruiz-Ruiz, Federico; Aguilar-Yáñez, José M; Benavides, Jorge; Rito-Palomares, Marco
2016-11-01
β-Phycoerythrin is a color protein with several applications, from food coloring to molecular labeling. Depending on the application, different purity is required, affecting production cost and price. Different production and purification strategies for B-phycoerythrin have been developed; the most studied are based on production using Porphyridium cruentum followed by purification using chromatographic techniques or aqueous two-phase systems. The use of the latter can result in a less expensive and more intensive recovery of the protein, but a proper economic analysis of the effect of using aqueous two-phase systems in a scaled-up process has been lacking. This study analyzed the production of B-phycoerythrin using real data obtained during the scale-up of a bioprocess using specialized software (BioSolve, Biopharm Services, UK). First, a sensitivity analysis was performed to identify critical parameters for the production cost, followed by a Monte Carlo analysis to emulate real processes by adding uncertainty to the identified parameters. Next, the bioprocess was analyzed to determine its financial attractiveness, and possible optimization strategies were tested and discussed. Results show that aqueous two-phase systems retain their advantages of low cost and intensive recovery (54.56%); the calculated production costs per gram (before titer optimization: US$15,709; after optimization: US$2,374) would allow a potential company adopting this production method to obtain profit in the range of US$ millions over a 10-year period when comparing production cost against commercial prices. The bioprocess analyzed is a promising and profitable method for the generation of highly purified B-phycoerythrin. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1472-1479, 2016. © 2016 American Institute of Chemical Engineers.
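The sensitivity-then-Monte-Carlo pattern above can be sketched by sampling the identified uncertain inputs and propagating them through a simple cost model. All ranges, batch volume, and the fixed batch cost below are hypothetical placeholders, not the paper's BioSolve data:

```python
import random

def monte_carlo_cost(n_runs=20000, seed=3):
    """Propagate uncertainty in two sensitive inputs (titer and ATPS
    recovery, hypothetical ranges) into production cost per gram."""
    rng = random.Random(seed)
    costs = []
    for _ in range(n_runs):
        titer = rng.uniform(0.8, 1.2)       # g/L, about +/-20% of nominal
        recovery = rng.uniform(0.45, 0.65)  # ATPS recovery fraction
        grams = titer * 100.0 * recovery    # assume a 100 L batch
        costs.append(5000.0 / grams)        # fixed cost per batch, $
    costs.sort()
    mean = sum(costs) / len(costs)
    p95 = costs[int(0.95 * len(costs))]     # 95th-percentile cost
    return mean, p95
```

Reporting a percentile alongside the mean is what turns the point estimate into a risk statement: a company can check whether the business case still holds at the unlucky end of the cost distribution.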
Environmental statistics and optimal regulation.
Sivak, David A; Thomson, Matt
2014-09-01
Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies, such as constitutive expression or graded response, for regulating protein levels in response to environmental inputs. We propose a general framework, here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient, to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
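Question (ii), when a Bayesian decision rule pays off, can be illustrated with a binary nutrient state, a noisy sensor, and a cost-benefit threshold on the posterior. The payoff values below are hypothetical, and the binary setup is a drastic simplification of the paper's framework:

```python
def bayes_decision(prior_high, p_correct, observed_high,
                   benefit=2.0, cost=1.0):
    """Express the enzyme only if the posterior probability of a high
    nutrient level makes the expected benefit exceed the production
    cost. p_correct is the chance the binary sensor reads the true
    state; benefit and cost are hypothetical payoffs."""
    if observed_high:
        num = prior_high * p_correct
        den = num + (1.0 - prior_high) * (1.0 - p_correct)
    else:
        num = prior_high * (1.0 - p_correct)
        den = num + (1.0 - prior_high) * p_correct
    posterior_high = num / den
    return posterior_high * benefit > cost
```

With an accurate sensor the decision tracks the measurement; with a useless sensor (p_correct = 0.5) the posterior collapses to the prior and the cell effectively falls back to a constitutive strategy, so the sophisticated rule only earns its keep at intermediate measurement uncertainty.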
Reuse of assembly systems: a great ecological and economical potential for facility suppliers
NASA Astrophysics Data System (ADS)
Weule, Hartmut; Buchholz, Carsten
2001-02-01
In addition to consumer goods, capital goods offer a great potential for ecological and economic optimization. In view of this fact, the project WiMonDi (Re-Use of Assembly Systems as new Business Fields), started in September 1998, focuses on a marketable Remanufacturing and Re-Use of modules and components of assembly systems by using technically and organizationally continuous concepts. The objective of the closed Facility-Management-System is to prolong the serviceable lifespan of assembly facilities through the organized dismantling, refurbishment and reconditioning of the assembly facilities as well as their components. Therefore, it is necessary to develop feasible and methodical strategies to realize a workable Re-Use concept. Within the project, the focus is on the optimization of Re-Use strategies: direct Re-Use, Re-Use including Refurbishment, and Material Recycling. The decision for an optimal strategy depends on economic parameters (e.g., residual value, cost/benefit of relevant processes), ecological parameters (e.g., pollutant components/substances) and technical parameters (e.g., reliability). For the purpose of integrating the total cost of ownership of products or components, WiMonDi integrates the costs of the use of products as well as the Re-Use costs/benefits. To initiate the conception of new distribution and user models between the supplier and the user of assembly facilities, the described approach is conducted in close cooperation between industry and university.
Retrieval of Winter Wheat Leaf Area Index from Chinese GF-1 Satellite Data Using the PROSAIL Model
Li, He; Liu, Gaohuan; Liu, Qingsheng; Chen, Zhongxin; Huang, Chong
2018-01-01
Leaf area index (LAI) is one of the key biophysical parameters in crop structure. The accurate quantitative estimation of crop LAI is essential to verify crop growth and health. The PROSAIL radiative transfer model (RTM) is one of the most established methods for estimating crop LAI. In this study, a look-up table (LUT) based on the PROSAIL RTM was first used to estimate winter wheat LAI from GF-1 data, which accounted for some available prior knowledge relating to the distribution of winter wheat characteristics. Next, the effects of 15 LAI-LUT strategies with reflectance bands and 10 LAI-LUT strategies with vegetation indexes on the accuracy of the winter wheat LAI retrieval at different phenological stages were evaluated against in situ LAI measurements. The results showed that the LAI-GNDVI LUT strategy was optimal, with the highest accuracy (a root mean squared error (RMSE) of 0.34 and a coefficient of determination (R2) of 0.61), during the elongation stages, and the LAI-Green LUT strategy was optimal (an RMSE of 0.74 and R2 of 0.20) during the grain-filling stages. The results demonstrated that the PROSAIL RTM has great potential for winter wheat LAI inversion with GF-1 satellite data and that performance can be improved by selecting appropriate LUT inversion strategies in different growth periods. PMID:29642395
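A look-up-table inversion reduces to simulating reflectance over a grid of candidate LAI values and returning the grid entry whose spectrum is closest to the observation. The toy forward model below is a saturating stand-in for PROSAIL, with purely illustrative coefficients, not the real RTM:

```python
import math

def forward_model(lai):
    """Toy stand-in for PROSAIL: red and NIR reflectance as saturating
    functions of LAI (coefficients are illustrative only)."""
    red = 0.30 * math.exp(-0.6 * lai) + 0.03
    nir = 0.45 * (1.0 - math.exp(-0.5 * lai)) + 0.10
    return (red, nir)

def build_lut(lai_grid):
    """Precompute (LAI, simulated spectrum) pairs over a candidate grid."""
    return [(lai, forward_model(lai)) for lai in lai_grid]

def retrieve_lai(observed, lut):
    """Return the LAI of the LUT entry closest to the observed spectrum
    (sum-of-squares cost, i.e. an RMSE-equivalent ranking)."""
    def cost(sim):
        return sum((o - s) ** 2 for o, s in zip(observed, sim))
    return min(lut, key=lambda entry: cost(entry[1]))[0]
```

The choice of which bands or indexes enter the cost function is exactly the "LUT strategy" the study compares across phenological stages; swapping the tuple of inputs changes which retrievals are well-conditioned.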
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models
Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talukder, Srijeeta; Chaudhury, Pinaki, E-mail: pinakc@rediffmail.com; Sen, Shrabani
We propose a strategy of using a stochastic optimization technique, namely simulated annealing, to design optimum laser pulses (both IR and UV) to achieve greater fluxes along the two dissociating channels (O¹⁸ + O¹⁶O¹⁶ and O¹⁶ + O¹⁶O¹⁸) in the O¹⁶O¹⁶O¹⁸ molecule. We show that the integrated flux obtained along the targeted dissociating channel is larger with the optimized pulse than with the unoptimized one. The flux ratios are also more impressive with the optimized pulse than with the unoptimized one. We also look at the evolution contours of the wavefunctions along the two channels with time after the actions of both the IR and UV pulses, and compare the profiles for the unoptimized (initial) and optimized fields to better understand the results that we achieve. We also report the pulse parameters obtained as well as the final shapes they take.
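A generic simulated-annealing loop of the kind used here can be sketched as follows. The objective is a smooth toy function standing in for the computed channel flux, and the move size, cooling schedule, and seed are arbitrary choices, not the paper's settings:

```python
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995,
                        n_iter=4000, seed=7):
    """Generic simulated-annealing maximizer: always accept improvements,
    accept worse moves with probability exp(delta / temperature)."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = objective(cand)
        if fc > fx or rng.random() < math.exp((fc - fx) / t):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

Early on, the high temperature lets the search escape poor pulse-parameter regions; as the temperature decays the loop turns into a hill climber that refines the best candidate found.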
NASA Astrophysics Data System (ADS)
Hu, K. M.; Li, Hua
2018-07-01
A novel technique for the multi-parameter optimization of distributed piezoelectric actuators is presented in this paper. The proposed method is designed to improve the performance of multi-mode vibration control in cylindrical shells. The optimization parameters of the actuator patch configuration include position, size, and tilt angle. The modal control force of tilted orthotropic piezoelectric actuators is derived and the multi-parameter cylindrical shell optimization model is established. The linear quadratic energy index is employed as the optimization criterion. A geometric constraint is proposed to prevent overlap between tilted actuators, which is plugged into a genetic algorithm to search for the optimal configuration parameters. A simply supported closed cylindrical shell with two actuators serves as a case study. The vibration control efficiencies of various parameter sets are evaluated via frequency response and transient response simulations. The results show that the linear quadratic energy index of position-and-size optimization decreased by 14.0% compared to position-only optimization; that of position-and-tilt-angle optimization decreased by 16.8%; and that of position, size, and tilt-angle optimization decreased by 25.9%. This indicates that adding configuration optimization parameters is an efficient approach to improving the vibration control performance of piezoelectric actuators on shells.
Tommasino, Paolo; Campolo, Domenico
2017-01-01
A major challenge in robotics and computational neuroscience relates to the posture/movement problem in the presence of kinematic redundancy. We recently addressed this issue using a principled approach which, in conjunction with nonlinear inverse optimization, allowed capturing postural strategies such as Donders' law. In this work, after presenting this general model, specified as an extension of the Passive Motion Paradigm, we show how, once fitted to capture experimental postural strategies, the model is actually able to also predict movements. More specifically, the Passive Motion Paradigm embeds two main intrinsic components: joint damping and joint stiffness. In previous work we showed that joint stiffness is responsible for static postures and, in this sense, its parameters are regressed to fit experimental postural strategies. Here, we show how joint damping, in particular its anisotropy, directly affects task-space movements. Rather than using damping parameters to fit task-space motions a posteriori, we make the a priori hypothesis that damping is proportional to stiffness. This remarkably allows a posturally fitted model to also capture dynamic performance such as curvature and hysteresis of task-space trajectories during wrist pointing tasks, confirming and extending previous findings in the literature. PMID:29249954
Rosskopf, Sandra; Leitner, Judith; Paster, Wolfgang; Morton, Laura T; Hagedoorn, Renate S; Steinberger, Peter; Heemskerk, Mirjam H M
2018-04-03
Adoptive T cell therapy using TCR-transgenic autologous T cells has shown great potential for the treatment of tumor patients. Thorough characterization of genetically reprogrammed T cells is necessary to optimize treatment success. Here, we describe the generation of triple-parameter reporter T cells based on the Jurkat 76 T cell line for the evaluation of TCR and chimeric antigen receptor functions as well as adoptive T cell strategies. This Jurkat subline is devoid of endogenous TCR alpha and TCR beta chains, thereby circumventing the problem of TCR mispairing and unexpected specificities. The resultant reporter cells allow simultaneous determination of the activity of the transcription factors NF-κB, NFAT and AP-1, which play key roles in T cell activation. Human TCRs directed against tumor and virus antigens were introduced and reporter responses were determined using tumor cell lines endogenously expressing the antigens of interest or via addition of antigenic peptides. Finally, we demonstrate that coexpression of adhesion molecules like CD2 and CD226 as well as CD28 chimeric receptors represents an effective strategy to augment the response of TCR-transgenic reporters to cells presenting cognate antigens.
Objective calibration of regional climate models
NASA Astrophysics Data System (ADS)
Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.
2012-12-01
Climate models are subject to high parametric uncertainty induced by poorly constrained parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust them manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often cloud model intercomparisons and hinder the implementation of new model parameterizations. Methods that would allow systematic calibration of model parameters are unfortunately often not applicable to state-of-the-art climate models, because of the computational cost imposed by the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon the quadratic metamodel of Neelin et al. (2010), which serves as a computationally cheap surrogate of the full model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. Compared with an ensemble of the same model that has undergone expert tuning, the calibration yields similar optimal model configurations while achieving an additional reduction of the model error.
The performance range captured is much wider than that sampled by the expert-tuned ensemble, and the presented methodology is effective and objective. It is argued that objective calibration is an attractive tool that could become standard procedure after the introduction of new model implementations, or after the spatial transfer of a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving the parameterization packages of global climate models.
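The quadratic-metamodel idea can be sketched in a few lines. The sketch below is illustrative, not the actual regional-climate-model setup: a synthetic error score stands in for the expensive model runs, a quadratic response surface is fitted by least squares to roughly 20 "simulations" (the number the abstract suggests suffices), and the cheap surrogate is minimized instead of rerunning the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the expensive climate model: a synthetic error score over
# two normalized parameters (the function and its coefficients are invented).
def model_error(theta):
    t1, t2 = theta
    return 1.0 + (t1 - 0.3) ** 2 + 0.5 * (t2 + 0.2) ** 2 + 0.3 * t1 * t2

# Quadratic metamodel features: constant, linear, squared, and interaction terms.
def quad_features(theta):
    t1, t2 = theta
    return np.array([1.0, t1, t2, t1 * t1, t2 * t2, t1 * t2])

# ~20 parameter settings stand in for the 20-50 simulations mentioned above.
design = rng.uniform(-1.0, 1.0, size=(20, 2))
X = np.array([quad_features(t) for t in design])
y = np.array([model_error(t) for t in design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit the quadratic surface

def metamodel(theta):
    return quad_features(theta) @ coef

# Minimize the cheap surrogate on a grid instead of rerunning the model.
grid = np.linspace(-1.0, 1.0, 201)
best = min(((g1, g2) for g1 in grid for g2 in grid),
           key=lambda t: metamodel(np.array(t)))
```

Because the synthetic score is itself quadratic, the surrogate recovers it essentially exactly; for a real model the fit quality (and hence the number of simulations needed) depends on how non-linear the parameter response is, which is what the abstract evaluates.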
Longenecker, R J; Galazyuk, A V
2012-11-16
Recently, prepulse inhibition of the acoustic startle reflex (ASR) has become a popular technique for tinnitus assessment in laboratory animals. This method confers a significant advantage over the previously used, time-consuming behavioral approaches based on basic mechanisms of conditioning. Although the technique has been successfully used to assess tinnitus in different laboratory animals, many of its finer details have not been described in enough depth to be replicated, yet they are critical for tinnitus assessment. Here we provide a detailed description of key procedures and methodological issues to guide newcomers through the process of learning to correctly apply gap detection techniques for tinnitus assessment in laboratory animals. The major categories of these issues include: refinement of hardware for best performance, optimization of stimulus parameters, behavioral considerations, and identification of optimal strategies for data analysis. This article is part of a Special Issue entitled: Tinnitus Neuroscience. Copyright © 2012. Published by Elsevier B.V.