The optimal dynamic immunization under a controlled heterogeneous node-based SIRS model
NASA Astrophysics Data System (ADS)
Yang, Lu-Xing; Draief, Moez; Yang, Xiaofan
2016-05-01
Dynamic immunizations, under which the state of the propagation network of electronic viruses can be changed by adjusting the control measures, are regarded as an alternative to static immunizations. This paper addresses the optimal dynamic immunization under the widely accepted SIRS assumption. First, based on a controlled heterogeneous node-based SIRS model, an optimal control problem capturing the optimal dynamic immunization is formulated. Second, the existence of an optimal dynamic immunization scheme is shown, and the corresponding optimality system is derived. Next, some numerical examples are given to show that an optimal immunization strategy can be worked out by numerically solving the optimality system, from which it is found that the network topology has a complex impact on the optimal immunization strategy. Finally, the difference between a payoff and the minimum payoff is estimated in terms of the deviation of the corresponding immunization strategy from the optimal immunization strategy. The proposed optimal immunization scheme is justified by its ability to achieve a low level of infections at a low cost.
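As an illustration of the kind of model being controlled, the following is a minimal forward simulation of a controlled node-based SIRS system; the graph, rate constants, and the constant control `u` are made-up examples for illustration, not the paper's actual formulation:

```python
import numpy as np

def simulate_sirs(A, beta, gamma, delta, u, S0, I0, dt=0.01, steps=1000):
    """Euler-forward simulation of a controlled node-based SIRS model:
    S_i' = -beta*S_i*(A I)_i + delta*R_i - u_i(t)*S_i
    I_i' =  beta*S_i*(A I)_i - gamma*I_i
    R_i' =  gamma*I_i - delta*R_i + u_i(t)*S_i
    where u_i(t) is the per-node immunization (control) rate."""
    S, I = S0.copy(), I0.copy()
    R = 1.0 - S - I
    for step in range(steps):
        force = beta * S * (A @ I)        # per-node infection pressure
        ut = u(step * dt)                 # time-varying control vector
        dS = -force + delta * R - ut * S
        dI = force - gamma * I
        dR = gamma * I - delta * R + ut * S
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R

# toy 3-node line graph with a constant immunization control
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
S0 = np.array([0.99, 0.99, 0.99])
I0 = np.array([0.01, 0.01, 0.01])
S, I, R = simulate_sirs(A, beta=0.5, gamma=0.2, delta=0.05,
                        u=lambda t: np.full(3, 0.1), S0=S0, I0=I0)
```

The optimal control problem in the paper then chooses `u` to trade off the infection level against the immunization cost; here `u` is simply held constant.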
An adaptive sharing elitist evolution strategy for multiobjective optimization.
Costa, Lino; Oliveira, Pedro
2003-01-01
Almost all approaches to multiobjective optimization are based on Genetic Algorithms (GAs), and implementations based on Evolution Strategies (ESs) are very rare. Thus, it is crucial to investigate how ESs can be extended to multiobjective optimization, since they have, in the past, proven to be powerful single objective optimizers. In this paper, we present a new approach to multiobjective optimization, based on ESs. We call this approach the Multiobjective Elitist Evolution Strategy (MEES) as it incorporates several mechanisms, like elitism, that improve its performance. When compared with other algorithms, MEES shows very promising results in terms of performance.
Intelligent fault recognition strategy based on adaptive optimized multiple centers
NASA Astrophysics Data System (ADS)
Zheng, Bo; Li, Yan-Feng; Huang, Hong-Zhong
2018-06-01
For recognition methods based on a single optimized center, one important issue is that data with a nonlinear separatrix cannot be recognized accurately. To solve this problem, a novel recognition strategy based on adaptive optimized multiple centers is proposed in this paper. This strategy recognizes data sets with a nonlinear separatrix using multiple centers. Meanwhile, priority levels are introduced into the multi-objective optimization, covering recognition accuracy, the quantity of optimized centers, and the distance relationship. According to the characteristics of the data, the priority levels are adjusted to adaptively determine the quantity of optimized centers while keeping the original accuracy. The proposed method is compared with other methods, including the support vector machine (SVM), neural networks, and the Bayesian classifier. The results demonstrate that the proposed strategy has the same or even better recognition ability across data with different distribution characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
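For the Gaussian (first- and second-order statistics) case, the Shannon entropy difference of a candidate linear measurement can be computed directly from the Kalman covariance update. The sketch below, with a hypothetical prior covariance and observation operators, ranks two candidate sampling designs by SD:

```python
import numpy as np

def expected_entropy_difference(C_prior, H, R):
    """Expected Shannon entropy difference (SD) of a Gaussian posterior after
    assimilating a linear observation y = H x + e, e ~ N(0, R).
    SD = 0.5*log det(C_prior) - 0.5*log det(C_post); larger = more informative."""
    K = C_prior @ H.T @ np.linalg.inv(H @ C_prior @ H.T + R)   # Kalman gain
    C_post = (np.eye(len(C_prior)) - K @ H) @ C_prior           # analysis covariance
    return 0.5 * np.log(np.linalg.det(C_prior) / np.linalg.det(C_post))

# rank two candidate measurement designs for a 2-parameter problem
C = np.array([[2.0, 0.5], [0.5, 1.0]])   # hypothetical prior (ensemble) covariance
H1 = np.array([[1.0, 0.0]])              # candidate 1: observe parameter 1
H2 = np.array([[0.0, 1.0]])              # candidate 2: observe parameter 2
R = np.array([[0.1]])                    # observation-noise covariance
sd1 = expected_entropy_difference(C, H1, R)
sd2 = expected_entropy_difference(C, H2, R)
best = 1 if sd1 >= sd2 else 2
```

In the EnKF setting, `C_prior` would be the sample covariance of the forecast ensemble; the design loop evaluates SD for each candidate measurement and collects data where the metric is largest.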
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Datta, Bithin
2011-07-01
Overexploitation of coastal aquifers results in saltwater intrusion. Once saltwater intrusion occurs, remediating the contaminated aquifers involves huge costs and long-term remediation measures. Hence, it is important to have strategies for the sustainable use of coastal aquifers. This study develops a methodology for the optimal management of saltwater-intrusion-prone aquifers. A linked simulation-optimization-based management strategy is developed. The methodology uses genetic-programming-based models for simulating the aquifer processes, which are then linked to a multi-objective genetic algorithm to obtain optimal management strategies in terms of groundwater extraction from potential well locations in the aquifer.
NASA Astrophysics Data System (ADS)
Wang, Yan; Huang, Song; Ji, Zhicheng
2017-07-01
This paper presents a hybrid particle swarm optimization and gravitational search algorithm based on a hybrid mutation strategy (HGSAPSO-M) to optimize economic dispatch (ED) including distributed generations (DGs) under market-based energy pricing. A daily ED model was formulated and a hybrid mutation strategy was adopted in HGSAPSO-M. The hybrid mutation strategy includes two mutation operators: chaotic mutation and Gaussian mutation. The proposed algorithm was tested on the IEEE 33-bus system, and the results show that the approach is effective for this problem.
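A sketch of what such a hybrid mutation operator might look like (a logistic-map chaotic mutation plus a Gaussian mutation, with made-up step sizes and selection probability; the paper's exact operator definitions may differ):

```python
import numpy as np

rng = np.random.default_rng(7)

def hybrid_mutation(x, lb, ub, chaos):
    """Apply one of two mutation operators chosen at random: logistic-map
    chaotic mutation (maps the chaos state into the search box) or Gaussian
    mutation around the current position."""
    chaos = 4.0 * chaos * (1.0 - chaos)        # logistic map, r = 4
    if rng.random() < 0.5:
        mutant = lb + chaos * (ub - lb)        # chaotic mutation
    else:
        mutant = x + rng.normal(0.0, 0.1 * (ub - lb), x.shape)  # Gaussian mutation
    return np.clip(mutant, lb, ub), chaos

lb, ub = np.zeros(4), np.ones(4)
x = rng.uniform(lb, ub)
chaos = rng.uniform(0.1, 0.9, 4)     # per-dimension chaotic states
for _ in range(20):
    x, chaos = hybrid_mutation(x, lb, ub, chaos)
```

In the full algorithm this operator would perturb selected particles of the PSO-GSA swarm to escape local optima of the dispatch cost surface.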
Bare-Bones Teaching-Learning-Based Optimization
Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye
2014-01-01
Teaching-learning-based optimization (TLBO), which simulates the teaching-learning process of the classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, a hybridization of the teacher-phase learning strategy of the standard TLBO and Gaussian sampling learning based on neighborhood search, while each learner in the learner phase employs either the learner-phase strategy of the standard TLBO or the new neighborhood search strategy. To verify the performance of our approaches, 20 benchmark functions and two real-world problems are utilized. From the conducted experiments, it can be observed that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms.
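The hybrid teacher phase can be sketched roughly as follows; the 50/50 mixing rule, the teaching factor, and the sphere test function are illustrative assumptions rather than the exact BBTLBO formulation:

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    """Benchmark objective: sum of squares, minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def bbtlbo_step(pop, fit, f=sphere):
    """One hybrid teacher-phase step: each learner mixes the standard TLBO
    teacher update with bare-bones Gaussian sampling around the teacher."""
    teacher = pop[np.argmin(fit)]
    mean = pop.mean(axis=0)
    new_pop = pop.copy()
    for i in range(len(pop)):
        tf = rng.integers(1, 3)                         # teaching factor in {1, 2}
        tlbo = pop[i] + rng.random(pop.shape[1]) * (teacher - tf * mean)
        gauss = rng.normal((teacher + pop[i]) / 2.0,    # bare-bones sampling
                           np.abs(teacher - pop[i]) + 1e-12)
        cand = tlbo if rng.random() < 0.5 else gauss
        if f(cand) < fit[i]:                            # greedy selection
            new_pop[i] = cand
    return new_pop

pop = rng.uniform(-5.0, 5.0, size=(20, 3))
fit = np.array([sphere(x) for x in pop])
best0 = fit.min()
for _ in range(50):
    pop = bbtlbo_step(pop, fit)
    fit = np.array([sphere(x) for x in pop])
```

Greedy selection guarantees the best fitness never worsens; the learner phase (omitted here) would add pairwise learner interactions.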
Sekiguchi, Masau; Igarashi, Ataru; Matsuda, Takahisa; Matsumoto, Minori; Sakamoto, Taku; Nakajima, Takeshi; Kakugawa, Yasuo; Yamamoto, Seiichiro; Saito, Hiroshi; Saito, Yutaka
2016-02-01
There have been few cost-effectiveness analyses of population-based colorectal cancer screening in Japan, and there is no consensus on the optimal use of total colonoscopy and the fecal immunochemical test for colorectal cancer screening with regard to cost-effectiveness and total colonoscopy workload. The present study aimed to examine the cost-effectiveness of colorectal cancer screening using Japanese data to identify the optimal use of total colonoscopy and the fecal immunochemical test. We developed a Markov model to assess the cost-effectiveness of colorectal cancer screening offered to an average-risk population aged 40 years or over. The cost, quality-adjusted life-years (QALYs) and number of total colonoscopy procedures required were evaluated for three screening strategies: (i) a fecal immunochemical test-based strategy; (ii) a total colonoscopy-based strategy; (iii) a strategy of adding population-wide total colonoscopy at 50 years to a fecal immunochemical test-based strategy. All three strategies dominated no screening. Among the three, Strategy 1 was dominated by Strategy 3, and the incremental costs per QALY gained for Strategy 2 against Strategies 1 and 3 were JPY 293 616 and JPY 781 342, respectively. Within the Japanese threshold (JPY 5-6 million per QALY gained), Strategy 2 was the most cost-effective, followed by Strategy 3; however, Strategy 2 required more than double the number of total colonoscopy procedures as the other strategies. The total colonoscopy-based strategy could be the most cost-effective for population-based colorectal cancer screening in Japan. However, it requires more total colonoscopy procedures than the other strategies. Depending on total colonoscopy capacity, the strategy of adding total colonoscopy for individuals at a specified age to fecal immunochemical test-based screening may be an optimal solution.
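The dominance and incremental cost-effectiveness logic used to compare such strategies can be made concrete; the cost and QALY numbers below are purely illustrative, not the study's:

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost per QALY gained for strategy A versus comparator B.
    Returns ('dominant', None) if A is cheaper and more effective,
    ('dominated', None) if A is costlier and no more effective,
    else ('icer', incremental cost / incremental QALY)."""
    dc, dq = cost_a - cost_b, qaly_a - qaly_b
    if dc <= 0 and dq >= 0 and (dc < 0 or dq > 0):
        return 'dominant', None
    if dc >= 0 and dq <= 0:
        return 'dominated', None
    return 'icer', dc / dq

# illustrative (invented) per-person cost in JPY and QALY pairs
status, value = icer(cost_a=60_000, qaly_a=18.10, cost_b=55_000, qaly_b=18.08)
threshold = 5_000_000  # JPY per QALY, lower bound of the Japanese threshold
cost_effective = status == 'dominant' or (status == 'icer' and value <= threshold)
```

A Markov model supplies the lifetime cost and QALY inputs per strategy; the comparison step itself is just this dominance-then-ICER test against the willingness-to-pay threshold.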
Derivative Trade Optimizing Model Utilizing GP Based on Behavioral Finance Theory
NASA Astrophysics Data System (ADS)
Matsumura, Koki; Kawamoto, Masaru
This paper proposes a new technique that constructs strategy trees for derivative (option) trading investment decisions based on behavioral finance theory and optimizes them using evolutionary computation, in order to achieve high profitability. The strategy tree uses technical analysis based on statistical, experience-based techniques for the investment decision. The trading model is represented by various technical indexes, and the strategy tree is optimized by genetic programming (GP), one of the evolutionary computation methods. Moreover, this paper proposes a method that uses prospect theory from behavioral finance to model the psychological bias toward profit and loss, and attempts to select the appropriate option strike price for higher investment efficiency. This technique produced good results, demonstrating the effectiveness of the trading model under the optimized dealing strategy.
Zhang, Shuo; Zhang, Chengning; Han, Guangwei; Wang, Qinghui
2014-01-01
A dual-motor coupling-propulsion electric bus (DMCPEB) is modeled, and its optimal control strategy is studied in this paper. The necessary dynamic features of the energy losses of the subsystems are modeled. A dynamic programming (DP) technique is applied to find the optimal control strategy, including the upshift threshold, downshift threshold, and power split ratio between the main motor and auxiliary motor. Improved control rules are extracted from the DP-based control solution, forming near-optimal control strategies. Simulation results demonstrate that a significant reduction in energy loss during dual-motor coupling-propulsion system (DMCPS) operation is realized without increasing the frequency of mode switching.
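A toy version of the DP search over drive modes and power split might look like this; the quadratic loss model, switch penalty, and power-demand cycle are invented for illustration, not the paper's vehicle model:

```python
import numpy as np

def loss(p_dem, split, mode):
    """Invented quadratic energy-loss model: mode 0 = main motor only,
    mode 1 = coupled, splitting demand between main and auxiliary motors."""
    if mode == 0:
        return 0.05 * p_dem ** 2
    p_main, p_aux = split * p_dem, (1.0 - split) * p_dem
    return 0.04 * p_main ** 2 + 0.06 * p_aux ** 2

def dp_strategy(p_cycle, splits, switch_cost=0.5):
    """Backward dynamic programming over a power-demand cycle.
    State: current drive mode. Decision: next mode (with a mode-switch
    penalty) and, in coupled mode, the best power-split ratio."""
    T = len(p_cycle)
    J = np.zeros((T + 1, 2))                  # cost-to-go per (time, mode)
    policy = [[None, None] for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for m in (0, 1):
            best_c, best_m2 = np.inf, None
            for m2 in (0, 1):
                if m2 == 1:
                    stage = min(loss(p_cycle[t], s, 1) for s in splits)
                else:
                    stage = loss(p_cycle[t], 0.0, 0)
                c = stage + (switch_cost if m2 != m else 0.0) + J[t + 1, m2]
                if c < best_c:
                    best_c, best_m2 = c, m2
            J[t, m] = best_c
            policy[t][m] = best_m2
    return J, policy

p_cycle = [1.0, 2.0, 5.0, 5.0, 2.0]          # invented power-demand trace
splits = [i / 10 for i in range(11)]
J, policy = dp_strategy(p_cycle, splits)
```

Extracting near-optimal rules then means reading thresholds (e.g. the demand level at which the policy switches modes) off the DP solution.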
Pittig, Andre; van den Berg, Linda; Vervliet, Bram
2016-01-01
Extinction learning is a major mechanism for fear reduction by means of exposure. Current research targets innovative strategies to enhance fear extinction and thereby optimize exposure-based treatments for anxiety disorders. This selective review updates novel behavioral strategies that may provide cutting-edge clinical implications. Recent studies provide further support for two types of enhancement strategies. Procedural enhancement strategies implemented during extinction training translate to how exposure exercises may be conducted to optimize fear extinction. These strategies mostly focus on a maximized violation of dysfunctional threat expectancies and on reducing context and stimulus specificity of extinction learning. Flanking enhancement strategies target periods before and after extinction training and inform optimal preparation and post-processing of exposure exercises. These flanking strategies focus on the enhancement of learning in general, memory (re-)consolidation, and memory retrieval. Behavioral strategies to enhance fear extinction may provide powerful clinical applications to further maximize the efficacy of exposure-based interventions. However, future replications, mechanistic examinations, and translational studies are warranted to verify long-term effects and naturalistic utility. Future directions also comprise the interplay of optimized fear extinction with (avoidance) behavior and motivational antecedents of exposure.
NASA Astrophysics Data System (ADS)
Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin
2015-11-01
The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble-surrogate-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, indicating that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. In addition, GA was used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications of remediation strategy optimization for DNAPL-contaminated aquifers.
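The core idea of an error-weighted ensemble surrogate can be sketched with two simple stand-in members (polynomials in place of KELM and KRG) fitted to a cheap stand-in "simulation model"; everything here is an invented one-dimensional analogue:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulator(x):
    """Cheap stand-in for the expensive SEAR simulation model."""
    return np.sin(3.0 * x) + 0.5 * x

# training and validation samples drawn from the 'simulation model'
x_tr = rng.uniform(0.0, 2.0, 25)
y_tr = simulator(x_tr)
x_va = rng.uniform(0.0, 2.0, 10)
y_va = simulator(x_va)

# two stand-alone surrogates (simple polynomials standing in for KELM and KRG)
p_lo = np.polyfit(x_tr, y_tr, 3)
p_hi = np.polyfit(x_tr, y_tr, 6)

# validation errors determine ensemble weights (better member, larger weight)
e1 = float(np.mean((np.polyval(p_lo, x_va) - y_va) ** 2))
e2 = float(np.mean((np.polyval(p_hi, x_va) - y_va) ** 2))
w1, w2 = 1.0 / e1, 1.0 / e2

def ensemble(x):
    """Ensemble surrogate: inverse-error-weighted average of the members."""
    return (w1 * np.polyval(p_lo, x) + w2 * np.polyval(p_hi, x)) / (w1 + w2)

mse_ens = float(np.mean((ensemble(x_va) - y_va) ** 2))
```

Because the prediction is a convex combination of the members, the ensemble's squared error can never exceed that of the worst member; the GA then queries `ensemble` instead of the expensive simulator inside the cost-minimization loop.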
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk considerations presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be handled similarly using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
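A minimal Monte Carlo sketch of evaluating a parametric execution strategy by expected cost plus a CVaR term; the two-period cost model, noise level, and penalty weight are invented assumptions, not the paper's market model:

```python
import numpy as np

rng = np.random.default_rng(1)

def cvar(samples, alpha=0.95):
    """Conditional Value-at-Risk: mean of the worst (1 - alpha) share of costs."""
    s = np.sort(samples)
    k = int(np.ceil(alpha * len(s)))
    return float(s[k:].mean())

def execution_costs(theta, n=4000):
    """Invented two-period liquidation: trade fraction `theta` now and
    1 - theta later; all trades pay quadratic impact, and the remainder
    bears price risk until it is executed."""
    impact = 0.1 * (theta ** 2 + (1.0 - theta) ** 2)
    price_risk = (1.0 - theta) * rng.normal(0.0, 0.2, n)
    return impact + price_risk

def objective(theta, lam=1.0):
    """Static optimization target: expected cost plus a CVaR penalty."""
    c = execution_costs(theta)
    return float(c.mean() + lam * cvar(c))

thetas = np.linspace(0.0, 1.0, 21)
best_theta = min(thetas, key=objective)
```

Here the "parametric strategy" is the single coefficient `theta`; the paper's approach generalizes this to multi-period strategies with many coefficients optimized over simulated price paths.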
Data analytics and optimization of an ice-based energy storage system for commercial buildings
Luo, Na; Hong, Tianzhen; Li, Hui; ...
2017-07-25
Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements in improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system's performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by a Sequential Quadratic Programming optimization algorithm, was developed to minimize the TES system's operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimized control strategy in real TES system operation.
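In the linear-cost special case, the charge/discharge intuition reduces to a greedy time-of-use rule; this sketch (invented tariff, load profile, and tank parameters, and ignoring intra-day storage-timing constraints) illustrates the idea, whereas the paper's SQP formulation handles the full nonlinear plant model:

```python
def schedule_tes(prices, load, cap, rate, eff=0.9):
    """Greedy linear-cost TES schedule: discharge storage against the cooling
    load in the most expensive hours (up to tank capacity `cap`), and buy the
    energy back in the cheapest hours (up to `rate` per hour); `eff` is the
    round-trip efficiency of the ice storage."""
    T = len(prices)
    charge, discharge = [0.0] * T, [0.0] * T
    budget = cap
    for t in sorted(range(T), key=lambda t: -prices[t]):   # serve priciest load first
        discharge[t] = min(load[t], budget)
        budget -= discharge[t]
    stored = sum(discharge) / eff                          # energy to buy back
    for t in sorted(range(T), key=lambda t: prices[t]):    # buy in cheapest hours
        charge[t] = min(rate, stored)
        stored -= charge[t]
    cost = sum(p * (l - d + c)
               for p, l, d, c in zip(prices, load, discharge, charge))
    return charge, discharge, cost

# invented 24-hour time-of-use tariff and cooling load
prices = [0.05] * 8 + [0.20] * 12 + [0.05] * 4
load = [0.0] * 8 + [10.0] * 12 + [0.0] * 4
charge, discharge, cost = schedule_tes(prices, load, cap=60.0, rate=10.0)
baseline = sum(p * l for p, l in zip(prices, load))        # no-storage cost
```

With nonlinear chiller efficiency curves and equipment scheduling, the problem is no longer greedy-solvable, which is where a gradient-based method such as SQP comes in.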
Quantitative learning strategies based on word networks
NASA Astrophysics Data System (ADS)
Zhao, Yue-Tian-Yi; Jia, Zi-Yang; Tang, Yong; Xiong, Jason Jie; Zhang, Yi-Cheng
2018-02-01
Learning English requires considerable effort, but the way vocabulary is introduced in textbooks is not optimized for learning efficiency. With the increasing population of English learners, optimizing the learning process can significantly improve English learning and teaching. Recent developments in big data analysis and complex network science provide additional opportunities to design and investigate strategies for English learning. In this paper, quantitative English learning strategies based on a word network and word usage information are proposed. The strategies integrate word frequency with topological structural information. By analyzing the influence of connected learned words, learning weights for the unlearned words are derived, and the network is dynamically updated as words are learned. The results suggest that the quantitative strategies significantly improve learning efficiency while maintaining effectiveness. In particular, the optimized-weight-first strategy and the segmented strategies outperform the other strategies. The results provide opportunities for researchers and practitioners to reconsider English teaching and to design vocabularies quantitatively by balancing efficiency and learning costs based on the word network.
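One plausible instantiation of frequency-plus-topology learning weights and the greedy "optimized-weight-first" strategy; the weighting formula and the toy word graph below are assumptions for illustration, not the paper's exact definitions:

```python
def learning_weights(graph, freq, learned):
    """Weight of each unlearned word: usage frequency boosted by the fraction
    of its network neighbors already learned (an invented combination rule)."""
    w = {}
    for word, nbrs in graph.items():
        if word in learned:
            continue
        known = sum(1 for nb in nbrs if nb in learned)
        bonus = known / len(nbrs) if nbrs else 0.0
        w[word] = freq.get(word, 1) * (1.0 + bonus)
    return w

def study_sequence(graph, freq, n):
    """Greedy 'optimized-weight-first' strategy: repeatedly learn the word
    with the highest current weight, then update weights on the network."""
    learned, order = set(), []
    while len(order) < n:
        w = learning_weights(graph, freq, learned)
        if not w:
            break
        nxt = max(w, key=w.get)
        learned.add(nxt)
        order.append(nxt)
    return order

# toy co-occurrence network and frequencies (invented)
graph = {'the': ['cat', 'dog'], 'cat': ['the'],
         'dog': ['the', 'run'], 'run': ['dog']}
freq = {'the': 100, 'dog': 12, 'cat': 10, 'run': 5}
order = study_sequence(graph, freq, 4)
```

The dynamic update is what distinguishes this from a plain frequency-sorted vocabulary list: a word's priority rises as its neighbors become known.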
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study comprises multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and comprises four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor.
Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable.
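The OED/PE machinery rests on maximizing a scalar function of the Fisher information matrix over candidate experiment designs. A D-optimal sketch for a simple two-parameter exponential model (deliberately not the CTMI itself, whose sensitivities are more involved) looks like:

```python
import numpy as np
from itertools import combinations

def fim(times, p, sigma=0.1):
    """Fisher information matrix for the toy model y(t) = p0 * exp(p1 * t)
    with iid Gaussian noise; rows of J are the sensitivities dy/dp at each
    sampling time."""
    p0, p1 = p
    J = np.array([[np.exp(p1 * t), p0 * t * np.exp(p1 * t)] for t in times])
    return J.T @ J / sigma ** 2

def d_optimal(candidates, k, p):
    """Exhaustive D-optimal design: the k sampling times maximizing det(FIM)."""
    return max(combinations(candidates, k),
               key=lambda ts: np.linalg.det(fim(ts, p)))

candidates = np.linspace(0.0, 2.0, 9)
times = d_optimal(candidates, k=3, p=(1.0, 0.8))
```

The two-parameter subproblems in the global and sequential strategies correspond to building this FIM only for the parameter pair under design while the remaining parameters are held at their current estimates.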
Advanced Information Technology in Simulation Based Life Cycle Design
NASA Technical Reports Server (NTRS)
Renaud, John E.
2003-01-01
In this research a Collaborative Optimization (CO) approach for multidisciplinary systems design is used to develop a decision-based design framework for non-deterministic optimization. To date, CO strategies have been developed for application to deterministic systems design problems. In this research, the decision-based design (DBD) framework proposed by Hazelrigg is modified for use in a collaborative optimization framework. The Hazelrigg framework, as originally proposed, provides a single-level optimization strategy that combines engineering decisions with business decisions in a single-level optimization. By transforming this framework for collaborative optimization, one can decompose the business and engineering decision-making processes. In the new multilevel framework of Decision Based Collaborative Optimization (DBCO), the business decisions are made at the system level. These business decisions result in a set of engineering performance targets that disciplinary engineering design teams seek to satisfy as part of subspace optimizations. The Decision Based Collaborative Optimization framework more accurately models the existing relationship between business and engineering in multidisciplinary systems design.
Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A
2018-05-01
High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98; p < 0.01) with a mean bias of -2.2% and precision of 9.4%. A similar relationship was observed in children (R² = 0.99; p < 0.01). The developed pharmacokinetic model-based sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
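To illustrate how a sparse four-sample schedule approximates full-profile exposure, the sketch below compares trapezoidal AUC estimates on an invented biexponential (two-compartment-shaped) profile; note the study itself uses Bayesian estimation with the population model rather than naive trapezoids, and the parameter values here are illustrative:

```python
import numpy as np

def conc(t, A=10.0, alpha=3.0, B=2.0, beta=0.3):
    """Invented biexponential concentration-time profile; the parameter
    values are illustrative, not the melphalan population model's."""
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

def auc_trapz(times):
    """Trapezoidal AUC of the profile over the given sampling times."""
    t = np.asarray(times, dtype=float)
    c = conc(t)
    return float(np.sum((c[:-1] + c[1:]) / 2.0 * np.diff(t)))

dense = np.linspace(0.08, 4.0, 400)     # 'full sampling'
sparse = [0.08, 0.61, 2.0, 4.0]         # the reported optimal time points
rel_dev = (auc_trapz(sparse) - auc_trapz(dense)) / auc_trapz(dense)
```

With only four points, plain trapezoids systematically overestimate a convex decay, which is exactly why the sparse design is paired with model-based Bayesian estimation rather than noncompartmental integration.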
NASA Astrophysics Data System (ADS)
Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret
2003-12-01
A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
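Optimal event localization by dynamic programming can be illustrated with a generic one-dimensional segmentation: split a frame sequence into k segments minimizing total within-segment distortion (a stand-in for the TD model accuracy criterion, not the SBEL cost itself):

```python
import numpy as np

def segment_cost(x, i, j):
    """Distortion of modeling frames i..j-1 by their mean (a stand-in for
    the TD model's reconstruction error)."""
    seg = x[i:j]
    return float(np.sum((seg - seg.mean()) ** 2))

def optimal_events(x, k):
    """Dynamic programming over event boundaries: split the frame sequence
    into k segments minimizing total within-segment distortion."""
    n = len(x)
    INF = float('inf')
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = cost[seg - 1][i] + segment_cost(x, i, j)
                if c < cost[seg][j]:
                    cost[seg][j], back[seg][j] = c, i
    bounds = [n]
    for seg in range(k, 0, -1):
        bounds.append(back[seg][bounds[-1]])
    return bounds[::-1], cost[k][n]

# three flat 'spectral' segments; the optimal events are their boundaries
x = np.concatenate([np.zeros(10), np.ones(10), 2.0 * np.ones(10)])
bounds, dist = optimal_events(x, 3)
```

Unlike a local stability criterion, the DP recursion guarantees globally optimal boundaries for the chosen per-segment cost, which mirrors the paper's motivation for replacing heuristic event localization.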
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Wei
2016-10-01
An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control is proposed in this paper, with the aim of developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system. A new structure for the multi-objective transfer trajectory optimization model is established, which divides the transfer trajectory into several segments and assigns the dominant roles to invariant manifolds and low-thrust control in different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization agree with those obtained using direct multi-objective optimization methods, while the computational workload of the adaptive surrogate-based approach is only approximately 10% of that of direct multi-objective optimization. Furthermore, the adaptive surrogate-based approach generates Pareto points approximately 8 times more efficiently than direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization offers clear advantages over direct multi-objective optimization methods.
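The cost-saving logic can be illustrated with a deliberately simplified adaptive sampling loop: start from a coarse design, then spend the remaining evaluation budget bisecting around the incumbent best sample, so most "expensive" evaluations land near the optimum. The objective below is a hypothetical stand-in for a costly trajectory simulation, and this exploitation-only loop is far simpler than the paper's mixed-sampling surrogate:

```python
import bisect

def expensive(x):
    # hypothetical stand-in for a costly trajectory simulation
    return (x - 0.3) ** 2 + 0.1

def adaptive_minimize(f, lo, hi, budget=12):
    # coarse uniform design first, then bisect next to the incumbent best
    xs = [lo + (hi - lo) * i / 4 for i in range(5)]
    ys = [f(x) for x in xs]
    while len(xs) < budget:
        b = ys.index(min(ys))
        left = xs[b] - xs[b - 1] if b > 0 else 0.0
        right = xs[b + 1] - xs[b] if b + 1 < len(xs) else 0.0
        x_new = 0.5 * (xs[b - 1] + xs[b]) if left >= right else 0.5 * (xs[b] + xs[b + 1])
        i = bisect.bisect_left(xs, x_new)
        xs.insert(i, x_new)
        ys.insert(i, f(x_new))
    b = ys.index(min(ys))
    return xs[b], ys[b]

best_x, best_y = adaptive_minimize(expensive, 0.0, 1.0)
```

Twelve evaluations localize the minimum to within a few percent of the domain, which is the kind of budget reduction an adaptive surrogate buys over dense direct search.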
Incentive-compatible demand-side management for smart grids based on review strategies
NASA Astrophysics Data System (ADS)
Xu, Jie; van der Schaar, Mihaela
2015-12-01
Demand-side load management is able to significantly improve the energy efficiency of smart grids. Since the electricity production cost depends on the aggregate energy usage of multiple consumers, an important incentive problem emerges: self-interested consumers want to increase their own utilities by consuming more than the socially optimal amount of energy during peak hours since the increased cost is shared among the entire set of consumers. To incentivize self-interested consumers to take the socially optimal scheduling actions, we design a new class of protocols based on review strategies. These strategies work as follows: first, a review stage takes place in which a statistical test is performed based on the daily prices of the previous billing cycle to determine whether or not the other consumers schedule their electricity loads in a socially optimal way. If the test fails, the consumers trigger a punishment phase in which, for a certain time, they adjust their energy scheduling in such a way that everybody in the consumer set is punished due to an increased price. Using a carefully designed protocol based on such review strategies, consumers then have incentives to take the socially optimal load scheduling to avoid entering this punishment phase. We rigorously characterize the impact of deploying protocols based on review strategies on the system's as well as the users' performance and determine the optimal design (optimal billing cycle, punishment length, etc.) for various smart grid deployment scenarios. Even though this paper considers a simplified smart grid model, our analysis provides important and useful insights for designing incentive-compatible demand-side management schemes based on aggregate energy usage information in a variety of practical scenarios.
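The review-then-punish mechanics can be sketched in a few lines: at the end of a billing cycle a statistical test on daily prices decides whether someone deviated, and only a failed test triggers punishment. The price model and thresholds below are hypothetical, not the paper's game-theoretic model:

```python
import random
import statistics

def review_phase(daily_prices, expected_mean, tolerance):
    # statistical test at the end of a billing cycle: does the average
    # daily price deviate from the socially optimal benchmark?
    return abs(statistics.mean(daily_prices) - expected_mean) > tolerance

def simulate_cycle(deviating, rng, days=30, base=1.0, noise=0.05, overuse=0.2):
    # hypothetical price model: deviating consumers raise the aggregate
    # price during peak hours of the cycle
    shift = overuse if deviating else 0.0
    return [base + shift + rng.gauss(0.0, noise) for _ in range(days)]

rng = random.Random(0)
compliant = simulate_cycle(False, rng)
cheating = simulate_cycle(True, rng)
# a failed review would trigger the punishment phase (higher prices for all)
```

The design questions the paper optimizes (billing-cycle length, tolerance, punishment duration) correspond here to `days`, `tolerance`, and how long the punishment phase lasts.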
NASA Astrophysics Data System (ADS)
Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong
2017-06-01
In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of the asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to decide self-adaptively which hierarchy of the structure should be made equivalent, according to the critical buckling mode rapidly predicted by the NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum, in contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are demonstrated by comparison with the single equivalent strategy.
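Fixed Point Iteration, the convergence accelerator named above, simply re-applies an update map until successive iterates agree. A minimal sketch, using a standard contraction (the cosine map) as a hypothetical stand-in for the coupling between the major- and minor-stiffener sub-optimizations:

```python
import math

def fixed_point_iteration(g, x0, tol=1e-10, max_iter=200):
    # iterate x_{k+1} = g(x_k) until successive iterates agree to tolerance
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# toy contraction standing in for the coupling between the two
# sub-optimization levels; converges to the unique fixed point of cos(x)
root = fixed_point_iteration(math.cos, 0.5)
```

Convergence is guaranteed whenever the map is a contraction, which is the implicit assumption when FPI is used to couple the two sub-optimization levels.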
Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion
Deng, Ning
2014-01-01
In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of the iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to obtain each subregion's weight, and the weighted subregion matching scores are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317
Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.
Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning
2014-01-01
In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of the iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to obtain each subregion's weight, and the weighted subregion matching scores are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.
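The weighted-fusion step can be sketched with toy data: per-subregion matching scores are combined with learned weights so that discriminative subregions dominate the decision. A plain random search stands in for the paper's particle swarm optimizer, and the score data are invented for illustration:

```python
import random

def fused_score(scores, weights):
    # weighted fusion of per-subregion matching scores (higher = more similar)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

def fit_weights(genuine, impostor, n_sub, iters=300, seed=1):
    # random-search stand-in for PSO: maximize the margin between the mean
    # genuine and mean impostor fused scores
    rng = random.Random(seed)
    best_w, best_margin = [1.0] * n_sub, None
    for _ in range(iters):
        w = [rng.random() for _ in range(n_sub)]
        if sum(w) == 0:
            continue
        margin = (sum(fused_score(g, w) for g in genuine) / len(genuine)
                  - sum(fused_score(i, w) for i in impostor) / len(impostor))
        if best_margin is None or margin > best_margin:
            best_w, best_margin = w, margin
    return best_w, best_margin

genuine = [[0.9, 0.5], [0.8, 0.4]]    # subregion 0 separates the classes well
impostor = [[0.2, 0.5], [0.3, 0.6]]   # subregion 1 is essentially noise
best_w, best_margin = fit_weights(genuine, impostor, 2)
```

As expected, the search pushes nearly all weight onto the discriminative subregion.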
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
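The core of robust optimization, optimizing a statistic of the response under noise rather than its nominal value, can be sketched with a toy response surface. The quadratic "springback" function and noise level below are hypothetical stand-ins for the paper's FE-based V-bending model:

```python
import random
import statistics

def springback(t, noise):
    # hypothetical process response: springback as a function of a design
    # variable t (e.g. tool displacement) and a noise variable (e.g. sheet
    # thickness variation); not the paper's actual FE model
    return (t - 2.0) ** 2 + 0.5 * noise * t

def robust_objective(t, rng, n=200):
    # robustness criterion: mean + 3*sigma of the response under noise
    samples = [springback(t, rng.gauss(0.0, 0.1)) for _ in range(n)]
    return statistics.mean(samples) + 3.0 * statistics.pstdev(samples)

rng = random.Random(42)
candidates = [1.0 + 0.1 * i for i in range(21)]   # coarse design grid on [1, 3]
best_t = min(candidates, key=lambda t: robust_objective(t, rng))
```

Because noise sensitivity here grows with `t`, the robust optimum sits slightly below the deterministic optimum at t = 2, the typical signature of a mean-plus-sigma criterion.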
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, and includes both the regularization strength, which controls overall smoothness, and directional weights, which permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d') of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded worse task-based performance when applied to MBIR than an unmodulated strategy. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views.
Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with the maximum improvement in d' of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction; strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
Optimal strategy analysis based on robust predictive control for inventory system with random demand
NASA Astrophysics Data System (ADS)
Saputra, Aditya; Widowati, Sutrisno
2017-12-01
In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e. the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed with generated random inventory data. The simulation is carried out in MATLAB, where the inventory level is controlled to stay as close as possible to a chosen set point. From the results, the robust predictive control model provides the optimal strategy, i.e. the optimal product volume that should be purchased, and the inventory level followed the given set point.
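The set-point-tracking behavior can be sketched with a one-step receding-horizon rule: each period, order whatever returns the expected next-period inventory to the set point, subject to an order cap. This is much simpler than the paper's robust predictive controller, and all parameters are hypothetical:

```python
import random

def order_quantity(inventory, set_point, expected_demand, max_order):
    # one-step receding-horizon rule: order what is needed to bring the
    # expected next-period inventory back to the set point
    u = set_point - inventory + expected_demand
    return max(0.0, min(u, max_order))

def simulate(periods=50, set_point=100.0, seed=7):
    rng = random.Random(seed)
    inv, levels = set_point, []
    for _ in range(periods):
        demand = rng.gauss(20.0, 4.0)   # random demand (hypothetical distribution)
        u = order_quantity(inv, set_point, 20.0, max_order=60.0)
        inv = inv + u - demand
        levels.append(inv)
    return levels

levels = simulate()
```

Because each order fully corrects the previous period's demand shock (when the cap is not active), the inventory level hovers around the set point rather than drifting.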
Cost-effectiveness of angiographic imaging in isolated perimesencephalic subarachnoid hemorrhage.
Kalra, Vivek B; Wu, Xiao; Forman, Howard P; Malhotra, Ajay
2014-12-01
The purpose of this study is to perform a comprehensive cost-effectiveness analysis of all possible permutations of computed tomographic angiography (CTA) and digital subtraction angiography imaging strategies for both initial diagnosis and follow-up imaging in patients with perimesencephalic subarachnoid hemorrhage on noncontrast CT. Each possible imaging strategy was evaluated in a decision tree created with TreeAge Pro Suite 2014, with parameters derived from a meta-analysis of 40 studies and literature values. Base case and sensitivity analyses were performed to assess the cost-effectiveness of each strategy. A Monte Carlo simulation was conducted with distributional variables to evaluate the robustness of the optimal strategy. The base case scenario showed performing initial CTA with no follow-up angiographic studies in patients with perimesencephalic subarachnoid hemorrhage to be the most cost-effective strategy ($5422/quality adjusted life year). Using a willingness-to-pay threshold of $50 000/quality adjusted life year, the most cost-effective strategy based on net monetary benefit is CTA with no follow-up when the sensitivity of initial CTA is >97.9%, and CTA with CTA follow-up otherwise. The Monte Carlo simulation reported CTA with no follow-up to be the optimal strategy at willingness-to-pay of $50 000 in 99.99% of the iterations. Digital subtraction angiography, whether at initial diagnosis or as part of follow-up imaging, is never the optimal strategy in our model. CTA without follow-up imaging is the optimal strategy for evaluation of patients with perimesencephalic subarachnoid hemorrhage when modern CT scanners and a strict definition of perimesencephalic subarachnoid hemorrhage are used. Digital subtraction angiography and follow-up imaging are not optimal as they carry complications and associated costs. © 2014 American Heart Association, Inc.
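The decision-tree roll-back that drives such an analysis reduces to comparing expected costs per strategy. A minimal sketch with invented figures (the actual study used quality-adjusted life years, a 40-study meta-analysis, and found CTA-only optimal; the numbers below are chosen only to show how the preferred strategy flips with prevalence):

```python
def expected_cost(p_aneurysm, strategy):
    # roll back a two-branch decision tree: upfront imaging cost plus the
    # expected downstream cost of a missed aneurysm
    cost_imaging, p_miss = strategy
    cost_missed = 200_000   # hypothetical downstream cost of a miss
    return cost_imaging + p_aneurysm * p_miss * cost_missed

# (imaging cost, probability of missing an aneurysm) -- hypothetical figures
CTA_ONLY = (500, 0.05)
CTA_PLUS_DSA = (2500, 0.001)

def preferred(p_aneurysm):
    options = {"CTA_ONLY": CTA_ONLY, "CTA_PLUS_DSA": CTA_PLUS_DSA}
    return min(options, key=lambda k: expected_cost(p_aneurysm, options[k]))
```

With these figures, the cheap strategy wins at the low aneurysm prevalence typical of perimesencephalic hemorrhage, and the expensive confirmatory work-up only pays off at implausibly high prevalence, mirroring the study's sensitivity analysis.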
Dynamic Portfolio Strategy Using Clustering Approach
Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian
2017-01-01
The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market. PMID:28129333
Dynamic Portfolio Strategy Using Clustering Approach.
Ren, Fei; Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian
2017-01-01
The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market.
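The central-versus-peripheral distinction rests on an MST built from correlation distances, conventionally d_ij = sqrt(2(1 - rho_ij)). A minimal sketch with a hypothetical four-stock correlation matrix in which stock 0 is a hub:

```python
import math

# hypothetical correlation matrix: stock 0 is a hub, stocks 1-3 are peripheral
rho = [[1.00, 0.90, 0.80, 0.85],
       [0.90, 1.00, 0.10, 0.10],
       [0.80, 0.10, 1.00, 0.10],
       [0.85, 0.10, 0.10, 1.00]]
dist = [[math.sqrt(2.0 * (1.0 - r)) for r in row] for row in rho]

def mst_edges(d):
    # Prim's algorithm on the dense distance matrix d_ij = sqrt(2(1 - rho_ij))
    n = len(d)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: d[e[0]][e[1]])
        in_tree.add(j)
        edges.append((i, j))
    return edges

def degree_centrality(edges, n):
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return deg

edges = mst_edges(dist)
deg = degree_centrality(edges, 4)
central_stock = deg.index(max(deg))
```

Degree is one of the five topological parameters the paper uses; the others (betweenness and the three distance criteria) are computed on the same tree.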
Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang
2016-01-01
For ensemble learning, how to select and combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. Random vector functional link networks (RVFL) without direct input-to-output links are suitable base classifiers for ensemble systems because of their fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. When using ARPSO to select the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFL are pruned, and thus a more compact ensemble of RVFL is obtained. Moreover, in this paper, theoretical analysis and justification of how to prune the base classifiers for classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers for both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single optimization methods. Experimental results on function approximation and classification problems verify that the proposed method can improve convergence accuracy as well as reduce the complexity of the ensemble system. PMID:27835638
Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang
2016-01-01
For ensemble learning, how to select and combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. Random vector functional link networks (RVFL) without direct input-to-output links are suitable base classifiers for ensemble systems because of their fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. When using ARPSO to select the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFL are pruned, and thus a more compact ensemble of RVFL is obtained. Moreover, in this paper, theoretical analysis and justification of how to prune the base classifiers for classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers for both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single optimization methods. Experimental results on function approximation and classification problems verify that the proposed method can improve convergence accuracy as well as reduce the complexity of the ensemble system.
Falloon, Ian RH; Montero, Isabel; Sungur, Mehmet; Mastroeni, Antonino; Malm, Ulf; Economou, Marina; Grawe, Rolf; Harangozo, Judit; Mizuno, Masafumi; Murakami, Masaaki; Hager, Bert; Held, Tilo; Veltro, Franco; Gedye, Robyn
2004-01-01
According to the clinical trials literature, every person with a schizophrenic disorder should be provided with the combination of optimal-dose antipsychotics, strategies to educate the patient and his or her carers to cope more efficiently with environmental stresses, cognitive-behavioural strategies to enhance work and social goals and to reduce residual symptoms, and assertive home-based management to help prevent and resolve major social needs and crises, including recurrent episodes of symptoms. Despite strong scientific support for the routine implementation of these 'evidence-based' strategies, few services provide more than the pharmacotherapy component, and even this is seldom applied in the manner associated with the best results in the clinical trials. An international collaborative group, the Optimal Treatment Project (OTP), has been developed to promote the routine use of evidence-based strategies for schizophrenic disorders. A field trial was started to evaluate the benefits and costs of applying evidence-based strategies over a 5-year period. Centres have been set up in 18 countries. This paper summarises the outcome after 24 months of 'optimal' treatment in 603 cases who had reached this stage in their treatment by the end of 2002. On all measures the evidence-based OTP approach achieved more than double the benefits associated with current best practices. One half of recent cases had achieved full recovery from clinical and social morbidity. These advantages were even more striking in centres where a random-control design was used. PMID:16633471
Falloon, Ian R H; Montero, Isabel; Sungur, Mehmet; Mastroeni, Antonino; Malm, Ulf; Economou, Marina; Grawe, Rolf; Harangozo, Judit; Mizuno, Masafumi; Murakami, Masaaki; Hager, Bert; Held, Tilo; Veltro, Franco; Gedye, Robyn
2004-06-01
According to the clinical trials literature, every person with a schizophrenic disorder should be provided with the combination of optimal-dose antipsychotics, strategies to educate the patient and his or her carers to cope more efficiently with environmental stresses, cognitive-behavioural strategies to enhance work and social goals and to reduce residual symptoms, and assertive home-based management to help prevent and resolve major social needs and crises, including recurrent episodes of symptoms. Despite strong scientific support for the routine implementation of these 'evidence-based' strategies, few services provide more than the pharmacotherapy component, and even this is seldom applied in the manner associated with the best results in the clinical trials. An international collaborative group, the Optimal Treatment Project (OTP), has been developed to promote the routine use of evidence-based strategies for schizophrenic disorders. A field trial was started to evaluate the benefits and costs of applying evidence-based strategies over a 5-year period. Centres have been set up in 18 countries. This paper summarises the outcome after 24 months of 'optimal' treatment in 603 cases who had reached this stage in their treatment by the end of 2002. On all measures the evidence-based OTP approach achieved more than double the benefits associated with current best practices. One half of recent cases had achieved full recovery from clinical and social morbidity. These advantages were even more striking in centres where a random-control design was used.
Stochastic optimization algorithms for barrier dividend strategies
NASA Astrophysics Data System (ADS)
Yin, G.; Song, Q. S.; Yang, H.
2009-01-01
This work focuses on finding an optimal barrier policy for an insurance risk model when dividends are paid to the shareholders according to a barrier strategy. A new approach based on stochastic optimization methods is developed. Compared with existing results in the literature, more general surplus processes are considered. Precise models of the surplus need not be known; only noise-corrupted observations of the dividends are used. Using barrier-type strategies, a class of stochastic optimization algorithms is developed. Convergence of the algorithm is analyzed, and the rate of convergence is also provided. Numerical results are reported to demonstrate the performance of the algorithm.
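The "noise-corrupted observations only" setting is the natural home of Kiefer-Wolfowitz-type stochastic approximation: estimate the payoff gradient by finite differences of noisy evaluations and climb it with decreasing step sizes. A minimal sketch with a toy concave payoff whose (hypothetical) optimal barrier is b* = 3.0, not the paper's surplus model:

```python
import random

def noisy_payoff(barrier, rng):
    # noise-corrupted observation of the expected discounted dividends;
    # toy concave payoff with a hypothetical optimum at b* = 3.0
    return -(barrier - 3.0) ** 2 + rng.gauss(0.0, 0.1)

def kiefer_wolfowitz(b0, rng, iters=2000):
    # finite-difference stochastic approximation: climb the noisy payoff
    # with decreasing step sizes a_k and perturbation widths c_k
    b = b0
    for k in range(1, iters + 1):
        a_k = 1.0 / k
        c_k = 1.0 / k ** 0.25
        grad = (noisy_payoff(b + c_k, rng) - noisy_payoff(b - c_k, rng)) / (2.0 * c_k)
        b += a_k * grad
    return b

b_star = kiefer_wolfowitz(1.0, random.Random(3))
```

The step-size and perturbation schedules follow the standard conditions for convergence of such algorithms; the paper's contribution is proving convergence and its rate for the dividend problem specifically.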
Optimal policy for value-based decision-making.
Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre
2016-08-18
For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
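The key qualitative prediction, decision boundaries that collapse over time, can be sketched with a simple simulation. This is a minimal sketch assuming a unit diffusion coefficient, a linearly collapsing symmetric bound, and hypothetical parameter values; the paper derives the optimal (generally non-linear) boundary, which this sketch does not reproduce:

```python
import random

def ddm_trial(drift, rng, dt=0.001, b0=1.0, tau=1.5):
    # drift-diffusion to a linearly collapsing bound (floored so a
    # decision is always reached); returns (choice, decision time)
    x, t = 0.0, 0.0
    while True:
        bound = max(b0 * (1.0 - t / tau), 0.05)
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
        x += drift * dt + rng.gauss(0.0, dt ** 0.5)
        t += dt

rng = random.Random(0)
results = [ddm_trial(2.0, rng) for _ in range(200)]
accuracy = sum(1 for choice, _ in results if choice == +1) / len(results)
```

With a positive drift, most trials terminate on the upper bound; the collapse caps decision times, trading a little accuracy for guaranteed responses, which is what makes collapsing bounds reward-rate optimal in these models.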
Optimal policy for value-based decision-making
Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre
2016-01-01
For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down. PMID:27535638
Long-Run Savings and Investment Strategy Optimization
Gerrard, Russell; Guillén, Montserrat; Pérez-Marín, Ana M.
2014-01-01
We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained, which follows from constant absolute rather than constant relative risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on downside risk-adjusted equivalence that is used in our illustration. PMID:24711728
Long-run savings and investment strategy optimization.
Gerrard, Russell; Guillén, Montserrat; Nielsen, Jens Perch; Pérez-Marín, Ana M
2014-01-01
We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained, which follows from constant absolute rather than constant relative risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on downside risk-adjusted equivalence that is used in our illustration.
Selective robust optimization: A new intensity-modulated proton therapy optimization strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yupeng; Niemela, Perttu; Siljamaki, Sami
2015-08-15
Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion of less than 5 mm and, for the demonstration of the methodology, were assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared to the 84.6% coverage achieved with CTV-based robust optimization in one of the lung cases. In the other lung case, the maximum dose in selective robust optimization was lowered from a dose of 131.3% in the CTV-based robust optimization to 113.6%. Selective robust optimization provided robust CTV coverage in the head and neck case, and at the same time improved control over the isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements in addition to achieving the required plan robustness in practical proton treatment planning settings.
Williams, Perry J.; Kendall, William L.
2017-01-01
Choices in ecological research and management are the result of balancing multiple, often competing, objectives. Multi-objective optimization (MOO) is a formal decision-theoretic framework for solving multiple objective problems. MOO is used extensively in other fields including engineering, economics, and operations research. However, its application for solving ecological problems has been sparse, perhaps due to a lack of widespread understanding. Thus, our objective was to provide an accessible primer on MOO, including a review of methods common in other fields, a review of their application in ecology, and a demonstration on an applied resource management problem. A large class of methods for solving MOO problems can be separated into two strategies: modelling preferences pre-optimization (the a priori strategy), or modelling preferences post-optimization (the a posteriori strategy). The a priori strategy requires describing preferences among objectives without knowledge of how preferences affect the resulting decision. In the a posteriori strategy, the decision maker simultaneously considers a set of solutions (the Pareto optimal set) and makes a choice based on the trade-offs observed in the set. We describe several methods for modelling preferences pre-optimization, including the bounded objective function method, the lexicographic method, and the weighted-sum method. We discuss modelling preferences post-optimization through examination of the Pareto optimal set. We applied each MOO strategy to the natural resource management problem of selecting a population target for cackling goose (Branta hutchinsii minima) abundance. Cackling geese provide food security to Native Alaskan subsistence hunters in the goose's nesting area, but depredate crops on private agricultural fields in wintering areas. 
We developed objective functions to represent the competing objectives related to the cackling goose population target and identified an optimal solution first using the a priori strategy, and then by examining trade-offs in the Pareto set using the a posteriori strategy. We used four approaches for selecting a final solution within the a posteriori strategy: the most common optimal solution, the most robust optimal solution, and two solutions based on maximizing a restricted portion of the Pareto set. We discuss MOO with respect to natural resource management, but MOO is sufficiently general to cover any ecological problem that contains multiple competing objectives that can be quantified using objective functions.
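The a priori (weighted-sum) and a posteriori (Pareto set) strategies described above can be sketched on a toy two-objective problem. The objective functions, decision grid, and weights below are hypothetical stand-ins, not the cackling goose objectives from the paper:

```python
# Two competing objectives over a single decision variable x (both minimized).
# These quadratic stand-ins are purely illustrative.
def f1(x):
    return (x - 2) ** 2

def f2(x):
    return (x - 5) ** 2

candidates = [i / 10 for i in range(0, 71)]  # decision-variable grid

# A priori strategy: fix weights first, then optimize the scalarized objective.
def weighted_sum(w1, w2):
    return min(candidates, key=lambda x: w1 * f1(x) + w2 * f2(x))

# A posteriori strategy: compute the Pareto set, then inspect trade-offs.
def pareto_set():
    pts = [(x, f1(x), f2(x)) for x in candidates]
    return [p for p in pts
            if not any(q[1] <= p[1] and q[2] <= p[2] and q != p for q in pts)]

front = pareto_set()
```

With equal weights the a priori strategy returns a single compromise point (x = 3.5 here), while the Pareto set exposes the full trade-off curve between the two optima at x = 2 and x = 5 for the decision maker to examine.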
Strategies for the Optimization of Natural Leads to Anticancer Drugs or Drug Candidates
Xiao, Zhiyan; Morris-Natschke, Susan L.; Lee, Kuo-Hsiung
2015-01-01
Natural products have made significant contribution to cancer chemotherapy over the past decades and remain an indispensable source of molecular and mechanistic diversity for anticancer drug discovery. More often than not, natural products may serve as leads for further drug development rather than as effective anticancer drugs by themselves. Generally, optimization of natural leads into anticancer drugs or drug candidates should not only address drug efficacy, but also improve ADMET profiles and chemical accessibility associated with the natural leads. Optimization strategies involve direct chemical manipulation of functional groups, structure-activity relationship-directed optimization and pharmacophore-oriented molecular design based on the natural templates. Both fundamental medicinal chemistry principles (e.g., bio-isosterism) and state-of-the-art computer-aided drug design techniques (e.g., structure-based design) can be applied to facilitate optimization efforts. In this review, the strategies to optimize natural leads to anticancer drugs or drug candidates are illustrated with examples and described according to their purposes. Furthermore, successful case studies on lead optimization of bioactive compounds performed in the Natural Products Research Laboratories at UNC are highlighted. PMID:26359649
Feng, Qiang; Chen, Yiran; Sun, Bo; Li, Songjie
2014-01-01
An optimization method for condition-based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the aircraft fleet is analyzed first, and the alternative strategy sets for a single aircraft are given. Then, the optimization problem of fleet CBM with lower maintenance cost and dispatch risk is translated into the combinatorial optimization problem of single-aircraft strategies. The remaining useful life (RUL) distribution of each key line replaceable module (LRM) is transformed into the failure probability of the aircraft, and the fleet health status matrix is established. The calculation method for mission costs and risks based on the health status matrix and maintenance matrix is given. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed method. The results show that the method can realize optimization and control of an aircraft fleet oriented to mission success.
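The combinatorial search over per-aircraft strategy choices can be sketched with an elementary genetic algorithm. The cost/risk numbers, the penalty form for exceeding the acceptable risk, and the GA settings below are hypothetical, not the paper's improved GA:

```python
import random

def fleet_ga(costs, risks, risk_limit, pop=30, gens=60, seed=1):
    """costs[i][s], risks[i][s]: cost/risk of strategy s for aircraft i.
    Minimizes total cost subject to total dispatch risk <= risk_limit."""
    rng = random.Random(seed)
    n = len(costs)
    n_strat = [len(c) for c in costs]

    def fitness(plan):
        cost = sum(costs[i][s] for i, s in enumerate(plan))
        risk = sum(risks[i][s] for i, s in enumerate(plan))
        # heavy penalty if the fleet's dispatch risk exceeds the acceptable level
        return cost + (1e6 if risk > risk_limit else 0.0)

    population = [[rng.randrange(n_strat[i]) for i in range(n)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        elite = population[: pop // 2]            # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]             # one-point crossover
            i = rng.randrange(n)                  # single-gene mutation
            child[i] = rng.randrange(n_strat[i])
            children.append(child)
        population = elite + children
    best = min(population, key=fitness)
    return best, fitness(best)
```

Each gene selects one strategy from an aircraft's alternative set, so the chromosome directly encodes a fleet-level CBM plan.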
Cyber War Game in Temporal Networks
Cho, Jin-Hee; Gao, Jianxi
2016-01-01
In a cyber war game where a network is fully distributed and characterized by resource constraints and high dynamics, attackers or defenders often face a situation that may require optimal strategies to win the game with minimum effort. Given the system goal states of attackers and defenders, we study what strategies attackers or defenders can take to reach their respective system goal state (i.e., winning system state) with minimum resource consumption. However, due to network dynamics caused by a node's mobility, failure, or resource depletion over time or through actions, this optimization problem becomes NP-complete. We propose two heuristic strategies in a greedy manner based on a node's two characteristics: resource level and influence based on k-hop reachability. We analyze the complexity and optimality of each algorithm compared to optimal solutions for a small-scale static network. Further, we conduct a comprehensive experimental study for a large-scale temporal network to investigate the best strategies, given different environmental settings of network temporality and density. We demonstrate the performance of each strategy under various scenarios of attacker/defender strategies in terms of win probability, resource consumption, and system vulnerability. PMID:26859840
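The two greedy heuristics described above can be sketched as node-scoring rules: rank nodes either by resource level or by k-hop reachability ("influence") and pick the top ones. The graph, resource values, and budget below are hypothetical, and the full game dynamics are not modeled:

```python
from collections import deque

def k_hop_reach(adj, src, k):
    """Number of nodes reachable from src within k hops (excluding src)."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == k:
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return len(seen) - 1

def greedy_targets(adj, resources, budget, strategy="influence", k=2):
    """Greedily pick `budget` nodes with the highest score under the chosen rule."""
    if strategy == "influence":
        score = lambda n: k_hop_reach(adj, n, k)
    else:  # "resource"
        score = lambda n: resources[n]
    return sorted(adj, key=score, reverse=True)[:budget]
```

On a static snapshot this is a plain greedy selection; in the temporal setting of the paper, the adjacency structure would be recomputed as the network evolves.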
Yan, Bin-Jun; Guo, Zheng-Tai; Qu, Hai-Bin; Zhao, Bu-Chang; Zhao, Tao
2013-06-01
In this work, a feedforward control strategy based on the concept of quality by design was established for the manufacturing process of traditional Chinese medicine, in order to reduce the impact of raw-material quality variation on the drug. The ethanol precipitation process of Danhong injection was taken as an application case of the method. A Box-Behnken design of experiments was conducted. Mathematical models relating the attributes of the concentrate, the process parameters and the quality of the supernatants produced were established. Then an optimization model for calculating the best process parameters based on the attributes of the concentrate was built. The quality of the supernatants produced by ethanol precipitation with optimized and non-optimized process parameters was compared. The results showed that using the feedforward control strategy for process parameter optimization can control the quality of the supernatants effectively. The proposed feedforward control strategy can enhance the batch-to-batch consistency of the supernatants produced by ethanol precipitation.
Optimization of fuel-cell tram operation based on two dimension dynamic programming
NASA Astrophysics Data System (ADS)
Zhang, Wenbin; Lu, Xuecheng; Zhao, Jingsong; Li, Jianqiu
2018-02-01
This paper proposes an optimal control strategy based on the two-dimension dynamic programming (2DDP) algorithm, targeting minimal operation energy consumption for a fuel-cell tram. The energy consumption model with the tram dynamics is first deduced. The optimal control problem is analyzed, and the 2DDP strategy is applied to solve it. Optimal tram speed profiles are obtained for each interstation run, consisting of three stages: accelerate to the set speed with the maximum traction power, dynamically adjust to maintain a uniform speed, and decelerate to zero speed with the maximum braking power at a suitable timing. The optimal control curves of all the interstation runs are connected with the parking times to form the optimal control method for the whole line. The optimized speed profiles are also simplified for drivers to follow.
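The two-dimension dynamic programming idea above can be sketched as a DP over a (segment, speed) grid: the tram leaves at rest, picks a speed at each interior segment boundary, and must be at rest on arrival. The per-segment energy model below is a hypothetical placeholder, not the paper's tram model:

```python
def min_energy_profile(n_seg, speeds, step_cost):
    """`speeds` lists the allowed (positive) speeds at interior segment
    boundaries; step_cost(v1, v2) is the energy for one distance segment
    entered at speed v1 and left at speed v2."""
    cost = {0: 0.0}                                  # at the departure station
    back = []
    for seg in range(n_seg):
        nxt = speeds if seg < n_seg - 1 else [0]     # must stop at arrival
        new_cost, choice = {}, {}
        for v2 in nxt:
            v1 = min(cost, key=lambda v: cost[v] + step_cost(v, v2))
            new_cost[v2] = cost[v1] + step_cost(v1, v2)
            choice[v2] = v1
        cost = new_cost
        back.append(choice)
    v, path = 0, []                                  # backtrack from arrival
    for choice in reversed(back):
        path.append(v)
        v = choice[v]
    return cost[0], path[::-1]

def step_cost(v1, v2):
    """Hypothetical energy model: traction to raise kinetic energy plus drag."""
    return max(v2 * v2 - v1 * v1, 0) / 2 + 0.05 * ((v1 + v2) / 2) ** 2

total, profile = min_energy_profile(3, [1, 2, 3], step_cost)
```

The returned profile naturally exhibits the accelerate / cruise / decelerate-to-zero structure described in the abstract once a realistic cost model is plugged in.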
Optimal domain decomposition strategies
NASA Technical Reports Server (NTRS)
Yoon, Yonghyun; Soni, Bharat K.
1995-01-01
The primary interest of the authors is in the area of grid generation, in particular, optimal domain decomposition about realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain decomposition which optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of flow field. The progress realized in this study is summarized in this paper.
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8%, but also yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
NASA Astrophysics Data System (ADS)
Cheng, Longjiu; Cai, Wensheng; Shao, Xueguang
2005-03-01
An energy-based perturbation and a new idea of a taboo strategy are proposed for structural optimization and applied to a benchmark problem, i.e., the optimization of Lennard-Jones (LJ) clusters. It is shown that the energy-based perturbation is much better than the traditional random perturbation, both in convergence speed and in searching ability, when combined with a simple greedy method. By tabooing the most wide-spread funnel instead of the visited solutions, the hit rate of other funnels can be significantly improved. Global minima of LJ clusters of up to 200 atoms are found with high efficiency.
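The energy-based perturbation can be sketched as follows: instead of displacing a random atom, displace the atom with the highest per-atom LJ energy, and keep the move only if the total energy drops (the simple greedy acceptance mentioned above). Cluster coordinates and the step size are illustrative:

```python
import math, random

def lj_pair(r):
    """Lennard-Jones pair energy in reduced units (minimum -1 at r = 2^(1/6))."""
    return 4.0 * (r ** -12 - r ** -6)

def atom_energy(i, coords):
    """Sum of pair energies involving atom i."""
    return sum(lj_pair(math.dist(coords[i], coords[j]))
               for j in range(len(coords)) if j != i)

def total_energy(coords):
    # each pair is counted twice across atom energies, hence the 0.5
    return 0.5 * sum(atom_energy(i, coords) for i in range(len(coords)))

def energy_based_step(coords, step=0.1, rng=random):
    """Perturb the highest-energy atom; accept only if total energy decreases."""
    worst = max(range(len(coords)), key=lambda i: atom_energy(i, coords))
    trial = [list(p) for p in coords]
    trial[worst] = [c + rng.uniform(-step, step) for c in trial[worst]]
    return trial if total_energy(trial) < total_energy(coords) else coords
```

Repeatedly applying such steps, combined with a local minimizer, is the basic loop into which the funnel-tabooing idea would be inserted.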
Reserve design to maximize species persistence
Robert G. Haight; Laurel E. Travis
2008-01-01
We develop a reserve design strategy to maximize the probability of species persistence predicted by a stochastic, individual-based, metapopulation model. Because the population model does not fit exact optimization procedures, our strategy involves deriving promising solutions from theory, obtaining promising solutions from a simulation optimization heuristic, and...
Optimal reconfiguration strategy for a degradable multimodule computing system
NASA Technical Reports Server (NTRS)
Lee, Yann-Hang; Shin, Kang G.
1987-01-01
The present quantitative approach to the problem of reconfiguring a degradable multimodule system assigns some modules to computation and arranges others for reliability. By using expected total reward as the optimality criterion, an active reconfiguration strategy emerges that is based not only on the occurrence of failures but also on the progression of the given mission. This reconfiguration strategy requires specification of the times at which the system should undergo reconfiguration and the configurations to which the system should change. The optimal reconfiguration problem is converted to integer nonlinear knapsack and fractional programming problems.
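A toy version of the compute/spare trade-off described above: with N identical modules, assign k to computation (reward grows with k) and keep N − k as spares (reliability grows with spares). The reward and reliability models are hypothetical, not the paper's, and the small integer choice set is simply enumerated:

```python
from math import comb

def best_assignment(n_modules, work_rate, fail_p):
    """Enumerate the integer nonlinear knapsack: expected reward is the work
    delivered by k compute modules times the probability that no more than
    the remaining n - k spare modules fail (a binomial tail)."""
    def reliability(spares):
        return sum(comb(n_modules, f) * fail_p ** f
                   * (1 - fail_p) ** (n_modules - f)
                   for f in range(spares + 1))
    return max(range(n_modules + 1),
               key=lambda k: work_rate * k * reliability(n_modules - k))
```

With reliable modules all of them go to computation; as the failure probability rises, the optimum shifts modules from computation into the spare pool, which is the qualitative trade-off the paper formalizes.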
Rational design of gene-based vaccines.
Barouch, Dan H
2006-01-01
Vaccine development has traditionally been an empirical discipline. Classical vaccine strategies include the development of attenuated organisms, whole killed organisms, and protein subunits, followed by empirical optimization and iterative improvements. While these strategies have been remarkably successful for a wide variety of viruses and bacteria, these approaches have proven more limited for pathogens that require cellular immune responses for their control. In this review, current strategies to develop and optimize gene-based vaccines are described, with an emphasis on novel approaches to improve plasmid DNA vaccines and recombinant adenovirus vector-based vaccines. Copyright 2006 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Li, Yuanyuan; Gao, Guanjun; Zhang, Jie; Zhang, Kai; Chen, Sai; Yu, Xiaosong; Gu, Wanyi
2015-06-01
A simplex-method based optimizing (SMO) strategy is proposed to improve the transmission performance of dispersion uncompensated (DU) coherent optical systems with non-identical spans. Through an analytical expression of the quality of transmission (QoT), this strategy improves the Q factors effectively, while minimizing the number of erbium-doped fiber amplifiers (EDFAs) that need to be optimized. Numerical simulations are performed for 100 Gb/s polarization-division multiplexed quadrature phase shift keying (PDM-QPSK) channels over 10 spans of standard single mode fiber (SSMF) with randomly distributed span lengths. Compared to EDFA configurations with complete span loss compensation, the Q factor of the SMO strategy is improved by approximately 1 dB at the optimal transmitter launch power. Moreover, instead of adjusting the gains of all the EDFAs to their optimal values, the number of EDFAs that need to be adjusted for SMO is reduced from 8 to 2, showing much lower tuning cost and almost negligible performance degradation.
NASA Astrophysics Data System (ADS)
Ouyang, Qi; Lu, Wenxi; Lin, Jin; Deng, Wenbing; Cheng, Weiguo
2017-08-01
The surrogate-based simulation-optimization techniques are frequently used for optimal groundwater remediation design. When this technique is used, surrogate errors caused by surrogate-modeling uncertainty may lead to generation of infeasible designs. In this paper, a conservative strategy that pushes the optimal design into the feasible region was used to address surrogate-modeling uncertainty. In addition, chance-constrained programming (CCP) was adopted to compare with the conservative strategy in addressing this uncertainty. Three methods, multi-gene genetic programming (MGGP), Kriging (KRG) and support vector regression (SVR), were used to construct surrogate models for a time-consuming multi-phase flow model. To improve the performance of the surrogate model, ensemble surrogates were constructed based on combinations of different stand-alone surrogate models. The results show that: (1) the surrogate-modeling uncertainty was successfully addressed by the conservative strategy, which means that this method is promising for addressing surrogate-modeling uncertainty. (2) The ensemble surrogate model that combines MGGP with KRG showed the most favorable performance, which indicates that this ensemble surrogate can utilize both stand-alone surrogate models to improve the performance of the surrogate model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Siewerdsen, J; Stayman, J
Purpose: There has been increasing interest in integrating fluence field modulation (FFM) devices with diagnostic CT scanners for dose reduction purposes. Conventional FFM strategies, however, are often either based on heuristics or the analysis of filtered-backprojection (FBP) performance. This work investigates a prospective task-driven optimization of FFM for model-based iterative reconstruction (MBIR) in order to improve imaging performance at the same total dose as conventional strategies. Methods: The task-driven optimization framework utilizes an ultra-low dose 3D scout as a patient-specific anatomical model and a mathematical formulation of the imaging task. The MBIR method investigated is quadratically penalized-likelihood reconstruction. The FFM objective function uses detectability index, d′, computed as a function of the predicted spatial resolution and noise in the image. To optimize performance throughout the object, a maxi-min objective was adopted where the minimum d′ over multiple locations is maximized. To reduce the dimensionality of the problem, FFM is parameterized as a linear combination of 2D Gaussian basis functions over horizontal detector pixels and projection angles. The coefficients of these bases are found using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The task-driven design was compared with three other strategies proposed for FBP reconstruction for a calcification cluster discrimination task in an abdomen phantom. Results: The task-driven optimization yielded FFM that was significantly different from those designed for FBP. Comparing all four strategies, the task-based design achieved the highest minimum d′ with an 8–48% improvement, consistent with the maxi-min objective. In addition, d′ was improved to a greater extent over a larger area within the entire phantom. Conclusion: Results from this investigation suggest the need to re-evaluate conventional FFM strategies for MBIR. 
The task-based optimization framework provides a promising approach that maximizes imaging performance under the same total dose constraint.
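The maxi-min objective and Gaussian-basis parameterization described above can be sketched in miniature. The detectability model below is a crude placeholder (fluence through a location divided by its attenuation), and plain random search stands in for the CMA-ES optimizer; everything here is an illustrative assumption, not the paper's model:

```python
import math, random

def fluence(coeffs, centers, width, u):
    """FFM profile as a linear combination of Gaussian bases over detector u."""
    return sum(c * math.exp(-0.5 * ((u - m) / width) ** 2)
               for c, m in zip(coeffs, centers))

def min_detectability(coeffs, centers, width, locations, attenuation):
    # placeholder d' surrogate, NOT the paper's resolution/noise-based d'
    return min(fluence(coeffs, centers, width, u) / attenuation(u)
               for u in locations)

def maximin_search(centers, width, locations, attenuation, iters=2000, seed=0):
    """Maximize the minimum d' over basis coefficients at fixed total fluence."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(iters):
        c = [rng.uniform(0.0, 1.0) for _ in centers]
        total = sum(c)
        c = [x / total for x in c]            # fixed total-dose constraint
        val = min_detectability(c, centers, width, locations, attenuation)
        if val > best_val:
            best, best_val = c, val
    return best, best_val
```

Because the objective is the minimum d′ over locations, the search naturally pushes fluence toward the paths where detectability is worst, which is the behavior reported for the more attenuating central region.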
Zhimeng, Li; Chuan, He; Dishan, Qiu; Jin, Liu; Manhao, Ma
2013-01-01
Aiming at the imaging task scheduling problem for a high-altitude airship under emergency conditions, programming models are constructed by analyzing the main constraints, taking the maximum task benefit and the minimum energy consumption as the two optimization objectives. First, a hierarchical architecture is adopted to convert the scheduling problem into three subproblems: task ranking, value task detecting, and energy conservation optimization. Then, algorithms are designed for the subproblems, whose solutions correspond to a feasible solution, an efficient solution, and an optimized solution of the original problem, respectively. The paper describes in detail the energy-aware optimization strategy, which rationally adjusts the airship's cruising speed based on the distribution of task deadlines, so as to decrease the total energy consumption caused by cruising activities. Finally, the application results and comparative analysis show that the proposed strategy and algorithm are effective and feasible. PMID:23864822
Ghose, Sanchayita; Nagrath, Deepak; Hubbard, Brian; Brooks, Clayton; Cramer, Steven M
2004-01-01
The effect of an alternate strategy employing two different flowrates during loading was explored as a means of increasing system productivity in Protein-A chromatography. The effect of such a loading strategy was evaluated using a chromatographic model that was able to accurately predict experimental breakthrough curves for this Protein-A system. A gradient-based optimization routine was carried out to establish the optimal loading conditions (initial and final flowrates and switching time). The two-step loading strategy (using a higher flowrate during the initial stages followed by a lower flowrate) was evaluated for an Fc-fusion protein and was found to result in significant improvements in process throughput. In an extension of this optimization routine, dynamic loading capacity and productivity were simultaneously optimized using a weighted objective function, and this result was compared to that obtained with a single flowrate. Again, the dual-flowrate strategy was found to be superior.
Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens
2009-11-01
In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units, as follows: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that, although both strategies produced very similar solutions, the strategy using the deterministic model suffered from convergence failures and long computation times; the strategy using the statistical model, which proved robust and fast, is therefore more suitable for the flash fermentation process and is recommended for real-time applications coupling optimization and control.
Design of underwater robot lines based on a hybrid automatic optimization strategy
NASA Astrophysics Data System (ADS)
Lyu, Wenjing; Luo, Weilin
2014-09-01
In this paper, a hybrid automatic optimization strategy is proposed for the design of underwater robot lines. Isight is introduced as an integration platform. The construction of this platform is based on user programming and several commercial software packages, including UG6.0, GAMBIT2.4.6 and FLUENT12.0. An intelligent parameter optimization method, particle swarm optimization, is incorporated into the platform. To verify the proposed strategy, a simulation is conducted on the underwater robot model 5470, which originates from the DTRC SUBOFF project. With the automatic optimization platform, the minimal resistance is taken as the optimization goal, the wetted surface area as the constraint condition, and the length of the fore-body, the maximum body radius and the after-body's minimum radius as the design variables. For the CFD calculation, the RANS equations and the standard turbulence model are used for direct numerical simulation. Analysis of the simulation results shows that the platform is highly efficient and feasible. Through the platform, a variety of schemes for the design of the lines are generated and the optimal solution is achieved. The combination of the intelligent optimization algorithm and the numerical simulation ensures a global optimal solution and improves the efficiency of the search for solutions.
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
Surrogate-based simulation-optimization is an effective technique for optimizing surfactant-enhanced aquifer remediation (SEAR) strategies for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is key to this line of research. However, previous studies have generally relied on a stand-alone surrogate model, and have rarely attempted to improve the approximation accuracy of the surrogate model by combining various methods. In this regard, we present set pair analysis (SPA) as a new method for building an ensemble surrogate (ES) model, and conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance, and the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while simultaneously maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
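The performance-weighted ensemble idea above can be sketched in one dimension. Two elementary surrogates (nearest-neighbour and piecewise-linear interpolation) stand in for the trained RBFANN/SVR/Kriging models, and inverse validation-RMSE weights stand in for the set pair analysis weights; all data are illustrative:

```python
def nn_surrogate(xs, ys):
    """Nearest-neighbour surrogate (stand-in for a trained model)."""
    def f(x):
        i = min(range(len(xs)), key=lambda j: abs(xs[j] - x))
        return ys[i]
    return f

def lin_surrogate(xs, ys):
    """Piecewise-linear surrogate (stand-in for a trained model)."""
    pts = sorted(zip(xs, ys))
    def f(x):
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        return pts[0][1] if x < pts[0][0] else pts[-1][1]
    return f

def ensemble(models, val_x, val_y):
    """Combine surrogates with weights proportional to inverse validation RMSE."""
    def rmse(m):
        return (sum((m(x) - y) ** 2 for x, y in zip(val_x, val_y))
                / len(val_x)) ** 0.5
    inv = [1.0 / (rmse(m) + 1e-12) for m in models]
    s = sum(inv)
    weights = [v / s for v in inv]
    return lambda x: sum(w * m(x) for w, m in zip(weights, models))
```

The better a member performs on held-out validation points, the larger its share of the ensemble prediction, which mirrors the performance-based weighting used in the paper.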
Lee, Kyueun; Drekonja, Dimitri M; Enns, Eva A
2018-03-01
To determine the optimal antibiotic prophylaxis strategy for transrectal prostate biopsy (TRPB) as a function of the local antibiotic resistance profile. We developed a decision-analytic model to assess the cost-effectiveness of four antibiotic prophylaxis strategies: ciprofloxacin alone, ceftriaxone alone, ciprofloxacin and ceftriaxone in combination, and directed prophylaxis selection based on susceptibility testing. We used a payer's perspective and estimated the health care costs and quality-adjusted life-years (QALYs) associated with each strategy for a cohort of 66-year-old men undergoing TRPB. Costs and benefits were discounted at 3% annually. Base-case resistance prevalence was 29% to ciprofloxacin and 7% to ceftriaxone, reflecting susceptibility patterns observed at the Minneapolis Veterans Affairs Health Care System. Resistance levels were varied in sensitivity analysis. In the base case, single-agent prophylaxis strategies were dominated. Directed prophylaxis strategy was the optimal strategy at a willingness-to-pay threshold of $50,000/QALY gained. Relative to the directed prophylaxis strategy, the incremental cost-effectiveness ratio of the combination strategy was $123,333/QALY gained over the lifetime time horizon. In sensitivity analysis, single-agent prophylaxis strategies were preferred only at extreme levels of resistance. Directed or combination prophylaxis strategies were optimal for a wide range of resistance levels. Facilities using single-agent antibiotic prophylaxis strategies before TRPB should re-evaluate their strategies unless extremely low levels of antimicrobial resistance are documented. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
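The cost-effectiveness comparison described above can be sketched as a small ICER calculation: sort strategies by cost, drop dominated ones, and compare successive incremental cost-effectiveness ratios with the willingness-to-pay threshold. The cost/QALY numbers below are illustrative, not the study's values, and extended dominance is omitted for brevity:

```python
def optimal_strategy(strategies, wtp):
    """strategies: dict name -> (cost, qalys); returns the preferred name
    at willingness-to-pay `wtp` ($/QALY gained)."""
    frontier = []
    for name, (c, q) in sorted(strategies.items(), key=lambda kv: kv[1][0]):
        if frontier and frontier[-1][2] >= q:
            continue                  # costs more, no more effective: dominated
        frontier.append((name, c, q))
    best = frontier[0][0]
    for (_, c0, q0), (n1, c1, q1) in zip(frontier, frontier[1:]):
        if (c1 - c0) / (q1 - q0) <= wtp:
            best = n1                 # the extra QALYs are worth paying for
    return best
```

With these toy inputs the single-agent options are dominated, directed prophylaxis wins at a $50,000/QALY threshold, and the combination strategy only becomes preferred at a much higher threshold, mirroring the qualitative pattern in the abstract.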
A particle swarm optimization variant with an inner variable learning strategy.
Wu, Guohua; Pedrycz, Witold; Ma, Manhao; Qiu, Dishan; Li, Haifeng; Liu, Jin
2014-01-01
Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems of high dimensionality and complex landscapes. In this paper, we integrate problem-oriented knowledge into the design of a PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle to inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping-out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles, whereas the trap jumping-out strategy is adaptive in nature. Experimental simulations on some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of PSO-IVL underscores the usefulness of augmenting evolutionary algorithms with problem-oriented domain knowledge.
Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah
2016-01-01
The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome this weakness of the classical BBO algorithm for QAP by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions within reasonable computational times. Out of 61 benchmark instances tested, the proposed method obtains the best known solutions for 57 of them.
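The tabu search component that replaces mutation can be illustrated on a toy QAP instance. This is a hedged sketch: the instance data, tabu tenure, and iteration budget are arbitrary assumptions, not values from the paper:

```python
import itertools

# Toy 4x4 QAP instance (assumption): F = flows, D = distances
F = [[0, 3, 1, 2],
     [3, 0, 1, 4],
     [1, 1, 0, 2],
     [2, 4, 2, 0]]
D = [[0, 2, 4, 1],
     [2, 0, 3, 5],
     [4, 3, 0, 2],
     [1, 5, 2, 0]]

def qap_cost(p):
    n = len(p)
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

def tabu_search(start, iters=50, tenure=3):
    cur = list(start)
    best, best_cost = cur[:], qap_cost(cur)
    tabu = {}  # swap -> iteration index until which it stays forbidden
    for it in range(iters):
        best_move, best_move_cost = None, float("inf")
        for i in range(len(cur)):
            for j in range(i + 1, len(cur)):
                cur[i], cur[j] = cur[j], cur[i]   # try the swap
                c = qap_cost(cur)
                cur[i], cur[j] = cur[j], cur[i]   # undo it
                # aspiration: a tabu move is allowed if it beats the best
                if (tabu.get((i, j), -1) < it or c < best_cost) and c < best_move_cost:
                    best_move, best_move_cost = (i, j), c
        if best_move is None:
            break
        i, j = best_move
        cur[i], cur[j] = cur[j], cur[i]
        tabu[(i, j)] = it + tenure
        if best_move_cost < best_cost:
            best, best_cost = cur[:], best_move_cost
    return best, best_cost

# brute-force optimum for verification on this tiny instance
opt = min(qap_cost(list(p)) for p in itertools.permutations(range(4)))
```

The key behavior mirrored from the paper: tabu search accepts the best non-forbidden swap even when it worsens the current solution, diversifying without the random damage a mutation operator would inflict.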
Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad
2018-02-01
The inherent volatility and unpredictable nature of renewable generation and load demand pose considerable challenges for the energy exchange optimization of microgrids (MGs). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints under load consumption and renewable power generation uncertainties. Three risk-based strategies are distinguished using the conditional value at risk (CVaR) approach. The proposed model is formulated with two distinct objective functions. The first function minimizes the operation and maintenance costs, the cost of power transactions between the upstream network and MGs, and the power loss cost, whereas the second function minimizes the energy not supplied (ENS) value. Furthermore, a stochastic scenario-based approach is incorporated into the model to handle the uncertainty. Also, the Kantorovich distance scenario reduction method is implemented to reduce the computational burden. Finally, the non-dominated sorting genetic algorithm (NSGA-II) is applied to minimize the objective functions simultaneously, and the best solution is extracted by the fuzzy satisfying method with respect to the risk-based strategies. The proposed model is tested on the modified IEEE 33-bus distribution system, and the obtained results show that the presented approach is an efficient tool for optimal energy exchange among MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Robust optimization based energy dispatch in smart grids considering demand uncertainty
NASA Astrophysics Data System (ADS)
Nassourou, M.; Puig, V.; Blesa, J.
2017-01-01
In this study we discuss the application of robust optimization to the problem of economic energy dispatch in smart grids. Robust-optimization-based MPC strategies for tackling uncertain load demands are developed. Unexpected additive disturbances are modelled by defining an affine dependence between the control inputs and the uncertain load demands. The developed strategies were applied to a hybrid power system connected to an electrical power grid. Furthermore, to demonstrate the superiority of standard Economic MPC over tracking MPC, a comparison (e.g., average daily cost) between standard tracking MPC, standard Economic MPC, and the integration of both in one-layer and two-layer approaches was carried out. The goal of this research is to design a controller based on Economic MPC strategies that tackles uncertainties, in order to minimise economic costs and guarantee service reliability of the system.
Development of a codon optimization strategy using the efor RED reporter gene as a test case
NASA Astrophysics Data System (ADS)
Yip, Chee-Hoo; Yarkoni, Orr; Ajioka, James; Wan, Kiew-Lian; Nathan, Sheila
2018-04-01
Synthetic biology is a platform that enables high-level synthesis of useful products such as pharmaceutically related drugs, bioplastics and green fuels from synthetic DNA constructs. Large-scale expression of these products can be achieved in an industrially compliant host such as Escherichia coli. To maximise the production of recombinant proteins in a heterologous host, the genes of interest are usually codon optimized based on the codon usage of the host. However, the bioinformatics freeware available for standard codon optimization might not be ideal for determining the best sequence for the synthesis of synthetic DNA. Synthesis of incorrect sequences can prove to be a costly error; to avoid this, a codon optimization strategy was developed based on E. coli codon usage, using the efor RED reporter gene as a test case. This strategy replaces codons encoding serine, leucine, proline and threonine with the most frequently used codons in E. coli. Furthermore, codons encoding valine and glycine are substituted with the second most highly used codons in E. coli. Both the optimized and original efor RED genes were ligated to the pJS209 plasmid backbone using Gibson Assembly, and the recombinant DNAs were transformed into the E. coli E. cloni 10G strain. The fluorescence intensity per cell density of the optimized sequence was improved by 20% compared to the original sequence. Hence, the developed codon optimization strategy is proposed when designing an optimal sequence for heterologous protein production in E. coli.
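The described substitution rule can be sketched as a simple codon-rewriting pass. The preferred-codon choices below reflect commonly cited E. coli usage but should be treated as illustrative assumptions rather than the exact table used in the study, and only the four primarily targeted residues are handled:

```python
# Assumed preferred E. coli codons for the targeted residues (illustrative).
PREFERRED = {
    "S": "AGC",  # serine
    "L": "CTG",  # leucine
    "P": "CCG",  # proline
    "T": "ACC",  # threonine
}

# Minimal codon -> amino-acid map restricted to the residues we rewrite.
CODON_TO_AA = {
    "TCT": "S", "TCC": "S", "TCA": "S", "TCG": "S", "AGT": "S", "AGC": "S",
    "TTA": "L", "TTG": "L", "CTT": "L", "CTC": "L", "CTA": "L", "CTG": "L",
    "CCT": "P", "CCC": "P", "CCA": "P", "CCG": "P",
    "ACT": "T", "ACC": "T", "ACA": "T", "ACG": "T",
}

def optimize(seq):
    """Replace each codon of a targeted residue by the preferred synonym;
    codons of other residues pass through unchanged."""
    out = []
    for i in range(0, len(seq) - len(seq) % 3, 3):
        codon = seq[i:i + 3]
        aa = CODON_TO_AA.get(codon)
        out.append(PREFERRED[aa] if aa else codon)
    return "".join(out)
```

Because every substitution is synonymous, the encoded protein is unchanged; only the codon usage shifts toward the host's preferences.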
Ye, Fei; Lou, Xin Yuan; Sun, Lin Fu
2017-01-01
This paper proposes a new support vector machine (SVM) optimization scheme based on an improved chaotic fruit fly optimization algorithm (FOA) with a mutation strategy to simultaneously perform parameter tuning for the SVM and feature selection. In the improved FOA, the chaotic particle initializes the fruit fly swarm location and replaces the expression of distance for the fruit fly to find the food source. The proposed mutation strategy uses two distinct generative mechanisms for new food sources at the osphresis phase, allowing the algorithm to search for the optimal solution both in the whole solution space and within the local solution space containing the fruit fly swarm location. In an evaluation based on a group of ten benchmark problems, the proposed algorithm's performance is compared with that of other well-known algorithms, and the results support the superiority of the proposed algorithm. Moreover, this algorithm is successfully applied to an SVM to perform both parameter tuning and feature selection on real-world classification problems. This method is called chaotic fruit fly optimization algorithm (CIFOA)-SVM and has been shown to be a more robust and effective optimization method than other well-known methods, particularly in terms of solving the medical diagnosis problem and the credit card problem.
Fundamental Limits of Delay and Security in Device-to-Device Communication
2013-01-01
Systematic MDS (maximum distance separable) codes and random binning strategies are shown to achieve a Pareto-optimal delay-reconstruction tradeoff. A coding scheme based on erasure compression and Slepian-Wolf binning is presented and shown to provide a Pareto-optimal tradeoff; the erasure MD setup is then used to propose a ...
The optimal imaging strategy for patients with stable chest pain: a cost-effectiveness analysis.
Genders, Tessa S S; Petersen, Steffen E; Pugliese, Francesca; Dastidar, Amardeep G; Fleischmann, Kirsten E; Nieman, Koen; Hunink, M G Myriam
2015-04-07
The optimal imaging strategy for patients with stable chest pain is uncertain. To determine the cost-effectiveness of different imaging strategies for patients with stable chest pain. Microsimulation state-transition model. Published literature. 60-year-old patients with a low to intermediate probability of coronary artery disease (CAD). Lifetime. The United States, the United Kingdom, and the Netherlands. Coronary computed tomography (CT) angiography, cardiac stress magnetic resonance imaging, stress single-photon emission CT, and stress echocardiography. Lifetime costs, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios. The strategy that maximized QALYs and was cost-effective in the United States and the Netherlands began with coronary CT angiography, continued with cardiac stress imaging if angiography found at least 50% stenosis in at least 1 coronary artery, and ended with catheter-based coronary angiography if stress imaging induced ischemia of any severity. For U.K. men, the preferred strategy was optimal medical therapy without catheter-based coronary angiography if coronary CT angiography found only moderate CAD or stress imaging induced only mild ischemia. In these strategies, stress echocardiography was consistently more effective and less expensive than other stress imaging tests. For U.K. women, the optimal strategy was stress echocardiography followed by catheter-based coronary angiography if echocardiography induced mild or moderate ischemia. Results were sensitive to changes in the probability of CAD and assumptions about false-positive results. All cardiac stress imaging tests were assumed to be available. Exercise electrocardiography was included only in a sensitivity analysis. Differences in QALYs among strategies were small. Coronary CT angiography is a cost-effective triage test for 60-year-old patients who have nonacute chest pain and a low to intermediate probability of CAD. Erasmus University Medical Center.
Transaction fees and optimal rebalancing in the growth-optimal portfolio
NASA Astrophysics Data System (ADS)
Feng, Yu; Medo, Matúš; Zhang, Liang; Zhang, Yi-Cheng
2011-05-01
The growth-optimal portfolio optimization strategy pioneered by Kelly is based on constant portfolio rebalancing which makes it sensitive to transaction fees. We examine the effect of fees on an example of a risky asset with a binary return distribution and show that the fees may give rise to an optimal period of portfolio rebalancing. The optimal period is found analytically in the case of lognormal returns. This result is consequently generalized and numerically verified for broad return distributions and returns generated by a GARCH process. Finally we study the case when investment is rebalanced only partially and show that this strategy can improve the investment long-term growth rate more than optimization of the rebalancing period.
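The effect of proportional fees on the growth rate of a constantly rebalanced portfolio can be reproduced with a short Monte Carlo sketch for binary returns. All parameter values (return sizes, probabilities, fee level) are assumptions, not the paper's:

```python
import math
import random

def simulate_growth(fraction=0.5, period=1, fee=0.0, steps=10000, seed=7):
    """Average log-growth per step of a portfolio keeping `fraction` of wealth
    in a binary-return risky asset, rebalanced every `period` steps with a
    proportional transaction fee on the traded amount."""
    rng = random.Random(seed)
    risky, safe = fraction, 1.0 - fraction   # start with unit wealth
    for t in range(1, steps + 1):
        # risky asset gains 20% with probability 0.6, loses 15% otherwise
        risky *= 1.20 if rng.random() < 0.6 else 0.85
        if t % period == 0:
            total = risky + safe
            target = fraction * total
            cost = fee * abs(risky - target)  # proportional fee on the trade
            risky, safe = target, total - target - cost
    return math.log(risky + safe) / steps
```

Holding the random seed fixed isolates the fee effect: on the identical return path, every rebalance with a nonzero fee strictly reduces wealth, which is what makes less frequent rebalancing attractive once fees are present.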
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
NASA Astrophysics Data System (ADS)
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable-fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition (POD)-based design-space order reduction scheme combined with an evolutionary algorithm. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction.
The snapshot of the candidate population is updated iteratively using the evolutionary-algorithm technique of fitness-driven retention. This strategy capitalizes on the advantages of evolutionary algorithms as well as POD-based reduced-order modeling, while overcoming the shortcomings inherent in these techniques. When linked with M3DOE, this strategy offers a computationally efficient methodology for problems with a high level of complexity and a challenging design-space. This newly developed framework is demonstrated for its robustness on a nonconventional supersonic tailless air vehicle wing shape optimization problem.
Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D
2002-01-01
This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely genetic programming (GP), is used to develop a process model solely from the historic process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz., genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data, without invoking detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally, resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.
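The second stage of such hybrids, stochastic optimization of a data-driven model's input space, can be sketched with a minimal GA over a hypothetical surrogate. The surrogate function and all GA settings below are illustrative assumptions, not the paper's GP model:

```python
import random

def surrogate(x):
    """Hypothetical data-driven yield model standing in for the GP model:
    peak yield at normalized operating conditions x[0]=0.3, x[1]=0.7."""
    return 1.0 - (x[0] - 0.3) ** 2 - (x[1] - 0.7) ** 2

def ga_optimize(fitness, pop_size=30, gens=100, seed=2):
    rng = random.Random(seed)
    pop = [[rng.random(), rng.random()] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # crossover
            k = rng.randrange(len(child))
            child[k] += rng.gauss(0.0, 0.05)                 # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

Because the fitness function is the fitted surrogate rather than the plant itself, no mechanistic process knowledge enters the optimization, which is the point of the GP-GA pairing.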
A market-based optimization approach to sensor and resource management
NASA Astrophysics Data System (ADS)
Schrage, Dan; Farnham, Christopher; Gonsalves, Paul G.
2006-05-01
Dynamic resource allocation for sensor management is a problem that demands solutions beyond traditional approaches to optimization. Market-based optimization applies solutions from economic theory, particularly game theory, to the resource allocation problem by creating an artificial market for sensor information and computational resources. Intelligent agents are the buyers and sellers in this market, and they represent all the elements of the sensor network, from sensors to sensor platforms to computational resources. These agents interact based on a negotiation mechanism that determines their bidding strategies. This negotiation mechanism and the agents' bidding strategies are based on game theory, and they are designed so that the aggregate result of the multi-agent negotiation process is a market in competitive equilibrium, which guarantees an optimal allocation of resources throughout the sensor network. This paper makes two contributions to the field of market-based optimization: First, we develop a market protocol to handle heterogeneous goods in a dynamic setting. Second, we develop arbitrage agents to improve the efficiency in the market in light of its dynamic nature.
Giordano, Carmen; Albani, Diego; Gloria, Antonio; Tunesi, Marta; Batelli, Sara; Russo, Teresa; Forloni, Gianluigi; Ambrosio, Luigi; Cigada, Alberto
2009-12-01
This review presents two intriguing multidisciplinary strategies that might make the difference in the treatment of neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. The first proposed strategy is based on the controlled delivery of recombinant proteins known to play a key role in these neurodegenerative disorders that are released in situ by optimized polymer-based systems. The second strategy is the use of engineered cells, encapsulated and delivered in situ by suitable polymer-based systems, that act as drug reservoirs and allow the delivery of selected molecules to be used in the treatment of Alzheimer's and Parkinson's diseases. In both these scenarios, the design and development of optimized polymer-based drug delivery and cell housing systems for central nervous system applications represent a key requirement. Materials science provides suitable hydrogel-based tools to be optimized together with suitably designed recombinant proteins or drug delivering-cells that, once in situ, can provide an effective treatment for these neurodegenerative disorders. In this scenario, only interdisciplinary research that fully integrates biology, biochemistry, medicine and materials science can provide a springboard for the development of suitable therapeutic tools, not only for the treatment of Alzheimer's and Parkinson's diseases but also, prospectively, for a wide range of severe neurodegenerative disorders.
NASA Astrophysics Data System (ADS)
Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta
2016-06-01
With the increased trend in automation of the modern manufacturing industry, human intervention in routine, repetitive and data-specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce the human intervention in the selection of optimal cutting tools and process parameters for metal cutting applications, using Artificial Intelligence techniques. Generally, the selection of the appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician or cutting-tool expert based on their knowledge or an extensive search through a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in databooks and tool catalogues with an intelligent knowledge-based selection system. This system employs artificial-intelligence-based techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using MathWorks MATLAB Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for the selection of the appropriate cutting tool and the optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.
NASA Astrophysics Data System (ADS)
Wu, Yun-jie; Li, Guo-fei
2018-01-01
Based on the sliding mode extended state observer (SMESO) technique, an adaptive disturbance-compensation finite control set optimal control (FCS-OC) strategy is proposed for a permanent magnet synchronous motor (PMSM) system driven by a voltage source inverter (VSI). To improve the robustness of the FCS-OC strategy, an SMESO is designed to estimate the output-effect disturbance. The estimated value is fed back to the finite control set optimal controller to implement disturbance compensation. Theoretical analysis indicates that the designed SMESO converges in finite time. The simulation results illustrate that the proposed adaptive disturbance-compensation FCS-OC exhibits better dynamic response in the presence of disturbances.
Review of Reactive Power Dispatch Strategies for Loss Minimization in a DFIG-based Wind Farm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Baohua; Hu, Weihao; Hou, Peng
2017-06-27
This study reviews and compares the performance of reactive power dispatch strategies for the loss minimization of Doubly Fed Induction Generator (DFIG)-based Wind Farms (WFs). Twelve possible combinations of three WF level reactive power dispatch strategies and four Wind Turbine (WT) level reactive power control strategies are investigated. All of the combined strategies are formulated based on the comprehensive loss models of WFs, including the loss models of DFIGs, converters, filters, transformers, and cables of the collection system. Optimization problems are solved by a Modified Particle Swarm Optimization (MPSO) algorithm. The effectiveness of these strategies is evaluated by simulations on a carefully designed WF under a series of cases with different wind speeds and reactive power requirements of the WF. The wind speed at each WT inside the WF is calculated using the Jensen wake model. The results show that the best reactive power dispatch strategy for loss minimization comes when the WF level strategy and WT level control are coordinated and the losses from each device in the WF are considered in the objective.
Population Modeling Approach to Optimize Crop Harvest Strategy. The Case of Field Tomato.
Tran, Dinh T; Hertog, Maarten L A T M; Tran, Thi L H; Quyen, Nguyen T; Van de Poel, Bram; Mata, Clara I; Nicolaï, Bart M
2017-01-01
In this study, the aim is to develop a population-model-based approach to optimize fruit harvesting strategies with regard to fruit quality and its derived economic value. This approach was applied to the case of tomato fruit harvesting under Vietnamese conditions. Fruit growth and development of tomato (cv. "Savior") was monitored in terms of fruit size and color during both the Vietnamese winter and summer growing seasons. A kinetic tomato fruit growth model was applied to quantify biological fruit-to-fruit variation in terms of physiological maturation, and was successfully calibrated. Finally, the model was extended to translate the fruit-to-fruit variation at harvest into the economic value of the harvested crop. It can be concluded that a model-based approach to the optimization of harvest date and harvest frequency with regard to the economic value of the crop is feasible. This approach allows growers to optimize their harvesting strategy by harvesting the crop at more uniform maturity stages, meeting the stringent retail demands for homogeneous high-quality product. The total farm profit will still depend on the impact a change in harvesting strategy might have on related expenditures. This model-based harvest optimization approach can be easily transferred to other fruit and vegetable crops, improving the homogeneity of postharvest product streams.
A simulation-optimization-based decision support tool for mitigating traffic congestion.
DOT National Transportation Integrated Search
2009-12-01
"Traffic congestion has grown considerably in the United States over the past twenty years. In this paper, we develop : a robust decision support tool based on simulation optimization to evaluate and recommend congestion-mitigation : strategies to tr...
Active model-based balancing strategy for self-reconfigurable batteries
NASA Astrophysics Data System (ADS)
Bouchhima, Nejmeddine; Schnierle, Marc; Schulte, Sascha; Birke, Kai Peter
2016-08-01
This paper describes a novel balancing strategy for self-reconfigurable batteries in which the discharge and charge rates of each cell can be controlled. While much effort has been focused on improving the hardware architecture of self-reconfigurable batteries, energy equalization algorithms have not been systematically optimized in terms of maximizing the efficiency of the balancing system. Our approach closes this gap. We develop a balancing strategy for optimal control of the discharge rate of battery cells. We first formulate cell balancing as a nonlinear optimal control problem, which is then modeled as a network program. Using dynamic programming techniques and MATLAB's vectorization feature, we solve the optimal control problem by generating the optimal battery operation policy for a given drive cycle. The simulation results show that the proposed strategy efficiently balances the cells over the life of the battery, an advantage absent from conventional approaches. Our algorithm is shown to be robust when tested against different influencing parameters varying over a wide spectrum on different drive cycles. Furthermore, owing to its low computation time and demonstrated low sensitivity to inaccurate power predictions, our strategy can be integrated into a real-time system.
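The flavor of the dynamic-programming formulation can be conveyed with a toy program that decides which of two cells serves each time step. The loss figures, demand profile, and terminal balancing penalty are illustrative assumptions, a stand-in for the network program solved in the paper:

```python
from functools import lru_cache

# Hypothetical two-cell pack: discharging cell 1 is lossier than cell 0.
LOSS = {0: 0.05, 1: 0.08}      # loss per unit of charge drawn (illustrative)
DEMAND = [1, 1, 1, 1]          # one unit of charge demanded per time step

def optimal_policy(soc0=3, soc1=3):
    """Choose the discharging cell at each step, minimising total loss plus
    an end-of-cycle imbalance penalty, via memoized dynamic programming."""
    @lru_cache(maxsize=None)
    def best(t, s0, s1):
        if t == len(DEMAND):
            return abs(s0 - s1) * 1.0, ()      # terminal balancing penalty
        options = []
        if s0 >= 1:                            # serve the step from cell 0
            cost, plan = best(t + 1, s0 - 1, s1)
            options.append((cost + LOSS[0], (0,) + plan))
        if s1 >= 1:                            # serve the step from cell 1
            cost, plan = best(t + 1, s0, s1 - 1)
            options.append((cost + LOSS[1], (1,) + plan))
        return min(options)
    return best(0, soc0, soc1)
```

With equal starting charge, the optimum splits the four demand units evenly (two per cell): the lossier cell is used just enough to end the cycle balanced, which is the trade-off the paper's policy optimizes at scale.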
Siriwardena-Mahanama, Buddhima N.; Allen, Matthew J.
2013-01-01
This review describes recent advances in strategies for tuning the water-exchange rates of contrast agents for magnetic resonance imaging (MRI). Water-exchange rates play a critical role in determining the efficiency of contrast agents; consequently, optimization of water-exchange rates, among other parameters, is necessary to achieve high efficiencies. This need has resulted in extensive research efforts to modulate water-exchange rates by chemically altering the coordination environments of the metal complexes that function as contrast agents. The focus of this review is coordination-chemistry-based strategies used to tune the water-exchange rates of lanthanide(III)-based contrast agents for MRI. Emphasis will be given to results published in the 21st century, as well as implications of these strategies on the design of contrast agents.
NASA Astrophysics Data System (ADS)
Cai, Xiaohui; Liu, Yang; Ren, Zhiming
2018-06-01
Reverse-time migration (RTM) is a powerful tool for imaging geologically complex structures such as steep dips and subsalt. However, its implementation is quite computationally expensive. Recently, as a low-cost solution, the graphic processing unit (GPU) was introduced to improve the efficiency of RTM. In this paper, we develop three ameliorative strategies to implement RTM on GPU cards. First, given the high accuracy and efficiency of the adaptive optimal finite-difference (FD) method based on least squares (LS) on the central processing unit (CPU), we study the optimal LS-based FD method on the GPU. Second, we extend the CPU-based hybrid absorbing boundary condition (ABC) to a GPU-based one by addressing two issues that arise when the former is introduced to the GPU card: high time consumption and chaotic threads. Third, for large-scale data, a combinatorial strategy of optimal checkpointing and efficient boundary storage is introduced to trade off memory against recomputation. To save the time of communication between host and disk, a portable operating system interface (POSIX) thread is utilized to engage another CPU core at the checkpoints. Applications of the three strategies on the GPU with the compute unified device architecture (CUDA) programming language in RTM demonstrate their efficiency and validity.
Development of industry-based strategies for motivating seat-belt usage
DOT National Transportation Integrated Search
1983-03-01
A variety of incentive-based programs to motivate safety belt use were tested during the 18-month grant period in order to define optimal incentive strategies for particular corporate settings. Initial programs provoked important research questions w...
Bell-Curve Based Evolutionary Strategies for Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
2001-01-01
Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity. However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPS) have gained a significant foothold. One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general and ES or EP in particular see Back and Dasgupta and Michalesicz. We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
The art of war: beyond memory-one strategies in population games.
Lee, Christopher; Harper, Marc; Fryer, Dashiell
2015-01-01
We show that the history of play in a population game contains exploitable information that can be successfully used by sophisticated strategies to defeat memory-one opponents, including zero determinant strategies. The history allows a player to label opponents by their strategies, enabling a player to determine the population distribution and to act differentially based on the opponent's strategy in each pairwise interaction. For the Prisoner's Dilemma, these advantages lead to the natural formation of cooperative coalitions among similarly behaving players and eventually to unilateral defection against opposing player types. We show analytically and empirically that optimal play in population games depends strongly on the population distribution. For example, the optimal strategy for a minority player type against a resident TFT population is ALLC, while for a majority player type the optimal strategy versus TFT players is ALLD. Such behaviors are not accessible to memory-one strategies. Drawing inspiration from Sun Tzu's the Art of War, we implemented a non-memory-one strategy for population games based on techniques from machine learning and statistical inference that can exploit the history of play in this manner. Via simulation we find that this strategy is essentially uninvadable and can successfully invade (significantly more likely than a neutral mutant) essentially all known memory-one strategies for the Prisoner's Dilemma, including ALLC (always cooperate), ALLD (always defect), tit-for-tat (TFT), win-stay-lose-shift (WSLS), and zero determinant (ZD) strategies, including extortionate and generous strategies.
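The abstract's central point, that optimal play depends on the population distribution, can be made concrete with a small payoff calculator. The sketch below uses the standard Prisoner's Dilemma payoffs (T=5, R=3, P=1, S=0, an assumption; the paper does not state its matrix) and evaluates a strategy against opponents drawn from a population mix.

```python
# Minimal sketch: average per-round payoff of one strategy against another
# in the iterated Prisoner's Dilemma, then its expected payoff against a
# population mix. Payoff matrix T=5, R=3, P=1, S=0 is assumed here.

ROUNDS = 50

def play(strat_a, strat_b, rounds=ROUNDS):
    """Average per-round payoff of strat_a against strat_b."""
    payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    last_a, last_b = "C", "C"        # both treated as opening cooperatively
    total = 0
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        total += payoff[(a, b)]
        last_a, last_b = a, b
    return total / rounds

ALLC = lambda last: "C"              # always cooperate
ALLD = lambda last: "D"              # always defect
TFT = lambda last: last              # repeat the opponent's previous move

def population_payoff(strat, mix):
    """Expected payoff of `strat` vs opponents drawn from `mix`
    (a dict mapping strategy -> population fraction)."""
    return sum(frac * play(strat, opp) for opp, frac in mix.items())
```

Against a resident TFT population, ALLC averages 3.0 per round while ALLD averages only about 1.08 (one exploitative round, then mutual defection), illustrating why the best reply shifts with who populates the game.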
Optimal structural design of the midship of a VLCC based on the strategy integrating SVM and GA
NASA Astrophysics Data System (ADS)
Sun, Li; Wang, Deyu
2012-03-01
In this paper, a hybrid process of modeling and optimization that integrates a support vector machine (SVM) and a genetic algorithm (GA) was introduced to reduce the high time cost of structural optimization of ships. SVM, which is rooted in statistical learning theory and is an approximate implementation of structural risk minimization, can provide good generalization performance when metamodeling the input-output relationship of real problems, and consequently cuts down the high time cost of analyses of real problems, such as FEM analysis. The GA, as a powerful optimization technique, possesses remarkable advantages for problems that can hardly be optimized with common gradient-based methods, which makes it suitable for optimizing models built by SVM. Based on the SVM-GA strategy, optimization of the structural scantlings in the midship of a very large crude carrier (VLCC) was carried out according to the direct strength assessment method in the common structural rules (CSR), which demonstrates the high efficiency of SVM-GA in optimizing ship structural scantlings under heavy computational complexity. The time cost of the optimization with SVM-GA is sharply reduced: many more loops can be processed within a small amount of time, and the design is improved remarkably.
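The surrogate-plus-GA loop can be sketched compactly. In this illustration an RBF interpolator stands in for the SVM regressor (to keep the example dependency-light), and the "expensive" objective is a toy quadratic rather than an FEM analysis; population sizes and rates are arbitrary.

```python
import numpy as np

# Surrogate-assisted GA sketch: fit a cheap metamodel to a few expensive
# evaluations, then let a GA search the metamodel. An RBF interpolator
# stands in for the SVM regressor; the objective is a toy quadratic.

rng = np.random.default_rng(0)

def expensive(x):                        # stand-in for an FEM evaluation
    return np.sum((x - 0.3) ** 2, axis=-1)

# --- fit the surrogate on a small design of experiments ---
X = rng.uniform(-1, 1, size=(40, 2))
y = expensive(X)
gamma = 2.0
def rbf(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)
w = np.linalg.solve(rbf(X, X) + 1e-8 * np.eye(len(X)), y)
surrogate = lambda P: rbf(P, X) @ w

# --- simple real-coded GA searching the surrogate, not the real objective ---
pop = rng.uniform(-1, 1, size=(30, 2))
for _ in range(60):
    fit = surrogate(pop)
    parents = pop[np.argsort(fit)[:10]]             # truncation selection
    mates = parents[rng.integers(0, 10, size=30)]
    other = parents[rng.integers(0, 10, size=30)]
    alpha = rng.uniform(size=(30, 1))
    pop = alpha * mates + (1 - alpha) * other       # blend crossover
    pop += rng.normal(0, 0.05, size=pop.shape)      # Gaussian mutation
    pop = np.clip(pop, -1, 1)
best = pop[np.argmin(surrogate(pop))]
```

Every GA generation costs only surrogate evaluations; the expensive solver is called just 40 times to build the training set, which is the source of the time savings the paper reports.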
Han, Dianwei; Zhang, Jun; Tang, Guiliang
2012-01-01
An accurate prediction of the pre-microRNA secondary structure is important in miRNA informatics. Based on nucleotide cyclic motifs (NCM), a recently proposed model for predicting RNA secondary structure, we propose and implement a Modified NCM (MNCM) model with a physics-based scoring strategy to tackle the problem of pre-microRNA folding. Our microRNAfold is implemented using a global optimal algorithm based on bottom-up local optimal solutions. Our experimental results show that microRNAfold outperforms the current leading prediction tools in terms of True Negative rate, False Negative rate, Specificity, and Matthews correlation coefficient.
NASA Astrophysics Data System (ADS)
Ma, Lin; Wang, Kexin; Xu, Zuhua; Shao, Zhijiang; Song, Zhengyu; Biegler, Lorenz T.
2018-05-01
This study presents a trajectory optimization framework for a lunar rover performing vertical takeoff, vertical landing (VTVL) maneuvers in the presence of terrain using variable-thrust propulsion. First, a VTVL trajectory optimization problem with a three-dimensional kinematics and dynamics model, boundary conditions, and path constraints is formulated. Then, a finite-element approach transcribes the formulated trajectory optimization problem into a nonlinear programming (NLP) problem solved by a highly efficient NLP solver. A homotopy-based backtracking strategy is applied to enhance convergence in solving the formulated VTVL trajectory optimization problem. The optimal thrust solution typically has a "bang-bang" profile, considering that bounds are imposed on the magnitude of engine thrust. An adaptive mesh refinement strategy based on a constant Hamiltonian profile is designed to address the difficulty of locating the breakpoints in the thrust profile. Four scenarios are simulated. Simulation results indicate that the proposed trajectory optimization framework has sufficient adaptability to handle VTVL missions efficiently.
A framework for designing and analyzing binary decision-making strategies in cellular systems†
Porter, Joshua R.; Andrews, Burton W.; Iglesias, Pablo A.
2015-01-01
Cells make many binary (all-or-nothing) decisions based on noisy signals gathered from their environment and processed through noisy decision-making pathways. Reducing the effect of noise to improve the fidelity of decision-making comes at the expense of increased complexity, creating a tradeoff between performance and metabolic cost. We present a framework based on rate distortion theory, a branch of information theory, to quantify this tradeoff and design binary decision-making strategies that balance low cost and accuracy in optimal ways. With this framework, we show that several observed behaviors of binary decision-making systems, including random strategies, hysteresis, and irreversibility, are optimal in an information-theoretic sense for various situations. This framework can also be used to quantify the goals around which a decision-making system is optimized and to evaluate the optimality of cellular decision-making systems by a fundamental information-theoretic criterion. As proof of concept, we use the framework to quantify the goals of the externally triggered apoptosis pathway. PMID:22370552
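The rate-distortion tradeoff the framework builds on can be computed numerically with the standard Blahut-Arimoto iteration. The sketch below does this for a binary source with Hamming distortion (an illustrative choice, not the paper's apoptosis model); `beta` sets the slope of the R(D) curve, with large `beta` trading high rate for low distortion.

```python
import math

# Blahut-Arimoto iteration for a rate-distortion curve: given a source
# distribution px and a distortion matrix dist, find the minimal rate at
# the distortion level selected by the slope parameter beta. The binary
# source with Hamming distortion below is purely illustrative.

def blahut_arimoto(px, dist, beta, iters=200):
    nx, ny = len(dist), len(dist[0])
    qy = [1.0 / ny] * ny                 # output marginal, init uniform
    for _ in range(iters):
        # conditional Q(y|x) proportional to q(y) * exp(-beta * d(x,y))
        Q = []
        for x in range(nx):
            row = [qy[y] * math.exp(-beta * dist[x][y]) for y in range(ny)]
            z = sum(row)
            Q.append([v / z for v in row])
        qy = [sum(px[x] * Q[x][y] for x in range(nx)) for y in range(ny)]
    rate = sum(px[x] * Q[x][y] * math.log2(Q[x][y] / qy[y])
               for x in range(nx) for y in range(ny) if Q[x][y] > 0)
    dbar = sum(px[x] * Q[x][y] * dist[x][y]
               for x in range(nx) for y in range(ny))
    return rate, dbar

hamming = [[0, 1], [1, 0]]
```

For a fair binary source this recovers the known curve R(D) = 1 - H(D); sweeping `beta` traces out the full tradeoff between decision fidelity and the information (hence metabolic) cost of achieving it.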
Establishment of an immortalized mouse dermal papilla cell strain with optimized culture strategy.
Guo, Haiying; Xing, Yizhan; Zhang, Yiming; He, Long; Deng, Fang; Ma, Xiaogen; Li, Yuhong
2018-01-01
Dermal papilla (DP) plays important roles in hair follicle regeneration. Long-term culture of mouse DP cells can provide enough cells for research and application of DP cells. We optimized the culture strategy for DP cells from three dimensions: stepwise dissection, collagen I coating, and optimized culture medium. Based on the optimized culture strategy, we immortalized primary DP cells with SV40 large T antigen, and established several immortalized DP cell strains. By comparing molecular expression and morphologic characteristics with primary DP cells, we found one cell strain named iDP6 was similar with primary DP cells. Further identifications illustrate that iDP6 expresses FGF7 and α-SMA, and has activity of alkaline phosphatase. During the process of characterization of immortalized DP cell strains, we also found that cells in DP were heterogeneous. We successfully optimized culture strategy for DP cells, and established an immortalized DP cell strain suitable for research and application of DP cells.
NASA Astrophysics Data System (ADS)
Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.
2017-03-01
General strategic bidding has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. This is computationally complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms, and the problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem to a minimisation function in which the transmission constraints, the operating limits and the ISO market clearing functions are considered without KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14- as well as IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-09-21
In order to utilize the distributed character of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids situations in which model training is disturbed by tasks unrelated to the sensors.
Bourbonnais, Anne; Ducharme, Francine; Landreville, Philippe; Michaud, Cécile; Gauthier, Marie-Andrée; Lavallée, Marie-Hélène
2018-03-01
Few studies have been conducted on strategies to promote the implementation of complex interventions in nursing homes (NHs). This article presents a pilot study intended to assess the strategies that would enable the optimal implementation in NHs of a complex intervention approach based on the meanings of screams of older people living with Alzheimer's disease. An action research approach was used with 19 formal and family caregivers from five NHs. Focus groups and individual interviews were held to assess different implementation strategies. A number of challenges were identified, as were strategies to overcome them. The latter included interactive training, intervention design, and external support. This study shows the feasibility of implementing a complex intervention to optimize older people's well-being. The article shares strategies that may promote the implementation of these types of interventions in NHs.
Yu, Xiang; Zhang, Xueqing
2017-01-01
Comprehensive learning particle swarm optimization (CLPSO) is a powerful state-of-the-art single-objective metaheuristic. Extending from CLPSO, this paper proposes multiswarm CLPSO (MSCLPSO) for multiobjective optimization. MSCLPSO involves multiple swarms, with each swarm associated with a separate original objective. Each particle's personal best position is determined just according to the corresponding single objective. Elitists are stored externally. MSCLPSO differs from existing multiobjective particle swarm optimizers in three aspects. First, each swarm focuses on optimizing the associated objective using CLPSO, without learning from the elitists or any other swarm. Second, mutation is applied to the elitists and the mutation strategy appropriately exploits the personal best positions and elitists. Third, a modified differential evolution (DE) strategy is applied to some extreme and least crowded elitists. The DE strategy updates an elitist based on the differences of the elitists. The personal best positions carry useful information about the Pareto set, and the mutation and DE strategies help MSCLPSO discover the true Pareto front. Experiments conducted on various benchmark problems demonstrate that MSCLPSO can find nondominated solutions distributed reasonably over the true Pareto front in a single run.
NASA Astrophysics Data System (ADS)
Wang, Pan; Zhang, Yi; Yan, Dong
2018-05-01
The Ant Colony Algorithm (ACA) is a powerful and effective algorithm for solving combinatorial optimization problems, and it has been used successfully on the traveling salesman problem (TSP). However, it converges prematurely to non-global optimal solutions, and its calculation time is long. To overcome those shortcomings, a new method is presented: an improved self-adaptive Ant Colony Algorithm based on a genetic strategy. The proposed method adopts an adaptive strategy to adjust the parameters dynamically, and new crossover and inversion operations from the genetic strategy are used. We also run an experiment using the well-known data in TSPLIB. The experimental results show that the proposed method outperforms the basic Ant Colony Algorithm and some improved ACAs in both solution quality and convergence time. The numerical results also show that the proposed optimization method can achieve results close to the best solutions currently known in theory.
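A bare-bones ant colony for the TSP shows the mechanics the paper builds on. As a stand-in for the paper's self-adaptive parameter control, the sketch below simply decays the evaporation rate over iterations; the genetic crossover and inversion operators are omitted, and the 12-city instance is random rather than from TSPLIB.

```python
import math, random

# Minimal ant-colony TSP sketch. The evaporation rate rho is decayed over
# iterations as a crude stand-in for self-adaptive parameter control; the
# paper's genetic crossover/inversion operators are omitted for brevity.

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(12)]

def dist(i, j):
    return math.hypot(cities[i][0] - cities[j][0], cities[i][1] - cities[j][1])

def tour_length(tour):
    return sum(dist(tour[k], tour[(k + 1) % len(tour)]) for k in range(len(tour)))

def aco(n_ants=20, iters=60, alpha=1.0, beta=3.0):
    n = len(cities)
    tau = [[1.0] * n for _ in range(n)]          # pheromone matrix
    best, best_len = None, float("inf")
    for it in range(iters):
        rho = 0.5 * (1 - it / iters) + 0.1       # "adaptive" evaporation
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            todo = set(range(n)) - {tour[0]}
            while todo:                           # roulette-wheel next city
                i = tour[-1]
                weights = [(j, tau[i][j] ** alpha * (1 / dist(i, j)) ** beta)
                           for j in todo]
                r, acc = random.random() * sum(w for _, w in weights), 0.0
                for j, w in weights:
                    acc += w
                    if acc >= r:
                        break
                tour.append(j)
                todo.remove(j)
            length = tour_length(tour)
            if length < best_len:
                best, best_len = tour, length
        for i in range(n):                        # evaporate all trails
            for j in range(n):
                tau[i][j] *= 1 - rho
        for k in range(n):                        # reinforce the best tour
            i, j = best[k], best[(k + 1) % n]
            tau[i][j] += 1 / best_len
            tau[j][i] += 1 / best_len
    return best, best_len
```

Decaying `rho` keeps early search exploratory and later search exploitative, which is the qualitative effect the paper's adaptive parameter strategy targets.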
Optimal Electricity Charge Strategy Based on Price Elasticity of Demand for Users
NASA Astrophysics Data System (ADS)
Li, Xin; Xu, Daidai; Zang, Chuanzhi
Price elasticity is very important for the prediction of electricity demand. This paper establishes price elasticity coefficients for electricity in single-period and inter-temporal settings. A charging strategy is then established based on these coefficients. To evaluate the proposed strategy, simulations of the two elasticity coefficients are carried out using historical data from a certain region.
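The two coefficients the abstract refers to are easy to state concretely. Below is a minimal sketch: the own-price (single-period) elasticity is the percentage change in demand over the percentage change in price, and the inter-temporal (cross) elasticity relates demand in one period to the price in another. All numbers are purely illustrative.

```python
# Price elasticity coefficients, the quantities the charging strategy is
# built on. All example figures are illustrative, not from the paper.

def elasticity(q0, q1, p0, p1):
    """Single-period own-price elasticity: (dQ/Q) / (dP/P)."""
    return ((q1 - q0) / q0) / ((p1 - p0) / p0)

def cross_elasticity(q0_peak, q1_peak, p0_off, p1_off):
    """Inter-temporal elasticity: peak-period demand response
    to a change in the off-peak price."""
    return ((q1_peak - q0_peak) / q0_peak) / ((p1_off - p0_off) / p0_off)

# A 10% price rise that cuts demand by 5% gives an elasticity of -0.5:
# demand is inelastic, so the price increase still raises revenue.
```

A charging strategy would use such coefficients to predict how a proposed tariff shifts demand between periods before committing to it.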
A Novel Harmony Search Algorithm Based on Teaching-Learning Strategies for 0-1 Knapsack Problems
Tuo, Shouheng; Yong, Longquan; Deng, Fang'an
2014-01-01
To enhance the performance of harmony search (HS) algorithm on solving the discrete optimization problems, this paper proposes a novel harmony search algorithm based on teaching-learning (HSTL) strategies to solve 0-1 knapsack problems. In the HSTL algorithm, firstly, a method is presented to adjust dimension dynamically for selected harmony vector in optimization procedure. In addition, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to improve the performance of HS algorithm. Another improvement in HSTL method is that the dynamic strategies are adopted to change the parameters, which maintains the proper balance effectively between global exploration power and local exploitation power. Finally, simulation experiments with 13 knapsack problems show that the HSTL algorithm can be an efficient alternative for solving 0-1 knapsack problems. PMID:24574905
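The three HS operators named above (harmony memory consideration, pitch adjustment, random mutation) can be shown on a tiny 0-1 knapsack. The sketch below omits the teaching-learning operator and the dynamic parameter control that distinguish HSTL; item values, weights, and rates are invented for illustration.

```python
import random

# Bare-bones binary harmony search for a 0-1 knapsack, showing the
# memory-consideration / pitch-adjustment / random-mutation split that the
# HSTL method builds on. Instance data and parameters are illustrative.

random.seed(7)
values  = [10, 13, 7, 8, 12, 9, 4]
weights = [ 5,  6, 3, 4,  6, 5, 2]
CAP = 15

def fitness(x):
    w = sum(wi for wi, xi in zip(weights, x) if xi)
    v = sum(vi for vi, xi in zip(values, x) if xi)
    return v if w <= CAP else 0        # infeasible harmonies score zero

def harmony_search(hms=10, iters=500, hmcr=0.9, par=0.3):
    n = len(values)
    memory = [[random.randint(0, 1) for _ in range(n)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(n):
            if random.random() < hmcr:           # harmony memory consideration
                bit = random.choice(memory)[d]
                if random.random() < par:        # pitch adjustment (bit flip)
                    bit = 1 - bit
            else:                                # random mutation
                bit = random.randint(0, 1)
            new.append(bit)
        worst = min(range(hms), key=lambda i: fitness(memory[i]))
        if fitness(new) > fitness(memory[worst]):
            memory[worst] = new                  # replace the worst harmony
    return max(memory, key=fitness)

best = harmony_search()
```

HSTL's contribution is to make `hmcr` and `par` dynamic and to add a teaching-learning improvisation step, balancing global exploration against local exploitation; the skeleton above is the fixed-parameter baseline it improves on.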
Automatic CT simulation optimization for radiation therapy: A general strategy.
Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa
2014-03-01
In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements rather than duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4.
The optimal tube potentials for patient sizes of 38, 43, 48, 53, and 58 cm were 120, 140, 140, 140, and 140 kVp, respectively, and the corresponding minimum CTDIvol for achieving the optimal image quality index 4.4 were 9.8, 32.2, 100.9, 241.4, and 274.1 mGy, respectively. For patients with lateral sizes of 43-58 cm, 120-kVp scan protocols yielded up to 165% greater radiation dose relative to 140-kVp protocols, and 140-kVp protocols always yielded a greater image quality index compared to the same dose-level 120-kVp protocols. The trace of target and organ dosimetry coverage and the γ passing rates of seven IMRT dose distribution pairs indicated the feasibility of the proposed image quality index for the prediction strategy. A general strategy to predict the optimal CT simulation protocols in a flexible and quantitative way was developed that takes into account patient size, treatment planning task, and radiation dose. The experimental study indicated that the optimal CT simulation protocol and the corresponding radiation dose varied significantly for different patient sizes, contouring accuracy, and radiation treatment planning tasks.
Zhang, Zili; Gao, Chao; Lu, Yuxiao; Liu, Yuxin; Liang, Mingxin
2016-01-01
Bi-objective Traveling Salesman Problem (bTSP) is an important field in the operations research, its solutions can be widely applied in the real world. Many researches of Multi-objective Ant Colony Optimization (MOACOs) have been proposed to solve bTSPs. However, most of MOACOs suffer premature convergence. This paper proposes an optimization strategy for MOACOs by optimizing the initialization of pheromone matrix with the prior knowledge of Physarum-inspired Mathematical Model (PMM). PMM can find the shortest route between two nodes based on the positive feedback mechanism. The optimized algorithms, named as iPM-MOACOs, can enhance the pheromone in the short paths and promote the search ability of ants. A series of experiments are conducted and experimental results show that the proposed strategy can achieve a better compromise solution than the original MOACOs for solving bTSPs. PMID:26751562
A quasi-dense matching approach and its calibration application with Internet photos.
Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei
2015-03-01
This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiviews with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.
An effective rumor-containing strategy
NASA Astrophysics Data System (ADS)
Pan, Cheng; Yang, Lu-Xing; Yang, Xiaofan; Wu, Yingbo; Tang, Yuan Yan
2018-06-01
False rumors can lead to huge economic losses or/and social instability. Hence, mitigating the impact of bogus rumors is of primary importance. This paper focuses on the problem of how to suppress a false rumor by use of the truth. Based on a set of rational hypotheses and a novel rumor-truth mixed spreading model, the effectiveness and cost of a rumor-containing strategy are quantified, respectively. On this basis, the original problem is modeled as a constrained optimization problem (the RC model), in which the independent variable and the objective function represent a rumor-containing strategy and the effectiveness of a rumor-containing strategy, respectively. The goal of the optimization problem is to find the most effective rumor-containing strategy subject to a limited rumor-containing budget. Some optimal rumor-containing strategies are given by solving their respective RC models. The influence of different factors on the highest cost effectiveness of a RC model is illuminated through computer experiments. The results obtained are instructive to develop effective rumor-containing strategies.
A simple approach to optimal control of invasive species.
Hastings, Alan; Hall, Richard J; Taylor, Caz M
2006-12-01
The problem of invasive species and their control is one of the most pressing applied issues in ecology today. We developed simple approaches based on linear programming for determining the optimal removal strategies of different stage or age classes for control of invasive species that are still in a density-independent phase of growth. We illustrate the application of this method to the specific example of invasive Spartina alterniflora in Willapa Bay, WA. For all such systems, linear programming shows in general that the optimal strategy in any time step is to prioritize removal of a single age or stage class. The optimal strategy adjusts which class is the focus of control through time and can be much more cost effective than prioritizing removal of the same stage class each year.
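The paper's qualitative result, that under density-independent (linear) growth it is optimal to concentrate each time step's removal effort on a single stage class, can be illustrated with a toy stage-structured model. The 3-stage projection matrix, initial population, and budget below are hypothetical, not the Spartina parameters from the study.

```python
import numpy as np

# Toy illustration of the result that, with linear growth, each year's
# removal budget is best spent entirely on one stage class. The 3-stage
# projection matrix and all numbers are hypothetical.

A = np.array([[0.0, 0.8, 4.0],    # fecundity of juveniles / adults
              [0.6, 0.0, 0.0],    # seedling survival into the juvenile class
              [0.0, 0.7, 0.9]])   # juvenile maturation, adult survival

def project(n, removals):
    """One year of growth after removing `removals[i]` individuals of class i."""
    n = np.maximum(n - removals, 0.0)
    return A @ n

def best_single_class(n, budget):
    """Spend the whole budget on the class that most cuts next year's total."""
    options = []
    for i in range(len(n)):
        r = np.zeros_like(n)
        r[i] = min(budget, n[i])
        options.append((project(n, r).sum(), i))
    return min(options)            # (resulting population, targeted class)

n0 = np.array([100.0, 50.0, 20.0])
total_after, target = best_single_class(n0, budget=20.0)
```

Because next year's total is linear in the removals, the budget-constrained problem is a linear program whose optimum sits at a vertex, i.e., all effort on one class; here the high-fecundity adult class is the best target, and concentrating on it beats splitting the budget evenly.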
Chassin, David P.; Behboodi, Sahand; Djilali, Ned
2018-01-28
This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to the North America Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.
Bell-Curve Based Evolutionary Strategies for Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
2000-01-01
Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity (Reeves 1997). However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPS) have gained a significant foothold (Glover 1998). One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general and ES or EP in particular see Back (1996) and Dasgupta and Michalesicz (1997). We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
Qi, Xuewei; Wu, Guoyuan; Boriboonsomsin, Kanok; ...
2016-01-01
Plug-in hybrid electric vehicles (PHEVs) show great promise in reducing transportation-related fossil fuel consumption and greenhouse gas emissions. Designing an efficient energy management system (EMS) for PHEVs to achieve better fuel economy has been an active research topic for decades. Most of the advanced systems rely either on a priori knowledge of future driving conditions to achieve the optimal but not real-time solution (e.g., using a dynamic programming strategy) or on only current driving situations to achieve a real-time but nonoptimal solution (e.g., rule-based strategy). This paper proposes a reinforcement learning–based real-time EMS for PHEVs to address the trade-off between real-time performance and optimal energy savings. The proposed model can optimize the power-split control in real time while learning the optimal decisions from historical driving cycles. Here, a case study on a real-world commute trip shows that about a 12% fuel saving can be achieved without considering charging opportunities; further, an 8% fuel saving can be achieved when charging opportunities are considered, compared with the standard binary mode control strategy.
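The reinforcement-learning power-split idea can be caricatured with tabular Q-learning. In the sketch below the agent chooses, at each step, whether the engine or the battery serves the power demand; the states, costs, and drive cycle are invented for illustration and are far simpler than a real PHEV energy management system.

```python
import random

# Toy Q-learning sketch of power-split control: at each step the agent
# serves the demand from the engine (action 0) or the battery (action 1),
# trading cheap electricity now against an empty battery later. States,
# costs, and the synthetic drive cycle are invented for illustration.

random.seed(3)
SOC_LEVELS = 5                      # discretized battery state of charge
FUEL_COST, ELEC_COST = 1.0, 0.2     # cost of serving one step of demand

def step(soc, action):
    """Return (next_soc, reward); reward is the negative energy cost."""
    if action == 1 and soc > 0:
        return soc - 1, -ELEC_COST
    return soc, -FUEL_COST           # empty battery falls back to fuel

Q = {(s, a): 0.0 for s in range(SOC_LEVELS) for a in (0, 1)}
alpha, gamma, eps = 0.2, 0.9, 0.2

for episode in range(500):
    soc = SOC_LEVELS - 1             # each synthetic trip starts fully charged
    for _ in range(10):
        if random.random() < eps:    # epsilon-greedy exploration
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda k: Q[(soc, k)])
        nxt, r = step(soc, a)
        best_next = max(Q[(nxt, 0)], Q[(nxt, 1)])
        Q[(soc, a)] += alpha * (r + gamma * best_next - Q[(soc, a)])
        soc = nxt

policy = {s: max((0, 1), key=lambda k: Q[(s, k)]) for s in range(SOC_LEVELS)}
```

With these costs the learned policy drains the battery whenever charge remains; the appeal of the approach in the paper is that the same update rule keeps learning online from historical cycles, needing neither a full trip forecast (as dynamic programming does) nor hand-written rules.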
DOE Office of Scientific and Technical Information (OSTI.GOV)
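As a toy illustration of the reinforcement-learning idea behind such an EMS (not the paper's actual model), the sketch below runs tabular Q-learning on a crude power-split problem where states are discretized battery SOC levels and the actions pick the power source; the cost model and all parameters are invented for illustration:

```python
import random

ACTIONS = ("engine", "battery")

def step(soc, action, demand=1.0):
    """Crude one-step power-split model: electric drive is cheaper per unit
    of demand but drains the battery; engine drive burns fuel, SOC is held."""
    if action == "battery" and soc > 0:
        return soc - 1, 0.2 * demand
    return soc, 1.0 * demand

def q_learning(episodes=3000, horizon=8, soc0=5,
               alpha=0.2, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning over (SOC, time, action), minimizing fuel cost."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        soc = soc0
        for t in range(horizon):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)          # explore
            else:                                # exploit the cheapest action
                a = min(ACTIONS, key=lambda x: Q.get((soc, t, x), 0.0))
            nxt, cost = step(soc, a)
            best_next = min(Q.get((nxt, t + 1, b), 0.0) for b in ACTIONS)
            q = Q.get((soc, t, a), 0.0)
            Q[(soc, t, a)] = q + alpha * (cost + gamma * best_next - q)
            soc = nxt
    return Q

Q = q_learning()
greedy = min(ACTIONS, key=lambda a: Q.get((5, 0, a), 0.0))
```

After training, the greedy policy at a charged SOC prefers electric drive, mirroring how the learned controller exploits historical cycles rather than requiring the full future drive profile.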
Cloud computing task scheduling strategy based on differential evolution and ant colony optimization
NASA Astrophysics Data System (ADS)
Ge, Junwei; Cai, Yu; Fang, Yiqiu
2018-05-01
This paper proposes DEACO, a task scheduling strategy based on the combination of Differential Evolution (DE) and Ant Colony Optimization (ACO). Addressing the problem that cloud computing task scheduling is usually driven by a single optimization objective, the strategy jointly considers the shortest task completion time, cost, and load balancing. DEACO uses the solution of the DE to initialize the initial pheromone of the ACO, which reduces the time spent accumulating pheromone in the early stage of the ACO, and improves the pheromone updating rule through a load factor. The proposed algorithm is simulated on CloudSim and compared with min-min and ACO. The experimental results show that DEACO is superior in terms of time, cost, and load.
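A minimal sketch of the DE-to-ACO hand-off that DEACO describes, using a toy makespan objective; the encoding, the crude DE variant, and all parameters are illustrative assumptions, not the paper's:

```python
import random

def makespan(assignment, vm_speed, task_len):
    """Completion time of the most loaded VM for a task->VM assignment."""
    load = [0.0] * len(vm_speed)
    for t, vm in enumerate(assignment):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def de_stage(num_tasks, num_vms, fitness, pop=20, gens=60, F=0.5, seed=0):
    """Crude DE over real vectors, decoded to integer task->VM assignments."""
    rng = random.Random(seed)

    def decode(vec):
        return [int(v) % num_vms for v in vec]

    P = [[rng.uniform(0, num_vms) for _ in range(num_tasks)]
         for _ in range(pop)]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample(range(pop), 3)
            trial = [P[a][j] + F * (P[b][j] - P[c][j])
                     for j in range(num_tasks)]
            if fitness(decode(trial)) < fitness(decode(P[i])):
                P[i] = trial
    return decode(min(P, key=lambda v: fitness(decode(v))))

def init_pheromone(assignment, num_tasks, num_vms, base=1.0, bonus=4.0):
    """Seed the ACO pheromone matrix, boosting task->VM edges used by DE."""
    tau = [[base] * num_vms for _ in range(num_tasks)]
    for t, vm in enumerate(assignment):
        tau[t][vm] += bonus
    return tau

vm_speed, task_len = [1.0, 2.0, 4.0], [4.0] * 6
seed_assign = de_stage(6, 3, lambda a: makespan(a, vm_speed, task_len))
tau = init_pheromone(seed_assign, 6, 3)
```

The seeded matrix gives ants a head start along the DE solution's edges, which is the mechanism the abstract credits with shortening ACO's early pheromone-collection phase.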
Research on the optimization strategy of web search engine based on data mining
NASA Astrophysics Data System (ADS)
Chen, Ronghua
2018-04-01
With the wide application of search engines, websites have become an important way for people to obtain information. However, website information is growing explosively, making it very difficult for people to find the information they need, and existing search engines cannot fully meet this need; there is therefore an urgent demand for personalized website information services, and data mining technology offers a breakthrough for this new challenge. In order to improve the accuracy with which people find information on websites, a website search engine optimization strategy based on data mining is proposed and verified by a search engine optimization experiment. The results show that the proposed strategy improves the accuracy with which people find information and reduces the time needed to find it. It has important practical value.
Did recent world record marathon runners employ optimal pacing strategies?
Angus, Simon D
2014-01-01
We apply statistical analysis of high frequency (1 km) split data for the most recent two world-record marathon runs: Run 1 (2:03:59, 28 September 2008) and Run 2 (2:03:38, 25 September 2011). Based on studies in the endurance cycling literature, we develop two principles to approximate 'optimal' pacing in the field marathon. By utilising GPS and weather data, we test, and then de-trend, for each athlete's field response to gradient and headwind on course, recovering standardised proxies for power-based pacing traces. The resultant traces were analysed to ascertain if either runner followed optimal pacing principles; and characterise any deviations from optimality. Whereas gradient was insignificant, headwind was a significant factor in running speed variability for both runners, with Runner 2 targeting the (optimal) parallel variation principle, whilst Runner 1 did not. After adjusting for these responses, neither runner followed the (optimal) 'even' power pacing principle, with Runner 2's macro-pacing strategy fitting a sinusoidal oscillator with exponentially expanding envelope whilst Runner 1 followed a U-shaped, quadratic form. The study suggests that: (a) better pacing strategy could provide elite marathon runners with an economical pathway to significant performance improvements at world-record level; and (b) the data and analysis herein is consistent with a complex-adaptive model of power regulation.
Besmer, Michael D.; Hammes, Frederik; Sigrist, Jürg A.; Ort, Christoph
2017-01-01
Monitoring of microbial drinking water quality is a key component for ensuring safety and understanding risk, but conventional monitoring strategies are typically based on low sampling frequencies (e.g., quarterly or monthly). This is of concern because many drinking water sources, such as karstic springs are often subject to changes in bacterial concentrations on much shorter time scales (e.g., hours to days), for example after precipitation events. Microbial contamination events are crucial from a risk assessment perspective and should therefore be targeted by monitoring strategies to establish both the frequency of their occurrence and the magnitude of bacterial peak concentrations. In this study we used monitoring data from two specific karstic springs. We assessed the performance of conventional monitoring based on historical records and tested a number of alternative strategies based on a high-resolution data set of bacterial concentrations in spring water collected with online flow cytometry (FCM). We quantified the effect of increasing sampling frequency and found that for the specific case studied, at least bi-weekly sampling would be needed to detect precipitation events with a probability of >90%. We then proposed an optimized monitoring strategy with three targeted samples per event, triggered by precipitation measurements. This approach is more effective and efficient than simply increasing overall sampling frequency. It would enable the water utility to (1) analyze any relevant event and (2) limit median underestimation of peak concentrations to approximately 10%. We conclude with a generalized perspective on sampling optimization and argue that the assessment of short-term dynamics causing microbial peak loads initially requires increased sampling/analysis efforts, but can be optimized subsequently to account for limited resources. 
This offers water utilities and public health authorities systematic ways to evaluate and optimize their current monitoring strategies. PMID:29213255
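A sketch of the event-triggered sampling logic described above, assuming a simple hourly precipitation series and three fixed sample offsets per event; the threshold and offsets are illustrative, not the study's calibrated values:

```python
def triggered_samples(precip_mm, threshold=5.0, offsets=(2, 6, 12)):
    """Schedule three targeted samples (at fixed hour offsets) after every
    precipitation event exceeding a trigger threshold; overlapping events
    are de-duplicated."""
    schedule = []
    for hour, mm in enumerate(precip_mm):
        if mm >= threshold:
            schedule.extend(hour + o for o in offsets)
    return sorted(set(schedule))
```

Compared with simply raising the fixed sampling frequency, this concentrates the analysis effort on the hours when peak bacterial concentrations are plausible, which is the efficiency argument made in the abstract.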
NASA Astrophysics Data System (ADS)
Biglar, Mojtaba; Mirdamadi, Hamid Reza; Danesh, Mohammad
2014-02-01
In this study, the active vibration control and configurational optimization of a cylindrical shell are analyzed by using piezoelectric transducers. The piezoelectric patches are attached to the surface of the cylindrical shell. The Rayleigh-Ritz method is used to derive a dynamic model of the cylindrical shell and the piezoelectric sensors and actuators based on the Donnell-Mushtari shell theory. The major goal of this study is to find the optimal locations and orientations of the piezoelectric sensors and actuators on the cylindrical shell. The optimization procedure is designed based on the desired controllability and observability of each contributing and undesired mode. Further, in order to limit spillover effects, the residual modes are taken into consideration. The optimization variables are the positions and orientations of the piezoelectric patches. A genetic algorithm is utilized to evaluate the optimal configurations. In this article, to improve the maximum power and capacity of the actuators for amplitude attenuation under a negative velocity feedback strategy, we have proposed a new control strategy, called the "Saturated Negative Velocity Feedback Rule (SNVF)". The numerical results show that the optimization procedure is effective for vibration reduction; specifically, by locating actuators and sensors in their optimal locations and orientations, the vibrations of the cylindrical shell are suppressed more quickly.
Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming
2017-02-01
The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical, and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead of parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate communication redundancy. Then, we utilize shared memory to reduce the memory copy overhead of intra-node communication. Finally, we optimize the communication scheduling using neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
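A minimal sketch of the message-aggregation idea: all boundary updates destined for the same neighbor rank are packed into one message, and duplicate updates to the same site are eliminated. The event representation is an illustrative assumption, not SPPARKS's actual data layout:

```python
from collections import defaultdict

def aggregate_messages(events):
    """Combine all boundary events destined for the same neighbor rank into
    a single message buffer; for duplicate updates to the same site, only
    the latest state survives."""
    buffers = defaultdict(dict)
    for dest_rank, site, state in events:
        buffers[dest_rank][site] = state   # later updates overwrite earlier
    return {rank: sorted(updates.items())
            for rank, updates in buffers.items()}
```

Sending one aggregated buffer per neighbor replaces many small sends, which is how the algorithm cuts both the message count and the redundant traffic the abstract mentions.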
Strategies for global optimization in photonics design.
Vukovic, Ana; Sewell, Phillip; Benson, Trevor M
2010-10-01
This paper reports on two important issues that arise in the context of the global optimization of photonic components where large problem spaces must be investigated. The first is the implementation of a fast simulation method and associated matrix solver for assessing particular designs and the second, the strategies that a designer can adopt to control the size of the problem design space to reduce runtimes without compromising the convergence of the global optimization tool. For this study an analytical simulation method based on Mie scattering and a fast matrix solver exploiting the fast multipole method are combined with genetic algorithms (GAs). The impact of the approximations of the simulation method on the accuracy and runtime of individual design assessments and the consequent effects on the GA are also examined. An investigation of optimization strategies for controlling the design space size is conducted on two illustrative examples, namely, 60° and 90° waveguide bends based on photonic microstructures, and their effectiveness is analyzed in terms of a GA's ability to converge to the best solution within an acceptable timeframe. Finally, the paper describes some particular optimized solutions found in the course of this work.
Evolving effective behaviours to interact with tag-based populations
NASA Astrophysics Data System (ADS)
Yucel, Osman; Crawford, Chad; Sen, Sandip
2015-07-01
Tags and other characteristics, externally perceptible features that are consistent among groups of animals or humans, can be used by others to determine appropriate response strategies in societies. This usage of tags can be extended to artificial environments, where agents can significantly reduce cognitive effort spent on appropriate strategy choice and behaviour selection by reusing strategies for interacting with new partners based on their tags. Strategy selection mechanisms developed based on this idea have successfully evolved stable cooperation in games such as the Prisoner's Dilemma, but rely upon payoff sharing and matching methods that limit the applicability of the tag framework. Our goal is to develop a general classification and behaviour selection approach based on the tag framework. We propose and evaluate alternative tag matching and adaptation schemes for a new, incoming individual to select appropriate behaviour against any population member of an existing, stable society. Our proposed approach allows agents to evolve both the optimal tag for the environment as well as appropriate strategies for existing agent groups. We show that these mechanisms will allow for robust selection of optimal strategies by agents entering a stable society and analyse the various environments where this approach is effective.
Cloning strategy for producing brush-forming protein-based polymers.
Henderson, Douglas B; Davis, Richey M; Ducker, William A; Van Cott, Kevin E
2005-01-01
Brush-forming polymers are being used in a variety of applications, and by using recombinant DNA technology, there exists the potential to produce protein-based polymers that incorporate unique structures and functions in these brush layers. Despite this potential, production of protein-based brush-forming polymers is not routinely performed. For the design and production of new protein-based polymers with optimal brush-forming properties, it would be desirable to have a cloning strategy that allows an iterative approach wherein the protein-based polymer product can be produced and evaluated, and then, if necessary, sequentially modified in a controlled manner to obtain optimal surface density and brush extension. In this work, we report on the development of a cloning strategy intended for the production of protein-based brush-forming polymers. This strategy is based on the assembly of modules of DNA that encode for blocks of protein-based polymers into a commercially available expression vector; there is no need for custom-modified vectors and no need for intermediate cloning vectors. Additionally, because the design of new protein-based biopolymers can be an iterative process, our method enables sequential modification of a protein-based polymer product. With at least 21 bacterial expression vectors and 11 yeast expression vectors compatible with this strategy, there are a number of options available for production of protein-based polymers. It is our intent that this strategy will aid in advancing the production of protein-based brush-forming polymers.
A study of optimization techniques in HDR brachytherapy for the prostate
NASA Astrophysics Data System (ADS)
Pokharel, Ghana Shyam
Several studies carried out thus far favor dose escalation to the prostate gland for better local control of the disease. But the optimal way of delivering higher doses of radiation therapy to the prostate without hurting neighboring critical structures is still debatable. In this study, we proposed that real-time high dose rate (HDR) brachytherapy with highly efficient and effective optimization could be an alternative means of precise delivery of such higher doses. This approach to delivery eliminates critical issues such as treatment setup uncertainties and target localization that arise in external beam radiation therapy. Likewise, dosimetry in HDR brachytherapy is not influenced by organ edema and potential source migration as in permanent interstitial implants. Moreover, recent reports of radiobiological parameters further strengthen the argument for using hypofractionated HDR brachytherapy for the management of prostate cancer. Firstly, we studied the essential features and requirements of a real-time HDR brachytherapy treatment planning system. Automated catheter reconstruction with fast editing tools, a fast yet accurate dose engine, and a robust and fast optimization and evaluation engine are some of the essential requirements for such procedures. Moreover, in most of the cases we performed, treatment plan optimization took a significant amount of time in the overall procedure, so making treatment plan optimization automatic or semi-automatic with sufficient speed and accuracy was the goal of the remaining part of the project. Secondly, we studied the role of the optimization function and constraints in the overall quality of the optimized plan. We have studied a gradient-based deterministic algorithm with dose volume histogram (DVH) based and more conventional variance-based objective functions for optimization. In this optimization strategy, the relative weight of a particular objective in the aggregate objective function signifies its importance with respect to the other objectives.
Based on our study, the DVH-based objective function performed better than the traditional variance-based objective function in creating a clinically acceptable plan when executed under identical conditions. Thirdly, we studied a multiobjective optimization strategy using both DVH- and variance-based objective functions. The optimization strategy was to create several Pareto optimal solutions by scanning the clinically relevant part of the Pareto front. This strategy was adopted to decouple optimization from decision making, so that the user could select the final solution from a pool of alternative solutions based on his or her clinical goals. The overall quality of the treatment plan improved using this approach compared to the traditional class-solution approach. In fact, the final optimized plan selected using the decision engine with the DVH-based objective was comparable to a typical clinical plan created by an experienced physicist. Next, we studied a hybrid technique comprising both stochastic and deterministic algorithms to optimize both dwell positions and dwell times. A simulated annealing algorithm was used to find an optimal catheter distribution, and the DVH-based algorithm was used to optimize the 3D dose distribution for a given catheter distribution. This unique treatment planning and optimization tool was capable of producing clinically acceptable, highly reproducible treatment plans in a clinically reasonable time. As this algorithm was able to create clinically acceptable plans within a clinically reasonable time automatically, it is appealing for real-time procedures. Finally, we studied the feasibility of multiobjective optimization using an evolutionary algorithm for real-time HDR brachytherapy for the prostate. The algorithm, with properly tuned algorithm-specific parameters, was able to create clinically acceptable plans within a clinically reasonable time. However, the algorithm was allowed to run for only a limited number of generations, fewer than is generally considered optimal for such algorithms; this was done to keep the time window suitable for real-time procedures. Therefore, further study under improved conditions is required to realize the full potential of the algorithm.
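A DVH-type objective of the kind discussed in this record can be sketched as a penalty on the volume fraction of a structure exceeding a dose limit; the exact functional form used in the dissertation is not given here, so the quadratic penalty below is an illustrative assumption:

```python
def dvh_penalty(dose, d_limit, v_limit):
    """DVH-type penalty for one structure: squared excess of the volume
    fraction receiving more than d_limit over the allowed fraction v_limit."""
    frac = sum(1 for d in dose if d > d_limit) / len(dose)
    return max(0.0, frac - v_limit) ** 2

def objective(structures):
    """Aggregate objective: weighted sum of per-structure DVH penalties; the
    relative weight expresses each objective's clinical importance."""
    return sum(w * dvh_penalty(dose, dl, vl)
               for dose, dl, vl, w in structures)
```

Unlike a variance-based objective, a penalty of this shape is zero whenever the clinical DVH constraint is already met, which matches the intuition for why it produced more clinically acceptable plans.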
NASA Astrophysics Data System (ADS)
Monica, Z.; Sękala, A.; Gwiazda, A.; Banaś, W.
2016-08-01
Nowadays a key issue is to reduce the energy consumption of road vehicles, and different strategies of energy optimization can be distinguished. The most popular, though not sophisticated, is so-called eco-driving, which emphasizes particular driver behavior. In a more sophisticated variant, driver behavior is supported by a control system that measures driving parameters and suggests proper actions to the driver. Another strategy concerns the application of different engineering solutions that aid the optimization of energy consumption; such systems take into consideration different parameters measured in real time and then take proper action according to procedures loaded into the control computer of the vehicle. The third strategy is based on optimization of the designed vehicle, taking into account especially the main sub-systems of the technical means. In this approach the optimal level of energy consumption of a vehicle is obtained through the synergetic results of individually optimizing its particular constructional sub-systems. Three main sub-systems can be distinguished: the structural one, the drive one, and the control one. In the case of the structural sub-system, optimization of the energy consumption level is related to optimizing the weight parameter and the aerodynamic parameter; the result is an optimized vehicle body. Regarding the drive sub-system, the optimization of the energy consumption level is related to fuel or power consumption, using previously elaborated physical models. Finally, the optimization of the control sub-system consists in determining optimal control parameters.
Multi-strategy coevolving aging particle optimization.
Iacca, Giovanni; Caraffini, Fabio; Neri, Ferrante
2014-02-01
We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithm logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation 2010 and 2013. To demonstrate the applicability of the approach, we also test MS-CAP to train a Feedforward Neural Network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform the state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer.
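A minimal sketch of the first MS-CAP stage as described (dimension-wise perturbation with a progressively shrinking radius plus a growing attraction toward the current best); the decay and attraction schedules, test objective, and swarm are illustrative assumptions:

```python
import random

def sphere(x):
    """Test objective: sum of squares, minimized at the origin."""
    return sum(v * v for v in x)

def aging_particle_stage(f, particles, steps=300, radius0=1.0, seed=0):
    """First MS-CAP stage (sketch): perturb every particle dimension-wise
    with a shrinking radius while pulling it toward the best solution found
    so far with an increasing force."""
    rng = random.Random(seed)
    best = min(particles, key=f)[:]
    fbest = f(best)
    for t in range(steps):
        radius = radius0 * (1.0 - t / steps)   # decaying perturbation radius
        pull = 0.5 * t / steps                 # increasing attraction force
        for p in particles:
            for d in range(len(p)):
                p[d] += rng.uniform(-radius, radius) + pull * (best[d] - p[d])
            fp = f(p)
            if fp < fbest:
                best, fbest = p[:], fp
    return best, fbest

swarm = [[3.0, 3.0], [-2.0, 4.0], [1.0, -3.0]]
best, fbest = aging_particle_stage(sphere, swarm)
```

The second MS-CAP phase (the ensemble of DE-style mutation strategies) would then recombine the surviving particles; it is omitted here for brevity.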
Optimal strategies for electric energy contract decision making
NASA Astrophysics Data System (ADS)
Song, Haili
2000-10-01
The power industry restructuring in various countries in recent years has created an environment where trading of electric energy is conducted in a market environment. In such an environment, electric power companies compete for the market share through spot and bilateral markets. Being profit driven, electric power companies need to make decisions on spot market bidding, contract evaluation, and risk management. New methods and software tools are required to meet these upcoming needs. In this research, bidding strategy and contract pricing are studied from a market participant's viewpoint; new methods are developed to guide a market participant in spot and bilateral market operation. A supplier's spot market bidding decision is studied. Stochastic optimization is formulated to calculate a supplier's optimal bids in a single time period. This decision making problem is also formulated as a Markov Decision Process. All the competitors are represented by their bidding parameters with corresponding probabilities. A systematic method is developed to calculate transition probabilities and rewards. The optimal strategy is calculated to maximize the expected reward over a planning horizon. Besides the spot market, a power producer can also trade in the bilateral markets. Bidding strategies in a bilateral market are studied with game theory techniques. Necessary and sufficient conditions of Nash Equilibrium (NE) bidding strategy are derived based on the generators' cost and the loads' willingness to pay. The study shows that in any NE, market efficiency is achieved. Furthermore, all Nash equilibria are revenue equivalent for the generators. The pricing of "Flexible" contracts, which allow delivery flexibility over a period of time with a fixed total amount of electricity to be delivered, is analyzed based on the no-arbitrage pricing principle. The proposed algorithm calculates the price based on the optimality condition of the stochastic optimization formulation. 
Simulation examples illustrate the tradeoffs between prices and scheduling flexibility. Spot bidding and contract pricing are not independent decision processes. The interaction between spot bidding and contract evaluation is demonstrated with a game-theoretic equilibrium model and market simulation results. It leads to the conclusion that a market participant's contract decision making needs to be further investigated as an integrated optimization formulation.
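The Markov Decision Process formulation described above can be solved by standard value iteration once the transition probabilities and rewards are tabulated; the sketch below uses an invented two-state toy market, not the dissertation's bidding model:

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-9):
    """Generic value iteration: P[s][a] is a list of (prob, next_state)
    pairs, R[s][a] the expected one-step reward (e.g., spot-market profit).
    Returns the optimal expected discounted reward from each state."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy two-state market where transitions keep the state fixed, so the
# optimal value is simply max_a R[s][a] / (1 - gamma) in each state.
states, actions = ("low", "high"), ("bid_low", "bid_high")
P = {s: {a: [(1.0, s)] for a in actions} for s in states}
R = {"low": {"bid_low": 1.0, "bid_high": 0.0},
     "high": {"bid_low": 1.0, "bid_high": 2.0}}
V = value_iteration(states, actions, P, R)
```

In the dissertation's setting, the transition probabilities would come from the competitors' estimated bidding parameters, and the optimal bid in each state is the action achieving the maximum.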
Wang, Mingyu
2006-04-01
An innovative management strategy is proposed for optimized and integrated environmental management of regional or national groundwater contamination prevention and restoration, allied with considerations of sustainable development. This management strategy accounts for the availability of limited resources, human health and ecological risks from groundwater contamination, costs of groundwater protection measures, beneficial uses and values from groundwater protection, and sustainable development. Six different categories of costs are identified with regard to groundwater prevention and restoration. In addition, different environmental impacts from groundwater contamination, including human health and ecological risks, are individually taken into account. System optimization principles are implemented to support decision making on the optimal allocation of available resources or budgets among existing contaminated sites and projected contamination sites for maximal risk reduction. Established management constraints, such as budget limitations under the different categories of costs, are satisfied at the optimal solution. A stepwise optimization process is proposed in which the first step is to optimally select a limited number of sites where remediation or prevention measures will be taken, from all the existing contaminated and projected contamination sites, based on a total regionally or nationally available budget over a certain time frame such as 10 years. Subsequent optimization steps then determine year-by-year optimal distributions of the available yearly budgets for the selected sites. A hypothetical case study is presented to demonstrate a practical implementation of the management strategy. Several issues pertaining to groundwater contamination exposure and risk assessments and remediation cost evaluations are briefly discussed to aid understanding of how to implement the management strategy.
Optimization of cooling strategy and seeding by FBRM analysis of batch crystallization
NASA Astrophysics Data System (ADS)
Zhang, Dejiang; Liu, Lande; Xu, Shijie; Du, Shichao; Dong, Weibing; Gong, Junbo
2018-03-01
A method is presented for optimizing the cooling strategy and seed loading simultaneously. Focused beam reflectance measurement (FBRM) was used to determine an approximately optimal cooling profile. Using these results in conjunction with a constant growth rate assumption, a modified Mullin-Nyvlt trajectory could be calculated. This trajectory can suppress secondary nucleation and has the potential to control the product's polymorph distribution. Compared with linear and two-step cooling, the modified Mullin-Nyvlt trajectory gives a larger size distribution and a better morphology. Based on the calculated results, an optimized seed loading policy was also developed. This policy could be useful for guiding the batch crystallization process.
NASA Astrophysics Data System (ADS)
Meng, Fei; Tao, Gang; Zhang, Tao; Hu, Yihuai; Geng, Peng
2015-08-01
Shift quality is a crucial factor in all parts of the automobile industry. To ensure an optimal gear shifting strategy with the best fuel economy for a stepped automatic transmission, the controller should be designed to meet the challenge of lacking a feedback sensor to measure the relevant variables. This paper focuses on a new kind of automatic transmission that uses a proportional solenoid valve to control the clutch pressure; a control strategy based on the clutch speed difference is designed for shift control during the inertia phase. First, the mechanical system is described and the system dynamic model is built. Second, the control strategy is designed based on the characterization analysis of models derived from the dynamics of the driveline and the electro-hydraulic actuator. Then, the controller uses conventional Proportional-Integral-Derivative (PID) control theory, and a robust two-degree-of-freedom controller is also designed to determine the optimal control parameters and further improve the system performance. Finally, the designed control strategy with the different controllers is implemented in a simulation model. The comparison results show that the clutch speed difference can track the desired trajectory well and improve the shift quality effectively.
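The conventional PID baseline mentioned above can be sketched on a toy first-order clutch model; the plant, gains, and time step are illustrative assumptions, not the paper's transmission dynamics or its two-degree-of-freedom design:

```python
class PID:
    """Textbook PID controller (a sketch of the conventional baseline)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a toy first-order clutch speed-difference model to zero during the
# inertia phase (plant: ds/dt = u - 0.1*s, Euler-integrated at dt = 0.01 s).
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
speed_diff = 100.0
for _ in range(2000):
    u = pid.update(0.0, speed_diff)
    speed_diff += (u - 0.1 * speed_diff) * 0.01
```

In the paper's setting the setpoint would be the desired speed-difference trajectory rather than zero, and the two-degree-of-freedom controller would shape the reference and feedback paths separately.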
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-01-01
In order to utilize the distributed characteristic of sensors, distributed machine learning has become the mainstream approach, but the different computing capability of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a reasonable parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose the Dynamic Finite Fault Tolerance (DFFT). Based on the DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named Dynamic Synchronous Parallel Strategy (DSP), which uses the performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids the situation that the model training is disturbed by any tasks unrelated to the sensors. PMID:28934163
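A crude sketch of the kind of staleness decision DSP makes: a worker may proceed with stale parameters within a bound, but the bound collapses when monitoring data show it has run far ahead of the slowest worker. The rule and parameters here are invented stand-ins for DSP's performance monitoring model:

```python
def may_proceed(worker_clock, clocks, base_staleness=2, slow_factor=1.5):
    """Return True if this worker may pull parameters and continue without
    waiting for stragglers; the allowed staleness bound drops to zero when
    the worker is far ahead of the slowest one."""
    slowest = min(clocks.values())
    bound = base_staleness
    if worker_clock - slowest > slow_factor * base_staleness:
        bound = 0  # force synchronization with the stragglers
    return worker_clock - slowest <= bound
```

Bounding staleness dynamically is what lets fast sensors keep computing (unlike fully synchronous training) without letting the model drift as far as fully asynchronous updates would allow.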
NASA Astrophysics Data System (ADS)
Chen, B.; Harp, D. R.; Lin, Y.; Keating, E. H.; Pawar, R.
2017-12-01
Monitoring is a crucial aspect of geologic carbon sequestration (GCS) risk management. It has gained importance as a means to ensure CO2 is safely and permanently stored underground throughout the lifecycle of a GCS project. Three issues are often involved in a monitoring project: (i) where is the optimal location to place the monitoring well(s), (ii) what type of data (pressure, rate and/or CO2 concentration) should be measured, and (iii) what is the optimal frequency at which to collect the data. In order to address these important issues, a filtering-based data assimilation procedure is developed to perform the monitoring optimization. The optimal monitoring strategy is selected based on the uncertainty reduction of the objective of interest (e.g., cumulative CO2 leak) across all potential monitoring strategies. To reduce the computational cost of the filtering-based data assimilation process, two machine-learning algorithms, Support Vector Regression (SVR) and Multivariate Adaptive Regression Splines (MARS), are used to develop computationally efficient reduced-order models (ROMs) from full numerical simulations of CO2 and brine flow. The proposed framework for GCS monitoring optimization is demonstrated with two examples: a simple 3D synthetic case and a real field case, the Rock Spring Uplift carbon storage site in southwestern Wyoming.
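The core idea of ranking monitoring choices by uncertainty reduction can be shown with a toy ensemble. The sensor names and numbers below are invented, and a simple linear regression stands in for the filtering-based assimilation and the SVR/MARS ROMs:

```python
# Toy illustration (not the paper's workflow) of picking the monitoring
# location that most reduces uncertainty in a quantity of interest: over an
# ensemble of simulated scenarios, choose the sensor whose reading best
# explains the spread in cumulative CO2 leakage.

def residual_variance(xs, ys):
    """Variance of ys left unexplained by a least-squares fit on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    resid = [y - (my + slope * (x - mx)) for x, y in zip(xs, ys)]
    return sum(r * r for r in resid) / n

def best_sensor(sensor_readings, leakage):
    """sensor_readings: {name: per-scenario readings}; minimize residual."""
    return min(sensor_readings,
               key=lambda s: residual_variance(sensor_readings[s], leakage))

leak = [1.0, 2.0, 3.0, 4.0]                    # cumulative leak per scenario
readings = {
    "pressure_wellA": [0.9, 2.1, 2.9, 4.2],    # strongly correlated with leak
    "co2_conc_wellB": [2.0, 1.0, 2.0, 1.0],    # uninformative
}
chosen = best_sensor(readings, leak)
```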
NASA Astrophysics Data System (ADS)
Clemens, Joshua William
Game theory has applications across multiple fields, spanning from economic strategy to the optimal control of an aircraft and missile on an intercept trajectory. The idea of game theory is fascinating in that we can mathematically model real-world scenarios and determine optimal decision making. It may not always be easy to mathematically model certain real-world scenarios; nonetheless, game theory gives us an appreciation for the complexity involved in decision making. This complexity is especially apparent when the players involved have access to different information upon which to base their decision making (a nonclassical information pattern). Here we focus on the class of adversarial two-player games (sometimes referred to as pursuit-evasion games) with a nonclassical information pattern. We present a two-sided (simultaneous) optimization solution method for the two-player linear quadratic Gaussian (LQG) multistage game. This direct solution method allows for further interpretation of each player's decision making (strategy) compared to previously used formal solution methods. In addition to the optimal control strategies, we present a saddle point proof and derive an expression for the optimal performance index value. We provide some numerical results in order to further interpret the optimal control strategies and to highlight the real-world application of this game-theoretic optimal solution.
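In generic notation (not necessarily the author's), the zero-sum LQG multistage game and the saddle-point property referred to above can be stated as:

```latex
% Zero-sum LQG multistage game (generic symbols, a sketch under standard
% assumptions): the pursuer picks u to minimize J, the evader picks v to
% maximize it, subject to linear Gaussian dynamics.
J(u,v) \;=\; \mathbb{E}\!\left[\, x_N^\top Q_N x_N
      + \sum_{k=0}^{N-1}\big( x_k^\top Q\,x_k
      + u_k^\top R_u u_k - v_k^\top R_v v_k \big) \right],
\qquad x_{k+1} = A x_k + B u_k + C v_k + w_k .
% Saddle-point property of the optimal pair (u^*, v^*):
J(u^*, v) \;\le\; J(u^*, v^*) \;\le\; J(u, v^*)
\quad \text{for all admissible } u, v .
```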
Educational Tool for Optimal Controller Tuning Using Evolutionary Strategies
ERIC Educational Resources Information Center
Carmona Morales, D.; Jimenez-Hornero, J. E.; Vazquez, F.; Morilla, F.
2012-01-01
In this paper, an optimal tuning tool is presented for control structures based on multivariable proportional-integral-derivative (PID) control, using genetic algorithms as an alternative to traditional optimization algorithms. From an educational point of view, this tool provides students with the necessary means to consolidate their knowledge on…
NASA Astrophysics Data System (ADS)
Chen, CHAI; Yiik Diew, WONG
2017-02-01
This study provides an integrated strategy, encompassing microscopic simulation, safety assessment, and multi-attribute decision-making, to optimize traffic performance at the downstream merging area of signalized intersections. A Fuzzy Cellular Automata (FCA) model is developed to replicate microscopic movement and merging behavior. Based on simulation experiments, the proposed FCA approach is able to provide capacity and safety evaluation of different traffic scenarios. The results are then evaluated through data envelopment analysis (DEA) and the analytic hierarchy process (AHP). Optimized geometric layouts and control strategies are then suggested for various traffic conditions. An optimal lane-drop distance, dependent on traffic volume and speed limit, can thus be established at the downstream merging area.
ERIC Educational Resources Information Center
Kim, Jieun; Ryu, Hokyoung; Katuk, Norliza; Wang, Ruili; Choi, Gyunghyun
2014-01-01
The present study aims to show if a skill-challenge balancing (SCB) instruction strategy can assist learners to motivationally engage in computer-based learning. Csikszentmihalyi's flow theory (self-control, curiosity, focus of attention, and intrinsic interest) was applied to an account of the optimal learning experience in SCB-based learning…
Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R
2016-04-15
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.
A novel optimal coordinated control strategy for the updated robot system for single port surgery.
Bai, Weibang; Cao, Qixin; Leng, Chuntao; Cao, Yang; Fujie, Masakatsu G; Pan, Tiewen
2017-09-01
Research into robotic systems for single port surgery (SPS) has become widespread around the world in recent years. A new robot arm system for SPS was developed, but its positioning platform and other hardware components were not efficient. Special features of the developed surgical robot system make safe and efficient teleoperation difficult. A robot arm is integrated as a new positioning platform, and remote center of motion is realized by a new method using active motion control. A new mapping strategy based on kinematics computation is developed, together with a novel optimal coordinated control strategy based on real-time approach to a defined anthropomorphic reference configuration, modeled on the natural rest state of human arms and, in particular, on the habitual guard posture of boxers. The hardware components, control architecture, control system, and mapping strategy of the robotic system have been updated. A novel optimal coordinated control strategy is proposed and tested. The new robot system is more dexterous, intelligent, convenient, and safer for preoperative positioning and intraoperative adjustment. The mapping strategy achieves good following and representation for the slave manipulator arms, and the proposed control strategy enables them to complete tasks with higher maneuverability, a lower possibility of self-interference, and freedom from singularities during teleoperation. Copyright © 2017 John Wiley & Sons, Ltd.
Jovanovic, Sasa; Savic, Slobodan; Jovicic, Nebojsa; Boskovic, Goran; Djordjevic, Zorica
2016-09-01
Multi-criteria decision making (MCDM) is a relatively new tool for decision makers who deal with numerous and often contradictory factors during their decision making process. This paper presents a procedure to choose the optimal municipal solid waste (MSW) management system for the area of the city of Kragujevac (Republic of Serbia) based on MCDM methods. Two methods of multiple attribute decision making, SAW (simple additive weighting) and TOPSIS (technique for order preference by similarity to ideal solution), were used to compare the proposed waste management strategies (WMS). Each of the created strategies was simulated using the software package IWM2, and total values for eight chosen parameters were calculated for all the strategies. The contribution of each of the six waste treatment options was evaluated. The SAW analysis was used to obtain the sum characteristics for all the waste management strategies, which were ranked accordingly. The TOPSIS method was used to calculate the relative closeness to the ideal solution for all the alternatives. The proposed strategies were then ranked in the form of tables and diagrams obtained from both MCDM methods. As shown in this paper, the results were in good agreement, which additionally confirmed and facilitated the choice of the optimal MSW management strategy. © The Author(s) 2016.
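The TOPSIS step can be sketched in a few lines. The three candidate strategies and two criteria below are made up for illustration, not the eight IWM2 parameters from the paper:

```python
# Minimal TOPSIS sketch: rank alternatives by their relative closeness to
# the ideal solution. Scores, weights, and criteria here are invented.

def topsis(matrix, weights, benefit):
    """matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if larger is better for criterion j."""
    ncrit = len(weights)
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(ncrit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncrit)]
         for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]

    def dist(row, ref):
        return sum((a - b) ** 2 for a, b in zip(row, ref)) ** 0.5

    return [dist(r, worst) / (dist(r, worst) + dist(r, ideal)) for r in v]

# Three hypothetical waste-management strategies scored on cost (lower is
# better) and recycling rate (higher is better), equally weighted.
closeness = topsis([[100, 0.2], [80, 0.5], [120, 0.6]],
                   weights=[0.5, 0.5], benefit=[False, True])
```

The alternative with the largest closeness value ranks first; SAW would instead rank by the weighted sum of the normalized scores.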
Optimal H1N1 vaccination strategies based on self-interest versus group interest.
Shim, Eunha; Meyers, Lauren Ancel; Galvani, Alison P
2011-02-25
Influenza vaccination is vital for reducing H1N1 infection-mediated morbidity and mortality. To reduce transmission and achieve herd immunity during the initial 2009-2010 pandemic season, the US Centers for Disease Control and Prevention (CDC) recommended that initial priority for H1N1 vaccines be given to individuals under age 25, as these individuals are more likely to spread influenza than older adults. However, due to significant delays in vaccine delivery during the H1N1 influenza pandemic, a large fraction of the population was exposed to the H1N1 virus and thereby obtained immunity prior to the wide availability of vaccines. This exposure affects the spread of the disease and needs to be considered when prioritizing vaccine distribution. To determine optimal H1N1 vaccine distributions based on individual self-interest versus population interest, we constructed a game-theoretical age-structured model of influenza transmission and considered the impact of delayed vaccination. Our results indicate that if individuals decide to vaccinate according to self-interest, the resulting optimal vaccination strategy would prioritize adults of age 25 to 49, followed by either preschool-age children before the pandemic peak or older adults (age 50-64) at the pandemic peak. In contrast, the vaccine allocation strategy that is optimal for the population as a whole would prioritize individuals of ages 5 to 64 to curb a growing pandemic regardless of the timing of the vaccination program. Our results indicate that for a delayed vaccine distribution, the priorities that are optimal at a population level do not align with those that are optimal according to individual self-interest. Moreover, the discordance between the optimal vaccine distributions based on individual self-interest and those based on population interest is even more pronounced when vaccine availability is delayed.
To determine optimal vaccine allocation for pandemic influenza, public health agencies need to consider both the changes in infection risks among age groups and expected patterns of adherence.
Epidemic spreading on random surfer networks with optimal interaction radius
NASA Astrophysics Data System (ADS)
Feng, Yun; Ding, Li; Hu, Ping
2018-03-01
In this paper, the optimal control problem of epidemic spreading on random surfer heterogeneous networks is considered. An epidemic spreading model is established according to the classification of individuals' initial interaction radii. Then, a control strategy is proposed based on adjusting individuals' interaction radii. The global stability of the disease-free and endemic equilibria of the model is investigated. We prove that an optimal solution exists for the optimal control problem and present its explicit form. Numerical simulations are conducted to verify the correctness of the theoretical results. The optimal control strategy is shown to be effective in minimizing the density of infected individuals and the cost associated with adjusting interaction radii.
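The effect of shrinking interaction radii can be illustrated with a toy mean-field SIRS simulation, where a control u scales down the effective contact rate. All parameters are invented, and the paper's network structure is ignored:

```python
# Toy SIRS simulation (homogeneous mixing, made-up parameters, not the
# paper's heterogeneous-network model): a control u in [0, 1] that shrinks
# the effective contact rate stands in for reducing interaction radii.

def simulate_sirs(u, beta=0.6, gamma=0.2, xi=0.05, dt=0.1, steps=1000):
    """Forward-Euler SIRS; returns the peak infected fraction."""
    s, i, r = 0.99, 0.01, 0.0
    peak = 0.0
    for _ in range(steps):
        new_inf = (1 - u) * beta * s * i   # control damps new infections
        ds = -new_inf + xi * r             # recovered lose immunity (SIRS)
        di = new_inf - gamma * i
        dr = gamma * i - xi * r
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        peak = max(peak, i)
    return peak

peak_uncontrolled = simulate_sirs(u=0.0)
peak_controlled = simulate_sirs(u=0.5)     # halve the effective contact rate
```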
Complexity Science Applications to Dynamic Trajectory Management: Research Strategies
NASA Technical Reports Server (NTRS)
Sawhill, Bruce; Herriot, James; Holmes, Bruce J.; Alexandrov, Natalia
2009-01-01
The promise of the Next Generation Air Transportation System (NextGen) is strongly tied to the concept of trajectory-based operations in the national airspace system. Existing efforts to develop trajectory management concepts are largely focused on individual trajectories, optimized independently, then de-conflicted among each other, and individually re-optimized as possible. The benefits in capacity, fuel, and time are valuable, though they could perhaps be greater under alternative strategies. The concept of agent-based trajectories offers a strategy for automation of simultaneous multiple trajectory management. The anticipated result of the strategy would be dynamic management of multiple trajectories with interacting and interdependent outcomes that satisfy multiple, conflicting constraints. These constraints include the business case for operators, the capacity case for the Air Navigation Service Provider (ANSP), and the environmental case for noise and emissions. The benefits in capacity, fuel, and time might be improved over those possible under individual trajectory management approaches. The proposed approach relies on computational agent-based modeling (ABM), combinatorial mathematics, application of "traffic physics" concepts to the challenge, and modeling and simulation capabilities. The proposed strategy could support transforming air traffic control from managing individual aircraft behaviors to managing the systemic behavior of air traffic in the NAS. A system built on the approach could provide the ability to know in advance when regions of airspace approach being "full," that is, having no viable local solution space for optimizing trajectories.
Seamline Determination Based on PKGC Segmentation for Remote Sensing Image Mosaicking
Dong, Qiang; Liu, Jinghong
2017-01-01
This paper presents a novel method of seamline determination for remote sensing image mosaicking. A two-level optimization strategy is applied to determine the seamline. Object-level optimization is executed first: background regions (BRs) and obvious regions (ORs) are extracted based on the results of parametric kernel graph cuts (PKGC) segmentation, and the global cost map, which consists of color difference, a multi-scale morphological gradient (MSMG) constraint, and texture difference, is weighted by the BRs. Finally, the seamline is determined in the weighted cost map from the start point to the end point, with Dijkstra's shortest path algorithm adopted for pixel-level optimization to determine the positions of the seamline. Meanwhile, a new seamline optimization strategy is proposed for image mosaicking with multi-image overlapping regions. The experimental results show better performance than a conventional method based on mean-shift segmentation: seamlines based on the proposed method bypass the obvious objects and take less time to execute. This new method is efficient and superior for seamline determination in remote sensing image mosaicking. PMID:28749446
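The pixel-level step can be sketched as Dijkstra's algorithm over a toy cost map, where high-cost cells stand in for the obvious objects a seamline should bypass (4-connectivity and invented costs assumed; the paper's actual cost combines color, MSMG, and texture terms):

```python
# Sketch of pixel-level seamline search: Dijkstra's algorithm traces the
# minimum-cost path across a per-pixel cost map from a start to an end
# pixel. Grid, costs, and connectivity are illustrative assumptions.
import heapq

def seamline_cost(cost, start, end):
    """cost: 2D list of per-pixel costs; returns cheapest path cost."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]   # high costs model "obvious objects" to bypass
total = seamline_cost(grid, (0, 0), (0, 2))
```

Here the cheapest seam detours around the high-cost column rather than cutting straight across it.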
Robust optimization of supersonic ORC nozzle guide vanes
NASA Astrophysics Data System (ADS)
Bufi, Elio A.; Cinnella, Paola
2017-03-01
An efficient Robust Optimization (RO) strategy is developed for the design of 2D supersonic Organic Rankine Cycle turbine expanders. Dense gas effects are non-negligible for this application and are taken into account by describing the thermodynamics with the Peng-Robinson-Stryjek-Vera equation of state. The design methodology combines an Uncertainty Quantification (UQ) loop, based on a Bayesian kriging model of the system response to the uncertain parameters and used to approximate the statistics (mean and variance) of the uncertain system output; a CFD solver; and a multi-objective non-dominated sorting genetic algorithm (NSGA), also based on a kriging surrogate of the multi-objective fitness function, with an adaptive infill strategy for surrogate enrichment at each generation of the NSGA. The objective functions are the average and variance of the isentropic efficiency. The blade shape is parametrized by means of a Free Form Deformation (FFD) approach. The robust optimal blades are compared to the baseline design (based on the Method of Characteristics) and to a blade obtained by means of a deterministic CFD-based optimization.
Optimal GENCO bidding strategy
NASA Astrophysics Data System (ADS)
Gao, Feng
Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, and Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation, and this stochastic property makes them robust and adaptive enough to solve non-convex optimization problems. This research implements genetic algorithm (GA), evolutionary programming (EP), and particle swarm (PS) algorithms for economic dispatch with Combined Cycle units, and compares them with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple-period situation. The equilibrium condition using discrete-time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network, and an advantage of the proposed model for merchant transmission planning. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators in understanding market performance and making better decisions.
A traditional optimization model may not be enough to capture the distributed, large-scale, and complex energy market. This research compares the performance and search paths of different artificial-life techniques, such as the Genetic Algorithm (GA), Evolutionary Programming (EP), and Particle Swarm (PS), and looks for a proper method to emulate Generation Companies' (GENCOs') bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment. A profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research does not pay special attention to the piecewise staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming; the proposed algorithm is able to handle actual piecewise staircase energy offer curves. The proposed method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.
Optimal robust control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems, and the existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: an SOFC model with parameter uncertainty, a robust optimizer, and robust controllers. During model building, boundaries of the uncertain parameter are extracted using a Monte Carlo algorithm. To achieve maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio, and stack temperature. The results show the proposed robust optimal control method can maintain safe SOFC system operation with maximum efficiency under load and uncertainty variations.
Reentry trajectory optimization based on a multistage pseudospectral method.
Zhao, Jiang; Zhou, Rui; Jin, Xuelian
2014-01-01
Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory as the flight state transitions, and the full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight show the feasible application of the multistage pseudospectral method in reentry trajectory optimization.
Cloud computing task scheduling strategy based on improved differential evolution algorithm
NASA Astrophysics Data System (ADS)
Ge, Junwei; He, Qian; Fang, Yiqiu
2017-04-01
In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is derived from it. The improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to balance global and local search ability. A performance test was carried out on the CloudSim simulation platform, and the experimental results show that the improved differential evolution algorithm reduces task execution time and user cost, achieving near-optimal scheduling of cloud computing tasks.
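A minimal DE/rand/1/bin loop illustrates the underlying algorithm. Here the mutation factor and crossover rate are fixed, whereas the proposed improvement adapts selection and mutation per generation, and a toy sphere function stands in for the CloudSim scheduling fitness:

```python
# Minimal differential evolution (DE/rand/1/bin) sketch with fixed control
# parameters and a toy objective; the paper's algorithm additionally adapts
# its selection and mutation strategies per generation.
import random

def differential_evolution(f, dim, bounds, pop_size=20, gens=200,
                           F=0.8, CR=0.9):
    random.seed(0)                         # deterministic for illustration
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample(
                [p for j, p in enumerate(pop) if j != i], 3)
            jrand = random.randrange(dim)  # force at least one mutated gene
            trial = [min(hi, max(lo, a[k] + F * (b[k] - c[k])))
                     if (random.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            if f(trial) <= f(pop[i]):      # greedy selection
                pop[i] = trial
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)   # stand-in for a scheduling cost
best = differential_evolution(sphere, dim=3, bounds=(-5.0, 5.0))
```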
Optimizing Multiple QoS for Workflow Applications using PSO and Min-Max Strategy
NASA Astrophysics Data System (ADS)
Umar Ambursa, Faruku; Latip, Rohaya; Abdullah, Azizol; Subramaniam, Shamala
2017-08-01
Workflow scheduling under multiple QoS constraints is a complicated optimization problem. Metaheuristic techniques are excellent approaches for dealing with such problems, and many metaheuristic-based algorithms have been proposed that consider various economic and trustworthiness QoS dimensions. However, most of these approaches lead to high violation of user-defined QoS requirements in tight situations. Recently, a new Particle Swarm Optimization (PSO)-based QoS-aware workflow scheduling strategy (LAPSO) was proposed to improve performance in such situations. The LAPSO algorithm is designed around the synergy between a violation handling method and a hybrid of PSO and a min-max heuristic. Simulation results showed a great potential of the LAPSO algorithm for handling user requirements even in tight situations. In this paper, the performance of the algorithm is analysed further. Specifically, the impact of the min-max strategy on the performance of the algorithm is revealed, by removing the violation handling from the operation of the algorithm. The results show that LAPSO based only on the min-max method still outperforms the benchmark, although LAPSO with violation handling performs significantly better still.
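As one common reading of a min-max scheduling heuristic (an assumption, with toy runtimes, not the LAPSO implementation), a Max-Min rule repeatedly commits the task whose best possible completion time is worst:

```python
# Generic Max-Min scheduling sketch (hypothetical simplification of a
# min-max heuristic, with invented runtimes): at each step, find every
# unscheduled task's best resource, then commit the task whose best
# (minimum) completion time is the largest.

def max_min_schedule(runtimes):
    """runtimes[t][r]: execution time of task t on resource r."""
    ready = [0.0] * len(runtimes[0])          # per-resource ready times
    unscheduled = set(range(len(runtimes)))
    assignment = {}
    while unscheduled:
        best = {t: min(range(len(ready)),
                       key=lambda r: ready[r] + runtimes[t][r])
                for t in unscheduled}
        # task whose minimum completion time is the maximum
        t = max(unscheduled,
                key=lambda t: ready[best[t]] + runtimes[t][best[t]])
        r = best[t]
        ready[r] += runtimes[t][r]
        assignment[t] = r
        unscheduled.remove(t)
    return assignment, max(ready)

times = [[4, 8], [3, 9], [10, 5]]             # 3 tasks x 2 resources
assignment, makespan = max_min_schedule(times)
```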
NASA Astrophysics Data System (ADS)
Yuan, Yongliang; Song, Xueguan; Sun, Wei; Wang, Xiaobang
2018-05-01
The dynamic performance of a belt drive system depends on many factors, such as efficiency, vibration, and the choice of design parameters. Conventional design considers only the basic performance of the belt drive system, ignoring its overall performance. To address these challenges, studying vibration characteristics together with optimization strategies is a feasible way forward. This paper proposes a new optimization strategy based on multidisciplinary design optimization (MDO) and takes a belt drive design optimization as a case study. The MDO of the belt drive system is established and the corresponding sub-systems are analyzed. The multidisciplinary optimization is performed using an improved genetic algorithm. Based on the optimal results obtained from the MDO, a three-dimensional (3D) model of the belt drive system is built for dynamics simulation by virtual prototyping. Comparison of the results across different velocities and loads shows that the MDO method can effectively reduce the transverse vibration amplitude. The laws of the vibration displacement and vibration frequency, and the influence of velocity on the transverse vibrations, have been obtained, as has the kinematic principle of the belt drive. The results show that the MDO method is of great help in obtaining the optimal structural parameters, and the belt drive design case indicates that the proposed method can also be used to solve other engineering optimization problems efficiently.
Speed and convergence properties of gradient algorithms for optimization of IMRT.
Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe
2004-05-01
Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions that produced IMRT plans satisfying the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, results for a prostate case are also presented.
For both dose-volume- and EUD-based objective functions, Newton's method far outperforms the other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess, but with different objective function parameters, the solution frequently gets trapped in local minima. We found that an initial intensity distribution obtained from IMRT optimization using objective function parameters that favor a specific anatomic structure leads to a local minimum corresponding to that structure. Our results indicate that, of the gradient algorithms tested, Newton's method is the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process; the degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity-modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting from the results of a previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
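The speed gap between Newton's method and steepest descent can be seen on a toy ill-conditioned quadratic (not an actual IMRT objective): with the exact Hessian, Newton reaches the minimizer of a quadratic in a single step, while fixed-step steepest descent creeps along the flat direction.

```python
# Newton vs. steepest descent on f(x) = 0.5*(x0^2 + 100*x1^2), a generic
# ill-conditioned quadratic standing in for an IMRT objective.

def grad(x):
    """Gradient of f: [x0, 100*x1]."""
    return [x[0], 100.0 * x[1]]

def newton_step(x):
    """One Newton step with the exact (diagonal) Hessian [1, 100]."""
    g = grad(x)
    return [x[0] - g[0] / 1.0, x[1] - g[1] / 100.0]

def sd_steps(x, lr=0.005, n=100):
    """n fixed-step steepest descent iterations."""
    for _ in range(n):
        g = grad(x)
        x = [x[0] - lr * g[0], x[1] - lr * g[1]]
    return x

x0 = [1.0, 1.0]
x_newton = newton_step(x0)   # lands exactly at the minimizer [0, 0]
x_sd = sd_steps(x0)          # still far from 0 along the flat x0 axis
```

The step size for SD is capped by the stiff direction (curvature 100), which is exactly why preconditioning with the Hessian, as Newton's method does, pays off.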
Optimal placement of tuning masses for vibration reduction in helicopter rotor blades
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1988-01-01
Described are methods for reducing vibration in helicopter rotor blades by determining optimum sizes and locations of tuning masses through formal mathematical optimization techniques. An optimization procedure is developed which employs the tuning masses and their locations as design variables that are systematically changed to achieve low values of shear without a large mass penalty. The finite-element structural analysis of the blade and the optimization formulation require the development of discretized expressions for two performance parameters: the modal shaping parameter and the modal shear amplitude. Matrix expressions for both quantities and their sensitivity derivatives are developed. Three optimization strategies are developed and tested. The first is based on minimizing the modal shaping parameter, which indirectly reduces the modal shear amplitudes corresponding to each harmonic of airload. The second strategy reduces these amplitudes directly, and the third strategy reduces the shear as a function of time during a revolution of the blade. The first strategy works well for reducing the shear for one mode responding to a single harmonic of the airload, but has been found in some cases to be ineffective for more than one mode. The second and third strategies give similar results and show excellent reduction of the shear with a low mass penalty.
H2/H∞ control for grid-feeding converter considering system uncertainty
NASA Astrophysics Data System (ADS)
Li, Zhongwen; Zang, Chuanzhi; Zeng, Peng; Yu, Haibin; Li, Shuhui; Fu, Xingang
2017-05-01
Three-phase grid-feeding converters (GFCs) are key components for integrating distributed generation and renewable power sources into the power utility. Conventionally, proportional-integral and proportional-resonant control strategies are applied to control the output power or current of a GFC. However, those control strategies have poor transient performance and are not robust against uncertainties and volatilities in the system. This paper proposes an H2/H∞-based control strategy that can mitigate the above restrictions. The uncertainty and disturbance are included in the formulation of the GFC state-space model, making it reflect practical system conditions more accurately. The paper uses a convex optimisation method to design the H2/H∞-based optimal controller. Instead of using a guess-and-check method, the paper uses particle swarm optimisation to search for an H2/H∞ optimal controller. Several case studies, implemented in both simulation and experiment, verify the superiority of the proposed control strategy over traditional PI control methods, especially under dynamic and variable system conditions.
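The particle swarm search for controller parameters can be sketched as follows. This is a hedged, generic PSO skeleton: the cost function below is a simple quadratic surrogate standing in for an H2/H∞ norm evaluation, and all swarm parameters are conventional defaults, not the paper's settings:

```python
import random

def pso(cost, dim, n_particles=20, iters=100, seed=0):
    """Generic particle swarm optimisation over a dim-dimensional box."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest_pos, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest_pos, gbest_cost = pos[i][:], c
    return gbest_pos, gbest_cost
```

In the paper's setting, `cost` would evaluate the H2/H∞ performance of a candidate controller; here any callable works.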
Soler, Maria; Estevez, M-Carmen; Alvarez, Mar; Otte, Marinus A; Sepulveda, Borja; Lechuga, Laura M
2014-01-29
The design of an optimal surface biofunctionalization remains an important challenge for the application of biosensors in clinical practice and therapeutic follow-up. Optical biosensors offer real-time monitoring and highly sensitive label-free analysis, along with great potential to be transferred to portable devices. When applied in direct immunoassays, their analytical features depend strongly on the antibody immobilization strategy. A strategy for the correct immobilization of antibodies based on the use of ProLinker™ has been evaluated and optimized in terms of sensitivity, selectivity, stability and reproducibility. Special effort has been focused on avoiding antibody manipulation, preventing nonspecific adsorption and obtaining a robust biosurface with regeneration capabilities. The ProLinker™-based approach has been demonstrated to fulfill those crucial requirements and, in combination with PEG-derivative compounds, has shown encouraging results for direct detection in biological fluids, such as pure urine or diluted serum. Furthermore, we have implemented the ProLinker™ strategy on a novel nanoplasmonic-based biosensor, with promising advantages for its application in clinical and biomedical diagnosis.
NASA Astrophysics Data System (ADS)
Sun, Congcong; Wang, Zhijie; Liu, Sanming; Jiang, Xiuchen; Sheng, Gehao; Liu, Tianyu
2017-05-01
Wind power has the advantages of being clean and non-polluting, and the development of bundled wind-thermal generation power systems (BWTGSs) is one of the important means to improve the wind power accommodation rate and implement the “clean alternative” on the generation side. A two-stage optimization strategy for BWTGSs considering wind speed forecasting results and load characteristics is proposed. By taking the short-term wind speed forecasting results of the generation side and the load characteristics of the demand side into account, a two-stage optimization model for BWTGSs is formulated. Using the environmental benefit index of BWTGSs as the objective function, with supply-demand balance and generator operation as the constraints, the first-stage optimization model is developed with chance-constrained programming theory. Using the operating cost of BWTGSs as the objective function, the second-stage optimization model is developed with a greedy algorithm. An improved PSO algorithm is employed to solve the model, and numerical tests verify the effectiveness of the proposed strategy.
Discriminative motif optimization based on perceptron training
Patel, Ronak Y.; Stormo, Gary D.
2014-01-01
Motivation: Generating accurate transcription factor (TF) binding site motifs from data generated using next-generation sequencing, especially ChIP-seq, is challenging. The challenge arises because a typical experiment reports a large number of sequences bound by a TF, and the length of each sequence is relatively long. Most traditional motif finders are slow in handling such enormous amounts of data. To overcome this limitation, tools have been developed that trade accuracy for speed, using heuristic discrete search strategies or limited optimization of identified seed motifs. Such strategies may not fully use the information in the input sequences, but the motifs they identify often form good seeds that can be further improved with appropriate scoring functions and rapid optimization. Results: We report a tool named discriminative motif optimizer (DiMO). DiMO takes a seed motif along with a positive and a negative database and improves the motif based on a discriminative strategy. We use the area under the receiver-operating characteristic curve (AUC) as a measure of the discriminating power of motifs and a strategy based on perceptron training that maximizes AUC rapidly in a discriminative manner. Using DiMO on a large test set of 87 TFs from human, Drosophila and yeast, we show that it is possible to significantly improve the motifs identified by nine motif finders. The motifs are generated/optimized using training sets and evaluated on test sets. The AUC is improved for almost 90% of the TFs on the test sets, and the magnitude of the increase is up to 39%. Availability and implementation: DiMO is available at http://stormo.wustl.edu/DiMO Contact: rpatel@genetics.wustl.edu, ronakypatel@gmail.com PMID:24369152
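The core idea, perceptron-style updates that directly reduce the number of misordered positive/negative pairs and thereby raise AUC, can be sketched with a tiny ranking perceptron. The 2-D feature vectors below are invented stand-ins for real motif match scores, not DiMO's actual representation:

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def auc(w, pos, neg):
    """Fraction of (positive, negative) pairs ranked correctly by w."""
    pairs = [(dot(w, p), dot(w, n)) for p in pos for n in neg]
    return sum(1.0 if sp > sn else 0.5 if sp == sn else 0.0
               for sp, sn in pairs) / len(pairs)

def ranking_perceptron(pos, neg, dim, epochs=10):
    """Perceptron update on misordered pairs: each fix raises pairwise AUC."""
    w = [0.0] * dim
    for _ in range(epochs):
        for p in pos:
            for n in neg:
                if dot(w, p) <= dot(w, n):       # misordered (or tied) pair
                    w = [wi + pi - ni for wi, pi, ni in zip(w, p, n)]
    return w
```

On separable toy data the update drives pairwise AUC from the chance level of 0.5 to 1.0.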
Battery Storage Evaluation Tool, version 1.x
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-02
The battery storage evaluation tool developed at Pacific Northwest National Laboratory is used to run a one-year simulation to evaluate the benefits of battery storage for multiple grid applications, including energy arbitrage, balancing service, capacity value, distribution system equipment deferral, and outage mitigation. The tool is based on optimal control strategies that capture multiple services from a single energy storage device. In this control strategy, at each hour a look-ahead optimization is first formulated and solved to determine the battery's base operating point. A minute-by-minute simulation is then performed to simulate actual battery operation.
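The hourly look-ahead idea can be sketched with a deliberately simplified price-arbitrage rule over a sliding window. The prices, capacities and thresholds below are made-up illustrative numbers, and the rule is a toy reduction of the look-ahead optimization, not PNNL's actual formulation:

```python
def lookahead_dispatch(prices, horizon=4, capacity=2.0, rate=1.0):
    """Hourly battery power: positive = discharge (sell), negative = charge."""
    soc, schedule = 0.0, []
    for t, p in enumerate(prices):
        window = prices[t:t + horizon]        # look-ahead price window
        if p == max(window) and soc > 0.0:
            power = min(rate, soc)                # sell at a local price peak
        elif p == min(window) and soc < capacity:
            power = -min(rate, capacity - soc)    # buy at a local price trough
        else:
            power = 0.0
        soc -= power                              # discharging depletes storage
        schedule.append(power)
    return schedule
```

Even this toy rule captures the qualitative behavior: the battery charges at look-ahead price minima and discharges at maxima, earning positive arbitrage revenue.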
Optimizing Telehealth Strategies for Subspecialty Care: Recommendations from Rural Pediatricians
Demirci, Jill R.; Bogen, Debra L.; Mehrotra, Ateev; Miller, Elizabeth
2015-01-01
Abstract Background: Telehealth offers strategies to improve access to subspecialty care for children in rural communities. Rural pediatrician experiences and preferences regarding the use of these telehealth strategies for children's subspecialty care needs are not known. We elicited rural pediatrician experiences and preferences regarding different pediatric subspecialty telehealth strategies. Materials and Methods: Seventeen semistructured telephone interviews were conducted with rural pediatricians from 17 states within the United States. Interviewees were recruited by e-mails to a pediatric rural health listserv and to rural pediatricians identified through snowball sampling. Themes were identified through thematic analysis of interview transcripts. Institutional Review Board approval was obtained. Results: Rural pediatricians identified several telehealth strategies to improve access to subspecialty care, including physician access hotlines, remote electronic medical record access, electronic messaging systems, live video telemedicine, and telehealth triage systems. Rural pediatricians provided recommendations for optimizing the utility of each of these strategies based on their experiences with different systems. Rural pediatricians preferred specific telehealth strategies for specific clinical contexts, resulting in a proposed framework describing the complementary role of different telehealth strategies for pediatric subspecialty care. Finally, rural pediatricians identified additional benefits associated with the use of telehealth strategies and described a desire for telehealth systems that enhanced (rather than replaced) personal relationships between rural pediatricians and subspecialists. Conclusions: Rural pediatricians described complementary roles for different subspecialty care telehealth strategies. Additionally, rural pediatricians provided recommendations for optimizing individual telehealth strategies. 
Input from rural pediatricians will be crucial for optimizing specific telehealth strategies and designing effective telehealth systems. PMID:25919585
Sun, Deshun; Liu, Fei
2018-06-01
In this paper, a hepatitis B virus (HBV) model with an incubation period and delayed state and control variables is first proposed. Combination treatment is adopted because it has a longer-lasting effect than mono-therapy. The equilibrium points and basic reproduction number are calculated, and the local stability of the model is analyzed. We then present optimal control strategies based on Pontryagin's minimum principle, with an objective function that not only reduces the levels of exposed cells, infected cells and free viruses nearly to zero at the end of therapy, but also minimizes the drug side-effects and the cost of treatment. Moreover, we develop a numerical simulation algorithm for solving our HBV model based on a combination of forward and backward difference approximations. The state dynamics of uninfected cells, exposed cells, infected cells, free viruses, CTL and ALT are simulated with and without optimal control, showing that HBV is reduced nearly to zero under the time-varying optimal control strategies, whereas the disease breaks out without control. Finally, the simulations show that strategy A is the best of the three strategies we adopt, and further comparisons are made between model (1) and model (2).
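The combined forward and backward difference scheme mentioned above is the classical forward-backward sweep for optimal control. A minimal sketch on a scalar LQR stand-in (not the paper's six-state HBV model): minimize J = ∫ (x² + u²) dt subject to ẋ = -x + u, where Pontryagin's principle gives u = -λ/2 and the costate equation λ̇ = -2x + λ with λ(T) = 0:

```python
def forward_backward_sweep(x0=1.0, T=5.0, n=500, sweeps=50, relax=0.5):
    """Iterate: state forward, costate backward, relaxed control update."""
    h = T / n
    u = [0.0] * (n + 1)
    x = [x0] * (n + 1)
    lam = [0.0] * (n + 1)
    for _ in range(sweeps):
        for k in range(n):                       # state equation forward in time
            x[k + 1] = x[k] + h * (-x[k] + u[k])
        for k in range(n, 0, -1):                # costate equation backward in time
            lam[k - 1] = lam[k] - h * (-2.0 * x[k] + lam[k])
        # relaxed update toward the optimality condition u = -lambda / 2
        u = [relax * (-l / 2.0) + (1 - relax) * uk for l, uk in zip(lam, u)]
    cost = sum(h * (xk * xk + uk * uk) for xk, uk in zip(x[:-1], u[:-1]))
    return cost, u
```

For this problem the Riccati solution gives an optimal cost near 0.414 with x0 = 1, below the uncontrolled cost of about 0.5, and the computed control is negative (pushing the state down), mirroring how the paper's therapy controls drive infection levels toward zero.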
Heuristic-based information acquisition and decision making among pilots.
Wiggins, Mark W; Bollwerk, Sandra
2006-01-01
This research was designed to examine the impact of heuristic-based approaches to the acquisition of task-related information on the selection of an optimal alternative during simulated in-flight decision making. The work integrated features of naturalistic and normative decision making and strategies of information acquisition within a computer-based, decision support framework. The study comprised two phases, the first of which involved familiarizing pilots with three different heuristic-based strategies of information acquisition: frequency, elimination by aspects, and majority of confirming decisions. The second stage enabled participants to choose one of the three strategies of information acquisition to resolve a fourth (choice) scenario. The results indicated that task-oriented experience, rather than the information acquisition strategies, predicted the selection of the optimal alternative. It was also evident that of the three strategies available, the elimination by aspects information acquisition strategy was preferred by most participants. It was concluded that task-oriented experience, rather than the process of information acquisition, predicted task accuracy during the decision-making task. It was also concluded that pilots have a preference for one particular approach to information acquisition. Applications of outcomes of this research include the development of decision support systems that adapt to the information-processing capabilities and preferences of users.
Dynamics and control of DNA sequence amplification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marimuthu, Karthikeyan; Chakrabarti, Raj, E-mail: raj@pmc-group.com, E-mail: rajc@andrew.cmu.edu; Division of Fundamental Research, PMC Advanced Technology, Mount Laurel, New Jersey 08054
2014-10-28
DNA amplification is the process of replication of a specified DNA sequence in vitro through time-dependent manipulation of its external environment. A theoretical framework for determination of the optimal dynamic operating conditions of DNA amplification reactions, for any specified amplification objective, is presented based on first-principles biophysical modeling and control theory. Amplification of DNA is formulated as a problem in control theory with optimal solutions that can differ considerably from strategies typically used in practice. Using the Polymerase Chain Reaction as an example, sequence-dependent biophysical models for DNA amplification are cast as control systems, wherein the dynamics of the reaction are controlled by a manipulated input variable. Using these control systems, we demonstrate that there exists an optimal temperature cycling strategy for geometric amplification of any DNA sequence and formulate optimal control problems that can be used to derive the optimal temperature profile. Strategies for the optimal synthesis of the DNA amplification control trajectory are proposed. Analogous methods can be used to formulate control problems for more advanced amplification objectives corresponding to the design of new types of DNA amplification reactions.
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan
2015-03-01
With the formation of competitive electricity markets around the world, the optimization of bidding strategies has become one of the main topics in studies of market design. Market design is challenged by multiple objectives that need to be satisfied. The solution of those multi-objective problems is often searched over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibria as minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms, namely the genetic algorithm (GA), simulated annealing (SA) and a hybrid simulated annealing genetic algorithm (HSAGA), to simulate how generators bid in the spot electricity market from the viewpoint of profit maximization given the other generators' strategies, and compares their results. As both GA and SA are generic search methods, HSAGA is also a generic search method. The model, based on actual data, is applied to a peak hour of Tehran's wholesale spot market in 2012. The simulation results show that GA outperforms SA and HSAGA in computing time, number of function evaluations and computing stability, and the Nash equilibria computed by GA vary less from one another than those of the other algorithms.
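The equilibrium concept underlying the search can be illustrated on a tiny two-player game: a pure-strategy Nash equilibrium is a cell from which neither player gains by deviating unilaterally. The 3x3 payoff table below is invented for illustration, not the Tehran market data:

```python
import itertools

# payoff[i][j] = (payoff to player 1, payoff to player 2)
PAYOFF = [
    [(3, 3), (0, 4), (1, 1)],
    [(4, 0), (2, 2), (1, 0)],
    [(1, 1), (0, 1), (0, 0)],
]

def pure_nash_equilibria(payoff):
    """Return all cells (i, j) where neither player has a profitable deviation."""
    rows, cols = len(payoff), len(payoff[0])
    eq = []
    for i, j in itertools.product(range(rows), range(cols)):
        p1, p2 = payoff[i][j]
        if (all(payoff[k][j][0] <= p1 for k in range(rows))       # player 1 stays
                and all(payoff[i][k][1] <= p2 for k in range(cols))):  # player 2 stays
            eq.append((i, j))
    return eq
```

Exhaustive checking is feasible only for toy games; for the large discrete strategy spaces in the paper, this is exactly why the Nash condition is recast as the minimum of a function and searched with GA, SA or HSAGA.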
NASA Astrophysics Data System (ADS)
Gilani, Seyed-Omid; Sattarvand, Javad
2016-02-01
Meeting production targets in terms of ore quantity and quality is critical for a successful mining operation. In-situ grade uncertainty causes both deviations from production targets and general financial deficits. A new stochastic optimization algorithm based on the ant colony optimization (ACO) approach is developed herein to integrate geological uncertainty, described through a series of simulated ore bodies. Two different strategies were developed, based on a single predefined probability value (Prob) and on multiple probability values (Prob_nt), respectively, in order to improve the initial solutions created by the deterministic ACO procedure. Application to the Sungun copper mine in northwest Iran demonstrates the ability of the stochastic approach to create a single schedule, control the risk of deviating from production targets over time and increase the project value. A comparison between the two strategies and the traditional approach illustrates that the multiple-probability strategy produces better schedules, although the single predefined probability is more practical for projects requiring a high degree of flexibility.
Nearly ideal binary communication in squeezed channels
NASA Astrophysics Data System (ADS)
Paris, Matteo G.
2001-07-01
We analyze the effect of squeezing the channel in binary communication based on Gaussian states. We show that for coding on pure states, squeezing increases the detection probability at a fixed size of the strategy, actually saturating the optimal bound already at moderate signal energy. Using the Neyman-Pearson lemma for fuzzy hypothesis testing, we are able to analyze also the case of mixed states and to find the optimal amount of squeezing that can be effectively employed. We find that optimally squeezed channels are robust against signal mixing and largely improve the strategy power in comparison with coherent ones.
Optimizing nursing care by integrating theory-driven evidence-based practice.
Pipe, Teri Britt
2007-01-01
An emerging challenge for nursing leadership is how to convey the importance of both evidence-based practice (EBP) and theory-driven care in ensuring patient safety and optimizing outcomes. This article describes a specific example of a leadership strategy based on Rosswurm and Larrabee's model for change to EBP, which was effective in aligning the processes of EBP and theory-driven care.
NASA Astrophysics Data System (ADS)
Zhong, Yaoquan; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng
2010-12-01
A cost-effective and service-differentiated provisioning strategy is very desirable to service providers so that they can offer users satisfactory services, while optimizing network resource allocation. Providing differentiated protection services to connections for surviving link failure has been extensively studied in recent years. However, the differentiated protection services for workflow-based applications, which consist of many interdependent tasks, have scarcely been studied. This paper investigates the problem of providing differentiated services for workflow-based applications in optical grid. In this paper, we develop three differentiated protection services provisioning strategies which can provide security level guarantee and network-resource optimization for workflow-based applications. The simulation demonstrates that these heuristic algorithms provide protection cost-effectively while satisfying the applications' failure probability requirements.
Kumar, Aditya; Shi, Ruijie; Kumar, Rajeeva; Dokucu, Mustafa
2013-04-09
Control system and method for controlling an integrated gasification combined cycle (IGCC) plant are provided. The system may include a controller coupled to a dynamic model of the plant to process a prediction of plant performance and determine a control strategy for the IGCC plant over a time horizon subject to plant constraints. The control strategy may include control functionality to meet a tracking objective and control functionality to meet an optimization objective. The control strategy may be configured to prioritize the tracking objective over the optimization objective based on a coordinate transformation, such as an orthogonal or quasi-orthogonal projection. A plurality of plant control knobs may be set in accordance with the control strategy to generate a sequence of coordinated multivariable control inputs to meet the tracking objective and the optimization objective subject to the prioritization resulting from the coordinate transformation.
NASA Astrophysics Data System (ADS)
Ahmadi, Mohammad H.; Amin Nabakhteh, Mohammad; Ahmadi, Mohammad-Ali; Pourfayaz, Fathollah; Bidi, Mokhtar
2017-10-01
The motivation behind this work is to explore a nanoscale irreversible Stirling refrigerator with respect to size effects and to present two novel thermo-ecological criteria. Two distinct strategies were used in the optimization process, and the consequences of each strategy were examined independently. In the first strategy, a multi-objective evolutionary algorithm (MOEA) was used to maximize the energetic sustainability index (ESI) and the modified ecological coefficient of performance (MECOP) while minimizing the dimensionless ecological function. In the second strategy, a MOEA was used to maximize the ECOP and MECOP while minimizing the dimensionless ecological function. To arrive at the final solution from each strategy, three proficient decision makers were utilized. Additionally, to quantify the deviation of the results obtained from each decision maker, two different statistical error indexes were employed. Finally, a comparison of the results of the proposed scenarios reveals that maximizing MECOP yields the maximum values of ESI and ECOP together with the minimum of the dimensionless ecological function.
Xu, Shi-Zhou; Wang, Chun-Jie; Lin, Fang-Li; Li, Shi-Xiang
2017-10-31
The multi-device open-circuit fault is a common fault of the ANPC (Active Neutral-Point Clamped) three-level inverter and affects the operational stability of the whole system. To improve operational stability, this paper first summarizes the main existing solutions and analyzes all possible states of the multi-device open-circuit fault. Secondly, an order-reduction optimal control strategy under multi-device open-circuit fault is proposed to realize fault-tolerant control, based on the topology and control requirements of the ANPC three-level inverter and on operational stability. This control strategy can handle faults with different operation states and can work in an order-reduced state under specific open-circuit faults with specific combinations of devices, sacrificing control quality to give priority to stability. Finally, simulation and experiment prove the effectiveness of the proposed strategy.
Bertsimas, Dimitris; Silberholz, John; Trikalinos, Thomas
2018-03-01
Important decisions related to human health, such as screening strategies for cancer, need to be made without a satisfactory understanding of the underlying biological and other processes. Rather, they are often informed by mathematical models that approximate reality. Often multiple models have been made to study the same phenomenon, which may lead to conflicting decisions. It is natural to seek a decision making process that identifies decisions that all models find to be effective, and we propose such a framework in this work. We apply the framework in prostate cancer screening to identify prostate-specific antigen (PSA)-based strategies that perform well under all considered models. We use heuristic search to identify strategies that trade off between optimizing the average across all models' assessments and being "conservative" by optimizing the most pessimistic model assessment. We identified three recently published mathematical models that can estimate quality-adjusted life expectancy (QALE) of PSA-based screening strategies and identified 64 strategies that trade off between maximizing the average and the most pessimistic model assessments. All prescribe PSA thresholds that increase with age, and 57 involve biennial screening. Strategies with higher assessments with the pessimistic model start screening later, stop screening earlier, and use higher PSA thresholds at earlier ages. The 64 strategies outperform 22 previously published expert-generated strategies. The 41 most "conservative" ones remained better than no screening with all models in extensive sensitivity analyses. We augment current comparative modeling approaches by identifying strategies that perform well under all models, for various degrees of decision makers' conservativeness.
Biswas, Santanu; Subramanian, Abhishek; ELMojtaba, Ibrahim M; Chattopadhyay, Joydev; Sarkar, Ram Rup
2017-01-01
Visceral leishmaniasis (VL) is a deadly neglected tropical disease that poses a serious problem in various countries all over the world. Various intervention strategies fail to control the spread of this disease due to parasite drug resistance and the resistance of sandfly vectors to insecticide sprays. Policy makers therefore need to develop novel strategies or resort to a combination of multiple intervention strategies to control the spread of the disease. To address this issue, we propose an extensive SIR-type model for anthroponotic visceral leishmaniasis transmission with seasonal fluctuations modeled in the form of a periodic sandfly biting rate. Fitting the model to real data reported in South Sudan, we estimate the model parameters and compare the model predictions with known VL cases. Using optimal control theory, we study the effects of popular control strategies, namely drug-based treatment of symptomatic and PKDL-infected individuals, insecticide-treated bednets and the spraying of insecticides, on the dynamics of the infected human and vector populations. We show that these strategies remain ineffective in curbing the disease individually, as opposed to optimal combinations of the mentioned strategies. Testing the model for different optimal combinations while considering periodic seasonal fluctuations, we find that the optimal combination of treatment of individuals and insecticide sprays performs well in controlling the disease for the time period of the intervention. Performing a cost-effectiveness analysis, we identify that the same strategy also proves to be efficacious and cost-effective. Finally, we suggest that our model would be helpful for policy makers to predict the best intervention strategies for specific time periods and their appropriate implementation for the elimination of visceral leishmaniasis.
A new strategy for array optimization applied to Brazilian Decimetric Array
NASA Astrophysics Data System (ADS)
Faria, C.; Stephany, S.; Sawant, H. S.
Radio interferometric arrays measure the Fourier transform of the sky brightness distribution in a finite set of points that are determined by the cross-correlation of different pairs of antennas of the array. The sky brightness distribution is reconstructed by the inverse Fourier transform of the sampled visibilities. The quality of the reconstructed images strongly depends on the array configuration, since it determines the sampling function and therefore the points in the Fourier plane. This work proposes a new optimization strategy for the array configuration that is based on the entropy of the distribution of the sampled points in the Fourier plane. A stochastic optimizer, the Ant Colony Optimization, employs the entropy of the point distribution in the Fourier plane to iteratively refine the candidate solutions. The proposed strategy was developed for the Brazilian Decimetric Array (BDA), a radio interferometric array that is currently being developed for solar observations at the Brazilian Institute for Space Research. Configuration results corresponding to the Fourier plane coverage, synthesized beam and side-lobe levels are shown for an optimized BDA configuration obtained with the proposed strategy and compared to the results for a standard T-array configuration that was originally proposed.
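The entropy criterion described above can be sketched by binning the (u, v) sample points on a grid and computing the Shannon entropy of the bin-count distribution: more uniform Fourier-plane coverage scores higher. The grid size and normalization below are illustrative choices, not the BDA settings:

```python
import math
from collections import Counter

def coverage_entropy(points, bins=4, extent=1.0):
    """Shannon entropy of the binned distribution of (u, v) sample points."""
    counts = Counter(
        (min(int(u / extent * bins), bins - 1),
         min(int(v / extent * bins), bins - 1))
        for u, v in points)
    n = len(points)
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

A configuration whose samples fall one per bin attains the maximum entropy log(bins²), while a fully clustered configuration scores zero, which is why an optimizer maximizing this quantity spreads the coverage.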
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fortes, Raphael; Rigolin, Gustavo, E-mail: rigolin@ifi.unicamp.br
We push the limits of the direct use of partially pure entangled states to perform quantum teleportation by presenting several protocols in many different scenarios that achieve the optimal efficiency possible. We review and put in a single formalism the three major strategies known to date that allow one to use partially entangled states for direct quantum teleportation (no distillation strategies permitted) and compare their efficiencies in real world implementations. We show how one can improve the efficiency of many direct teleportation protocols by combining these techniques. We then develop new teleportation protocols employing multipartite partially entangled states. The three techniques are also used here in order to achieve the highest efficiency possible. Finally, we prove the upper bound for the optimal success rate for protocols based on partially entangled Bell states and show that some of the protocols here developed achieve such a bound. Highlights: • Optimal direct teleportation protocols using directly partially entangled states. • We put in a single formalism all strategies of direct teleportation. • We extend these techniques for multipartite partially entangled states. • We give upper bounds for the optimal efficiency of these protocols.
Ding, Xu; Han, Jianghong; Shi, Lei
2015-01-01
In this paper, the optimal working schemes for wireless sensor networks with multiple base stations and wireless energy transfer devices are proposed. The wireless energy transfer devices also work as data gatherers while charging sensor nodes. The wireless sensor network is firstly divided into sub networks according to the concept of Voronoi diagram. Then, the entire energy replenishing procedure is split into the pre-normal and normal energy replenishing stages. With the objective of maximizing the sojourn time ratio of the wireless energy transfer device, a continuous time optimization problem for the normal energy replenishing cycle is formed according to constraints with which sensor nodes and wireless energy transfer devices should comply. Later on, the continuous time optimization problem is reshaped into a discrete multi-phased optimization problem, which yields the identical optimality. After linearizing it, we obtain a linear programming problem that can be solved efficiently. The working strategies of both sensor nodes and wireless energy transfer devices in the pre-normal replenishing stage are also discussed in this paper. The intensive simulations exhibit the dynamic and cyclic working schemes for the entire energy replenishing procedure. Additionally, a way of eliminating “bottleneck” sensor nodes is also developed in this paper. PMID:25785305
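The first step described above, dividing the network into subnetworks according to the Voronoi diagram of the base stations, amounts to assigning each sensor node to its nearest station. A minimal sketch, with invented coordinates for illustration:

```python
def voronoi_partition(nodes, stations):
    """Map each base-station index to the list of nodes closest to it."""
    cells = {s: [] for s in range(len(stations))}
    for node in nodes:
        nearest = min(
            range(len(stations)),
            key=lambda s: (node[0] - stations[s][0]) ** 2
                        + (node[1] - stations[s][1]) ** 2)   # squared distance
        cells[nearest].append(node)
    return cells
```

Each resulting cell is the subnetwork served by one base station; the energy-replenishing optimization in the paper is then formulated per cell.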
Optimization of robustness of interdependent network controllability by redundant design
2018-01-01
Controllability of complex networks has been a hot topic in recent years. Real networks are often coupled together as interdependent networks. The cascading process in interdependent networks, including interdependent failure and overload failure, will destroy the robustness of controllability for the whole network. Therefore, optimizing the robustness of interdependent network controllability is of great importance in complex network research. In this paper, based on a model of interdependent networks, we first determine the cascading process under different proportions of node attacks. Then, the structural controllability of interdependent networks is measured by the minimum number of driver nodes. Furthermore, we propose a parameter which can be obtained from the structure and minimum driver set of interdependent networks under different proportions of node attacks, and analyze the robustness of interdependent network controllability. Finally, we optimize the robustness of interdependent network controllability by redundant design, including node backup and redundancy edge backup, and improve the redundant design by proposing different strategies according to their cost. Comparative strategies of redundant design are conducted to find the best strategy. Results show that node backup and redundancy edge backup can indeed reduce the number of nodes suffering from failure and improve the robustness of controllability. Considering the cost of redundant design, we should choose BBS (betweenness-based strategy) or DBS (degree-based strategy) for node backup and HDF (high-degree-first) for redundancy edge backup. Above all, our proposed strategies are feasible and effective at improving the robustness of interdependent network controllability. PMID:29438426
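The minimum number of driver nodes referenced above is, in the structural controllability framework of Liu et al., N_D = max(N − |M*|, 1), where M* is a maximum matching of the directed network. A minimal pure-Python sketch of that computation (a generic illustration, not the paper's code):

```python
def min_driver_nodes(n, edges):
    """Minimum driver nodes for structural controllability:
    N_D = max(N - |maximum matching|, 1), with the matching taken
    in the bipartite graph of out-copies versus in-copies."""
    # adjacency: out-copy u -> list of in-copies v
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    match = [-1] * n  # match[v] = out-copy currently matched to in-copy v

    def augment(u, seen):
        # Kuhn's augmenting-path search for bipartite matching
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    matched = sum(augment(u, [False] * n) for u in range(n))
    return max(n - matched, 1)
```

For a directed star 0→{1,2,3}, only one edge can enter the matching, so three driver nodes are needed; a directed path needs just one.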
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing capability of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and the Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guthier, C; University Medical Center Mannheim, Mannheim; Harvard Medical School, Boston, MA
Purpose: Inverse treatment planning (ITP) for interstitial HDR brachytherapy of gynecologic cancers seeks to maximize coverage of the clinical target volumes (tumor and vagina) while respecting dose-volume-histogram-related dosimetric measures (DMs) for organs at risk (OARs). Commercially available ITP tools do not support DM-based planning because it is computationally too expensive to solve. In this study we present a novel approach that allows fast ITP for gynecologic cancers based on DMs for the first time. Methods: This novel strategy is an optimization model based on a smooth DM-based objective function. The smooth approximation is achieved by utilizing a logistic function for the evaluation of DMs. The resulting nonconvex and constrained optimization problem is then optimized with a BFGS algorithm. The model was evaluated using the implant geometry extracted from 20 patient treatment plans under an IRB-approved retrospective study. For each plan, the final DMs were evaluated and compared to the original clinical plans. The CTVs were the contoured tumor volume and the contoured surface of the vagina. Statistical significance was evaluated with a one-sided paired Wilcoxon signed-rank test. Results: As did the clinical plans, all generated plans fulfilled the defined DMs for OARs. The proposed strategy showed a statistically significant improvement (p<0.001) in coverage of the tumor and vagina, with absolute improvements of related DMs of (6.9 ± 7.9)% and (28.2 ± 12.0)%, respectively. This was achieved with a statistically significant (p<0.01) decrease of the high-dose-related DM for the tumor. The runtime of the optimization was (2.3 ± 2.0) seconds. Conclusion: We demonstrated using clinical data that our novel approach allows rapid DM-based optimization with improved coverage of CTVs with fewer hot spots. Being up to three orders of magnitude faster than the current clinical practice, the method dramatically shortens planning time.
Optimizing a desirable fare structure for a bus-subway corridor
Liu, Bing-Zheng; Ge, Ying-En; Cao, Kai; Jiang, Xi; Meng, Lingyun; Liu, Ding; Gao, Yunfeng
2017-01-01
This paper aims to optimize a desirable fare structure for the public transit service along a bus-subway corridor, considering factors related to trip equity, including travel distance and comfort level. The travel distance factor is represented by the distance-based fare strategy, an existing differential strategy. The comfort level factor is captured by the area-based fare strategy, a new differential strategy defined in this paper. Both factors are incorporated in the combined fare strategy, which is composed of the distance-based and area-based fare strategies. The flat fare strategy is applied to determine a reference level of social welfare and to obtain the general passenger flow along transit lines, which is used to divide areas or zones along the corridor. The problem is formulated as a bi-level program, of which the upper level maximizes social welfare and the lower level, capturing traveler choice behavior, is a variable-demand stochastic user equilibrium assignment model. A genetic algorithm is applied to solve the bi-level program, while the method of successive averages is adopted to solve the lower-level model. A series of numerical experiments illustrates the performance of the models and solution methods. Numerical results indicate that all three differential fare strategies enhance social welfare more than the flat fare strategy, and that the fare structure under the combined fare strategy generates the highest social welfare and the largest resulting passenger demand, which implies that the more equity factors a differential fare strategy involves, the more desirable its fare structure is. PMID:28981508
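The method of successive averages (MSA) used for the lower-level assignment can be sketched as follows; the two-route logit loading below is an invented toy example, not the paper's corridor model:

```python
import math

def msa_equilibrium(best_response, x0, n_iter=200):
    """Method of Successive Averages: average each auxiliary
    flow pattern into the current one with a 1/k step size,
    converging to a stochastic user equilibrium point."""
    x = list(x0)
    for k in range(1, n_iter + 1):
        y = best_response(x)                       # auxiliary flow pattern
        x = [xi + (yi - xi) / k for xi, yi in zip(x, y)]
    return x

def logit_assignment(flows, demand=10.0, theta=1.0):
    """Toy stochastic (logit) loading on two parallel routes whose
    travel cost equals their own flow (a simple congestion effect)."""
    weights = [math.exp(-theta * c) for c in flows]
    total = sum(weights)
    return [demand * w / total for w in weights]
```

Starting from all demand on one route, the iteration settles at the symmetric equilibrium flow of 5 on each route.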
Integrated testing strategies can be optimal for chemical risk classification.
Raseta, Marko; Pitchford, Jon; Cussens, James; Doe, John
2017-08-01
There is an urgent need to refine strategies for testing the safety of chemical compounds. This need arises both from the financial and ethical costs of animal tests and from the opportunities presented by new in-vitro and in-silico alternatives. Here we explore the mathematical theory underpinning the formulation of optimal testing strategies in toxicology. We show how the costs and imprecisions of the various tests, and the variability in exposures and responses of individuals, can be assembled rationally to form a Markov Decision Problem. We compute the corresponding optimal policies using well-developed theory based on Dynamic Programming, thereby identifying and overcoming some methodological and logical inconsistencies in current toxicological testing. By illustrating our methods on two simple but readily generalisable examples, we show how so-called integrated testing strategies, where information of different precisions from different sources is combined and where different initial test outcomes lead to different sets of future tests, can arise naturally as optimal policies. Copyright © 2017 Elsevier Inc. All rights reserved.
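The backward-induction (dynamic programming) solution of such a testing decision problem can be illustrated with a toy model; the costs, the belief state, and the "sharpening" effect of a test below are invented for illustration only:

```python
# Toy finite-horizon decision problem solved by backward induction.
# State: current belief that the chemical is toxic.
# Actions: classify now, or pay for a test that sharpens the belief.
COST_TEST = 1.0          # cost of running one more (hypothetical) assay
COST_MISCLASS = 20.0     # cost of a wrong final classification

def classify_cost(p_toxic):
    # expected cost of the best immediate label (toxic or safe)
    return COST_MISCLASS * min(p_toxic, 1.0 - p_toxic)

def optimal_cost(p_toxic, tests_left, sharpen=0.3):
    """Expected cost of the optimal policy from belief p_toxic."""
    if tests_left == 0:
        return classify_cost(p_toxic)
    # a hypothetical test moves the belief toward 0 or 1
    hi = min(p_toxic + sharpen, 1.0)
    lo = max(p_toxic - sharpen, 0.0)
    test_cost = (COST_TEST
                 + p_toxic * optimal_cost(hi, tests_left - 1)
                 + (1.0 - p_toxic) * optimal_cost(lo, tests_left - 1))
    return min(classify_cost(p_toxic), test_cost)
```

Here running one test at a 50/50 belief halves the expected cost (5 vs. 10), while at an already confident belief the optimal policy classifies immediately, which is the flavour of result the paper derives rigorously.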
THE CHOICE OF REAL-TIME CONTROL STRATEGY FOR COMBINED SEWER OVERFLOW CONTROL
This paper focuses on the strategies used to operate a collection system in real-time control (RTC) in order to optimize use of system capacity and to reduce the cost of long-term combined sewer overflow (CSO) control. Three RTC strategies were developed and analyzed based on the...
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for various management tasks. Planning pressure observations, in terms of their number and spatial distribution, is known as sampling design and has traditionally been addressed in the context of model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, detect anomalies and bursts, guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed by considering optimal network segmentation and the modularity index within a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
Ant Navigation: Fractional Use of the Home Vector
Cheung, Allen; Hiby, Lex; Narendra, Ajay
2012-01-01
Home is a special location for many animals, offering shelter from the elements, protection from predation, and a common gathering place for members of the same species. Not surprisingly, many species have evolved efficient, robust homing strategies, which are used as part of each and every foraging journey. A basic strategy used by most animals is to take the shortest possible route home by accruing the net distances and directions travelled during foraging, a strategy known as path integration (PI). This strategy is part of the navigation toolbox of ants occupying different landscapes. However, when there is a visual discrepancy between test and training conditions, the distance travelled by animals relying on the path integrator varies dramatically between species: from 90% of the home vector to an absolute distance of only 50 cm. Here we ask what the theoretically optimal balance between PI-driven and landmark-driven navigation should be. In combination with well-established results from optimal search theory, we show analytically that this fractional use of the home vector is an optimal homing strategy under a variety of circumstances. Assuming there is a familiar route that an ant recognizes, theoretically optimal search should always begin at some fraction of the home vector, depending on the region of familiarity. These results are shown to be largely independent of the search algorithm used. Ant species from different habitats appear to have optimized their navigation strategy based on the availability and nature of the navigational information content in their environment. PMID:23209744
Suárez Riveiro, José Manuel
2014-01-01
In addition to cognitive and behavioral strategies, students can also use affective-motivational strategies to facilitate their learning process. In this regard, the strategies of defensive pessimism and generation of positive expectations have been widely related to conceptual models of pessimism-optimism. The aim of this study was to describe the use of these strategies in 1753 secondary school students, and to study the motivational and strategic characteristics that differentiated between the student typologies identified as a result of their use. The results indicated a higher use of the generation of positive expectations strategy (optimism) (M = 3.40, SD = .78) than of the defensive pessimism strategy (M = 3.00, SD = .78); a positive and significant correlation between the two strategies (r = .372, p = .001); and their relationship with adequate academic motivation and with the use of learning strategies. Furthermore, four student typologies were identified based on the use of both strategies. Lastly, we propose a new approach for future work in this line of research.
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This differs from previous modeling efforts, which focused on addressing uncertainty in physical parameters (e.g., soil porosity), whereas this work deals with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (where only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering a confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. Copyright © 2009 Elsevier B.V. All rights reserved.
Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.
Patri, Jean-François; Diard, Julien; Perrier, Pascal
2015-12-01
The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables it to produce similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first, and from these the Bayesian model is constructed progressively. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. The approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.
An optimal routing strategy on scale-free networks
NASA Astrophysics Data System (ADS)
Yang, Yibo; Zhao, Honglin; Ma, Jinlong; Qi, Zhaohui; Zhao, Yongbin
Traffic is one of the most fundamental dynamical processes in networked systems. With the traditional shortest path routing (SPR) protocol, traffic congestion is likely to occur at hub nodes in scale-free networks. In this paper, we propose an improved optimal routing (IOR) strategy based on the betweenness centrality and the degree centrality of nodes in scale-free networks. With the proposed strategy, routing paths can effectively bypass hub nodes in the network to enhance transport efficiency. Simulation results show that the traffic capacity, as well as other indexes reflecting transport efficiency, is further improved with the IOR strategy. Owing to the significantly improved traffic performance, this study is helpful for designing more efficient routing strategies in communication or transportation systems.
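The paper's exact cost function is not reproduced here, but the general idea — shortest paths under node costs that grow with centrality, so routes bypass hubs — can be sketched with a degree-based cost deg(v)^beta, a simplified stand-in for the betweenness/degree combination:

```python
import heapq

def hub_avoiding_path(adj, src, dst, beta=1.0):
    """Dijkstra where entering node v costs deg(v)**beta,
    so high-degree hubs are bypassed when beta > 0."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in adj[u]:
            nd = d + deg[v] ** beta
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```

With beta = 0 every node costs the same and the rule reduces to hop-count shortest paths through the hub; with beta = 1 the longer peripheral route becomes cheaper.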
NASA Astrophysics Data System (ADS)
Ahmadi, Mohammad H.; Ahmadi, Mohammad-Ali; Pourfayaz, Fathollah
2015-09-01
Developing new technologies such as nanotechnology improves the performance of the energy industries. Consequently, emerging new groups of thermal cycles at the nano scale can revolutionize the future of energy systems. This paper presents a thermodynamic study of a nano-scale irreversible Stirling engine cycle with the aim of optimizing the cycle's performance. In the Stirling engine cycle the working fluid is an ideal Maxwell-Boltzmann gas. Moreover, two different strategies are proposed for the multi-objective optimization problem, and the outcomes of each strategy are evaluated separately. The first strategy is proposed to maximize the ecological coefficient of performance (ECOP), the dimensionless ecological function (ecf) and the dimensionless thermo-economic objective function (F). The second strategy is suggested to maximize the thermal efficiency (η), the dimensionless ecological function (ecf) and the dimensionless thermo-economic objective function (F). All the strategies in the present work are executed via a multi-objective evolutionary algorithm based on the NSGA-II method. Finally, to reach a final answer for each strategy, three well-known decision makers are applied, and the deviations of the outcomes obtained with each strategy and each decision maker are evaluated separately.
Anderson, D.R.
1974-01-01
Optimal exploitation strategies were studied for an animal population in a stochastic, serially correlated environment. This is a general case and encompasses a number of important cases as simplifications. Data on the mallard (Anas platyrhynchos) were used to explore the exploitation strategies and to test several hypotheses, because much is known concerning the life history and general ecology of this species and extensive empirical data are available for analysis. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. Desirable properties of an optimal exploitation strategy were defined. A mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. Both the literature and the analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, alternative hypotheses were formulated: (1) exploitation mortality represents a largely additive form of mortality, or (2) exploitation mortality is compensatory with other forms of mortality, at least up to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models, and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component of the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. Assuming that exploitation is largely an additive force of mortality, optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions.
Optimal exploitation under this hypothesis tends to reduce the variance of the size of the population. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the breeding population. Environmental variables may be somewhat more important than the size of the breeding population to the production of young mallards. In contrast, the size of the breeding population appears to be more important in the exploitation process than is the state of the environment. The form of the exploitation strategy appears to be relatively insensitive to small changes in the production rate. In general, the relative importance of the size of the breeding population may decrease as fecundity increases. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, harvest rate, or designed to maintain a constant breeding population size is inefficient.
Model-Based Battery Management Systems: From Theory to Practice
NASA Astrophysics Data System (ADS)
Pathak, Manan
Lithium-ion batteries are now extensively used as a primary storage source. Capacity and power fade, and slow recharging times, are key issues that restrict their use in many applications. Battery management systems are critical to address these issues, along with ensuring safety. This dissertation focuses on exploring various control strategies using detailed physics-based electrochemical models developed previously for lithium-ion batteries, which could be used in advanced battery management systems. Optimal charging profiles for minimizing capacity fade based on SEI-layer formation are derived, and the benefits of using such control strategies are shown by experimentally testing them on a 16 Ah NMC-based pouch cell. This dissertation also explores different time-discretization strategies for non-linear models, which give an improved order of convergence for optimal control problems. Lastly, this dissertation explores a physics-based model for predicting the linear impedance of a battery and develops a freeware tool that is extremely robust and computationally fast. Such a code could be used for estimating transport, kinetic, and material properties of the battery based on the linear impedance spectra.
Lin, Mai; Ranganathan, David; Mori, Tetsuya; Hagooly, Aviv; Rossin, Raffaella; Welch, Michael J; Lapi, Suzanne E
2012-10-01
Interest in using (68)Ga is rapidly increasing for clinical PET applications due to its favorable imaging characteristics and increased accessibility. The focus of this study was to provide our long-term evaluations of the two TiO(2)-based (68)Ge/(68)Ga generators and develop an optimized automation strategy to synthesize [(68)Ga]DOTATOC by using HEPES as a buffer system. This data will be useful in standardizing the evaluation of (68)Ge/(68)Ga generators and automation strategies to comply with regulatory issues for clinical use. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhou, Daming; Al-Durra, Ahmed; Gao, Fei; Ravey, Alexandre; Matraji, Imad; Godoy Simões, Marcelo
2017-10-01
Energy management strategy plays a key role in Fuel Cell Hybrid Electric Vehicles (FCHEVs), as it directly affects the efficiency and performance of the energy storage in FCHEVs. For example, by using a suitable energy distribution controller, the fuel cell system can be maintained in a high-efficiency region, thus reducing hydrogen consumption. In this paper, an energy management strategy for online driving cycles is proposed based on a combination of the parameters from three offline-optimized fuzzy logic controllers using a data fusion approach. The fuzzy logic controllers are optimized offline for three typical driving scenarios: highway, suburban and city. To classify patterns of online driving cycles, a Probabilistic Support Vector Machine (PSVM) is used to provide probabilistic classification results. Based on the classification results of the online driving cycle, the parameters of the offline-optimized fuzzy logic controllers are then fused using Dempster-Shafer (DS) evidence theory, in order to calculate the final parameters for the online fuzzy logic controller. Three experimental validations using a Hardware-In-the-Loop (HIL) platform with different-sized FCHEVs have been performed. Experimental comparison results show that the proposed PSVM-DS based online controller can achieve relatively stable operation and a higher efficiency of the fuel cell system in real driving cycles.
NASA Astrophysics Data System (ADS)
Foronda, Augusto; Ohta, Chikara; Tamaki, Hisashi
Dirty paper coding (DPC) is a strategy to achieve the capacity region of multiple input multiple output (MIMO) downlink channels, and a DPC scheduler is throughput optimal if users are selected according to their queue states and current rates. However, DPC is difficult to implement in practical systems. One solution, the zero-forcing beamforming (ZFBF) strategy, has been proposed to achieve the same asymptotic sum rate capacity as DPC with an exhaustive search over the entire user set. Some suboptimal user group selection schedulers with reduced complexity based on the ZFBF strategy (ZFBF-SUS) and the proportional fair (PF) scheduling algorithm (PF-ZFBF) have also been proposed to enhance the throughput and fairness among the users, respectively. However, they are not throughput optimal: fairness and throughput decrease when user queue lengths differ owing to different user channel qualities. Therefore, we propose two different scheduling algorithms: a throughput optimal scheduling algorithm (ZFBF-TO) and a reduced complexity scheduling algorithm (ZFBF-RC). Both are based on the ZFBF strategy and, at every time slot, have to select some users based on user channel quality, user queue length and orthogonality among users. Moreover, the proposed algorithms produce the rate allocation and power allocation for the selected users based on a modified water-filling method. We analyze the schedulers' complexity, and numerical results show that ZFBF-RC provides throughput and fairness improvements compared to the ZFBF-SUS and PF-ZFBF scheduling algorithms.
Solving NP-Hard Problems with Physarum-Based Ant Colony System.
Liu, Yuxin; Gao, Chao; Zhang, Zili; Lu, Yuxiao; Chen, Shi; Liang, Mingxin; Tao, Li
2017-01-01
NP-hard problems exist in many real-world applications. Ant colony optimization (ACO) algorithms can provide approximate solutions for those NP-hard problems, but the performance of ACO algorithms is significantly reduced by premature convergence, weak robustness, and related issues. With these observations in mind, this paper proposes a Physarum-based pheromone matrix optimization strategy in ant colony system (ACS) for solving NP-hard problems such as the traveling salesman problem (TSP) and the 0/1 knapsack problem (0/1 KP). In the Physarum-inspired mathematical model, one of the unique characteristics is that critical tubes can be reserved in the process of network evolution. The optimized updating strategy employs this unique feature and accelerates the positive feedback process in ACS, which contributes to quick convergence to the optimal solution. Experiments were conducted using both benchmark and real datasets. The experimental results show that the optimized ACS outperforms other meta-heuristic algorithms in accuracy and robustness for solving TSPs. Meanwhile, the convergence rate and robustness for solving 0/1 KPs are better than those of classical ACS.
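The coupling of a Physarum-style conductivity update with the ACS pheromone update can be sketched in minimal form; the decay/reinforcement constants and the use of edge membership in the best tour as a flux proxy are invented for illustration, not the paper's exact model:

```python
def acs_physarum_step(pheromone, conductivity, tour, tour_len,
                      rho=0.1, lam=0.2):
    """One hybrid update: ACS-style global pheromone update on the
    best tour, plus a Physarum-style step that reinforces used
    (high-flux) edges and decays the rest."""
    n = len(pheromone)
    edges = {(tour[i], tour[(i + 1) % n]) for i in range(n)}
    for i in range(n):
        for j in range(n):
            # Physarum rule: conductivity grows with use, decays otherwise
            flux = 1.0 if (i, j) in edges or (j, i) in edges else 0.0
            conductivity[i][j] += lam * (flux - conductivity[i][j])
            # ACS global update, with the deposit biased by conductivity
            deposit = conductivity[i][j] / tour_len if flux else 0.0
            pheromone[i][j] = (1 - rho) * pheromone[i][j] + rho * deposit
```

Edges on the best tour gain both conductivity and pheromone relative to unused edges, which is the positive-feedback acceleration the abstract describes.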
An External Archive-Guided Multiobjective Particle Swarm Optimization Algorithm.
Zhu, Qingling; Lin, Qiuzhen; Chen, Weineng; Wong, Ka-Chun; Coello Coello, Carlos A; Li, Jianqiang; Chen, Jianyong; Zhang, Jun
2017-09-01
The selection of swarm leaders (i.e., the personal best and global best) is important in the design of a multiobjective particle swarm optimization (MOPSO) algorithm. Such leaders are expected to effectively guide the swarm to approach the true Pareto optimal front. In this paper, we present a novel external archive-guided MOPSO algorithm (AgMOPSO), where the leaders for the velocity update are all selected from the external archive. In our algorithm, multiobjective optimization problems (MOPs) are transformed into a set of subproblems using a decomposition approach, and each particle is assigned accordingly to optimize each subproblem. A novel archive-guided velocity update method is designed to guide the swarm in exploration, and the external archive is also evolved using an immune-based evolutionary strategy. These proposed approaches speed up the convergence of AgMOPSO. The experimental results fully demonstrate the superiority of our proposed AgMOPSO in solving most of the test problems adopted, in terms of two commonly used performance measures. Moreover, the effectiveness of our proposed archive-guided velocity update method and immune-based evolutionary strategy is also experimentally validated on more than 30 test MOPs.
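The archive-guided velocity update can be sketched roughly as follows; the coefficients and the uniform-random choice of an archive leader are illustrative assumptions, since the exact AgMOPSO update also involves decomposition-specific terms not reproduced here:

```python
import random

def archive_guided_velocity(position, velocity, pbest, archive,
                            w=0.4, c1=1.0, c2=1.0, rng=random):
    """Velocity update where the social leader is drawn from an
    external archive of non-dominated solutions rather than from
    a single global best."""
    leader = rng.choice(archive)          # archive member acts as guide
    return [w * v
            + c1 * rng.random() * (pb - x)
            + c2 * rng.random() * (ld - x)
            for x, v, pb, ld in zip(position, velocity, pbest, leader)]
```

Drawing the guide from the archive spreads the swarm across the current non-dominated set instead of collapsing it toward one leader, which is the design motivation stated in the abstract.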
Product modular design incorporating preventive maintenance issues
NASA Astrophysics Data System (ADS)
Gao, Yicong; Feng, Yixiong; Tan, Jianrong
2016-03-01
Traditional modular design methods lead to product maintenance problems, because the module form of a system is created according to either the function requirements or the manufacturing considerations. To solve these problems, a new modular design method is proposed that considers not only the traditional function-related attributes but also maintenance-related ones. First, modularity parameters and modularity scenarios for product modularity are defined. Then the reliability and economic assessment models of product modularity strategies are formulated with the introduction of the effective working age of modules. A mathematical model is used to evaluate the differences among the modules of the product so that the optimal module configuration can be established. After that, a multi-objective optimization problem based on metrics for the preventive maintenance interval difference degree and preventive maintenance economics is formulated for modular optimization. A multi-objective genetic algorithm (GA) is utilized to rapidly approximate the Pareto set of optimal modularity strategy trade-offs between preventive maintenance cost and preventive maintenance interval difference degree. Finally, a coordinate CNC boring machine is adopted to illustrate the process of product modularity. In addition, two factorial design experiments based on the modularity parameters are constructed and analyzed. These experiments investigate the impacts of the parameters on the optimal modularity strategies and the structure of modules. The research proposes a new modular design method, which may help to improve the maintainability of products in modular design.
Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G
2016-01-01
This article presents an Evolution Strategy (ES)-based algorithm designed to self-adapt its mutation operators, guiding the search through the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability to combine move operations from distinct neighborhood structures over the course of the optimization. The results gathered and reported in this article represent collective evidence of the performance of the method on challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates the ability to adapt the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework to other combinatorial optimization problems.
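A minimal sketch of the Reduced VNS idea the paper builds on (without the self-adaptive mutation machinery): shake the incumbent in the current neighborhood, accept improvements, and widen the neighborhood on failure. The toy objective and neighborhood radii below are assumptions for illustration only:

```python
import random

def reduced_vns(x0, cost, shakes, iters=200, seed=1):
    """Reduced VNS: shake in neighborhood k; accept improvements, else widen k."""
    rng = random.Random(seed)
    x, k = x0, 0
    for _ in range(iters):
        cand = shakes[k](x, rng)           # random point in the k-th neighborhood
        if cost(cand) < cost(x):
            x, k = cand, 0                 # improvement: restart from N_1
        else:
            k = (k + 1) % len(shakes)      # no improvement: next neighborhood
    return x

# Toy usage: minimize (x - 3)^2 with two perturbation strengths.
shakes = [lambda x, r: x + r.uniform(-0.5, 0.5),
          lambda x, r: x + r.uniform(-2.0, 2.0)]
best = reduced_vns(10.0, lambda x: (x - 3.0) ** 2, shakes)
```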
Daud, Muhamad Zalani; Mohamed, Azah; Hannan, M. A.
2014-01-01
This paper presents an evaluation of an optimal DC bus voltage regulation strategy for grid-connected photovoltaic (PV) system with battery energy storage (BES). The BES is connected to the PV system DC bus using a DC/DC buck-boost converter. The converter facilitates the BES power charge/discharge to compensate for the DC bus voltage deviation during severe disturbance conditions. In this way, the regulation of DC bus voltage of the PV/BES system can be enhanced as compared to the conventional regulation that is solely based on the voltage-sourced converter (VSC). For the grid side VSC (G-VSC), two control methods, namely, the voltage-mode and current-mode controls, are applied. For control parameter optimization, the simplex optimization technique is applied for the G-VSC voltage- and current-mode controls, including the BES DC/DC buck-boost converter controllers. A new set of optimized parameters are obtained for each of the power converters for comparison purposes. The PSCAD/EMTDC-based simulation case studies are presented to evaluate the performance of the proposed optimized control scheme in comparison to the conventional methods.
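The paper tunes the converter controllers with a simplex (Nelder-Mead-style) optimization; as a self-contained stand-in, the sketch below tunes PI gains on a hypothetical first-order plant with a greedy coordinate search. The plant time constant, cost function, and starting gains are all illustrative assumptions, not the paper's converter models:

```python
def step_cost(kp, ki, steps=400, dt=0.01):
    """Time-weighted absolute error of a PI loop on a first-order plant."""
    tau, y, integ, cost = 0.1, 0.0, 0.0, 0.0
    for n in range(steps):
        e = 1.0 - y                        # unit-step reference
        integ += e * dt
        u = kp * e + ki * integ            # PI control law
        y += dt * (u - y) / tau            # plant: tau * y' = u - y
        cost += (n * dt) * abs(e) * dt     # ITAE-style cost
    return cost

def coordinate_search(params, cost, step=0.5, rounds=30):
    """Greedy coordinate descent: try +/- step on each gain, shrink on failure."""
    best = cost(*params)
    for _ in range(rounds):
        improved = False
        for i in range(len(params)):
            for d in (step, -step):
                trial = list(params)
                trial[i] = max(0.0, trial[i] + d)
                c = cost(*trial)
                if c < best:
                    params, best, improved = trial, c, True
        if not improved:
            step *= 0.5                    # no move helped: refine the step
    return params, best

gains, final_cost = coordinate_search([1.0, 1.0], step_cost)
```

A simplex method explores the gain space with reflected/contracted vertices instead of axis-aligned steps, but the accept-only-improvements structure is the same.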
NASA Astrophysics Data System (ADS)
Helbing, Dirk; Schönhof, Martin; Kern, Daniel
2002-06-01
The coordinated and efficient distribution of limited resources by individual decisions is a fundamental, unsolved problem. When individuals compete for road capacities, time, space, money, goods, etc., they normally make decisions based on aggregate rather than complete information, such as TV news or stock market indices. In related experiments, we have observed volatile decision dynamics and far-from-optimal payoff distributions. We have also identified methods of information presentation that can considerably improve the overall performance of the system. In order to determine optimal strategies of decision guidance by means of user-specific recommendations, a stochastic behavioural description is developed. These strategies manage to increase the adaptability to changing conditions and to reduce the deviation from the time-dependent user equilibrium, thereby enhancing the average and individual payoffs. Hence, our guidance strategies can increase the performance of all users by reducing overreaction and stabilizing the decision dynamics. These results are highly significant for predicting decision behaviour, for reaching optimal behavioural distributions by decision support systems and for information service providers. One of the promising fields of application is traffic optimization.
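The volatile dynamics that arise when users react to aggregate information can be reproduced with a toy day-to-day route-choice model: two roads, congestion-dependent payoffs, and a logit choice rule. All parameters below are illustrative; with a strong reaction strength the road shares overreact and oscillate around the user equilibrium instead of settling on it:

```python
import math, random

def simulate(days=200, n=100, beta=4.0, seed=0):
    """Day-to-day logit route choice on two roads with congestion payoffs."""
    rng = random.Random(seed)
    share = 0.5                              # fraction of users on road 1
    history = []
    for _ in range(days):
        p1, p2 = 1.0 - share, share          # payoff falls with own-road load
        prob1 = 1.0 / (1.0 + math.exp(-beta * (p1 - p2)))
        # each user independently chooses road 1 with probability prob1
        share = sum(rng.random() < prob1 for _ in range(n)) / n
        history.append(share)
    return history

shares = simulate()   # with beta this large, shares overshoot and oscillate
```

Damping `beta` (e.g. via tailored recommendations that discourage overreaction) stabilizes the dynamics near the equilibrium share of 0.5, which is the qualitative effect the guidance strategies above aim for.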
Picheny, Victor; Trépos, Ronan; Casadebaig, Pierre
2017-01-01
Accounting for the interannual climatic variations is a well-known issue for simulation-based studies of environmental systems. It often requires intensive sampling (e.g., averaging the simulation outputs over many climatic series), which hinders many sequential processes, in particular optimization algorithms. We propose here an approach based on a subset selection in a large basis of climatic series, using an ad-hoc similarity function and clustering. A non-parametric reconstruction technique is introduced to estimate accurately the distribution of the output of interest using only the subset sampling. The proposed strategy is non-intrusive and generic (i.e. transposable to most models with climatic data inputs), and can be combined with most “off-the-shelf” optimization solvers. We apply our approach to sunflower ideotype design using the crop model SUNFLO. The underlying optimization problem is formulated as a multi-objective one to account for risk-aversion. Our approach achieves good performance even for limited computational budgets, significantly outperforming standard strategies.
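The subset selection over a large basis of climatic series can be illustrated with a greedy k-medoids-style routine; the similarity function and the scalar "series" below are placeholders for the ad-hoc similarity and clustering the authors actually use:

```python
def select_subset(series, k, dist):
    """Greedy k-medoids-style selection: repeatedly pick the series that most
    reduces the total distance from every series to its nearest representative."""
    chosen = []
    while len(chosen) < k:
        best, best_cost = None, float("inf")
        for i in range(len(series)):
            if i in chosen:
                continue
            reps = chosen + [i]
            cost = sum(min(dist(s, series[j]) for j in reps) for s in series)
            if cost < best_cost:
                best, best_cost = i, cost
        chosen.append(best)
    return chosen

# Toy usage: scalar "series" with a squared-difference dissimilarity.
data = [0.0, 0.1, 0.2, 5.0, 5.1, 9.9]
picked = select_subset(data, 2, lambda a, b: (a - b) ** 2)
```

The optimizer then averages model outputs over the `k` representatives only, instead of over the full basis, which is what makes the sequential optimization tractable.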
Sun, Yahui; Liao, Qiang; Huang, Yun; Xia, Ao; Fu, Qian; Zhu, Xun; Fu, Jingwei; Li, Jun
2018-05-01
Considering the variations of the optimal light intensity required by microalgae cells across growth phases, growth-phase light-feeding strategies were proposed and verified in this paper, aiming at boosting microalgae lipid productivity from the perspective of light condition optimization. Experimental results demonstrate that under an identical time-averaged light intensity, the light-feeding strategies characterized by stepwise incremental light intensities showed a positive effect on biomass and lipid accumulation. The lipid productivity (235.49 mg L⁻¹ d⁻¹) attained under light-feeding strategy V (time-averaged light intensity: 225 μmol m⁻² s⁻¹) was 52.38% higher than that obtained under a constant light intensity of 225 μmol m⁻² s⁻¹. Subsequently, based on light-feeding strategy V, microalgae lipid productivity was further elevated to 312.92 mg L⁻¹ d⁻¹ by employing a two-stage light-feeding strategy V 560 (time-averaged light intensity: 360 μmol m⁻² s⁻¹), which was 79.63% higher than that achieved under a constant light intensity of 360 μmol m⁻² s⁻¹.
Soler, Maria; Estevez, M.-Carmen; Alvarez, Mar; Otte, Marinus A.; Sepulveda, Borja; Lechuga, Laura M.
2014-01-01
Design of an optimal surface biofunctionalization still remains an important challenge for the application of biosensors in clinical practice and therapeutic follow-up. Optical biosensors offer real-time monitoring and highly sensitive label-free analysis, along with great potential to be transferred to portable devices. When applied in direct immunoassays, their analytical features depend strongly on the antibody immobilization strategy. A strategy for correct immobilization of antibodies based on the use of ProLinker™ has been evaluated and optimized in terms of sensitivity, selectivity, stability and reproducibility. Special effort has been focused on avoiding antibody manipulation, preventing nonspecific adsorption and obtaining a robust biosurface with regeneration capabilities. The ProLinker™-based approach has been shown to fulfill these crucial requirements and, in combination with PEG-derivative compounds, has shown encouraging results for direct detection in biological fluids, such as pure urine or diluted serum. Furthermore, we have implemented the ProLinker™ strategy in a novel nanoplasmonic biosensor, with promising advantages for its application in clinical and biomedical diagnosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Qifang; Wang, Fei; Hodge, Bri-Mathias
A real-time price (RTP)-based automatic demand response (ADR) strategy for a PV-assisted electric vehicle charging station (PVCS) without vehicle-to-grid is proposed. The charging process is modeled as a dynamic linear program instead of the normal day-ahead and real-time regulation strategy, to capture the advantages of both global and real-time optimization. Different from conventional price forecasting algorithms, a dynamic price vector formation model is proposed based on a clustering algorithm to form an RTP vector for a particular day. A dynamic feasible energy demand region (DFEDR) model considering grid voltage profiles is designed to calculate the lower and upper bounds. A deduction method is proposed to deal with the unknown information of future intervals, such as the actual stochastic arrival and departure times of EVs, which makes the DFEDR model suitable for global optimization. Finally, comparative cases articulate the advantages of the developed methods, and the validity of the proposed strategy in reducing electricity costs, mitigating peak charging demand, and improving PV self-consumption is verified through simulation scenarios.
Optimization Strategies for Hardware-Based Cofactorization
NASA Astrophysics Data System (ADS)
Loebenberger, Daniel; Putzka, Jens
We use the specific structure of the inputs to the cofactorization step in the general number field sieve (GNFS) in order to optimize the runtime for the cofactorization step on a hardware cluster. An optimal distribution of bitlength-specific ECM modules is proposed and compared to existing ones. With our optimizations we obtain a speedup between 17% and 33% of the cofactorization step of the GNFS when compared to the runtime of an unoptimized cluster.
Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan
2016-01-01
We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework. The safety-first approach to portfolio selection is modified and improved. Our proposed models are subjected to norm regularization to seek near-optimal, stable, and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, the S&P 500 index, over the same period. Our proposed portfolio strategies have better out-of-sample performance than the selected alternative portfolio rules in the literature and control the downside risk of the portfolio returns.
Modelling and operation strategies of DLR's large scale thermocline test facility (TESIS)
NASA Astrophysics Data System (ADS)
Odenthal, Christian; Breidenbach, Nils; Bauer, Thomas
2017-06-01
In this work an overview of the TESIS:store thermocline test facility and its current construction status is given. Based on this, the TESIS:store facility using sensible solid filler material is modelled with a fully transient model implemented in MATLAB®. Results in terms of the impact of filler size and operation strategies are presented. While low porosity and small particle diameters for the filler material are beneficial, the operation strategy is one key element with potential for optimization. It is shown that plant operators have to weigh utilization against exergetic efficiency. Different durations of the charging and discharging periods offer further potential for optimization.
An Elitist Multiobjective Tabu Search for Optimal Design of Groundwater Remediation Systems.
Yang, Yun; Wu, Jianfeng; Wang, Jinguo; Zhou, Zhifang
2017-11-01
This study presents a new multiobjective evolutionary algorithm (MOEA), the elitist multiobjective tabu search (EMOTS), and incorporates it with MODFLOW/MT3DMS to develop a groundwater simulation-optimization (SO) framework based on modular design for the optimal design of groundwater remediation systems using the pump-and-treat (PAT) technique. The most notable improvements of EMOTS over the original multiple objective tabu search (MOTS) lie in the elitist strategy, the selection strategy, and the neighborhood move rule. The elitist strategy maintains all nondominated solutions throughout the later search process for better convergence to the true Pareto front. The elitism-based selection operator is modified to choose the two most remote solutions from the current candidate list as seed solutions, to increase the diversity of the searched space. Moreover, neighborhood solutions are uniformly generated using Latin hypercube sampling (LHS) in the bounded neighborhood space around each seed solution. To demonstrate the performance of the EMOTS, we consider a synthetic groundwater remediation example. Problem formulations consist of two objective functions with continuous decision variables of pumping rates while meeting water quality requirements. In particular, a sensitivity analysis is carried out on the synthetic case to determine the optimal combination of the heuristic parameters. Furthermore, the EMOTS is successfully applied to evaluate remediation options at the field site of the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. With both the hypothetical and the large-scale field remediation sites, the EMOTS-based SO framework is demonstrated to outperform the original MOTS in achieving the performance metrics of optimality and diversity of nondominated frontiers with desirable stability and robustness.
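The LHS-based neighborhood generation can be sketched directly: one stratified sample per dimension, shuffled so that each neighbor draws from a different stratum in every dimension. The box-shaped neighborhood and the radii below are assumptions for illustration:

```python
import random

def lhs_neighborhood(seed_sol, radius, n, seed=42):
    """Latin hypercube sample of n neighbors in a box around a seed solution."""
    rng = random.Random(seed)
    cols = []
    for j in range(len(seed_sol)):
        # one sample per stratum of [0, 1), then shuffle strata across neighbors
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        cols.append([seed_sol[j] - radius[j] + 2.0 * radius[j] * u
                     for u in strata])
    return [[col[i] for col in cols] for i in range(n)]

# Toy usage: 8 neighbors around a two-variable pumping-rate solution.
neighbors = lhs_neighborhood([5.0, 10.0], [1.0, 2.0], 8)
```

Compared with purely random moves, stratification guarantees the neighbors cover the whole box in every coordinate, which supports the diversity goal stated above.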
NASA Astrophysics Data System (ADS)
Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David
2017-04-01
We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speed up calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control over the evolution path, suppression of departures from the truncated model subspace, as well as minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
Zhan, Tiannan; Ali, Ayman; Choi, Jin G; Lee, Minyi; Leung, John; Dellon, Evan S; Garber, John J; Hur, Chin
2018-05-03
Elimination diets are effective treatments for eosinophilic esophagitis (EoE), but foods that activate esophagitis are identified empirically, via a process that involves multiple esophagogastroduodenoscopies (EGDs). No optimized approach has been developed to identify foods that activate EoE. We aimed to compare clinical strategies to provide data to guide treatment. We developed a computer-based simulation model to determine the optimal empiric elimination strategy based on reported prevalence values for foods that activate EoE. These were identified in a systematic review, searching PubMed through October 1, 2017 for prospective and retrospective studies of EoE and diet. Each patient in our virtual cohort was assigned a profile comprising as many as 12 foods known to induce EoE, including dairy, wheat, eggs, soy, nuts, seafood, beef, corn, chicken, potato, pork, and/or rice. To balance the strategy success rate with the number of EGDs required for food identification, we applied an efficiency frontier approach. Strategies on the frontier were the most efficient, requiring fewer EGDs for higher or equivalent success rates relative to their comparable, neighboring strategies. In all simulations, we found the 1,4,8-food and 1,3-food strategies to be the most efficient in identifying foods that induce EoE, resulting in the highest rate of correct identification of food triggers balanced against the number of EGDs required to complete the food elimination strategy. Both strategies begin with elimination of dairy; if EoE remission is not achieved, the 1,3 diet proceeds to eliminate wheat and eggs in addition to dairy, and the 1,4,8 strategy removes wheat, eggs, dairy, and soy. In the case of persistent EoE after the second round of food elimination, the 1,3-food strategy terminates, whereas the 1,4,8-food diet eliminates corn, chicken, beef, and pork.
The 1,4,8-food strategy resulted in correct identification of foods that activated esophagitis in 76.68% of patients, with a mean of 4.13 EGDs and a median of 6 EGDs. The 1,3-food strategy identified foods that activated esophagitis in 42.76% of patients, with a mean of 3.36 EGDs and a median of 2 EGDs required. In a modeling analysis, we found the 1,4,8-food and 1,3-food elimination strategies to be the most efficient in detecting foods that induce EoE in patients; the 1,4,8-food strategy was optimal, requiring a mean of only 4.13 EGDs for food identification. However, the ideal elimination strategy will vary based on clinical priorities. Additional research on the specific foods that induce EoE is needed to confirm the predictions of this model.
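A drastically simplified, hypothetical version of such a stepped elimination simulation (two rounds only, deterministic remission, one EGD per elimination round and per reintroduced food) might look like the following; it is not the authors' model, whose food list, prevalence values, and EGD accounting are richer:

```python
def run_stepped(triggers):
    """Hypothetical two-round elimination: round 1 removes dairy; round 2
    removes dairy + wheat + egg + soy. Returns (identified, number_of_EGDs).
    One baseline EGD, one EGD per elimination round, one per reintroduction."""
    egds = 1                                         # baseline EGD
    if triggers == {"dairy"}:
        return True, egds + 1                        # remission after round 1
    egds += 1                                        # round-1 EGD, no remission
    if triggers <= {"dairy", "wheat", "egg", "soy"}:
        return True, egds + len(triggers)            # round 2 + reintroductions
    return False, egds + 1                           # round-2 EGD, still active
```

Running such a function over a virtual cohort whose trigger profiles are drawn from reported prevalences yields the success rates and EGD counts that the efficiency frontier analysis compares across strategies.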
Using Cotton Model Simulations to Estimate Optimally Profitable Irrigation Strategies
NASA Astrophysics Data System (ADS)
Mauget, S. A.; Leiker, G.; Sapkota, P.; Johnson, J.; Maas, S.
2011-12-01
In recent decades irrigation pumping from the Ogallala Aquifer has led to declines in saturated thickness that have not been compensated for by natural recharge, which has led to questions about the long-term viability of agriculture in the cotton producing areas of west Texas. Adopting irrigation management strategies that optimize profitability while reducing irrigation waste is one way of conserving the aquifer's water resource. Here, a database of modeled cotton yields generated under drip and center pivot irrigated and dryland production scenarios is used in a stochastic dominance analysis that identifies such strategies under varying commodity price and pumping cost conditions. This database and analysis approach will serve as the foundation for a web-based decision support tool that will help producers identify optimal irrigation treatments under specified cotton price, electricity cost, and depth to water table conditions.
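The stochastic dominance analysis mentioned above can be illustrated with a first-order dominance check on two empirical yield distributions; the yield numbers below are hypothetical, not outputs of the cotton model:

```python
def fosd(a, b):
    """First-order stochastic dominance of empirical sample a over b:
    F_a(x) <= F_b(x) at every point, i.e. a is never more likely to fall short."""
    xs = sorted(set(a) | set(b))
    cdf = lambda sample, x: sum(v <= x for v in sample) / len(sample)
    return all(cdf(a, x) <= cdf(b, x) for x in xs)

drip = [900, 950, 1000, 1100]    # hypothetical profit outcomes per treatment
dryland = [400, 500, 600, 900]
dominates = fosd(drip, dryland)
```

In practice the analysis is run on profit distributions (yield times price minus pumping cost), so the dominance ranking of irrigation treatments shifts as commodity prices and pumping costs change.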
Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu
2018-01-01
Current retinal prostheses can only generate low-resolution visual percepts constituted of limited phosphenes which are elicited by an electrode array, with uncontrollable color and restricted grayscale. Under this visual perception, prosthetic recipients can complete only simple visual tasks; more complex tasks like face identification and object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive-iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Psychophysical experiments verified that under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was affected positively by paired-interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density implant.
Lightweight design of automobile frame based on magnesium alloy
NASA Astrophysics Data System (ADS)
Lyu, R.; Jiang, X.; Minoru, O.; Ju, D. Y.
2018-06-01
The structural performance and lightweighting of a car base frame design is a challenging task due to all the performance targets that must be satisfied. In this paper, a materials-replacement strategy over three materials (iron, aluminum, and magnesium alloy), combined with cross-section design optimization, is proposed to develop a lightweight car frame structure that satisfies tensile and safety requirements while reducing weight. Two kinds of cross-sections are considered as the design variables. Using the ANSYS static structural module, the design optimization problem is solved; by comparing the results of each step, the structure of the base frame is optimized for light weight.
Optimal scan strategy for mega-pixel and kilo-gray-level OLED-on-silicon microdisplay.
Ji, Yuan; Ran, Feng; Ji, Weigui; Xu, Meihua; Chen, Zhangjing; Jiang, Yuxi; Shen, Weixin
2012-06-10
The digital pixel driving scheme makes organic light-emitting diode (OLED) microdisplays more immune to pixel luminance variations and simplifies the circuit architecture and design flow compared to the analog pixel driving scheme. Additionally, it is easily applied in full digital systems. However, the data bottleneck becomes a notable problem as the number of pixels and gray levels grows dramatically. This paper discusses the ability of digital driving to achieve kilo-gray levels for mega-pixel displays. The optimal scan strategy is proposed for creating ultra-high gray levels and increasing light efficiency and contrast ratio. Two correction schemes are discussed to improve the gray level linearity. A 1280×1024×3 OLED-on-silicon microdisplay, with 4096 gray levels, is designed based on the optimal scan strategy. The circuit driver is integrated in the silicon backplane chip in the 0.35 μm 3.3 V-6 V dual voltage, one polysilicon layer, four metal layers (1P4M) complementary metal-oxide semiconductor (CMOS) process with custom top metal. The design aspects of the optimal scan controller are also discussed. The test results show the gray level linearity of the correction schemes for the optimal scan strategy is acceptable to the human eye.
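The grayscale arithmetic behind the digital driving scheme can be sketched: with binary-weighted subframe durations, a 12-bit drive yields the 4096 gray levels mentioned above. This shows the plain binary weighting only, not the paper's optimal scan ordering or correction schemes:

```python
def luminance(gray, bits):
    """Time-averaged luminance of a digitally driven pixel: each bit of the
    gray code gates a subframe whose duration is binary-weighted (1, 2, 4, ...),
    so luminance is proportional to the code value."""
    weights = [2 ** b for b in range(bits)]          # subframe durations
    on = [(gray >> b) & 1 for b in range(bits)]      # pixel on/off per subframe
    return sum(o * w for o, w in zip(on, weights)) / (2 ** bits - 1)

levels = 2 ** 12   # 12-bit drive -> 4096 gray levels, as in the microdisplay
```

The data bottleneck noted above follows from this scheme: every extra bit of grayscale adds another full-frame write per refresh, which the optimal scan strategy reorders to keep within the available bandwidth.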
Discrete homotopy analysis for optimal trading execution with nonlinear transient market impact
NASA Astrophysics Data System (ADS)
Curato, Gianbiagio; Gatheral, Jim; Lillo, Fabrizio
2016-10-01
Optimal execution in financial markets is the problem of how to trade a large quantity of shares incrementally in time in order to minimize the expected cost. In this paper, we study the problem of optimal execution in the presence of nonlinear transient market impact. Mathematically, this problem is equivalent to solving a strongly nonlinear integral equation, which in our model is a weakly singular Urysohn equation of the first kind. We propose an approach based on the Homotopy Analysis Method (HAM), whereby a well-behaved initial trading strategy is continuously deformed to lower the expected execution cost. Specifically, we propose a discrete version of the HAM, i.e. the DHAM approach, in order to use the method when the integrals to compute have no closed-form solution. We find that the optimal solution is front-loaded for concave instantaneous impact even when the investor is risk neutral. More importantly, we find that the expected cost of the DHAM strategy is significantly smaller than the cost of conventional strategies.
NASA Astrophysics Data System (ADS)
Xu, Chuanpei; Niu, Junhao; Ling, Jing; Wang, Suyan
2018-03-01
In this paper, we present a parallel test strategy for bandwidth division multiplexing under the test access mechanism bandwidth constraint. The Pareto solution set is combined with a cloud evolutionary algorithm to optimize the test time and power consumption of a three-dimensional network-on-chip (3D NoC). In the proposed method, all individuals in the population are sorted in non-dominated order and allocated to the corresponding level. Individuals with extreme and similar characteristics are then removed. To increase the diversity of the population and prevent the algorithm from becoming stuck around local optima, a competition strategy is designed for the individuals. Finally, we adopt an elite reservation strategy and update the individuals according to the cloud model. Experimental results show that the proposed algorithm converges to the optimal Pareto solution set rapidly and accurately. This not only obtains the shortest test time, but also optimizes the power consumption of the 3D NoC.
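The non-dominated ordering used to rank the population can be sketched as a simple repeated Pareto-front peeling (for minimization of test time and power consumption; the objective vectors below are illustrative):

```python
def non_dominated_sort(pop):
    """Sort objective vectors (minimization) into successive Pareto fronts."""
    dominates = lambda p, q: all(a <= b for a, b in zip(p, q)) and p != q
    fronts, remaining = [], list(range(len(pop)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(pop[j], pop[i]) for j in remaining)]
        fronts.append(front)                  # peel off the current front
        remaining = [i for i in remaining if i not in front]
    return fronts

# Toy usage: (test time, power) pairs for five candidate test schedules.
pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
fronts = non_dominated_sort(pts)
```

Production MOEAs use the O(MN²) fast non-dominated sort, but the level structure it produces, which the abstract's individual-allocation and elite-reservation steps operate on, is the same.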
NASA Astrophysics Data System (ADS)
Shafii, Mahyar; Tolson, Bryan; Shawn Matott, L.
2015-04-01
GLUE is one of the most commonly used informal methodologies for uncertainty estimation in hydrological modelling. Despite the ease-of-use of GLUE, it involves a number of subjective decisions, such as the strategy for identifying the behavioural solutions. This study evaluates the impact of behavioural solution identification strategies in GLUE on the quality of model output uncertainty. Moreover, two new strategies are developed to objectively identify behavioural solutions. The first strategy considers Pareto-based ranking of parameter sets, while the second ranks the parameter sets according to an aggregated criterion. The proposed strategies, as well as the traditional strategies in the literature, are evaluated with respect to reliability (coverage of observations by the envelope of model outcomes) and sharpness (width of the envelope of model outcomes) in different numerical experiments. These experiments include multi-criteria calibration and uncertainty estimation of three rainfall-runoff models with different numbers of parameters. To demonstrate the importance of the behavioural solution identification strategy more appropriately, GLUE is also compared with two other informal multi-criteria calibration and uncertainty estimation methods (Pareto optimization and DDS-AU). The results show that the model output uncertainty varies with the behavioural solution identification strategy, and furthermore, a robust GLUE implementation would require considering multiple behavioural solution identification strategies and choosing the one that generates the desired balance between sharpness and reliability. The proposed objective strategies prove to be the best options in most of the case studies investigated in this research. Implementing such an approach for a high-dimensional calibration problem enables GLUE to generate robust results in comparison with Pareto optimization and DDS-AU.
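The core GLUE step, keeping simulations whose likelihood passes a behavioural threshold and reporting the envelope of their outputs, can be sketched as below. Reliability is then the fraction of observations inside the envelope and sharpness its width; the simple threshold rule shown here is just one of the identification strategies the study compares:

```python
def glue_envelope(simulations, likelihoods, threshold):
    """Keep 'behavioural' simulations (likelihood >= threshold) and return the
    (min, max) envelope of their outputs at every time step."""
    kept = [s for s, l in zip(simulations, likelihoods) if l >= threshold]
    return [(min(v), max(v)) for v in zip(*kept)]

# Three candidate model runs over two time steps; the third is non-behavioural.
sims = [[1.0, 2.0], [1.5, 2.5], [9.0, 9.0]]
env = glue_envelope(sims, [0.8, 0.6, 0.1], threshold=0.5)
```

Raising the threshold narrows the envelope (better sharpness) but risks excluding observations (worse reliability), which is exactly the trade-off the study asks the modeller to balance.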
NASA Astrophysics Data System (ADS)
Borhan, Hoseinali
Modern hybrid electric vehicles and many stationary renewable power generation systems combine multiple power generating and energy storage devices to achieve an overall system-level efficiency and flexibility which is higher than that of their individual components. The power or energy management control, the "brain" of these "hybrid" systems, adaptively determines, based on the power demand, the power split between multiple subsystems and plays a critical role in overall system-level efficiency. This dissertation proposes that a receding horizon optimal control (aka Model Predictive Control) approach is a natural and systematic framework for formulating this type of power management control. More importantly, the dissertation develops new results based on the classical theory of optimal control that allow solving the resulting optimal control problem in real-time, in spite of the complexities that arise due to several system nonlinearities and constraints. The dissertation focuses on two classes of hybrid systems: hybrid electric vehicles in the first part and wind farms with battery storage in the second part. The first part of the dissertation proposes and fully develops a real-time optimization-based power management strategy for hybrid electric vehicles. Current industry practice uses rule-based control techniques with "if-then-else" logic and look-up maps and tables in the power management of production hybrid vehicles. These algorithms are not guaranteed to result in the best possible fuel economy, and there exists a gap between their performance and a minimum possible fuel economy benchmark. Furthermore, considerable time and effort are spent calibrating the control system in the vehicle development phase, and there is little flexibility in real-time handling of constraints and re-optimization of the system operation in the event of changing operating conditions and varying parameters.
In addition, a proliferation of different powertrain configurations may result in the need for repeated control system redesign. To address these shortcomings, we formulate the power management problem as a nonlinear and constrained optimal control problem. Solution of this optimal control problem in real-time on chronometric- and memory-constrained automotive microcontrollers is quite challenging; this computational complexity is due to the highly nonlinear dynamics of the powertrain subsystems, mixed-integer switching modes of their operation, and time-varying and nonlinear hard constraints that system variables should satisfy. The main contribution of the first part of the dissertation is that it establishes methods for systematic and step-by-step improvements in fuel economy while maintaining the algorithmic computational requirements in a real-time implementable framework. More specifically, a linear time-varying model predictive control approach is employed first, which uses sequential quadratic programming to find sub-optimal solutions to the power management problem. Next, the objective function is further refined and broken into a short and a long horizon segment; the latter is approximated as a function of the state using the connection between the Pontryagin minimum principle and Hamilton-Jacobi-Bellman equations. The power management problem is then solved using a nonlinear MPC framework with a dynamic programming solver, and the fuel economy is further improved. Typical simplifying academic assumptions are minimal throughout this work, thanks to close collaboration with research scientists at Ford research labs and their stringent requirement that the proposed solutions be tested on high-fidelity production models. Simulation results on a high-fidelity model of a hybrid electric vehicle over multiple standard driving cycles reveal the potential for substantial fuel economy gains.
To address the control calibration challenges, we also present a novel and fast calibration technique utilizing parallel computing. The second part of this dissertation presents an optimization-based control strategy for the power management of a wind farm with battery storage. The strategy seeks to minimize the error between the power delivered by the wind farm with battery storage and the power demanded by an operator, while also attempting to maximize battery life. The control strategy has two main stages. The first stage produces a family of control solutions that minimize the power error subject to the battery constraints over an optimization horizon. These solutions are parameterized by a given value for the state of charge at the end of the optimization horizon. The second stage screens the family of control solutions to select the one attaining an optimal balance between power error and battery life. The battery life model used in this stage is a weighted Amp-hour (Ah) throughput model. The control strategy is modular, allowing for more sophisticated optimization models in the first stage or more elaborate battery life models in the second stage. The strategy is implemented in real time in the framework of Model Predictive Control (MPC).
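The core of the first-stage problem above — track a demanded power with a battery of limited rating and state of charge — can be sketched as a simple per-step dispatch. This is a minimal illustration, not the dissertation's MPC formulation; all numbers (capacity, power rating, demand profile) are hypothetical.

```python
import numpy as np

def track_demand(wind, demand, soc0, capacity, p_max, dt=1.0):
    """Greedy one-step battery dispatch: at each step the battery absorbs
    the mismatch between demand and wind power, subject to its power
    rating and state-of-charge limits (a stand-in for the constrained
    optimization over a horizon described above)."""
    soc = soc0
    delivered = []
    for w, d in zip(wind, demand):
        p_batt = np.clip(d - w, -p_max, p_max)              # discharge > 0
        p_batt = np.clip(p_batt, (soc - capacity) / dt, soc / dt)  # SOC limits
        soc -= p_batt * dt
        delivered.append(w + p_batt)
    return np.array(delivered), soc

wind = np.array([5.0, 5.0])      # hypothetical wind power output
demand = np.array([7.0, 3.0])    # hypothetical operator demand
delivered, soc = track_demand(wind, demand, soc0=10.0, capacity=20.0, p_max=2.0)
```

When the mismatch stays within the battery's power and energy limits, as here, the demand is tracked exactly; the receding-horizon formulation additionally steers the terminal state of charge.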
De Groote, Friedl; Jonkers, Ilse; Duysens, Jacques
2014-01-01
Finding the muscle activity that generates a given motion is a redundant problem, since there are many more muscles than degrees of freedom. The control strategies determining muscle recruitment from a redundant set are still poorly understood. One theory of motor control suggests that motion is produced by activating a small number of muscle synergies, i.e., muscle groups that are activated in a fixed ratio by a single input signal. Because of the reduced number of input signals, synergy-based control is low dimensional. A major criticism of the theory of synergy-based muscle control, however, is that muscle synergies might reflect task constraints rather than a neural control strategy. Another theory of motor control suggests that muscles are recruited by optimizing performance. Optimization of performance has been widely used to calculate the muscle recruitment underlying a given motion while assuming independent recruitment of muscles. If synergies indeed determine muscle recruitment underlying a given motion, optimization approaches that do not model synergy-based control could result in muscle activations that do not show the synergistic muscle action observed through electromyography (EMG). If, however, synergistic muscle action results from performance optimization and task constraints (joint kinematics and external forces), such optimization approaches are expected to result in low-dimensional synergistic muscle activations that are similar to EMG-based synergies. We calculated the muscle recruitment underlying experimentally measured gait patterns by optimizing performance while assuming independent recruitment of muscles. We found that the muscle activations calculated without any reference to synergies can be accurately explained by, on average, four synergies. These synergies are similar to EMG-based synergies. We therefore conclude that task constraints and performance optimization explain synergistic muscle recruitment from a redundant set of muscles.
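Muscle synergies are conventionally extracted with non-negative matrix factorization (NMF), which explains a muscle-by-time activation matrix as a small number of fixed muscle groupings times their time-varying inputs. The sketch below uses the standard multiplicative-update rules on synthetic data with four underlying synergies; the dimensions and data are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(A, k, iters=1000):
    """Multiplicative-update NMF: A (muscles x time) is approximated by
    W (muscles x k synergy vectors) @ H (k x time activation signals)."""
    m, n = A.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + 1e-12)
        W *= (A @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Synthetic check: 12 "muscles" driven by 4 underlying synergies.
W_true = rng.random((12, 4))
H_true = rng.random((4, 200))
A = W_true @ H_true
W, H = nmf(A, 4)
err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
```

Because the synthetic data really are rank four and non-negative, four factors reconstruct the matrix almost exactly — the same criterion (variance accounted for by few synergies) used to assess low dimensionality in the paper.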
NASA Technical Reports Server (NTRS)
Kerstman, Eric; Saile, Lynn; Freire de Carvalho, Mary; Myers, Jerry; Walton, Marlei; Butler, Douglas; Lopez, Vilma
2011-01-01
Introduction The Integrated Medical Model (IMM) is a decision support tool that is useful to space flight mission managers and medical system designers in assessing risks and optimizing medical systems. The IMM employs an evidence-based, probabilistic risk assessment (PRA) approach within the operational constraints of space flight. Methods Stochastic computational methods are used to forecast probability distributions of medical events, crew health metrics, medical resource utilization, and probability estimates of medical evacuation and loss of crew life. The IMM can also optimize medical kits within the constraints of mass and volume for specified missions. The IMM was used to forecast medical evacuation and loss of crew life probabilities, as well as crew health metrics for a near-earth asteroid (NEA) mission. An optimized medical kit for this mission was proposed based on the IMM simulation. Discussion The IMM can provide information to the space program regarding medical risks, including crew medical impairment, medical evacuation and loss of crew life. This information is valuable to mission managers and the space medicine community in assessing risk and developing mitigation strategies. Exploration missions such as NEA missions will have significant mass and volume constraints applied to the medical system. Appropriate allocation of medical resources will be critical to mission success. The IMM capability of optimizing medical systems based on specific crew and mission profiles will be advantageous to medical system designers. Conclusion The IMM is a decision support tool that can provide estimates of the impact of medical events on human space flight missions, such as crew impairment, evacuation, and loss of crew life. It can be used to support the development of mitigation strategies and to propose optimized medical systems for specified space flight missions. 
Learning Objectives The audience will learn how an evidence-based decision support tool can be used to help assess risk, develop mitigation strategies, and optimize medical systems for exploration space flight missions.
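The stochastic forecasting idea behind the IMM can be sketched as a Monte Carlo loop over medical events. Everything below — the event list, incidence probabilities, and evacuation odds — is invented for illustration; the real IMM draws these from an evidence database and models far more detail.

```python
import random

random.seed(1)

# Hypothetical per-mission event probabilities and, given an event,
# the probability it forces an evacuation (illustrative numbers only).
events = {
    "renal_stone":  (0.02, 0.30),
    "appendicitis": (0.005, 0.80),
}

def simulate_mission():
    """Return True if the simulated mission requires medical evacuation."""
    for p_event, p_evac in events.values():
        if random.random() < p_event and random.random() < p_evac:
            return True
    return False

n = 100_000
p_evac = sum(simulate_mission() for _ in range(n)) / n   # ~0.01 here
```

Repeating the mission many times turns per-event evidence into a probability distribution over mission outcomes, which is the quantity a mission manager can trade off against medical kit mass and volume.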
NASA Astrophysics Data System (ADS)
Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali
2017-09-01
Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, AFC data are utilized to analyze and extract mobility patterns in a public transportation system. For this purpose, the smart card data are fed into a proposed metaheuristic-based aggregation model and then converted into an O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) for estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moths in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, the gray wolf optimization algorithm (GWO), and the genetic algorithm (GA). The sum of the intra-cluster distances and the computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers, and the optimality of the solutions of the different algorithms is measured in detail. The travelers' behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy outperforms the other evaluated approaches in terms of convergence tendency and optimality of the results, and that it can be utilized as an efficient approach to estimating transit O-D matrices.
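The K-means baseline used for comparison, together with the paper's evaluation criterion (the sum of intra-cluster distances), can be sketched in a few lines. The stop coordinates below are random stand-ins for smart-card stop locations.

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(points, k, iters=50):
    """Plain Lloyd's k-means for aggregating stops; returns the centers,
    labels, and the sum of intra-cluster distances, which is the
    evaluation criterion the metaheuristics are compared on."""
    centers = points[rng.choice(len(points), k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    intra = np.linalg.norm(points - centers[labels], axis=1).sum()
    return centers, labels, intra

stops = rng.random((200, 2)) * 10.0      # hypothetical stop coordinates (km)
centers, labels, intra = kmeans(stops, 5)
```

A swarm metaheuristic such as MFO attacks the same objective — minimize the intra-cluster distance sum — but searches over center placements globally rather than by Lloyd's local alternation.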
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2015-10-01
Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stages, physically based distributed hydrological models were assumed to derive their parameters directly from terrain properties, so that no parameter calibration would be needed; unfortunately, the uncertainty associated with this parameter derivation is very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: first, to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the PSO algorithm, to test its competence, and to improve its performance; second, to explore the possibility of improving the catchment flood forecasting capability of physically based distributed hydrological models through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model — a physically based distributed hydrological model developed for catchment flood forecasting — as the study model, an improved Particle Swarm Optimization (PSO) algorithm is developed for its parameter optimization. The improvements include adopting a linearly decreasing inertia weight and an arccosine-function strategy for adjusting the acceleration coefficients.
This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can largely improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
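The two PSO modifications described above can be sketched concretely. The linear inertia-weight schedule and the arccosine-shaped acceleration-coefficient schedule below are one plausible form of those adjustments — the paper's exact formulas and bounds may differ — and the test function is a generic sphere, not the Liuxihe model objective. The swarm size (20) and iteration count (30) match the values reported in the abstract.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def improved_pso(f, dim, n=20, iters=30, lo=-5.0, hi=5.0):
    """PSO with a linearly decreasing inertia weight and arccosine-based
    acceleration coefficients (an illustrative reconstruction of the
    improvements described in the abstract)."""
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        w = 0.9 - 0.5 * t / (iters - 1)                    # linear decrease 0.9 -> 0.4
        s = math.acos(1 - 2 * t / (iters - 1)) / math.pi   # 0 -> 1 along an arccos curve
        c1, c2 = 2.5 - s, 1.5 + s                          # cognitive shrinks, social grows
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

best, val = improved_pso(lambda p: float(np.sum(p**2)), dim=4)
```

The nonlinear (arccos-shaped) schedule changes the coefficients slowly at the start and end of the run and quickly in the middle, favoring exploration early and exploitation late.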
Chatterjee, Arnab K; Yeung, Bryan KS
2012-01-01
Antimalarial drug discovery has historically benefited from the whole-cell (phenotypic) screening approach to identify lead molecules in the search for new drugs. Over the past two decades, however, there has been a shift in the pharmaceutical industry away from whole-cell screening toward target-based approaches. As part of a Wellcome Trust and Medicines for Malaria Venture (MMV) funded consortium to discover new blood-stage antimalarials, we used both approaches to identify new antimalarial chemotypes, two of which have progressed beyond the lead optimization phase and display excellent in vivo efficacy in mice. These two advanced series were identified through a cell-based optimization devoid of target information, and in this review we summarize the advantages of this approach versus a target-based optimization. Although each lead optimization required slightly different medicinal chemistry strategies, we observed some common issues across the different scaffolds that could be applied to other cell-based lead optimization programs. PMID:22242845
Zhou, Hui Jun; Dan, Yock Young; Naidoo, Nasheen; Li, Shu Chuen; Yeoh, Khay Guan
2013-01-01
Gastric cancer (GC) surveillance based on oesophagogastroduodenoscopy (OGD) appears to be a promising strategy for GC prevention. By evaluating the cost-effectiveness of endoscopic surveillance in Singaporean Chinese, this study aimed to inform the implementation of such a program in a population with a low to intermediate GC risk. Using a reference strategy of no OGD intervention, we evaluated four strategies: 2-yearly OGD surveillance, annual OGD surveillance, 2-yearly OGD screening, and 2-yearly screening plus annual surveillance in Singaporean Chinese aged 50-69 years. From the perspective of the healthcare system, Markov models were built to simulate the life experience of the target population. The models projected discounted lifetime costs ($), quality-adjusted life years (QALYs), and the incremental cost-effectiveness ratio (ICER), indicating the cost-effectiveness of each strategy against a Singapore willingness-to-pay of $46,200/QALY. Deterministic and probabilistic sensitivity analyses were used to identify the influential variables and their associated thresholds, and to quantify the influence of parameter uncertainties, respectively. With an ICER of $44,098/QALY, the annual OGD surveillance was the optimal strategy, while the 2-yearly surveillance was the most cost-effective strategy (ICER = $25,949/QALY). The screening-based strategies were either extendedly dominated or cost-ineffective. Cost-effectiveness heterogeneity of the four strategies was observed across age-gender subgroups. Eight influential parameters were identified, each with specific thresholds that define the choice of optimal strategy. Accounting for the model uncertainties, the probability that the annual surveillance is the optimal strategy in Singapore was 44.5%. Endoscopic surveillance is potentially cost-effective in the prevention of GC for populations at low to intermediate risk.
Regarding program implementation, a detailed analysis of influential factors and their associated thresholds is necessary. Multiple strategies should be considered in order to recommend the right strategy for the right population.
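The Markov-model/ICER machinery used in such analyses can be sketched as a small cohort simulation: follow state occupancy through a transition matrix, accumulate discounted costs and QALYs, and compare two strategies. All states, probabilities, costs, and utilities below are invented for illustration; they are not the study's calibrated inputs.

```python
import numpy as np

def run_cohort(p_well_cancer, cost_per_cycle, cycles=20, disc=0.03):
    """Hypothetical 3-state Markov cohort (well, cancer, dead): returns
    discounted total cost and QALYs for one strategy."""
    P = np.array([[1 - p_well_cancer - 0.01, p_well_cancer, 0.01],
                  [0.0, 0.7, 0.3],
                  [0.0, 0.0, 1.0]])
    state = np.array([1.0, 0.0, 0.0])     # whole cohort starts well
    utility = np.array([1.0, 0.6, 0.0])   # QALY weight per state
    cost, qaly = 0.0, 0.0
    for t in range(cycles):
        d = (1 + disc) ** -t              # discount factor for cycle t
        cost += d * cost_per_cycle * (state[0] + state[1])  # alive fraction pays
        qaly += d * float(state @ utility)
        state = state @ P
    return cost, qaly

# Surveillance lowers cancer incidence but costs more per cycle.
c0, q0 = run_cohort(p_well_cancer=0.02, cost_per_cycle=100)
c1, q1 = run_cohort(p_well_cancer=0.01, cost_per_cycle=400)
icer = (c1 - c0) / (q1 - q0)   # incremental cost per QALY gained
```

A strategy is then judged cost-effective if its ICER falls below the willingness-to-pay threshold (in the study, $46,200/QALY).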
Opinion control in complex networks
NASA Astrophysics Data System (ADS)
Masuda, Naoki
2015-03-01
In many political elections, the electorate appears to be a composite of partisan and independent voters. Given that partisans are not likely to convert to a different party, an important goal for a political party could be to mobilize independent voters toward the party with the help of strong leadership, mass media, partisans, and the effects of peer-to-peer influence. Based on the exact solution of classical voter model dynamics in the presence of perfectly partisan voters (i.e., zealots), we propose a computational method that uses a pinning control strategy to maximize the share of a party in a social network of independent voters. The party, corresponding to the controller or zealots, optimizes the nodes to be controlled given the information about the connectivity of independent voters and the set of nodes that the opposing party controls. We show that controlling hubs is generally a good strategy, but the optimized strategy is even better. The superiority of the optimized strategy is particularly evident when the independent voters are connected as directed (rather than undirected) networks.
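The underlying dynamics — voter-model updates with immovable zealots — can be simulated in a few lines. The star network below is a hypothetical toy example chosen to show why pinning the hub works; it is not a network from the paper.

```python
import random

random.seed(4)

def voter_with_zealots(adj, zealots_a, zealots_b, steps=20000):
    """Voter dynamics with perfectly partisan nodes: at each step a random
    free node copies the opinion of a random neighbour; zealots never
    change. Returns the final share of opinion 'A' among free nodes."""
    state = {v: random.choice("AB") for v in adj}
    state.update({v: "A" for v in zealots_a})
    state.update({v: "B" for v in zealots_b})
    free = [v for v in adj if v not in zealots_a and v not in zealots_b]
    for _ in range(steps):
        v = random.choice(free)
        state[v] = state[random.choice(adj[v])]
    return sum(state[v] == "A" for v in free) / len(free)

# Hypothetical star network: party A pins the hub, party B pins one leaf.
adj = {0: list(range(1, 11)), **{i: [0] for i in range(1, 11)}}
share = voter_with_zealots(adj, zealots_a={0}, zealots_b={10})
```

Because every free leaf's only neighbour is the pinned hub, opinion A takes over the free nodes almost surely — the extreme case of the "control the hubs" heuristic the paper improves upon.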
Mondal, Milon; Radeva, Nedyalka; Fanlo‐Virgós, Hugo; Otto, Sijbren; Klebe, Gerhard
2016-01-01
Fragment‐based drug design (FBDD) affords active compounds for biological targets. While there are numerous reports on FBDD by fragment growing/optimization, fragment linking has rarely been reported. Dynamic combinatorial chemistry (DCC) has become a powerful hit‐identification strategy for biological targets. We report the synergistic combination of fragment linking and DCC to identify inhibitors of the aspartic protease endothiapepsin. Based on X‐ray crystal structures of endothiapepsin in complex with fragments, we designed a library of bis‐acylhydrazones and used DCC to identify potent inhibitors. The most potent inhibitor exhibits an IC50 value of 54 nM, which represents a 240‐fold improvement in potency compared to the parent hits. Subsequent X‐ray crystallography validated the predicted binding mode, thus demonstrating the efficiency of the combination of fragment linking and DCC as a hit‐identification strategy. This approach could be applied to a range of biological targets, and holds the potential to facilitate hit‐to‐lead optimization. PMID:27400756
Design optimization of a prescribed vibration system using conjoint value analysis
NASA Astrophysics Data System (ADS)
Malinga, Bongani; Buckner, Gregory D.
2016-12-01
This article details a novel design optimization strategy for a prescribed vibration system (PVS) used to mechanically filter solids from fluids in oil and gas drilling operations. A dynamic model of the PVS is developed, and the effects of disturbance torques are detailed. This model is used to predict the effects of design parameters on system performance and efficiency, as quantified by system attributes. Conjoint value analysis, a statistical technique commonly used in marketing science, is utilized to incorporate designer preferences. This approach effectively quantifies and optimizes preference-based trade-offs in the design process. The effects of designer preferences on system performance and efficiency are simulated. This novel optimization strategy yields improvements in all system attributes across all simulated vibration profiles, and is applicable to other industrial electromechanical systems.
Ancient village fire escape path planning based on improved ant colony algorithm
NASA Astrophysics Data System (ADS)
Xia, Wei; Cao, Kang; Hu, QianChuan
2017-06-01
The roadways in ancient villages are narrow and labyrinthine, which makes it difficult for people to choose an escape route when a fire occurs. In this paper, a fire escape path planning method based on the ant colony algorithm is presented to address this problem. Factors in the fire environment that influence escape speed are introduced to improve the algorithm's heuristic function and optimize its transition strategy, and the pheromone volatility factor is adjusted adaptively to improve the pheromone update strategy, enhancing the algorithm's dynamic search ability and search speed. Simulations show that the optimal escape path can be adjusted dynamically, and the method is demonstrated to be feasible.
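A toy version of the approach can be written directly: ants walk a grid, the heuristic penalizes high-fire-risk cells, and pheromone evaporates and is deposited on good paths. The grid, hazard values, and constant evaporation rate below are illustrative simplifications (the paper adjusts the volatility factor adaptively).

```python
import random

random.seed(5)

def aco_escape(hazard, start, goal, ants=20, iters=40, rho=0.3):
    """Toy ant colony search on a 4-connected grid; the heuristic is
    1/hazard so ants prefer low-fire-risk cells."""
    R, C = len(hazard), len(hazard[0])
    tau = {}                                  # pheromone per directed edge
    def neigh(c):
        r, k = c
        return [(r + dr, k + dk) for dr, dk in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < R and 0 <= k + dk < C]
    best = None
    for _ in range(iters):
        found = []
        for _ in range(ants):
            cur, path, seen = start, [start], {start}
            while cur != goal and len(path) < R * C:
                cands = [c for c in neigh(cur) if c not in seen]
                if not cands:
                    break
                wts = [tau.get((cur, c), 1.0) / hazard[c[0]][c[1]] for c in cands]
                cur = random.choices(cands, weights=wts)[0]
                path.append(cur); seen.add(cur)
            if cur == goal:
                found.append(path)
        for edge in tau:
            tau[edge] *= 1 - rho              # evaporation
        for path in found:                    # deposit, shorter paths deposit more
            for a, b in zip(path, path[1:]):
                tau[(a, b)] = tau.get((a, b), 1.0) + 1.0 / len(path)
            if best is None or len(path) < len(best):
                best = path
    return best

hazard = [[1, 1, 5], [1, 9, 1], [1, 1, 1]]    # hypothetical fire-risk map
path = aco_escape(hazard, (0, 0), (2, 2))
```

On this map the high-hazard center cell (risk 9) is avoided, so the colony converges on routes along the low-risk border.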
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.; Korivi, Vamshi M.
1991-01-01
A gradient-based design optimization strategy for practical aerodynamic design applications is presented, which uses the 2D thin-layer Navier-Stokes equations. The strategy is based on the classic idea of constructing different modules for performing the major tasks such as function evaluation, function approximation and sensitivity analysis, mesh regeneration, and grid sensitivity analysis, all driven and controlled by a general-purpose design optimization program. The accuracy of aerodynamic shape sensitivity derivatives is validated on two viscous test problems: internal flow through a double-throat nozzle and external flow over a NACA 4-digit airfoil. A significant improvement in aerodynamic performance has been achieved in both cases. Particular attention is given to a consistent treatment of the boundary conditions in the calculation of the aerodynamic sensitivity derivatives for the classic problems of external flow over an isolated lifting airfoil on 'C' or 'O' meshes.
Systematic Sensor Selection Strategy (S4) User Guide
NASA Technical Reports Server (NTRS)
Sowers, T. Shane
2012-01-01
This paper describes a User Guide for the Systematic Sensor Selection Strategy (S4). S4 was developed to optimally select a sensor suite from a larger pool of candidate sensors based on their performance in a diagnostic system. For aerospace systems, selecting the proper sensors is important for ensuring adequate measurement coverage to satisfy operational, maintenance, performance, and system diagnostic criteria. S4 optimizes the selection of sensors based on the system fault diagnostic approach while taking conflicting objectives such as cost, weight, and reliability into consideration. S4 can be described as a general architecture structured to accommodate application-specific components and requirements. It performs combinatorial optimization with a user-defined merit or cost function to identify optimum or near-optimum sensor suite solutions. The S4 User Guide describes the sensor selection procedure and presents an example problem using an open-source turbofan engine simulation to demonstrate its application.
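The flavor of merit-driven sensor selection can be shown with a greedy sketch: repeatedly pick the sensor adding the most fault coverage per unit cost. The sensor names, fault signatures, and costs below are hypothetical, and S4 itself uses a more general merit function and combinatorial search rather than this simple greedy rule.

```python
# Hypothetical candidate sensors: name -> (detectable faults, cost).
sensors = {
    "EGT": ({"f1", "f3"}, 2.0),
    "N1":  ({"f2"},       1.0),
    "P3":  ({"f1", "f2"}, 3.0),
    "Wf":  ({"f3", "f4"}, 2.0),
}
all_faults = {"f1", "f2", "f3", "f4"}

def select_suite(sensors, faults):
    """Greedy selection: maximize added fault coverage per unit cost."""
    chosen, covered = [], set()
    while covered != faults:
        best, gain = None, 0.0
        for name, (sig, cost) in sensors.items():
            if name in chosen:
                continue
            g = len(sig - covered) / cost
            if g > gain:
                best, gain = name, g
        if best is None:
            break                      # remaining faults are undetectable
        chosen.append(best)
        covered |= sensors[best][0]
    return chosen, covered

chosen, covered = select_suite(sensors, all_faults)
```

Weight and reliability objectives would enter by folding them into the per-sensor cost in the merit function.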
He, Li; Xu, Zongda; Fan, Xing; Li, Jing; Lu, Hongwei
2017-05-01
This study develops a meta-modeling-based mathematical programming approach with flexibility in environmental standards. It integrates numerical simulation, meta-modeling analysis, and fuzzy programming within a general framework. A set of meta-models relating remediation strategies to remediation performance greatly reduces the computational effort of the simulation and optimization process. To prevent the occurrence of over-optimistic or over-pessimistic optimization strategies, the satisfaction level resulting from the implementation of a flexible standard indicates the degree to which the environmental standard is satisfied. The proposed approach is applied to a naphthalene-contaminated site in China. Results show that a longer remediation period corresponds to a lower total pumping rate, while a stringent risk standard implies a high total pumping rate. The wells located near, or down-gradient of, the contaminant sources are the most efficient among all remediation schemes.
Systematic design for trait introgression projects.
Cameron, John N; Han, Ye; Wang, Lizhi; Beavis, William D
2017-10-01
Using an Operations Research approach, we demonstrate the design of optimal trait introgression projects with respect to competing objectives. We demonstrate an innovative approach for designing Trait Introgression (TI) projects based on optimization principles from Operations Research. If the designs of TI projects are based on clear and measurable objectives, they can be translated into mathematical models with decision variables and constraints, which in turn yield Pareto optimality plots for any arbitrary selection strategy. The Pareto plots can be used to make rational decisions concerning the trade-offs between maximizing the probability of success and minimizing costs and time. The systematic rigor associated with a cost, time, and probability of success (CTP) framework is well suited to designing TI projects that require dynamic decision making. The CTP framework also revealed that previously identified 'best' strategies can be improved to be at least twice as effective without increasing time or expense.
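The Pareto plots in a CTP framework come from filtering candidate designs down to the non-dominated set: designs for which no alternative is simultaneously cheaper, faster, and more likely to succeed. The candidate designs below are hypothetical numbers, not values from the paper.

```python
def pareto_front(designs):
    """Return the non-dominated designs. Each design is a tuple
    (cost, time, p_success); cost and time are minimized while the
    probability of success is maximized."""
    def dominates(a, b):
        return (a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2]
                and a != b)
    return [d for d in designs
            if not any(dominates(o, d) for o in designs)]

# Hypothetical TI designs: (cost in $k, generations, P(success)).
designs = [(100, 4, 0.80), (120, 4, 0.90), (120, 5, 0.85), (90, 6, 0.60)]
front = pareto_front(designs)
```

Here (120, 5, 0.85) is dominated by (120, 4, 0.90) — same cost, faster, higher success probability — so it is removed, and a decision-maker need only weigh the trade-offs among the three surviving designs.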
An Energy Integrated Dispatching Strategy of Multi- energy Based on Energy Internet
NASA Astrophysics Data System (ADS)
Jin, Weixia; Han, Jun
2018-01-01
The energy internet is a new paradigm of energy use that achieves high efficiency and low cost by scheduling a variety of different forms of energy. Particle Swarm Optimization (PSO) is an advanced algorithm with few parameters, high computational precision, and fast convergence; by tuning the parameters ω, c1, and c2, its convergence speed and calculation accuracy can be improved further. The objective of the optimization model is the lowest fuel cost that still meets the electricity, heating, and cooling loads after all renewable energy has been absorbed. Because the energy structure and prices differ between regions, the optimization strategy needs to be determined according to the algorithm and model.
Marsot, Maud; Rautureau, Séverine; Dufour, Barbara; Durand, Benoit
2014-01-01
Comparison of control strategies against animal infectious diseases allows determining optimal strategies according to their epidemiological and/or economic impacts. However, in real life, the choice of a control strategy does not always obey a pure economic or epidemiological rationality. The objective of this study was to analyze the choice of a foot and mouth disease (FMD) control strategy as a decision-making process in which the decision-maker is influenced by several stakeholders (government, agro-food industries, public opinion). For each of these, an indicator of epizootic impact was quantified to compare seven control strategies. We then determined how, in France, the optimal control strategy varied according to the relative weights of stakeholders and to the perception of risk by the decision-maker (risk-neutral/risk-averse). When the scope of decision was national, whatever their perception of risk and the stakeholders' weights, decision-makers chose a strategy based on vaccination. This consensus concealed marked differences between regions, which were connected with the regional breeding characteristics. Vaccination-based strategies were predominant in regions with dense cattle and swine populations, and in regions with a dense population of small ruminants, combined with a medium density of cattle and swine. These differences between regions suggested that control strategies could be usefully adapted to local breeding conditions. We then analyzed the feasibility of adaptive decision-making processes depending on the date and place where the epizootic starts, or on the evolution of the epizootic over time. The initial conditions always explained at least half of the variance of impacts, the remaining variance being attributed to the variability of epizootics evolution. However, the first weeks of this evolution explained a large part of the impacts variability. 
Although the predictive value of the initial conditions for determining the optimal strategy was weak, adaptive strategies changing dynamically according to the evolution of the epizootic appeared feasible.
A Bell-Curved Based Algorithm for Mixed Continuous and Discrete Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.; Weber, Michael; Sobieszczanski-Sobieski, Jaroslaw
2001-01-01
An evolution-based strategy utilizing two normal distributions to generate children is developed to solve mixed-integer nonlinear programming problems. This Bell-Curve Based (BCB) evolutionary algorithm is similar in spirit to (mu + mu) evolution strategies and evolutionary programs, but with fewer parameters to adjust and no mechanism for self-adaptation. First, a new version of BCB for purely discrete optimization problems is described and its performance tested against a tabu search code on an actuator placement problem. Next, the performance of a combined discrete and continuous version of BCB is tested on 2-dimensional shape problems and on a minimum-weight hub design problem. In the latter case, the discrete portion is the choice of the underlying beam shape (I, triangular, circular, rectangular, or U).
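One way to read "two normal distributions" is that a child is sampled near the midpoint of two parents, with one Gaussian component along the line joining them and a second, smaller Gaussian component perpendicular to it. The sketch below implements that reading for the continuous variables; the parameter names and magnitudes are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(6)

def bcb_child(p1, p2, beta=0.5, sigma_line=0.5, sigma_perp=0.1):
    """Bell-curve-based child generation (illustrative reconstruction):
    sample along the parent-to-parent line with one normal distribution,
    then add a smaller normal perturbation orthogonal to that line."""
    d = p2 - p1
    mid = p1 + beta * d                          # point between the parents
    along = rng.normal(0.0, sigma_line) * d      # spread along the line
    perp = rng.normal(0.0, sigma_perp, size=p1.shape)
    perp -= (perp @ d) / (d @ d) * d             # project out the line direction
    return mid + along + perp * np.linalg.norm(d)

child = bcb_child(np.array([0.0, 0.0]), np.array([1.0, 1.0]))
```

For the mixed-integer case, the discrete gene (e.g. the beam shape) would be inherited or mutated categorically alongside this continuous recombination.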
Wang, Fei; Salous, Sana; Zhou, Jianjiang
2017-01-01
In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme. PMID:29186850
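The bisection search works because feasibility is monotone in the transmit power: if a power level satisfies the information-rate and detection constraints, any higher power does too. A generic sketch with a hypothetical SNR-threshold constraint (the numbers are invented, not the paper's radar model):

```python
def min_power(meets_constraints, p_lo, p_hi, tol=1e-6):
    """Bisection for the smallest power satisfying all constraints,
    assuming feasibility is monotone non-decreasing in power."""
    assert meets_constraints(p_hi), "upper bound must be feasible"
    while p_hi - p_lo > tol:
        mid = 0.5 * (p_lo + p_hi)
        if meets_constraints(mid):
            p_hi = mid        # feasible: try lower power
        else:
            p_lo = mid        # infeasible: need more power
    return p_hi

# Hypothetical constraint: detection requires SNR = p / noise >= 10.
noise = 0.2
p_star = min_power(lambda p: p / noise >= 10.0, 0.0, 10.0)
```

In the joint radar-communication setting, `meets_constraints` would bundle the communication rate requirement with the desired detection and false-alarm probabilities.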
Design and implementation of real-time wireless projection system based on ARM embedded system
NASA Astrophysics Data System (ADS)
Long, Zhaohua; Tang, Hao; Huang, Junhua
2018-04-01
Aiming at the shortcomings of existing real-time screen sharing systems, a real-time wireless projection system is proposed in this paper. Based on the proposed system, a weight-based frame deletion strategy combining the sampling time period and data variation is proposed. Implementing the system on the hardware platform shows that it achieves good results: the weight-based strategy improves service quality, reduces delay, and optimizes the real-time customer service system [1].
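A frame-deletion strategy of this kind can be sketched as a scoring rule: each frame gets a weight combining how fresh it is and how much it differs from its predecessor, and low-weight frames are dropped before transmission. The weighting formula, threshold, and frame fields below are hypothetical, since the abstract does not specify them.

```python
def frame_weight(age_s, diff, alpha=0.6, max_age=2.0):
    """Hypothetical weight mixing frame freshness (age in seconds) with
    data variation (diff in [0, 1] vs. the previous frame)."""
    freshness = max(0.0, 1.0 - age_s / max_age)
    return alpha * diff + (1 - alpha) * freshness

def filter_frames(frames, threshold=0.3):
    """Drop low-weight frames to cut wireless bandwidth and delay."""
    return [f for f in frames if frame_weight(f["age"], f["diff"]) >= threshold]

frames = [{"id": 1, "age": 0.1, "diff": 0.9},   # fresh and changed: kept
          {"id": 2, "age": 1.9, "diff": 0.0},   # stale and unchanged: dropped
          {"id": 3, "age": 0.2, "diff": 0.1}]
kept = filter_frames(frames)
```

Frames that are both stale and nearly identical to their predecessor carry little information, so deleting them reduces delay with minimal impact on what the viewer sees.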
Lo, Nathan C; Gurarie, David; Yoon, Nara; Coulibaly, Jean T; Bendavid, Eran; Andrews, Jason R; King, Charles H
2018-01-23
Schistosomiasis is a parasitic disease that affects over 240 million people globally. To improve population-level disease control, there is growing interest in adding chemical-based snail control interventions to interrupt the lifecycle of Schistosoma in its snail host to reduce parasite transmission. However, this approach is not widely implemented, and given environmental concerns, the optimal conditions for when snail control is appropriate are unclear. We assessed the potential impact and cost-effectiveness of various snail control strategies. We extended previously published dynamic, age-structured transmission and cost-effectiveness models to simulate mass drug administration (MDA) and focal snail control interventions against Schistosoma haematobium across a range of low-prevalence (5-20%) and high-prevalence (25-50%) rural Kenyan communities. We simulated strategies over a 10-year period of MDA targeting school children or entire communities, snail control, and combined strategies. We measured incremental cost-effectiveness in 2016 US dollars per disability-adjusted life year and defined a strategy as optimally cost-effective when maximizing health gains (averted disability-adjusted life years) with an incremental cost-effectiveness below a Kenya-specific economic threshold. In both low- and high-prevalence settings, community-wide MDA with additional snail control reduced total disability by an additional 40% compared with school-based MDA alone. The optimally cost-effective scenario included the addition of snail control to MDA in over 95% of simulations. These results support inclusion of snail control in global guidelines and national schistosomiasis control strategies for optimal disease control, especially in settings with high prevalence, "hot spots" of transmission, and noncompliance to MDA. Copyright © 2018 the Author(s). Published by PNAS.
NASA Astrophysics Data System (ADS)
Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles
2008-12-01
We propose a globally convergent baud-spaced blind equalization method based on both generalized pattern optimization and channel surfing reinitialization. The unimodal cost function relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we use a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. Performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. For nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with full channel surfing reinitialization strategy; for constant modulus signals, the two perform comparably.
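The constant modulus algorithm (CMA) used as the comparison baseline above penalizes deviations of the equalizer output modulus from a constant. A self-contained sketch of a baud-spaced CMA equalizer on a toy BPSK channel; the channel taps, step size, and tap count are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def cma_equalize(rx, num_taps=5, mu=1e-3, r2=1.0):
    """Adapt FIR equalizer taps so the output modulus approaches sqrt(r2)."""
    w = np.zeros(num_taps)
    w[num_taps // 2] = 1.0                    # center-spike initialization
    out = np.zeros(len(rx))
    for n in range(num_taps, len(rx)):
        x = rx[n - num_taps:n][::-1]          # regressor, most recent sample first
        y = w @ x
        out[n] = y
        w -= mu * y * (y * y - r2) * x        # stochastic-gradient CMA update
    return w, out

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=5000)        # BPSK source (constant modulus)
chan = np.array([1.0, 0.4])                   # mild ISI channel (assumed)
rx = np.convolve(s, chan)[:len(s)]
w, y = cma_equalize(rx)
baseline = np.mean((np.abs(rx[3000:]) - 1.0) ** 2)   # modulus error, unequalized
residual = np.mean((np.abs(y[3000:]) - 1.0) ** 2)    # modulus error, equalized
```

After adaptation the dispersion of the output modulus should fall well below that of the raw channel output.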
The Next Breakthrough for Organic Photovoltaics?
Jackson, Nicholas E; Savoie, Brett M; Marks, Tobin J; Chen, Lin X; Ratner, Mark A
2015-01-02
While the intense focus on energy level tuning in organic photovoltaic materials has afforded large gains in device performance, we argue here that strategies based on microstructural/morphological control are at least as promising in any rational design strategy. In this work, a meta-analysis of ∼150 bulk heterojunction devices fabricated with different materials combinations is performed and reveals strong correlations between power conversion efficiency and morphology-dominated properties (short-circuit current, fill factor) and surprisingly weak correlations between efficiency and energy level positioning (open-circuit voltage, enthalpic offset at the interface, optical gap). While energy level positioning should in principle provide the theoretical maximum efficiency, the optimization landscape that must be navigated to reach this maximum is unforgiving. Thus, research aimed at developing understanding-based strategies for more efficient optimization of an active layer microstructure and morphology is likely to be at least as fruitful.
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least-squares-based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods on three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for the problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE is employed and/or the oversampling ratio is low.
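Least-squares PCE fits expansion coefficients by regressing model evaluations at sampled inputs onto an orthogonal polynomial basis. A minimal sketch with plain Monte Carlo sampling and a Legendre basis for a uniform input; the test function and sizes are illustrative, and the paper's coherence-optimal and alphabetic-optimal designs are not reproduced here:

```python
import numpy as np

def legendre_basis(x, order):
    """Evaluate Legendre polynomials P_0..P_order at points x in [-1, 1]."""
    return np.polynomial.legendre.legvander(x, order)

def pce_least_squares(f, order=4, n_samples=200, seed=0):
    """Fit PCE coefficients of f by least squares on Monte Carlo samples."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_samples)     # Monte Carlo sampling design
    Psi = legendre_basis(x, order)            # measurement matrix
    coeffs, *_ = np.linalg.lstsq(Psi, f(x), rcond=None)
    return coeffs

# A quadratic lies exactly in the span of the degree-4 basis, so the
# recovered coefficients match the analytic Legendre expansion:
# 1 + 2x + x^2 = (4/3) P0 + 2 P1 + (2/3) P2.
f = lambda x: 1.0 + 2.0 * x + x ** 2
c = pce_least_squares(f)
```

Because the target is in the span of the basis, the least-squares fit is exact up to round-off regardless of the sampling scheme; differences between sampling strategies show up for under-resolved or noisy problems.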
Su, Weixing; Chen, Hanning; Liu, Fang; Lin, Na; Jing, Shikai; Liang, Xiaodan; Liu, Wei
2017-03-01
Many real-world optimization problems are dynamic: unlike static cases, they require an optimization algorithm to adaptively track changing optima over time rather than locate a single global optimum. This paper proposes a novel comprehensive learning artificial bee colony optimizer (CLABC) for optimization in dynamic environments, which employs a pool of optimal foraging strategies to balance the exploration-exploitation tradeoff. The main motive of CLABC is to enrich artificial bee foraging behaviors in the ABC model by combining Powell's pattern search method, a life-cycle mechanism, and a crossover-based social learning strategy. CLABC is a more colony-realistic model in which bees can reproduce and die dynamically throughout the foraging process, so the population size varies as the algorithm runs. Experiments evaluating CLABC are conducted on the dynamic moving peaks benchmark, and the algorithm is further applied to a real-world dynamic RFID network optimization problem. Statistical analysis of these cases highlights the significant performance improvement due to the beneficial combination and demonstrates the performance superiority of the proposed algorithm.
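For reference, the baseline artificial bee colony (ABC) model that CLABC extends cycles through employed bees, fitness-weighted onlookers, and scout restarts. A minimal static-environment sketch; the dynamic moving-peaks setting, Powell search, life-cycle, and crossover components of CLABC are not included:

```python
import numpy as np

def abc_minimize(f, dim=2, colony=20, limit=10, iters=200, seed=0):
    """Minimal artificial bee colony: employed/onlooker search plus scout restarts."""
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    X = rng.uniform(lo, hi, (colony, dim))        # food sources
    fit = np.array([f(x) for x in X])
    trials = np.zeros(colony, dtype=int)          # stagnation counters

    def try_neighbor(i):
        k = rng.integers(colony)                  # random partner source
        j = rng.integers(dim)                     # random coordinate
        v = X[i].copy()
        v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        fv = f(v)
        if fv < fit[i]:                           # greedy replacement
            X[i], fit[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(colony):                   # employed bees
            try_neighbor(i)
        p = 1.0 / (1.0 + fit)                     # onlookers prefer good sources
        p /= p.sum()
        for i in rng.choice(colony, size=colony, p=p):
            try_neighbor(i)
        for i in range(colony):                   # scouts abandon stale sources
            if trials[i] > limit:
                X[i] = rng.uniform(lo, hi, dim)
                fit[i] = f(X[i])
                trials[i] = 0
    return X[fit.argmin()], float(fit.min())

best_x, best_f = abc_minimize(lambda x: float(np.sum(x ** 2)))
```

On the 2-D sphere function this simple colony converges close to the origin; CLABC's extensions aim to keep that convergence while re-diversifying when the landscape moves.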
Anderson, D.R.
1975-01-01
Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of general theory. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If we assume that exploitation is largely an additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population.
Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, or harvest rate, or designed to maintain a constant breeding population size is inefficient.
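The stochastic dynamic programming formulation described above can be sketched with value iteration on a discretized population state, a harvest-rate decision, and a random environmental multiplier. The growth model, noise distribution, and all parameters below are illustrative assumptions, not the Mallard parameter estimates:

```python
import numpy as np

def optimal_harvest(growth=1.8, K=100.0, n_states=51, n_actions=11,
                    noise=(0.8, 1.0, 1.2), probs=(0.25, 0.5, 0.25),
                    discount=0.95, iters=200):
    """Value iteration for a stochastic logistic-growth harvest model.
    State: population size; action: fraction harvested; reward: catch."""
    pops = np.linspace(0.0, K, n_states)
    harvests = np.linspace(0.0, 0.5, n_actions)        # harvest fractions
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = np.zeros((n_states, n_actions))
        for si, p in enumerate(pops):
            for ai, h in enumerate(harvests):
                catch = h * p
                escapement = p - catch                 # survivors after harvest
                ev = 0.0
                for z, pr in zip(noise, probs):        # random environment
                    nxt = z * escapement * (1 + growth * (1 - escapement / K))
                    nxt = min(max(nxt, 0.0), K)
                    ev += pr * np.interp(nxt, pops, V) # interpolate value grid
                Q[si, ai] = catch + discount * ev
        V = Q.max(axis=1)                              # Bellman backup
    policy = harvests[Q.argmax(axis=1)]                # feedback harvest rule
    return pops, policy, V

pops, policy, V = optimal_harvest()
```

The resulting policy is a direct feedback rule: the recommended harvest fraction is read off from the observed population state each year, exactly the structure the abstract argues for over fixed-quota schemes.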
NASA Astrophysics Data System (ADS)
Wehner, William; Schuster, Eugenio; Poli, Francesca
2016-10-01
Initial progress towards the design of non-inductive current ramp-up scenarios in the National Spherical Torus Experiment Upgrade (NSTX-U) has been made through the use of TRANSP predictive simulations. The strategy involves, first, ramping the plasma current with high harmonic fast waves (HHFW) to about 400 kA, and then further ramping to 900 kA with neutral beam injection (NBI). However, the early ramping of neutral beams and application of HHFW leads to an undesirably peaked current profile making the plasma unstable to ballooning modes. We present an optimization-based control approach to improve on the non-inductive ramp-up strategy. We combine the TRANSP code with an optimization algorithm based on sequential quadratic programming to search for time evolutions of the NBI powers, the HHFW powers, and the line averaged density that define an open-loop actuator strategy that maximizes the non-inductive current while satisfying constraints associated with the current profile evolution for MHD stable plasmas. This technique has the potential of playing a critical role in achieving robustly stable non-inductive ramp-up, which will ultimately be necessary to demonstrate applicability of the spherical torus concept to larger devices without sufficient room for a central coil. Supported by the US DOE under the SCGSR Program.
Optimism, coping and long-term recovery from coronary artery surgery in women.
King, K B; Rowe, M A; Kimble, L P; Zerwic, J J
1998-02-01
Optimism, coping strategies, and psychological and functional outcomes were measured in 55 women undergoing coronary artery surgery. Data were collected in-hospital and at 1, 6, and 12 months after surgery. Optimism was related to positive moods and life satisfaction, and inversely related to negative moods. Few relationships were found between optimism and functional ability. Cognitive coping strategies accounted for a mediating effect between optimism and negative mood. Optimists were more likely to accept their situation, and less likely to use escapism. In turn, these coping strategies were inversely related to negative mood and mediated the relationship between optimism and this outcome. Optimism was not related to problem-focused coping strategies; thus, these coping strategies cannot explain the relationship between optimism and outcomes.
Figueroa-Torres, Gonzalo M; Pittman, Jon K; Theodoropoulos, Constantinos
2017-10-01
Microalgal starch and lipids, carbon-based storage molecules, are useful as potential biofuel feedstocks. In this work, cultivation strategies maximising starch and lipid formation were established by developing a multi-parameter kinetic model describing microalgal growth as well as starch and lipid formation, in conjunction with laboratory-scale experiments. Growth dynamics are driven by nitrogen-limited mixotrophic conditions, known to increase cellular starch and lipid contents whilst enhancing biomass growth. Model parameters were computed by fitting model outputs to a range of experimental datasets from batch cultures of Chlamydomonas reinhardtii. Predictive capabilities of the model were established against different experimental data. The model was subsequently used to compute optimal nutrient-based cultivation strategies in terms of initial nitrogen and carbon concentrations. Model-based optimal strategies yielded a significant increase of 261% for starch (0.065 gC L-1) and 66% for lipid (0.08 gC L-1) production compared to base-case conditions (0.018 gC L-1 starch, 0.048 gC L-1 lipids). Copyright © 2017 Elsevier Ltd. All rights reserved.
Robust Design Optimization via Failure Domain Bounding
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2007-01-01
This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds on the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.
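The core feasibility test above asks whether a constraint holds for every member of a bounding set. A toy stand-in that checks an axis-aligned hyper-rectangle by dense grid evaluation rather than the paper's optimization-based bounding; the constraint function and set sizes are invented for illustration:

```python
import numpy as np

def rect_feasible(g, center, half_widths, n_grid=50):
    """Check whether g(p1, p2) <= 0 holds over an axis-aligned rectangle
    by evaluating g on a dense grid (a sampling stand-in for solving
    max g over the set with an optimizer)."""
    axes = [np.linspace(c - h, c + h, n_grid)
            for c, h in zip(center, half_widths)]
    P1, P2 = np.meshgrid(*axes)
    worst = float(np.max(g(P1, P2)))   # approximate worst-case violation
    return worst <= 0.0, worst

# Toy constraint: the design fails when p1^2 + p2 > 1 (assumed example).
g = lambda p1, p2: p1 ** 2 + p2 - 1.0
ok_small, _ = rect_feasible(g, center=(0.0, 0.0), half_widths=(0.5, 0.4))
ok_large, _ = rect_feasible(g, center=(0.0, 0.0), half_widths=(1.5, 1.5))
```

The small uncertainty rectangle lies entirely in the feasible region, while the large one reaches into the violation region, which is exactly the distinction the bounding sets are used to certify.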
Li, Nailu; Mu, Anle; Yang, Xiyun; Magar, Kaman T; Liu, Chao
2018-05-01
Optimal tuning of an adaptive flap controller can improve flap control performance in uncertain operating environments, but the optimization process is usually time-consuming and it is difficult to design a proper optimal tuning strategy for the flap control system (FCS). To solve this problem, a novel adaptive flap controller is designed based on a highly efficient differential evolution (DE) identification technique and a composite adaptive internal model control (CAIMC) strategy. The optimal tuning can be easily obtained from the DE-identified inverse of the FCS via the CAIMC structure. To achieve fast tuning, a highly efficient modified adaptive DE algorithm with a new mutation operator and a varying-range adaptive mechanism is proposed for FCS identification. The proposed controller successfully achieves a tradeoff between optimized adaptive flap control and low computation cost. Simulation results show the robustness of the proposed method and its superiority over the conventional adaptive IMC (AIMC) flap controller and CAIMC flap controllers using other DE algorithms under various uncertain operating conditions. The high computational efficiency of the proposed controller is also verified by the computation times on those operating cases. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gavrishchaka, Valeriy V.; Kovbasinskaya, Maria; Monina, Maria
2008-11-01
Novelty detection is a very desirable additional feature of any practical classification or forecasting system. Novelty and rare-pattern detection is the main objective in such applications as fault/abnormality discovery in complex technical and biological systems, fraud detection, and risk management in the financial and insurance industries. Although many interdisciplinary approaches for rare event modeling and novelty detection have been proposed, significant data incompleteness due to the nature of the problem makes it difficult to find a universal solution. An even more challenging and much less formalized problem is novelty detection in complex strategies and models, where practical performance criteria are usually multi-objective and the best state-of-the-art solution is often not known due to the complexity of the task and/or the proprietary nature of the application area. For example, it is much more difficult to detect a series of small insider-trading or other illegal transactions mixed with valid operations and distributed over a long time period according to a well-designed strategy than a single, large fraudulent transaction. Recently proposed boosting-based optimization was shown to be an effective generic tool for the discovery of stable multi-component strategies/models from existing parsimonious base strategies/models in financial and other applications. Here we outline how the same framework can be used for novelty and fraud detection in complex strategies and models.
Optimization-Based Selection of Influential Agents in a Rural Afghan Social Network
2010-06-01
nonlethal targeting model, a nonlinear programming (NLP) optimization formulation that identifies the k US agent assignment strategy producing the greatest...leader social network, and 3) the nonlethal targeting model...NATO Coalition in Afghanistan ( [54], [31], [48], [55], [30]). While Arab tribes tend to be more hierarchical, Pashtun tribes are
Bidding strategy for microgrid in day-ahead market based on hybrid stochastic/robust optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Xu, Yan; Tomsovic, Kevin
2016-01-01
In this paper, we propose an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG and price responsive loads. The microgrid coordinates the energy consumption or production of its components and trades electricity in both the day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, and day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., expected total cost of operation minus total benefit of demand. This formulation can be solved by mixed integer linear programming. The uncertain output of intermittent DG and the day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in the real-time market, taking account of the uncertainty of the real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, a battery and a responsive load show the advantage of stochastic optimization in addition to robust optimization.
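The interplay of an expected-cost objective with a robust limit on real-time imbalance can be sketched with a scenario-based grid search. The scenario data, prices, and imbalance cap below are invented, and the paper's actual formulation is a mixed integer linear program, not a grid search:

```python
import numpy as np

# Hypothetical scenarios: (probability, net load in MWh, real-time price $/MWh)
scenarios = [(0.3, 8.0, 60.0), (0.5, 10.0, 45.0), (0.2, 12.0, 70.0)]
da_price = 50.0        # day-ahead price, taken as known for this sketch
imbalance_cap = 2.5    # robust limit on real-time deviation (MWh)

def expected_cost(bid):
    """Day-ahead purchase cost plus expected real-time settlement of the gap."""
    return sum(pr * (da_price * bid + rt * (load - bid))
               for pr, load, rt in scenarios)

def robust_ok(bid):
    """Imbalance stays within the cap in every scenario (robust constraint)."""
    return all(abs(load - bid) <= imbalance_cap + 1e-9
               for _, load, _ in scenarios)

bids = np.linspace(0.0, 15.0, 301)            # candidate day-ahead bids
feasible = [b for b in bids if robust_ok(b)]  # robustly acceptable bids
best_bid = min(feasible, key=expected_cost)   # stochastic (expected-cost) choice
```

Here the expected real-time price exceeds the day-ahead price, so the expected-cost objective pushes the bid up, while the robust imbalance cap clips it to the largest bid that is safe in every scenario.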
NASA Astrophysics Data System (ADS)
Abedini, M. J.; Nasseri, M.; Burn, D. H.
2012-04-01
In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls the nearby samples to be included in the location-specific estimation procedure. Almost all geostatistical software available in the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose these parameters. The main thesis of the current study is that the selection of search strategy parameters has to be driven by data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm is exploited to search the feature space for appropriate, either local or global, search strategy parameters. Radius of circle/sphere and/or radii of standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of numerical results showed that definition of search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics when compared with that based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by anisotropic axes—via GA was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.
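The idea of letting the data select search-strategy parameters can be illustrated with a tiny real-coded genetic algorithm that tunes a single search radius against a leave-one-out cross-validation score. Inverse-distance weighting stands in for ordinary kriging here, and the synthetic data and GA settings are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, (40, 2))             # synthetic sample locations
vals = np.sin(pts[:, 0]) + 0.1 * pts[:, 1]    # smooth field plus a trend

def loo_error(radius):
    """Leave-one-out error of local inverse-distance estimates using only
    neighbors inside `radius` (a stand-in for kriging cross-validation)."""
    errs = []
    for i in range(len(pts)):
        d = np.linalg.norm(pts - pts[i], axis=1)
        mask = (d > 0) & (d <= radius)
        if not mask.any():
            return np.inf                      # empty neighborhood: invalid
        w = 1.0 / d[mask] ** 2
        est = np.sum(w * vals[mask]) / w.sum()
        errs.append((est - vals[i]) ** 2)
    return float(np.mean(errs))

def ga_radius(pop=20, gens=30, lo=0.5, hi=8.0):
    """Tiny real-coded GA over the search-radius decision variable."""
    P = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        f = np.array([loo_error(r) for r in P])
        elite = P[np.argsort(f)[: pop // 2]]           # truncation selection
        kids = elite + rng.normal(0, 0.3, elite.shape) # Gaussian mutation
        P = np.clip(np.concatenate([elite, kids]), lo, hi)
    f = np.array([loo_error(r) for r in P])
    return float(P[f.argmin()]), float(f.min())

best_r, best_err = ga_radius()
```

The fitness is driven by both the coordinates and the sample values, mirroring the study's thesis that the search neighborhood should be data-driven rather than fixed a priori.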
to rapidly test/screen breast cancer therapeutics as a strategy to streamline drug development and provide individualized treatment. The results...system can therefore be used to streamline pre-clinical drug development, by reducing the number of animals, cost, and time required to screen new drugs
Another Fine MeSH: Clinical Medicine Meets Information Science.
ERIC Educational Resources Information Center
O'Rourke, Alan; Booth, Andrew; Ford, Nigel
1999-01-01
Discusses evidence-based medicine (EBM) and the need for systematic use of databases like MEDLINE with more sophisticated search strategies to optimize the retrieval of relevant papers. Describes an empirical study of hospital libraries that examined requests for information and search strategies using both structured and unstructured forms.…
Jakobson, Christopher M; Tullman-Ercek, Danielle; Mangan, Niall M
2018-05-29
Natural biochemical systems are ubiquitously organized both in space and time. Engineering the spatial organization of biochemistry has emerged as a key theme of synthetic biology, with numerous technologies promising improved biosynthetic pathway performance. One strategy, however, may produce disparate results for different biosynthetic pathways. We use a spatially resolved kinetic model to explore this fundamental design choice in systems and synthetic biology. We predict that two example biosynthetic pathways have distinct optimal organization strategies that vary based on pathway-dependent and cell-extrinsic factors. Moreover, we demonstrate that the optimal design varies as a function of kinetic and biophysical properties, as well as culture conditions. Our results suggest that organizing biosynthesis has the potential to substantially improve performance, but that choosing the appropriate strategy is key. The flexible design-space analysis we propose can be adapted to diverse biosynthetic pathways, and lays a foundation to rationally choose organization strategies for biosynthesis.
An interval programming model for continuous improvement in micro-manufacturing
NASA Astrophysics Data System (ADS)
Ouyang, Linhan; Ma, Yizhong; Wang, Jianjun; Tu, Yiliu; Byun, Jai-Hyun
2018-03-01
Continuous quality improvement in micro-manufacturing processes relies on optimization strategies that relate an output performance to a set of machining parameters. However, when determining the optimal machining parameters in a micro-manufacturing process, the economics of continuous quality improvement and decision makers' preference information are typically neglected. This article proposes an economic continuous improvement strategy based on an interval programming model. The proposed strategy differs from previous studies in two ways. First, an interval programming model is proposed to measure the quality level, where decision makers' preference information is considered in order to determine the weight of location and dispersion effects. Second, the proposed strategy is a more flexible approach since it considers the trade-off between the quality level and the associated costs, and leaves engineers a larger decision space through adjusting the quality level. The proposed strategy is compared with its conventional counterparts using an Nd:YLF laser beam micro-drilling process.
Szramka-Pawlak, B; Dańczak-Pazdrowska, A; Rzepa, T; Szewczyk, A; Sadowska-Przytocka, A; Żaba, R
2013-01-01
The clinical course of localized scleroderma may consist of bodily deformations, and bodily functions may also be affected. Additionally, the secondary lesions, such as discoloration, contractures, and atrophy, are unlikely to regress. The aforementioned symptoms and functional disturbances may decrease one's quality of life (QoL). Although much has been mentioned in the medical literature regarding QoL in persons suffering from dermatologic diseases, no data specifically describing patients with localized scleroderma exist. The aim of the study was to explore QoL in localized scleroderma patients and to examine their coping strategies in regard to optimism and QoL. The study included 41 patients with localized scleroderma. QoL was evaluated using the SKINDEX questionnaire, and levels of dispositional optimism were assessed using the Life Orientation Test-Revised. In addition, individual coping strategy was determined using the Mini-MAC scale and physical condition was assessed using the Localized Scleroderma Severity Index. The mean QoL score amounted to 51.10 points, with mean scores for individual components as follows: symptoms = 13.49 points, emotions = 21.29 points, and functioning = 16.32 points. A relationship was detected between QoL and the level of dispositional optimism as well as with coping strategies known as anxious preoccupation and helplessness-hopelessness. Higher levels of optimism predicted a higher general QoL. In turn, greater intensity of anxious preoccupied and helpless-hopeless behaviors predicted a lower QoL. Based on these results, it may be stated that localized scleroderma patients have a relatively high QoL, which is accompanied by optimism as well as a lower frequency of behaviors typical of emotion-focused coping strategies.
An Optimization-Based State Estimation Framework for Large-Scale Natural Gas Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jalving, Jordan; Zavala, Victor M.
We propose an optimization-based state estimation framework to track internal space-time flow and pressure profiles of natural gas networks during dynamic transients. We find that the estimation problem is ill-posed (because of the infinite-dimensional nature of the states) and that this leads to instability of the estimator when short estimation horizons are used. To circumvent this issue, we propose moving horizon strategies that incorporate prior information. In particular, we propose a strategy that initializes the prior using steady-state information and compare its performance against a strategy that does not initialize the prior. We find that both strategies are capable of tracking the state profiles, but we also find that superior performance is obtained with steady-state prior initialization. We also find that, under the proposed framework, pressure sensor information at junctions is sufficient to track the state profiles. We also derive approximate transport models and show that some of these can be used to achieve significant computational speed-ups without sacrificing estimation performance. We show that the estimator can be easily implemented in the graph-based modeling framework Plasmo.jl and use a multipipeline network study to demonstrate the developments.
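A moving-horizon estimator of the kind described above solves a least-squares problem over a window of recent measurements, with a prior term anchoring the first state. A scalar random-walk sketch; the weights, window length, and signal model are illustrative, and the gas-network PDE states are far richer:

```python
import numpy as np

def mhe_step(meas, x_prior, w_prior=1.0, w_meas=1.0, w_smooth=4.0):
    """Moving-horizon estimate for a scalar random-walk state: minimize
    prior + measurement + model-smoothness penalties over the window."""
    N = len(meas)
    A = np.zeros((N, N))
    b = np.zeros(N)
    A[0, 0] += w_prior                 # prior anchors the start of the window
    b[0] += w_prior * x_prior
    for k in range(N):                 # measurement terms
        A[k, k] += w_meas
        b[k] += w_meas * meas[k]
    for k in range(N - 1):             # random-walk (model) smoothness term
        A[k, k] += w_smooth
        A[k + 1, k + 1] += w_smooth
        A[k, k + 1] -= w_smooth
        A[k + 1, k] -= w_smooth
    return np.linalg.solve(A, b)       # normal equations of the quadratic cost

rng = np.random.default_rng(0)
truth = 5.0 + np.cumsum(rng.normal(0, 0.1, 60))   # slowly drifting state
meas = truth + rng.normal(0, 0.5, 60)             # noisy sensor readings
window = 20
est = mhe_step(meas[-window:], x_prior=meas[-window - 1])  # prior from history
err_est = np.mean((est - truth[-window:]) ** 2)
err_raw = np.mean((meas[-window:] - truth[-window:]) ** 2)
```

The prior term plays the same stabilizing role as the steady-state initialization in the paper: without it, the first states in the window are only weakly determined by the data.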
Closed-loop optimization of chromatography column sizing strategies in biopharmaceutical manufacture
Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S
2014-10-01
This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
Laser-Measurement-Based Volumetric Accuracy Improvement of Multi-axis Systems
NASA Astrophysics Data System (ADS)
Vladimir, Sokolov; Konstantin, Basalaev
The paper describes a newly developed approach to compensating the geometric errors of CNC-controlled multi-axis systems, based on an optimal error-correction strategy. Multi-axis CNC-controlled systems, machine tools and CMMs, are the basis of modern engineering industry. The similar design principles of technological and measurement equipment allow similar approaches to precision management, and approaches based on geometric error compensation are widely used at present. The paper describes a system for compensating the geometric errors of multi-axis equipment based on the new approach. The hardware basis of the developed system is a multi-function laser interferometer. The principles of the system's implementation, measurement results, and a simulation of the system's functioning are described. The effectiveness of applying the described principles to multi-axis equipment of different sizes and purposes, for different machining directions and zones within the workspace, is presented. The concept of an optimal correction strategy is introduced, and dynamic accuracy control is proposed.
In Silico Constraint-Based Strain Optimization Methods: the Quest for Optimal Cell Factories
Maia, Paulo; Rocha, Miguel
2015-01-01
SUMMARY Shifting from chemical to biotechnological processes is one of the cornerstones of 21st century industry. The production of a great range of chemicals via biotechnological means is a key challenge on the way toward a bio-based economy. However, this shift is occurring at a pace slower than initially expected. The development of efficient cell factories that allow for competitive production yields is of paramount importance for this leap to happen. Constraint-based models of metabolism, together with in silico strain design algorithms, promise to reveal insights into the best genetic design strategies, a step further toward achieving that goal. In this work, a thorough analysis of the main in silico constraint-based strain design strategies and algorithms is presented, their application in real-world case studies is analyzed, and a path for the future is discussed. PMID:26609052
NASA Astrophysics Data System (ADS)
Sun, Li; Wang, Deyu
2011-09-01
To reduce the high time cost of adopting a rational-based optimal design method in ship structural design, a new multi-level analysis method is proposed in this paper that introduces super-element modeling into the multi-level analysis method first proposed by O. F. Hughes. The method was verified by its effective application to the optimization of the mid-ship section of a container ship. A full 3-D FEM model of the ship, under static and quasi-static loads, was used to evaluate the structural performance of the mid-ship module, including static strength and buckling performance. The results reveal that the new method substantially reduces the computational cost of the rational-based optimization problem without decreasing its accuracy, which increases the feasibility and economic efficiency of using a rational-based optimal design method in ship structural design.
Optimal sensor placement for time-domain identification using a wavelet-based genetic algorithm
NASA Astrophysics Data System (ADS)
Mahdavi, Seyed Hossein; Razak, Hashim Abdul
2016-06-01
This paper presents a wavelet-based genetic algorithm strategy for optimal sensor placement (OSP) effective for time-domain structural identification. Initially, the GA-based fitness evaluation is significantly improved by using adaptive wavelet functions. A multi-species decimal GA coding system is then modified to be suitable for an efficient search around the local optima; in this regard, a local mutation operation is introduced in addition to the regeneration and reintroduction operators. It is concluded that different characteristics of the applied force influence the features of the structural responses, and therefore the accuracy of time-domain structural identification is directly affected. Thus, a reliable OSP strategy prior to time-domain identification is achieved by methods that minimize the distance between the simulated responses of the entire system and of the condensed system, considering the force effects. Numerical and experimental verification of the proposed strategy demonstrates its considerably high computational performance, in terms of computational cost and identification accuracy. The robustness of the proposed OSP algorithm lies in precise and fast fitness evaluation at larger sampling rates, which results in the optimum evaluation of the GA-based exploration and exploitation phases towards the global optimum solution.
NASA Astrophysics Data System (ADS)
Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan
2016-04-01
A variable-fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The robust optimization method optimizes the performance while at the same time maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm based on a steepest-descent-like strategy with proper orthogonal decomposition (POD) is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model: when the solutions fall into the trust region, the analytical model is used instead. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It is shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.
A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
Cost related sensitivity analysis for optimal operation of a grid-parallel PEM fuel cell power plant
NASA Astrophysics Data System (ADS)
El-Sharkh, M. Y.; Tanrioven, M.; Rahman, A.; Alam, M. S.
Fuel cell power plants (FCPP) as a combined source of heat, power, and hydrogen (CHP&H) can be considered a potential option to supply both thermal and electrical loads. Hydrogen produced by the FCPP can be stored for future use or sold for profit. In such a system, the tariff rates for purchasing or selling electricity, the fuel cost for the FCPP/thermal load, and the hydrogen selling price are the main factors that affect the operational strategy. This paper presents a hybrid evolutionary programming and hill-climbing based approach to evaluate the impact of changes in the above-mentioned cost parameters on the optimal operational strategy of the FCPP. The optimal operational strategy of the FCPP for different tariffs is achieved through the estimation of the following: hourly generated power, the amount of thermal power recovered, power trade with the local grid, and the quantity of hydrogen that can be produced. Results show the importance of optimizing system cost parameters in order to minimize the overall operating cost.
NASA Astrophysics Data System (ADS)
Sanghyun, Ahn; Seungwoong, Ha; Kim, Soo Yong
2016-06-01
A vital challenge for many socioeconomic systems is determining the optimum use of limited information. Traffic systems, wherein the range of resources is limited, are a particularly good example of this challenge. Based on bounded information accessibility, in terms of, for example, high costs or technical limitations, we develop a new optimization strategy to improve the efficiency of a traffic system with signals and intersections. Numerous studies, including that of Chowdhury and Schadschneider (whose method we denote by ChSch), have attempted to achieve the maximum vehicle speed or the minimum wait time for a given traffic condition. In this paper, we introduce a modified version of ChSch with an independently functioning, decentralized control system. With the new model, we determine the optimization strategy under bounded information accessibility, which proves the existence of an optimal point for phase transitions in the system. The paper also provides insight that traffic engineers can apply to create more efficient traffic systems by analyzing the area and symmetry of local sites. We support our results with a statistical analysis using empirical traffic data from Seoul, Korea.
Study on the Control Strategy of Ground Source Heat Pump of Complex Buildings
NASA Astrophysics Data System (ADS)
Dandan, Zhang; Wei, Li; Siyi, Tang
2018-05-01
A complex building group is a building group that integrates residential, business, and office uses. By numerically simulating the operation of the buried-tube heat exchanger (BHE) at 30%, 50%, 70%, and 100% occupancy rates, with the business and office portions fully operating, the optimal operation control strategy of a hybrid ground-source heat pump (HGSHP) system at different occupancy rates can be obtained. The results show that at low occupancy rates the optimal control is to use the cooling tower in the valley-load period (June and September) and the BHE for heat absorption in winter, while at high occupancy rates the optimal strategy is to open the cooling tower when the BHE outlet temperature is 2 °C higher than the wet-bulb temperature at the corresponding time. The analysis is based on annual energy consumption and the optimization of soil temperature rise, and it provides important guidance for the design and operation of HGSHP systems in complex buildings.
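The switching logic in this abstract can be written down as a simple rule. The function below is only an illustrative encoding read off the abstract: the mode names, the 0.5 low/high occupancy cutoff, and the month set are assumptions, not the paper's controller.

```python
def hgshp_mode(occupancy, month, t_bhe_outlet, t_wet_bulb):
    """Select the heat-rejection device for an HGSHP system following the
    rule stated above. Temperatures are in degrees Celsius; occupancy is a
    fraction in [0, 1]."""
    if occupancy <= 0.5:
        # low occupancy: cooling tower only in the valley-load months
        # (June, September); BHE heat exchange otherwise
        return "cooling_tower" if month in (6, 9) else "bhe"
    # high occupancy: switch to the tower once the BHE outlet runs
    # 2 degrees C above the concurrent wet-bulb temperature
    return "cooling_tower" if t_bhe_outlet >= t_wet_bulb + 2.0 else "bhe"
```

For example, at 30% occupancy in June the rule selects the cooling tower, while at 90% occupancy the choice depends on the BHE outlet versus wet-bulb temperature.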
Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2017-03-31
This paper studies an attacker against a cyber-physical system (CPS) whose goal is to move the state of the CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker's probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS's detection statistic. We formulate a linear quadratic cost function that captures the attacker's control goal and establish constraints on the induced bias that reflect the attacker's detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. In the case that the attacker's bias is upper bounded by a positive constant, we provide two algorithms, an optimal algorithm and a sub-optimal, less computationally intensive algorithm, to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely controlled helicopter under attack.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Jin, Chunlian; Balducci, Patrick J.
2013-12-01
This volume presents the battery storage evaluation tool developed at Pacific Northwest National Laboratory (PNNL), which is used to evaluate the benefits of battery storage for multiple grid applications, including energy arbitrage, balancing service, capacity value, distribution-system equipment deferral, and outage mitigation. The tool is based on optimal control strategies for capturing multiple services from a single energy storage device. In this control strategy, at each hour a look-ahead optimization is first formulated and solved to determine the battery base operating point; a minute-by-minute simulation is then performed to simulate the actual battery operation. This volume provides background and a manual for the evaluation tool.
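The hourly look-ahead that sets the battery base operating point can be illustrated, for the energy-arbitrage service alone, with a toy dynamic program over discretized energy states. The integer units, tiny grid, and single-service objective are simplifying assumptions for exposition and are not PNNL's tool:

```python
from functools import lru_cache

def lookahead_dispatch(prices, e0=0, e_max=4, p_max=1):
    """Toy hourly look-ahead for energy arbitrage. State = stored energy in
    integer units; action u = charge (+) / discharge (-) units per hour.
    Returns (best profit over the price window, optimal action sequence)."""
    @lru_cache(maxsize=None)
    def best(t, e):
        if t == len(prices):
            return 0.0, ()
        options = []
        for u in range(-p_max, p_max + 1):
            if 0 <= e + u <= e_max:                 # respect capacity limits
                profit, tail = best(t + 1, e + u)
                # buying u units costs prices[t]*u; discharging earns it back
                options.append((profit - prices[t] * u, (u,) + tail))
        return max(options)
    return best(0, e0)
```

For example, with an hourly price window of [10, 50], the schedule charges one unit in the cheap hour and discharges it in the expensive one, for a profit of 40.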
Xu, Zixiang; Zheng, Ping; Sun, Jibin; Ma, Yanhe
2013-01-01
Gene knockout has been used as a common strategy to improve microbial strains for producing chemicals. Several algorithms are available to predict the target reactions to be deleted. Most of them apply mixed-integer bi-level linear programming (MIBLP) based on metabolic networks, and use duality theory to transform the bi-level optimization problem of a large-scale MIBLP into single-level programming. However, the validity of this transformation was not proved. The solution of an MIBLP depends on the structure of the inner problem: if the inner problem is continuous, the Karush-Kuhn-Tucker (KKT) conditions can be used to reformulate the MIBLP as a single-level problem. We adopt the KKT technique in our algorithm ReacKnock to attack the intractable problem of solving the MIBLP, demonstrated with the genome-scale metabolic network model of E. coli for producing various chemicals such as succinate, ethanol, and threonine. Compared to previous methods, our algorithm is fast, stable, and reliable in finding the optimal solutions for all the chemical products tested, and it is able to provide all the alternative deletion strategies that lead to the same industrial objective. PMID:24348984
Emergency strategy optimization for the environmental control system in manned spacecraft
NASA Astrophysics Data System (ADS)
Li, Guoxiang; Pang, Liping; Liu, Meng; Fang, Yufeng; Zhang, Helin
2018-02-01
It is very important for a manned environmental control system (ECS) to be able to reconfigure its operation strategy in emergency conditions. In this article, a multi-objective optimization is established to design the optimal emergency strategy for an ECS in an insufficient power supply condition. The maximum ECS lifetime and the minimum power consumption are chosen as the optimization objectives. Some adjustable key variables are chosen as the optimization variables, which finally represent the reconfigured emergency strategy. The non-dominated sorting genetic algorithm-II is adopted to solve this multi-objective optimization problem. Optimization processes are conducted at four different carbon dioxide partial pressure control levels. The study results show that the Pareto-optimal frontiers obtained from this multi-objective optimization can represent the relationship between the lifetime and the power consumption of the ECS. Hence, the preferred emergency operation strategy can be recommended for situations when there is suddenly insufficient power.
The extension of the thermal-vacuum test optimization program to multiple flights
NASA Technical Reports Server (NTRS)
Williams, R. E.; Byrd, J.
1981-01-01
The thermal vacuum test optimization model developed to provide an approach to the optimization of a test program based on prediction of flight performance with a single flight option in mind is extended to consider reflight as in space shuttle missions. The concept of 'utility', developed under the name of 'availability', is used to follow performance through the various options encountered when the capabilities of reflight and retrievability of space shuttle are available. Also, a 'lost value' model is modified to produce a measure of the probability of a mission's success, achieving a desired utility using a minimal cost test strategy. The resulting matrix of probabilities and their associated costs provides a means for project management to evaluate various test and reflight strategies.
Mahmoud, Amr Hamed; Mohamed Abouzid, Khaled Abouzid; El Ella, Dalal Abd El Rahman Abou; Hamid Ismail, Mohamed Abdel
2011-01-01
Infection caused by hepatitis C virus (HCV) is a significant world health problem for which novel therapies are in urgent demand. The virus is highly prevalent in the Middle East and Africa, particularly Egypt, with more than 90% of infections due to genotype 4. Nonstructural (NS5B) viral proteins have emerged as an attractive target for HCV antiviral discovery. A potent class of inhibitors bearing a benzisothiazole dioxide scaffold has been identified against this target; however, these compounds were mainly active on genotype 1 while exhibiting much lower activity on other genotypes, due to the high degree of mutation of the binding site. Based on this fact, we employed a novel strategy to optimize this class for genotype 4. The strategy depends on using a refined, ligand-steered homology model of this genotype to study the mutation binding energies of the binding-site amino acid residues and the essential features for interaction, and to provide a structure-based pharmacophore model that can aid optimization. This model was applied to a focused library generated using a reaction-driven scaffold-hopping strategy. The hits retrieved were subjected to an Enovo Pipeline Pilot optimization workflow that employs R-group enumeration, core-constrained protein docking using modified CDOCKER, and finally ranking of poses using an accurate molecular mechanics generalized Born with surface area method.
Fractal profit landscape of the stock market.
Grönlund, Andreas; Yi, Il Gu; Kim, Beom Jun
2012-01-01
We investigate the structure of the profit landscape obtained from the most basic, fluctuation-based trading strategy applied to daily stock price data. The strategy is parameterized by only two variables, p and q: stocks are sold and bought if the log return is bigger than p and less than -q, respectively. Repetition of this simple strategy for a long time gives the profit defined on the underlying two-dimensional parameter space of p and q. It is revealed that the local maxima in the profit landscape are spread in the form of a fractal structure. The fractal structure implies that successful strategies are neither localized to any region of the profit landscape nor spaced evenly throughout it, which makes the optimization notoriously hard and hypersensitive to partial or limited information. The concrete implication of this property is demonstrated by showing that a strategy optimized on one stock yields worse profit on its future values, or on other stocks, than a strategy that ignores fluctuations, i.e., a long-term buy-and-hold strategy.
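The two-parameter rule is simple enough to state in code. The backtest below is a minimal sketch of the strategy described above, under the simplifying assumptions of no transaction costs, a single stock, and full reinvestment:

```python
from math import log

def pq_strategy_profit(prices, p, q):
    """Backtest the two-parameter rule: holding cash, buy when the daily
    log return drops below -q; holding stock, sell when it exceeds p.
    Returns final wealth for an initial cash of 1, marked to market."""
    cash, shares = 1.0, 0.0
    for t in range(1, len(prices)):
        r = log(prices[t] / prices[t - 1])
        if shares == 0.0 and r < -q:          # large drop: buy
            shares, cash = cash / prices[t], 0.0
        elif shares > 0.0 and r > p:          # large rise: sell
            cash, shares = shares * prices[t], 0.0
    return cash + shares * prices[-1]         # mark to market at the end
```

Sweeping p and q over a grid of such backtests produces the two-dimensional profit landscape whose fractal structure the paper analyzes.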
NASA Astrophysics Data System (ADS)
Gorzelic, P.; Schiff, S. J.; Sinha, A.
2013-04-01
Objective. To explore the use of classical feedback control methods to achieve an improved deep brain stimulation (DBS) algorithm for application to Parkinson's disease (PD). Approach. A computational model of PD dynamics was employed to develop model-based rational feedback controller design. The restoration of thalamocortical relay capabilities to patients suffering from PD is formulated as a feedback control problem with the DBS waveform serving as the control input. Two high-level control strategies are tested: one that is driven by an online estimate of thalamic reliability, and another that acts to eliminate substantial decreases in the inhibition from the globus pallidus interna (GPi) to the thalamus. Control laws inspired by traditional proportional-integral-derivative (PID) methodology are prescribed for each strategy and simulated on this computational model of the basal ganglia network. Main Results. For control based upon thalamic reliability, a strategy of frequency proportional control with proportional bias delivered the optimal control achieved for a given energy expenditure. In comparison, control based upon synaptic inhibitory output from the GPi performed very well in comparison with those of reliability-based control, with considerable further reduction in energy expenditure relative to that of open-loop DBS. The best controller performance was amplitude proportional with derivative control and integral bias, which is full PID control. We demonstrated how optimizing the three components of PID control is feasible in this setting, although the complexity of these optimization functions argues for adaptive methods in implementation. Significance. Our findings point to the potential value of model-based rational design of feedback controllers for Parkinson's disease.
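The control laws tested above are "inspired by traditional PID methodology". A generic discrete PID update, of the kind such laws specialize, can be sketched as follows; the gains, time step, and choice of error signal (e.g. a thalamic reliability error) are application-specific assumptions, not values from the paper:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, None

    def update(self, error):
        # accumulate the integral term and difference the derivative term
        self.integral += error * self.dt
        deriv = (0.0 if self.prev_error is None
                 else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Setting ki = kd = 0 recovers a pure proportional law, while enabling all three terms corresponds to the full PID control that performed best in the study.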
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Sunarsih; Kartono
2018-01-01
In this paper, a mathematical model in quadratic programming with fuzzy parameters is proposed to determine the optimal strategy for an integrated inventory control and supplier selection problem with fuzzy demand. To solve the corresponding optimization problem, we use expected-value-based fuzzy programming. Numerical experiments are performed to evaluate the model. From the results, the optimal amount of each product to purchase from each supplier in each time period and the optimal amount of each product to store in inventory in each time period were determined with minimum total cost, and the inventory level was kept sufficiently close to the reference level.
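The expected-value treatment of fuzzy demand can be illustrated with triangular fuzzy numbers, whose credibility-based expected value is (a + 2b + c)/4. The greedy supplier allocation below is a hypothetical stand-in for the paper's quadratic program, shown only to make the defuzzify-then-optimize flow concrete:

```python
def fuzzy_expected(a, b, c):
    """Credibility-based expected value of a triangular fuzzy number (a, b, c)."""
    return (a + 2.0 * b + c) / 4.0

def select_suppliers(demand_fuzzy, suppliers):
    """Defuzzify demand, then fill it from the cheapest suppliers first.
    suppliers: list of (unit_cost, capacity). Returns (orders, total_cost).
    A greedy illustration only; the paper solves a quadratic program with
    inventory carried between periods."""
    demand = fuzzy_expected(*demand_fuzzy)
    orders, total = [], 0.0
    for cost, cap in sorted(suppliers):        # cheapest first
        buy = min(cap, demand)
        orders.append(buy)
        total += buy * cost
        demand -= buy
        if demand <= 0:
            break
    return orders, total
```

For a fuzzy demand (80, 100, 120), the expected demand is 100 units, which the greedy pass then splits across suppliers by cost.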
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which leads to great improvement in system fault tolerance: a task within a multi-agent system can be completed in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel, so the computational complexity is greatly reduced by a distributed algorithm in a multi-agent system. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the drawbacks of using multicast due to bandwidth limitations. Distributed algorithms have been applied to solving a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. First, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body; estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs; the proposed Bellman-Ford dual-subgradient path planning method relieves congestion in the corridor and exit areas. Third, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation, and a hybrid control scheme is presented for highway network travel time minimization.
Compared with the uncontrolled case and a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces the total travel time on the test highway network.
NASA Astrophysics Data System (ADS)
Métivier, L.; Brossier, R.; Mérigot, Q.; Oudet, E.; Virieux, J.
2016-04-01
Full waveform inversion using the conventional L2 distance to measure the misfit between seismograms is known to suffer from cycle skipping. An alternative strategy is proposed in this study, based on a misfit measure computed with an optimal transport distance. This measure accounts for the lateral coherency of events within the seismograms, instead of considering each seismic trace independently, as is generally done in full waveform inversion. The computation of this optimal transport distance relies on a particular mathematical formulation allowing for the non-conservation of the total energy between seismograms. The numerical solution of the optimal transport problem is performed using proximal splitting techniques. Three synthetic case studies are investigated using this strategy: the Marmousi 2 model, the BP 2004 salt model, and the Chevron 2014 benchmark data. The results emphasize interesting properties of the optimal transport distance. The associated misfit function is less prone to cycle skipping. A workflow is designed to accurately reconstruct the salt structures in the BP 2004 model, starting from an initial model containing no information about these structures. A high-resolution P-wave velocity estimation is built from the Chevron 2014 benchmark data, following a frequency continuation strategy; this estimation explains the data accurately, whereas full waveform inversion based on the L2 distance, using the same workflow, converges towards a local minimum. These results yield encouraging perspectives regarding the use of the optimal transport distance for full waveform inversion: the sensitivity to the accuracy of the initial model is reduced, the reconstruction of complex salt structures is made possible, the method is robust to noise, and the interpretation of seismic data dominated by reflections is enhanced.
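In one dimension, the balanced optimal transport (Wasserstein-1) distance between two nonnegative equal-mass signals reduces to the L1 distance between their cumulative distributions, which makes the cycle-skipping argument easy to demonstrate. The paper's formulation additionally handles unequal mass between seismograms; the sketch below covers only the textbook balanced case:

```python
import numpy as np

def w1_distance(f, g, dx=1.0):
    """Wasserstein-1 distance between two nonnegative, equal-mass 1-D
    signals: the L1 distance between their cumulative distributions."""
    F, G = np.cumsum(f) * dx, np.cumsum(g) * dx
    return np.sum(np.abs(F - G)) * dx

def pulse(center, n=100, width=3.0):
    """A Gaussian pulse standing in for a seismic event."""
    return np.exp(-0.5 * ((np.arange(n) - center) / width) ** 2)

# An L2 misfit saturates once two pulses no longer overlap, but W1 keeps
# growing with the time shift, which is what mitigates cycle skipping:
d_small = w1_distance(pulse(40), pulse(45))
d_large = w1_distance(pulse(40), pulse(60))
```

Because W1 grows monotonically with the shift between events, the misfit retains a useful gradient even when the initial model predicts arrivals more than half a period away from the observed ones.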
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2016-01-01
Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded to have the potential to improve the catchment hydrological process simulation and prediction capability. In the early stage, physically based distributed hydrological models are assumed to derive model parameters from the terrain properties directly, so there is no need to calibrate model parameters. However, unfortunately the uncertainties associated with this model derivation are very high, which impacted their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting by using particle swarm optimization (PSO) algorithm and to test its competence and to improve its performances; the second is to explore the possibility of improving physically based distributed hydrological model capability in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of the PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, the improved PSO algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of the linearly decreasing inertia weight strategy to change the inertia weight and the arccosine function strategy to adjust the acceleration coefficients. 
This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can substantially improve the model's capability in catchment flood forecasting, thus showing that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It was also found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
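A minimal sketch of PSO with a linearly decreasing inertia weight, using a toy sphere objective in place of the Liuxihe model's calibration error. The arccosine schedule for the acceleration coefficients is an assumed form reconstructed from the description above, not the paper's exact formula; the defaults of 20 particles and 30 iterations mirror the values reported in the abstract:

```python
import math
import random

def pso(objective, dim, n_particles=20, max_iter=30,
        w_max=0.9, w_min=0.4, c_total=4.0, lo=-5.0, hi=5.0, seed=1):
    """PSO with a linearly decreasing inertia weight and an assumed
    arccosine-shaped hand-over from the cognitive (c1) to the social
    (c2) acceleration coefficient."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for t in range(max_iter):
        # inertia weight decreases linearly from w_max to w_min
        w = w_max - (w_max - w_min) * t / max(1, max_iter - 1)
        # arccos schedule: c1 decays from c_total to 0, c2 grows symmetrically
        frac = math.acos(-1.0 + 2.0 * t / max(1, max_iter - 1)) / math.pi
        c1, c2 = c_total * frac, c_total * (1.0 - frac)
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            f = objective(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), f
                if f < gbest_f:
                    gbest, gbest_f = list(xs[i]), f
    return gbest, gbest_f

sphere = lambda x: sum(v * v for v in x)
best, best_f = pso(sphere, dim=2)
```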
Park, Jin Hwan; Kim, Tae Yong; Lee, Kwang Ho; Lee, Sang Yup
2011-04-01
We have previously reported the development of a 100% genetically defined engineered Escherichia coli strain capable of producing L-valine from glucose in batch culture with a high yield of 0.38 g L-valine per gram of glucose (0.58 mol L-valine per mol glucose). Here we report a systems biological strategy of employing flux response analysis in bioprocess development, using L-valine production by fed-batch culture as an example. Through the systems-level analysis, the source of ATP was found to be important for efficient L-valine production. There was a trade-off between L-valine production and biomass formation, which was balanced for the most efficient L-valine production. Furthermore, the acetic acid feeding strategy was optimized based on flux response analysis. The final fed-batch cultivation strategy allowed production of 32.3 g/L L-valine, the highest concentration reported for E. coli. This approach of employing systems-level analysis of metabolic fluxes in developing fed-batch cultivation strategies should also be applicable to the efficient production of other bioproducts. Copyright © 2010 Wiley Periodicals, Inc.
Multiple UAV Cooperation for Wildfire Monitoring
NASA Astrophysics Data System (ADS)
Lin, Zhongjie
Wildfires have been a major factor in the development and management of the world's forests. An accurate assessment of wildfire status is imperative for fire management. This thesis is dedicated to the topic of utilizing multiple unmanned aerial vehicles (UAVs) to cooperatively monitor a large-scale wildfire. This is achieved through estimation of the wildfire spreading situation based on on-line measurements and a well-designed cooperation strategy to ensure efficiency. First, based on an understanding of the physical characteristics of wildfire propagation behavior, a wildfire model and a Kalman filter-based method are proposed to estimate the wildfire rate of spread and the fire front contour profile. With the abundant on-line measurements from the on-board sensors of the UAVs, the proposed method allows a wildfire monitoring mission to benefit from on-line information updating, increased flexibility, and accurate estimation. An independent wildfire simulator is utilized to verify the effectiveness of the proposed method. Second, based on the filter analysis, the wildfire spreading situation, and the vehicle dynamics, the influence of different UAV cooperation strategies on the overall mission performance is studied. The multi-UAV cooperation problem is formulated in a distributed network, and a consensus-based method is proposed to address it. The optimal cooperation strategy of the UAVs is obtained through mathematical analysis and then verified in an independent fire simulation environment.
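The estimation idea (a Kalman filter tracking the fire's rate of spread from noisy airborne measurements) can be sketched in scalar form. The noise levels and the 1.5 m/min true rate below are hypothetical, and the thesis additionally estimates the full fire-front contour rather than a single scalar:

```python
import random

def kalman_ros(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a nearly constant rate of spread (ROS).
    State model:  x_k = x_{k-1} + w,  w ~ N(0, q)
    Measurement:  z_k = x_k + v,      v ~ N(0, r)"""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: variance grows by process noise
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the measurement residual
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Hypothetical noisy ROS observations around a true rate of 1.5 m/min.
rng = random.Random(0)
true_ros = 1.5
zs = [true_ros + rng.gauss(0.0, 0.5) for _ in range(200)]
est = kalman_ros(zs)
```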
Optimal sampling strategies for detecting zoonotic disease epidemics.
Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W
2014-06-01
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
Mondal, Milon; Radeva, Nedyalka; Fanlo-Virgós, Hugo; Otto, Sijbren; Klebe, Gerhard; Hirsch, Anna K H
2016-08-01
Fragment-based drug design (FBDD) affords active compounds for biological targets. While there are numerous reports on FBDD by fragment growing/optimization, fragment linking has rarely been reported. Dynamic combinatorial chemistry (DCC) has become a powerful hit-identification strategy for biological targets. We report the synergistic combination of fragment linking and DCC to identify inhibitors of the aspartic protease endothiapepsin. Based on X-ray crystal structures of endothiapepsin in complex with fragments, we designed a library of bis-acylhydrazones and used DCC to identify potent inhibitors. The most potent inhibitor exhibits an IC50 value of 54 nM, which represents a 240-fold improvement in potency compared to the parent hits. Subsequent X-ray crystallography validated the predicted binding mode, thus demonstrating the efficiency of the combination of fragment linking and DCC as a hit-identification strategy. This approach could be applied to a range of biological targets, and holds the potential to facilitate hit-to-lead optimization. © 2016 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
NASA Astrophysics Data System (ADS)
Asmar, Joseph Al; Lahoud, Chawki; Brouche, Marwan
2018-05-01
Cogeneration and trigeneration systems can contribute to the reduction of primary energy consumption and greenhouse gas emissions in the residential and tertiary sectors, by reducing fossil fuel demand and grid losses with respect to conventional systems. Cogeneration systems are characterized by very high energy efficiency (80 to 90%) and lower emissions compared to conventional energy production. The integration of these systems into the energy network must simultaneously take into account their economic and environmental challenges. In this paper, a decision-making strategy is introduced that is divided into two parts: the first is a strategy based on a multi-objective optimization tool with data analysis, and the second is based on an optimization algorithm. The power dispatching of the Lebanese electricity grid is then simulated and considered as a case study in order to demonstrate the compatibility of the cogeneration power calculated by our decision-making technique. In addition, the thermal energy produced by the cogeneration systems whose capacity is selected by our technique is compatible with the thermal demand for district heating.
Li, Ruiying; Ma, Wenting; Huang, Ning; Kang, Rui
2017-01-01
A sophisticated node deployment method can efficiently reduce the energy consumption of a Wireless Sensor Network (WSN) and prolong the corresponding network lifetime. Many node-deployment-based lifetime optimization methods for WSNs have been proposed; however, previous studies often neglect the retransmission mechanism and assume the power control strategy to be continuous, although both the retransmission mechanism and discrete power control are widely used in practice and have a large effect on network energy consumption. In this paper, both retransmission and discrete power control are considered together, and a more realistic energy-consumption-based network lifetime model for linear WSNs is provided. Using this model, we then propose a generic deployment-based optimization model that maximizes network lifetime under coverage, connectivity and transmission rate success constraints. The more accurate lifetime evaluation leads to a longer optimal network lifetime in the realistic situation. To illustrate the effectiveness of our method, both one-tiered and two-tiered uniformly and non-uniformly distributed linear WSNs are optimized in our case studies, and comparisons between our optimal results and those based on relatively inaccurate lifetime evaluation show the advantage of our method when investigating WSN lifetime optimization problems.
Gang, G J; Siewerdsen, J H; Stayman, J W
2017-02-11
This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization, and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β): the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
Multi-strategy based quantum cost reduction of linear nearest-neighbor quantum circuit
NASA Astrophysics Data System (ADS)
Tan, Ying-ying; Cheng, Xue-yun; Guan, Zhi-jin; Liu, Yang; Ma, Haiying
2018-03-01
With the development of reversible and quantum computing, the study of reversible and quantum circuits has also developed rapidly. Due to physical constraints, most quantum circuits require quantum gates to act on adjacent quantum bits. However, many existing linear nearest-neighbor quantum circuits have a large quantum cost. Therefore, how to effectively reduce quantum cost has become a popular research topic. In this paper, we propose multiple optimization strategies to reduce the quantum cost of such circuits: quantum cost is reduced through MCT gate decomposition, nearest-neighbor transformation, and circuit simplification, respectively. The experimental results show that the proposed strategies can effectively reduce the quantum cost, with a maximum optimization rate of 30.61% compared to the corresponding previous results.
NASA Astrophysics Data System (ADS)
Kostrzewa, Daniel; Josiński, Henryk
2016-06-01
The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic individual selection strategies. The goal of the project was to evaluate the modified exIWO by testing its usefulness for the optimization of multidimensional numerical functions. The optimized functions, Griewank, Rastrigin, and Rosenbrock, are frequently used as benchmarks because of their characteristics.
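A minimal sketch of the core IWO loop (fitness-proportional seeding with a dispersal radius that decays over iterations, and truncation as the selection step). It is demonstrated on a simple sphere function rather than the Griewank/Rastrigin/Rosenbrock benchmarks, and the exIWO-specific selection variants are omitted:

```python
import random

def iwo(objective, dim, pop_max=20, seeds_min=1, seeds_max=5,
        sigma_init=2.0, sigma_final=0.01, n_iter=60, lo=-5.0, hi=5.0, seed=7):
    """Minimal Invasive Weed Optimization: each weed spawns a number of
    seeds proportional to its relative fitness; seeds scatter with a
    normally distributed step whose spread decays over iterations."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_max)]
    for t in range(n_iter):
        sigma = sigma_final + (sigma_init - sigma_final) * ((n_iter - t) / n_iter) ** 2
        fits = [objective(x) for x in pop]
        worst, best = max(fits), min(fits)
        offspring = []
        for x, f in zip(pop, fits):
            rel = (worst - f) / (worst - best) if worst > best else 1.0
            n_seeds = seeds_min + int(rel * (seeds_max - seeds_min))
            for _ in range(n_seeds):
                offspring.append([min(hi, max(lo, xi + rng.gauss(0.0, sigma)))
                                  for xi in x])
        pop += offspring
        pop.sort(key=objective)   # competitive exclusion: keep the fittest
        pop = pop[:pop_max]
    return pop[0], objective(pop[0])

sphere = lambda x: sum(v * v for v in x)
best, best_f = iwo(sphere, dim=2)
```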
Christodoulides, Panayiotis; Hirata, Yoshito; Domínguez-Hüttinger, Elisa; Danby, Simon G.; Cork, Michael J.; Williams, Hywel C.; Aihara, Kazuyuki
2017-01-01
Atopic dermatitis (AD) is a common chronic skin disease characterized by recurrent skin inflammation and a weak skin barrier, and is known to be a precursor to other allergic diseases such as asthma. AD affects up to 25% of children worldwide and the incidence continues to rise. There is still uncertainty about the optimal treatment strategy in terms of choice of treatment, potency, duration and frequency. This study aims to develop a computational method to design optimal treatment strategies for the clinically recommended ‘proactive therapy’ for AD. Proactive therapy aims to prevent recurrent flares once the disease has been brought under initial control. Typically, this is done by using an anti-inflammatory treatment such as a potent topical corticosteroid intensively for a few weeks to ‘get control’, followed by intermittent weekly treatment to suppress subclinical inflammation to ‘keep control’. Using a hybrid mathematical model of AD pathogenesis that we recently proposed, we computationally derived the optimal treatment strategies for individual virtual patient cohorts, by recursively solving optimal control problems using a differential evolution algorithm. Our simulation results suggest that such an approach can inform the design of optimal individualized treatment schedules that include application of topical corticosteroids and emollients, based on the disease status of patients observed on their weekly hospital visits. We demonstrate the potential and the gaps of our approach to be applied to clinical settings. This article is part of the themed issue ‘Mathematical methods in medicine: neuroscience, cardiology and pathology’. PMID:28507230
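The inner optimization step above (a differential evolution search over candidate treatment schedules) can be sketched with the classic DE/rand/1/bin scheme. The four-element "potency profile" and its target below are hypothetical stand-ins for the paper's model-based cost of a weekly dosing schedule:

```python
import random

def differential_evolution(objective, bounds, np_=20, f=0.7, cr=0.9,
                           n_gen=80, seed=5):
    """DE/rand/1/bin: mutate with the scaled difference of two random
    members, binomially cross with the target, keep the better vector."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [objective(x) for x in pop]
    for _ in range(n_gen):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)   # at least one mutated coordinate
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < cr or j == jrand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    trial.append(min(hi, max(lo, v)))
                else:
                    trial.append(pop[i][j])
            tf = objective(trial)
            if tf <= fit[i]:
                pop[i], fit[i] = trial, tf
    i_best = min(range(np_), key=lambda i: fit[i])
    return pop[i_best], fit[i_best]

# Toy stand-in for a dosing schedule: penalize deviation from a
# hypothetical optimal potency profile over four weeks.
target = [0.8, 0.5, 0.3, 0.2]
cost = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
best, best_f = differential_evolution(cost, [(0.0, 1.0)] * 4)
```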
Fireworks Algorithm with Enhanced Fireworks Interaction.
Zhang, Bei; Zheng, Yu-Jun; Zhang, Min-Xia; Chen, Sheng-Yong
2017-01-01
As a relatively new metaheuristic in swarm intelligence, fireworks algorithm (FWA) has exhibited promising performance on a wide range of optimization problems. This paper aims to improve FWA by enhancing fireworks interaction in three aspects: 1) Developing a new Gaussian mutation operator to make sparks learn from more exemplars; 2) Integrating the regular explosion operator of FWA with the migration operator of biogeography-based optimization (BBO) to increase information sharing; 3) Adopting a new population selection strategy that enables high-quality solutions to have high probabilities of entering the next generation without incurring high computational cost. The combination of the three strategies can significantly enhance fireworks interaction and thus improve solution diversity and suppress premature convergence. Numerical experiments on the CEC 2015 single-objective optimization test problems show the effectiveness of the proposed algorithm. The application to a high-speed train scheduling problem also demonstrates its feasibility in real-world optimization problems.
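A bare-bones fireworks algorithm conveying the interaction ideas above: better fireworks get more sparks within smaller explosion radii, and a few Gaussian-mutation sparks learn from the best firework as an exemplar. This is a simplified sketch with assumed operator forms, not the paper's exact operators or its BBO migration step:

```python
import random

def fwa(objective, dim, n_fireworks=5, total_sparks=30, a_max=2.0,
        n_gauss=5, n_iter=80, lo=-5.0, hi=5.0, seed=11):
    """Bare-bones fireworks algorithm with truncation selection."""
    rng = random.Random(seed)
    clip = lambda v: min(hi, max(lo, v))
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_fireworks)]
    for _ in range(n_iter):
        fits = [objective(x) for x in pop]
        worst, best_fit = max(fits), min(fits)
        best_fw = pop[fits.index(best_fit)]
        sparks = [list(x) for x in pop]          # keep parents (elitism)
        for x, f in zip(pop, fits):
            rel = (worst - f + 1e-12) / (worst - best_fit + 1e-12)
            amp = a_max * (1.0 - rel) + 0.1      # better firework, smaller radius
            n_sparks = 1 + int(rel * total_sparks / n_fireworks)
            for _ in range(n_sparks):
                sparks.append([clip(xi + rng.uniform(-amp, amp)) for xi in x])
        for _ in range(n_gauss):                 # Gaussian sparks pull toward best
            x = rng.choice(pop)
            g = rng.gauss(1.0, 1.0)
            sparks.append([clip(xi + g * (bi - xi)) for xi, bi in zip(x, best_fw)])
        sparks.sort(key=objective)
        pop = sparks[:n_fireworks]
    return pop[0], objective(pop[0])

sphere = lambda x: sum(v * v for v in x)
best, best_f = fwa(sphere, dim=2)
```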
Wang, Jie-sheng; Li, Shu-xia; Gao, Jie
2014-01-01
To meet the real-time fault diagnosis and optimization monitoring requirements of the polymerization kettle in the polyvinyl chloride (PVC) resin production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. Firstly, a mapping between the polymerization process data and the fault patterns is established by analyzing the production technology of the polymerization kettle equipment. The particle swarm optimization (PSO) algorithm, with a new dynamic adjustment method for the inertia weight, is adopted to optimize the structural parameters of the SOM neural network. The fault pattern classification of the polymerization kettle equipment realizes the nonlinear mapping from a given symptom set to the fault set. Finally, fault diagnosis simulation experiments are conducted using industrial on-site historical data of the polymerization kettle, and the results show that the proposed PSO-SOM fault diagnosis strategy is effective.
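A plain SOM training sketch of the kind such a strategy builds on: each sample pulls its best-matching unit (BMU) and the BMU's grid neighbours toward it, with decaying learning rate and neighbourhood width. The PSO tuning of the SOM's structural parameters is omitted here, and the two "symptom clusters" are hypothetical:

```python
import math
import random

def train_som(data, grid=4, n_iter=500, lr0=0.5, sigma0=2.0, seed=2):
    """Train a 1-D SOM: find the BMU for a random sample and pull the
    BMU and its neighbours toward the sample, with decaying learning
    rate (lr) and neighbourhood width (sigma)."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(grid)]
    for t in range(n_iter):
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        x = rng.choice(data)
        bmu = min(range(grid),
                  key=lambda i: sum((wi - xi) ** 2 for wi, xi in zip(w[i], x)))
        for i in range(grid):
            h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
            w[i] = [wi + lr * h * (xi - wi) for wi, xi in zip(w[i], x)]
    return w

# Two hypothetical symptom clusters (e.g. normal vs. fault signatures).
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
weights = train_som(data)
```

After training, samples from the two clusters should map to different units, which is what makes the map usable as a fault-pattern classifier.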
3D sensor placement strategy using the full-range pheromone ant colony system
NASA Astrophysics Data System (ADS)
Shuo, Feng; Jingqing, Jia
2016-07-01
An optimized sensor placement strategy is extremely beneficial for ensuring the safety and reducing the cost of structural health monitoring (SHM) systems. The sensors must be placed such that important dynamic information is obtained while the number of sensors is minimized. Common practice is to select individual sensor directions by one of several 1D sensor placement methods and to place triaxial sensors in these directions for monitoring. However, this may lead to non-optimal placement of many triaxial sensors. In this paper, a new method, called FRPACS, is proposed based on the ant colony system (ACS) to solve the optimal placement of triaxial sensors, which are placed as single units in an optimal fashion. The new method is then compared with other algorithms using the Dalian North Bridge as a case study. The computational precision and iteration efficiency of FRPACS are greatly improved compared with the original ACS and the EFI method.
The cost-effectiveness of diagnostic management strategies for adults with minor head injury.
Holmes, M W; Goodacre, S; Stevenson, M D; Pandor, A; Pickering, A
2012-09-01
To estimate the cost-effectiveness of diagnostic management strategies for adults with minor head injury, a mathematical model was constructed to evaluate the incremental costs and effectiveness (quality-adjusted life years gained, QALYs) of ten diagnostic management strategies for adults with minor head injuries. Secondary analyses were undertaken to determine the cost-effectiveness of hospital admission compared to discharge home and to explore the cost-effectiveness of strategies when no responsible adult was available to observe the patient after discharge. The apparent optimal strategy was based on the high and medium risk Canadian CT Head Rule (CCHRhm), although the costs and outcomes associated with each strategy were broadly similar. Hospital admission for patients with non-neurosurgical injury on CT dominated discharge home, whilst hospital admission for clinically normal patients with a normal CT was not cost-effective compared to discharge home with or without a responsible adult at £39 and £2.5 million per QALY, respectively. A selective CT strategy with discharge home if the CT scan was normal remained optimal compared to not investigating or CT scanning all patients when there was no responsible adult available to observe them after discharge. Our economic analysis confirms that the recent extension of access to CT scanning for minor head injury is appropriate. Liberal use of CT scanning based on a high sensitivity decision rule is not only effective but also cost-saving. The cost of CT scanning is very small compared to the estimated cost of caring for patients with brain injury worsened by delayed treatment. It is recommended therefore that all hospitals receiving patients with minor head injury should have unrestricted access to CT scanning for use in conjunction with evidence based guidelines. Provisionally the CCHRhm decision rule appears to be the best strategy although there is considerable uncertainty around the optimal decision rule.
However, the CCHRhm rule appears to be the most widely validated and it therefore seems appropriate to conclude that the CCHRhm rule has the best evidence to support its use. Copyright © 2011 Elsevier Ltd. All rights reserved.
Improved Ant Algorithms for Software Testing Cases Generation
Yang, Shunkun; Xu, Jiaqi
2014-01-01
Ant colony optimization (ACO) for software test case generation is a very popular domain in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce in the early search, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensively improved ant colony optimization (ACIACO), which combines all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved methods can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
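An illustrative ACO sketch on a layered graph, a stand-in for branch selection in test-case generation. Reinforcing only the global-best path loosely mirrors the global path update idea (IGPACO), but the update formulas here are simplified assumptions, not the paper's:

```python
import random

def aco_shortest_path(layers, cost, n_ants=10, n_iter=40,
                      alpha=1.0, beta=2.0, rho=0.1, seed=4):
    """ACO on a layered graph: each ant picks one option per layer with
    probability proportional to pheromone^alpha * (1/cost)^beta; after
    each iteration the pheromone evaporates and the global-best path is
    reinforced (a simplified global path pheromone update)."""
    rng = random.Random(seed)
    tau = [[1.0] * n for n in layers]
    best_path, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            path = []
            for layer, n in enumerate(layers):
                weights = [tau[layer][j] ** alpha
                           * (1.0 / (1e-9 + cost(layer, j))) ** beta
                           for j in range(n)]
                r, acc, choice = rng.random() * sum(weights), 0.0, n - 1
                for j, wj in enumerate(weights):
                    acc += wj
                    if r <= acc:
                        choice = j
                        break
                path.append(choice)
            c = sum(cost(l, j) for l, j in enumerate(path))
            if c < best_cost:
                best_path, best_cost = path, c
        for layer, n in enumerate(layers):   # evaporate, then reinforce best
            for j in range(n):
                tau[layer][j] *= (1.0 - rho)
            tau[layer][best_path[layer]] += rho / (1.0 + best_cost)
    return best_path, best_cost

# Three layers with three options each; hypothetical per-branch costs.
costs = [[3.0, 1.0, 2.0], [2.0, 2.5, 0.5], [1.5, 4.0, 1.0]]
path, total = aco_shortest_path([3, 3, 3], lambda l, j: costs[l][j])
print(path)  # -> [1, 2, 2]
```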
Liu, Derong; Wang, Ding; Li, Hongliang
2014-02-01
In this paper, using a neural-network-based online learning optimal control approach, a novel decentralized control strategy is developed to stabilize a class of continuous-time nonlinear interconnected large-scale systems. First, optimal controllers of the isolated subsystems are designed with cost functions reflecting the bounds of the interconnections. Then, it is proven that the decentralized control strategy of the overall system can be established by adding appropriate feedback gains to the optimal control policies of the isolated subsystems. Next, an online policy iteration algorithm is presented to solve the Hamilton-Jacobi-Bellman equations related to the optimal control problem. Through constructing a set of critic neural networks, the cost functions can be obtained approximately, followed by the control policies. Furthermore, the dynamics of the estimation errors of the critic networks are verified to be uniformly ultimately bounded. Finally, a simulation example is provided to illustrate the effectiveness of the present decentralized control scheme.
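For a scalar linear subsystem the policy-iteration idea reduces to Kleinman's algorithm, with the cost function (the "critic") computed exactly instead of approximated by a neural network. The system numbers below are hypothetical; this is a sketch of the evaluate/improve loop, not the paper's nonlinear decentralized scheme:

```python
def policy_iteration_lqr(a, b, q, r, k0, n_iter=20):
    """Kleinman policy iteration for the scalar LQR problem
        dx/dt = a*x + b*u,   cost = integral of (q*x^2 + r*u^2) dt.
    Policy evaluation solves the Lyapunov equation
        2*(a - b*k)*p + q + r*k^2 = 0
    for the quadratic cost coefficient p (the exact 'critic');
    policy improvement then sets k = b*p/r."""
    k = k0                                       # k0 must satisfy a - b*k0 < 0
    for _ in range(n_iter):
        p = (q + r * k * k) / (2.0 * (b * k - a))  # evaluate current policy
        k = b * p / r                              # improve the policy
    return p, k

# Hypothetical subsystem: dx/dt = x + u with q = r = 1.
p, k = policy_iteration_lqr(a=1.0, b=1.0, q=1.0, r=1.0, k0=2.0)
# Converges to the Riccati solution p* = 1 + sqrt(2).
```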
RFID Application Strategy in Agri-Food Supply Chain Based on Safety and Benefit Analysis
NASA Astrophysics Data System (ADS)
Zhang, Min; Li, Peichong
Agri-food supply chain management (SCM), a management method to optimize internal costs and productivity, has evolved as an application of e-business technologies. RFID is now widely used in many fields. In this paper, we analyze the characteristics of the agri-food supply chain and discuss the disadvantages of RFID. After that, we study application strategies for RFID based on benefit and safety analysis.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to a unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dogan, N; Padgett, K; Evans, J
Purpose: Adaptive Radiotherapy (ART) with frequent CT imaging has been used to improve dosimetric accuracy by accounting for anatomical variations, such as primary tumor shrinkage and/or body weight loss, in Head and Neck (H&N) patients. In most ART strategies, the difference between the planned and the delivered dose is estimated by generating new plans on repeated CT scans using the dose-volume constraints from the initial planning CT, without considering the dose already delivered. The aim of this study was to assess the dosimetric gains achieved by re-planning based on prior dose, by comparing them to re-planning not based on prior dose, for H&N patients. Methods: Ten locally-advanced H&N cancer patients were selected for this study. For each patient, six weekly CT scans were acquired during the course of radiotherapy. PTVs, parotids, cord, brainstem, and esophagus were contoured on both the planning and the six weekly CT images. ART with weekly re-plans was performed with two strategies: 1) generating a new optimized IMRT plan without including prior dose from previous fractions (NoPriorDose) and 2) generating a new optimized IMRT plan based on the prior dose given in previous fractions (PriorDose). Deformable image registration was used to accumulate the dose distributions between the planning and six weekly CT scans. The differences in accumulated doses for both strategies were evaluated using the DVH constraints for all structures. Results: On average, the differences in accumulated doses to PTV1, PTV2 and PTV3 for the NoPriorDose and PriorDose strategies were <2%. The differences in Dmean to the cord and brainstem were within 3%. The esophagus Dmean was reduced by 2% using PriorDose. The PriorDose strategy, however, reduced the left parotid D50 and Dmean by 15% and 14%, respectively.
Conclusion: This study demonstrated significant parotid sparing, potentially reducing xerostomia, by using ART with IMRT optimization based on prior dose for weekly re-planning of H&N cancer patients.
Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan
2015-01-01
To identify optimal cut-off points of fasting plasma glucose (FPG) for a two-step strategy in screening for abnormal glucose metabolism and estimating prevalence in the general Chinese population, a population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2-hour post-load glucose from the oral glucose tolerance test in all participants. The screening potential of FPG, the cost per case identified by the two-step strategy, and the optimal FPG cut-off points are described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Of the participants, 12.0% or 9.0% were diagnosed with pre-diabetes using the 2003 ADA criteria or the 1999 WHO criteria, respectively. The optimal FPG cut-off points for the two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using the 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using the 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimation of prevalence was reduced to nearly 38% for pre-diabetes and 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in a hyperglycemic condition. Using optimal FPG cut-off points for the two-step strategy in the Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic conditions. PMID:25785585
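Choosing a screening cut-off by trading sensitivity against specificity can be sketched with Youden's index. The study additionally weighed cost per identified case, which this sketch ignores, and the toy FPG values below are hypothetical:

```python
def best_cutoff(values, labels):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1.
    A subject screens positive when value >= cutoff; labels are 1 for
    cases and 0 for non-cases."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_j, best_c = -1.0, None
    for c in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= c and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < c and y == 0)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Hypothetical FPG values (mmol/l); 1 = abnormal glucose metabolism.
fpg    = [4.6, 4.8, 5.0, 5.2, 5.4, 5.6, 5.9, 6.3, 6.8, 7.4]
status = [0,   0,   0,   0,   1,   0,   1,   1,   1,   1]
cutoff, j = best_cutoff(fpg, status)
print(cutoff)  # -> 5.4
```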
Predicting distant failure in early stage NSCLC treated with SBRT using clinical parameters.
Zhou, Zhiguo; Folkert, Michael; Cannon, Nathan; Iyengar, Puneeth; Westover, Kenneth; Zhang, Yuanyuan; Choy, Hak; Timmerman, Robert; Yan, Jingsheng; Xie, Xian-J; Jiang, Steve; Wang, Jing
2016-06-01
The aim of this study was to predict early distant failure in early stage non-small cell lung cancer (NSCLC) treated with stereotactic body radiation therapy (SBRT) from clinical parameters using machine learning algorithms. The dataset used in this work includes 81 early stage NSCLC patients with at least 6 months of follow-up who underwent SBRT between 2006 and 2012 at a single institution. The clinical parameters (n=18) for each patient include demographic parameters, tumor characteristics, treatment fraction schemes, and pretreatment medications. Three predictive models were constructed based on different machine learning algorithms: (1) artificial neural network (ANN), (2) logistic regression (LR) and (3) support vector machine (SVM). Furthermore, to select an optimal clinical parameter set for model construction, three strategies were adopted: (1) a clonal selection algorithm (CSA) based selection strategy; (2) the sequential forward selection (SFS) method; and (3) a statistical analysis (SA) based strategy. 5-fold cross-validation was used to validate the performance of each predictive model. Accuracy was assessed by the area under the receiver operating characteristic (ROC) curve (AUC); the sensitivity and specificity of the system were also evaluated. The AUCs for ANN, LR and SVM were 0.75, 0.73, and 0.80, respectively. The sensitivity values for ANN, LR and SVM were 71.2%, 72.9% and 83.1%, while the specificity values for ANN, LR and SVM were 59.1%, 63.6% and 63.6%, respectively. Meanwhile, the CSA based strategy outperformed SFS and SA in terms of AUC, sensitivity and specificity. Based on clinical parameters, the SVM with the CSA optimal parameter set selection strategy achieves better performance than the other strategies for predicting distant failure in lung SBRT patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
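Of the three parameter-selection strategies, SFS is the simplest to sketch: greedily add the feature that most improves a wrapper classifier's accuracy. Here a 1-NN classifier with leave-one-out accuracy stands in for the paper's ANN/LR/SVM models (an assumption), and the toy data are hypothetical:

```python
def loo_accuracy(data, labels, feats):
    """Leave-one-out accuracy of a 1-NN classifier restricted to `feats`."""
    correct = 0
    for i, (xi, yi) in enumerate(zip(data, labels)):
        nearest = min((j for j in range(len(data)) if j != i),
                      key=lambda j: sum((data[j][f] - xi[f]) ** 2 for f in feats))
        correct += labels[nearest] == yi
    return correct / len(data)

def forward_select(data, labels, n_feats):
    """Sequential forward selection: at each step, add the remaining
    feature that yields the highest wrapper accuracy."""
    selected, remaining = [], list(range(len(data[0])))
    while remaining and len(selected) < n_feats:
        acc, f = max((loo_accuracy(data, labels, selected + [f]), f)
                     for f in remaining)
        selected.append(f)
        remaining.remove(f)
    return selected

# Toy cohort: feature 0 is informative, feature 1 is noise (hypothetical).
data = [[0.0, 0.9], [0.1, 0.1], [0.2, 0.8], [0.9, 0.2], [1.0, 0.7], [1.1, 0.3]]
labels = [0, 0, 0, 1, 1, 1]
print(forward_select(data, labels, 1))  # -> [0]
```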
Carius, Lisa; Rumschinski, Philipp; Faulwasser, Timm; Flockerzi, Dietrich; Grammel, Hartmut; Findeisen, Rolf
2014-04-01
Microaerobic (oxygen-limited) conditions are critical for inducing many important microbial processes in industrial or environmental applications. At very low oxygen concentrations, however, process performance often suffers from technical limitations: available dissolved-oxygen measurement techniques are not sensitive enough, and control techniques that can reliably handle these conditions are lacking. Recently, we proposed a microaerobic process control strategy which overcomes these restrictions and allows different degrees of oxygen limitation to be assessed in bioreactor batch cultivations. Here, we focus on the design of a control strategy for the automation of oxygen-limited continuous cultures, using the microaerobic formation of photosynthetic membranes (PM) in Rhodospirillum rubrum as a model phenomenon. We draw upon R. rubrum since the considered phenomenon depends on the optimal availability of mixed-carbon sources, hence on boundary conditions that make the process performance challenging. Empirically assessing these specific microaerobic conditions is scarcely practicable, as such a process reacts highly sensitively to changes in the substrate composition and the oxygen availability in the culture broth. Therefore, we propose a model-based process control strategy which allows steady-states of cultures grown under these conditions to be stabilized. As designing an appropriate strategy requires detailed knowledge of the system behavior, we begin by deriving and validating an unstructured process model. This model is used to optimize the experimental conditions and to identify properties of the system that are critical for process performance. The derived model facilitates good process performance via the proposed optimal control strategy.
In summary, the presented model-based control strategy allows microaerobic steady states of interest to be accessed and maintained, and the culture to be transferred precisely and efficiently from one stable microaerobic steady state to another. The presented approach is therefore a valuable tool for studying regulatory mechanisms of microaerobic phenomena in response to oxygen limitation alone. Biotechnol. Bioeng. 2014;111: 734-747. © 2013 Wiley Periodicals, Inc.
Control of wavepacket dynamics in mixed alkali metal clusters by optimally shaped fs pulses
NASA Astrophysics Data System (ADS)
Bartelt, A.; Minemoto, S.; Lupulescu, C.; Vajda, Š.; Wöste, L.
We have performed adaptive feedback optimization of phase-shaped femtosecond laser pulses to control the wavepacket dynamics of small mixed alkali-metal clusters. An optimization algorithm based on Evolutionary Strategies was used to maximize the ion intensities. The optimized pulses for NaK and Na2K converged to pulse trains consisting of numerous peaks. The timing of the elements of the pulse trains corresponds to integer and half integer numbers of the vibrational periods of the molecules, reflecting the wavepacket dynamics in their excited states.
Optimizing Tumor Microenvironment for Cancer Immunotherapy: β-Glucan-Based Nanoparticles
Zhang, Mei; Kim, Julian A.; Huang, Alex Yee-Chen
2018-01-01
Immunotherapy is revolutionizing cancer treatment. Recent clinical success with immune checkpoint inhibitors, chimeric antigen receptor T-cell therapy, and adoptive immune cellular therapies has generated excitement and new hopes for patients and investigators. However, clinically efficacious responses to cancer immunotherapy occur only in a minority of patients. One reason is the tumor microenvironment (TME), which potently inhibits the generation and delivery of optimal antitumor immune responses. As our understanding of the TME continues to grow, strategies are being developed to change the TME toward one that augments the emergence of strong antitumor immunity. These strategies include eliminating tumor bulk to provoke the release of tumor antigens, using adjuvants to enhance antigen-presenting cell function, and employing agents that enhance immune cell effector activity. This article reviews the development of β-glucan and β-glucan-based nanoparticles as immune modulators of the TME, as well as their potential benefit and future therapeutic applications. Cell-wall β-glucans from natural sources including plants, fungi, and bacteria are molecules bearing pathogen-associated molecular patterns (PAMPs) that target specific receptors on immune cell subsets. Emerging data suggest that the TME can be actively manipulated by β-glucans and their related nanoparticles. In this review, we discuss the mechanisms of conditioning the TME using β-glucan and β-glucan-based nanoparticles, and how this strategy enables future design of optimal combination cancer immunotherapies. PMID:29535722
Challenges with Evidence-Based Management of Stable Ischemic Heart Disease.
Patel, Amit V; Bangalore, Sripal
2017-02-01
Stable ischemic heart disease (SIHD) is a highly prevalent condition associated with increased costs, morbidity, and mortality. Management goals of SIHD can broadly be thought of in terms of improving prognosis and/or improving symptoms. Treatment options include medical therapy as well as revascularization, either with percutaneous coronary intervention or coronary artery bypass grafting. Herein, we will review the current evidence base for treatment of SIHD as well as its challenges and discuss ongoing studies to help address some of these knowledge gaps. There has been no consistent reduction in death or myocardial infarction (MI) with revascularization vs. medical therapy in patients with SIHD in contemporary trials. Angina and quality of life have been shown to be relieved more rapidly with revascularization vs. optimal medical therapy; however, the durability of these results is uncertain. There have been challenges and limitations in several of the trials addressing the optimal treatment strategy for SIHD due to potential selection bias (due to knowledge of coronary anatomy prior to randomization), patient crossover, and advances in medical therapy and revascularization strategies since trial completion. The challenges inherent to prior trials addressing the optimal management strategy for SIHD have impacted the generalizability of results to real-world cohorts. Until the results of additional ongoing trials are available, the decision for revascularization or medical therapy should be based on patients' symptoms, weighing the risks and benefits of each approach, and patient preference.
Control strategy of grid-connected photovoltaic generation system based on GMPPT method
NASA Astrophysics Data System (ADS)
Wang, Zhongfeng; Zhang, Xuyang; Hu, Bo; Liu, Jun; Li, Ligang; Gu, Yongqiang; Zhou, Bowen
2018-02-01
There are multiple local maximum power points when a photovoltaic (PV) array runs under partial shading conditions (PSC). The traditional maximum power point tracking (MPPT) algorithm can easily be trapped in a local maximum power point (MPP) and fail to find the global maximum power point (GMPP). To solve this problem, an improved global maximum power point tracking (GMPPT) method is proposed, combining the traditional MPPT method with the particle swarm optimization (PSO) algorithm. Different tracking algorithms are used under different operating conditions of the PV cells: when the environment changes, the improved PSO algorithm is adopted to perform a global search, and the variable-step incremental conductance (INC) method is then adopted to achieve MPPT around the optimal local region. Based on a simulation model of the grid-connected PV system built in Matlab/Simulink, comparative analysis of the tracking performance of the proposed control algorithm and the traditional MPPT method under uniform solar conditions and PSC validates the correctness, feasibility and effectiveness of the proposed control strategy.
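The global-search layer of such a two-layer scheme can be illustrated with a minimal PSO run on a toy P-V curve that has two local maxima; the curve shape, swarm gains and bounds below are invented for the example and are not the paper's model.

```python
import numpy as np

def pv_power(v):
    """Toy P-V curve under partial shading: a local peak near v = 1/6 and
    the global peak near v = 5/6 (the second hump is weighted more)."""
    return np.maximum(0.0, np.sin(3.0 * np.pi * v) * (0.5 + 0.5 * v))

def pso_gmppt(f, lo=0.0, hi=1.0, n=40, iters=100, seed=0):
    """Basic global-best PSO maximising f on [lo, hi]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n)            # candidate operating voltages
    vel = np.zeros(n)
    pbest, pval = x.copy(), f(x)          # personal bests
    g = pbest[np.argmax(pval)]            # global best
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        vel = 0.6 * vel + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + vel, lo, hi)
        y = f(x)
        better = y > pval
        pbest[better], pval[better] = x[better], y[better]
        g = pbest[np.argmax(pval)]
    return g, float(f(np.array([g]))[0])

v_star, p_star = pso_gmppt(pv_power)
```

A hill-climbing method like INC, started from `v_star`, would then refine the operating point locally, which is the division of labour the abstract describes.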
Kleczkowski, Adam; Oleś, Katarzyna; Gudowska-Nowak, Ewa; Gilligan, Christopher A.
2012-01-01
We present a combined epidemiological and economic model for control of diseases spreading on local and small-world networks. The disease is characterized by a pre-symptomatic infectious stage that makes detection and control of cases more difficult. The effectiveness of local (ring-vaccination or culling) and global control strategies is analysed by comparing the net present values of the combined cost of preventive treatment and illness. The optimal strategy is then selected by minimizing the total cost of the epidemic. We show that three main strategies emerge, with treating a large number of individuals (global strategy, GS), treating a small number of individuals in a well-defined neighbourhood of a detected case (local strategy) and allowing the disease to spread unchecked (null strategy, NS). The choice of the optimal strategy is governed mainly by a relative cost of palliative and preventive treatments. If the disease spreads within the well-defined neighbourhood, the local strategy is optimal unless the cost of a single vaccine is much higher than the cost associated with hospitalization. In the latter case, it is most cost-effective to refrain from prevention. Destruction of local correlations, either by long-range (small-world) links or by inclusion of many initial foci, expands the range of costs for which the NS is most cost-effective. The GS emerges for the case when the cost of prevention is much lower than the cost of treatment and there is a substantial non-local component in the disease spread. We also show that local treatment is only desirable if the disease spreads on a small-world network with sufficiently few long-range links; otherwise it is optimal to treat globally. In the mean-field case, there are only two optimal solutions, to treat all if the cost of the vaccine is low and to treat nobody if it is high. 
The basic reproduction ratio, R0, does not depend on the rate of responsive treatment in this case and the disease always invades (but might be stopped afterwards). The details of the local control strategy, and in particular the optimal size of the control neighbourhood, are determined by the epidemiology of the disease. The properties of the pathogen might not be known in advance for emerging diseases, but the broad choice of the strategy can be made based on economic analysis only. PMID:21653570
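The economic core of the comparison, choosing among the global, local and null strategies by minimising prevention cost plus illness cost, can be illustrated with made-up outcome numbers; none of the figures below come from the paper.

```python
# Hypothetical per-strategy outcomes on a network of N individuals:
# the treated/infected counts are illustrative, not from the paper.
N = 1000
outcomes = {
    "global": {"treated": N,   "infected": 10},
    "local":  {"treated": 120, "infected": 60},
    "null":   {"treated": 0,   "infected": 700},
}

def best_strategy(cost_vaccine, cost_illness):
    """Pick the strategy minimising total cost = prevention + palliative care."""
    total = {s: o["treated"] * cost_vaccine + o["infected"] * cost_illness
             for s, o in outcomes.items()}
    return min(total, key=total.get)

# Cheap vaccine -> treat widely; expensive vaccine -> let the disease run.
print(best_strategy(0.1, 10.0), best_strategy(50.0, 1.0))  # → global null
```

Intermediate price ratios select the local strategy (e.g. `best_strategy(5.0, 10.0)` returns `"local"` with these numbers), mirroring the three regimes the abstract identifies.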
Caffeine dosing strategies to optimize alertness during sleep loss.
Vital-Lopez, Francisco G; Ramakrishnan, Sridhar; Doty, Tracy J; Balkin, Thomas J; Reifman, Jaques
2018-05-28
Sleep loss, which affects about one-third of the US population, can severely impair physical and neurobehavioural performance. Although caffeine, the most widely used stimulant in the world, can mitigate these effects, currently there are no tools to guide the timing and amount of caffeine consumption to optimize its benefits. In this work, we provide an optimization algorithm, suited for mobile computing platforms, to determine when and how much caffeine to consume, so as to safely maximize neurobehavioural performance at the desired time of the day, under any sleep-loss condition. The algorithm is based on our previously validated Unified Model of Performance, which predicts the effect of caffeine consumption on a psychomotor vigilance task. We assessed the algorithm by comparing the caffeine-dosing strategies (timing and amount) it identified with the dosing strategies used in four experimental studies, involving total and partial sleep loss. Through computer simulations, we showed that the algorithm yielded caffeine-dosing strategies that enhanced performance of the predicted psychomotor vigilance task by up to 64% while using the same total amount of caffeine as in the original studies. In addition, the algorithm identified strategies that resulted in equivalent performance to that in the experimental studies while reducing caffeine consumption by up to 65%. Our work provides the first quantitative caffeine optimization tool for designing effective strategies to maximize neurobehavioural performance and to avoid excessive caffeine consumption during any arbitrary sleep-loss condition. © 2018 The Authors. Journal of Sleep Research published by John Wiley & Sons Ltd on behalf of European Sleep Research Society.
Taylor, Stephanie Parks; Karvetski, Colleen H; Templin, Megan A; Heffner, Alan C; Taylor, Brice T
2018-02-01
The optimal initial fluid resuscitation strategy for obese patients with septic shock is unknown. We evaluated fluid resuscitation strategies across BMI groups in a retrospective analysis of 4157 patients in a multicenter activation pathway for treatment of septic shock between 2014 and 2016. 1293 (31.3%) patients were obese (BMI ≥ 30). Overall, higher BMI was associated with lower mortality; however, this survival advantage was eliminated in adjusted analyses. Patients with higher BMI received significantly less fluid per kilogram at 3 h than did patients with lower BMI (p ≤ 0.001). In obese patients, fluid given at 3 h mimicked a dosing strategy based on actual body weight (ABW) in 780 (72.2%), adjusted body weight (AdjBW) in 95 (8.8%), and ideal body weight (IBW) in 205 (19.0%). After adjusting for condition- and treatment-related variables, dosing based on AdjBW was associated with improved mortality compared to ABW (OR 0.45; 95% CI [0.19, 1.07]) and IBW (OR 0.29; 95% CI [0.11, 0.74]). Using AdjBW to calculate initial fluid resuscitation volume for obese patients with suspected shock may improve outcomes compared to other weight-based dosing strategies. The optimal fluid dosing strategy for obese patients should be a focus of future prospective research. Copyright © 2017 Elsevier Inc. All rights reserved.
Fredriksson, Mattias J; Petersson, Patrik; Axelsson, Bengt-Olof; Bylund, Dan
2011-10-17
A strategy for rapid optimization of liquid chromatography column temperature and gradient shape is presented. The optimization as such is based on the well-established retention and peak width models implemented in software such as DryLab and LC Simulator. The novel part of the strategy is a highly automated processing algorithm for detection and tracking of chromatographic peaks in noisy liquid chromatography-mass spectrometry (LC-MS) data. The strategy is presented and visualized through the optimization of the separation of two degradants present in ultraviolet (UV) exposed fluocinolone acetonide. It should be stressed, however, that it can be utilized for LC-MS analysis of any sample and application where several runs are conducted on the same sample. In the application presented, 30 components that were difficult or impossible to detect in the UV data could be automatically detected and tracked in the MS data using the proposed strategy. The proportion of correctly tracked components was above 95%. Applying the parameters from the reconstructed data sets to the model gave good agreement between predicted and observed retention times at optimal conditions. The area of the smallest tracked component was estimated at 0.08% of the main component, a level relevant for the characterization of impurities in the pharmaceutical industry. Copyright © 2011 Elsevier B.V. All rights reserved.
Rands, Sean A.
2011-01-01
Functional explanations of behaviour often propose optimal strategies for organisms to follow. These ‘best’ strategies could be difficult to perform given biological constraints such as neural architecture and physiological constraints. Instead, simple heuristics or ‘rules-of-thumb’ that approximate these optimal strategies may instead be performed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care needs to be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair, who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rule-of-thumb was sufficient to make some of the model outcomes unpredictable. There was some agreement between both modelling techniques, but some differences arose – particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour. PMID:21765938
Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process
NASA Astrophysics Data System (ADS)
Yan, Wei; Chang, Yuwen
2016-12-01
Considering a stochastic exchange rate, this paper is concerned with dynamic portfolio selection in a financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion, with the asset price and exchange rate following jump-diffusion processes (driven by Wiener and Poisson processes). The corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is then presented and its efficient frontier is obtained. Moreover, the optimal strategy is also derived under the safety-first criterion.
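In generic notation (the symbols here are illustrative, not necessarily the paper's), "jump-diffusion" means asset-price and exchange-rate dynamics driven jointly by a Wiener and a Poisson process:

```latex
\mathrm{d}S_t = S_{t^-}\bigl(\mu\,\mathrm{d}t + \sigma\,\mathrm{d}W_t + \gamma\,\mathrm{d}N_t\bigr),
\qquad
\mathrm{d}F_t = F_{t^-}\bigl(\mu_F\,\mathrm{d}t + \sigma_F\,\mathrm{d}\widetilde{W}_t + \gamma_F\,\mathrm{d}\widetilde{N}_t\bigr),
```

with $W,\widetilde{W}$ Wiener processes and $N,\widetilde{N}$ Poisson processes. The mean-variance problem is then $\min_u \operatorname{Var}[X_T^u]$ subject to $\mathbb{E}[X_T^u] = d$, where $X_t^u$ is the wealth process under portfolio strategy $u$; its value function solves the associated HJB equation, and varying the target $d$ traces out the efficient frontier.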
Optimal energy harvesting from vortex-induced vibrations of cables.
Antoine, G O; de Langre, E; Michelin, S
2016-11-01
Vortex-induced vibrations (VIV) of flexible cables are an example of flow-induced vibrations that can act as energy harvesting systems by converting energy associated with the spontaneous cable motion into electricity. This work investigates the optimal positioning of the harvesting devices along the cable, using numerical simulations with a wake oscillator model to describe the unsteady flow forcing. Using classical gradient-based optimization, the optimal harvesting strategy is determined for the generic configuration of a flexible cable fixed at both ends, including the effect of flow forces and gravity on the cable's geometry. The optimal strategy is found to consist systematically in a concentration of the harvesting devices at one of the cable's ends, relying on deformation waves along the cable to carry the energy towards this harvesting site. Furthermore, we show that the performance of systems based on VIV of flexible cables is significantly more robust to flow velocity variations, in comparison with a rigid cylinder device. This results from two passive control mechanisms inherent to the cable geometry: (i) the adaptability to the flow velocity of the fundamental frequencies of cables through the flow-induced tension and (ii) the selection of successive vibration modes by the flow velocity for cables with gravity-induced tension.
Reactive power optimization strategy considering analytical impedance ratio
NASA Astrophysics Data System (ADS)
Wu, Zhongchao; Shen, Weibing; Liu, Jinming; Guo, Maoran; Zhang, Shoulin; Xu, Keqiang; Wang, Wanjun; Sui, Jinlong
2017-05-01
In this paper, considering that traditional reactive power optimization cannot realize continuous voltage adjustment and maintain voltage stability, a dynamic reactive power optimization strategy is proposed to achieve both minimum network loss and high voltage stability in the presence of wind power. Because wind power generation is fluctuating and uncertain, electrical equipment such as transformers and shunt capacitors may be operated frequently in pursuit of minimum network loss, which shortens the lifetime of these devices. To address this problem, this paper introduces the derivation of an analytical impedance ratio based on the Thevenin equivalent, and a multi-objective function is proposed to minimize both the network loss and the analytical impedance ratio. Finally, taking the improved IEEE 33-bus distribution system as an example, the results show that the switching of voltage control equipment is reduced while the increase in network loss remains controlled, which demonstrates the practical value of this strategy.
On Market-Based Coordination of Thermostatically Controlled Loads With User Preference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Sen; Zhang, Wei; Lian, Jianming
2014-12-15
This paper presents a market-based control framework to coordinate a group of autonomous Thermostatically Controlled Loads (TCL) to achieve system-level objectives with pricing incentives. The problem is formulated as maximizing the social welfare subject to a feeder power constraint. It allows the coordinator to affect the aggregated power of a group of dynamical systems, and creates an interactive market where the users and the coordinator cooperatively determine the optimal energy allocation and energy price. The optimal pricing strategy is derived, which maximizes social welfare while respecting the feeder power constraint. The bidding strategy is also designed to compute the optimal price in real time (e.g., every 5 minutes) based on local device information. The coordination framework is validated with realistic simulations in GridLab-D. Extensive simulation results demonstrate that the proposed approach effectively maximizes the social welfare and decreases power congestion at key times.
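A minimal sketch of the pricing idea, a coordinator raising the price until aggregate demand respects the feeder limit, is shown below with invented quadratic utilities; the paper's actual bidding and welfare formulation may differ.

```python
# Each load bids a quadratic utility u_i(q) = a_i*q - 0.5*b_i*q^2, so its
# welfare-maximising demand at price p is max(0, (a_i - p)/b_i).
loads = [(10.0, 2.0), (8.0, 1.0), (6.0, 0.5)]   # hypothetical (a_i, b_i) bids
feeder_limit = 12.0

def total_demand(price):
    return sum(max(0.0, (a - price) / b) for a, b in loads)

def clearing_price(limit, lo=0.0, hi=20.0, iters=60):
    """Bisect on price: total demand is non-increasing in price, so the
    smallest price at which demand fits the feeder limit clears the market."""
    if total_demand(lo) <= limit:     # constraint is slack: marginal price 0
        return lo
    for _ in range(iters):
        mid = (lo + hi) / 2
        if total_demand(mid) > limit:
            lo = mid
        else:
            hi = mid
    return hi

p = clearing_price(feeder_limit)
```

This price is exactly the Lagrange multiplier of the feeder constraint in the welfare maximization, which is why market clearing and social-welfare optimality coincide in this setting.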
NASA Astrophysics Data System (ADS)
Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan
2016-07-01
This paper proposes an epilepsy detection and closed-loop control strategy based on Particle Swarm Optimization (PSO) algorithm. The proposed strategy can effectively suppress the epileptic spikes in neural mass models, where the epileptiform spikes are recognized as the biomarkers of transitions from the normal (interictal) activity to the seizure (ictal) activity. In addition, the PSO algorithm shows capabilities of accurate estimation for the time evolution of key model parameters and practical detection for all the epileptic spikes. The estimation effects of unmeasurable parameters are improved significantly compared with unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be inhibited immediately by adopting the proportion-integration controller. Besides, numerical simulations are carried out to illustrate the effectiveness of the proposed method as well as the potential value for the model-based early seizure detection and closed-loop control treatment design.
Optimized knock-in of point mutations in zebrafish using CRISPR/Cas9.
Prykhozhij, Sergey V; Fuller, Charlotte; Steele, Shelby L; Veinotte, Chansey J; Razaghi, Babak; Robitaille, Johane M; McMaster, Christopher R; Shlien, Adam; Malkin, David; Berman, Jason N
2018-06-14
We have optimized point mutation knock-ins into zebrafish genomic sites using clustered regularly interspaced palindromic repeats (CRISPR)/Cas9 reagents and single-stranded oligodeoxynucleotides. The efficiency of knock-ins was assessed by a novel application of allele-specific polymerase chain reaction and confirmed by high-throughput sequencing. Anti-sense asymmetric oligo design was found to be the most successful optimization strategy. However, cut site proximity to the mutation and phosphorothioate oligo modifications also greatly improved knock-in efficiency. A previously unrecognized risk of off-target trans knock-ins was identified that we obviated through the development of a workflow for correct knock-in detection. Together these strategies greatly facilitate the study of human genetic diseases in zebrafish, with additional applicability to enhance CRISPR-based approaches in other animal model systems.
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Yuan, Jianping; Walter, Ulrich
2017-12-01
The objective of this paper is to establish a detumbling strategy and a coordination control scheme for a kinematically redundant space manipulator post-grasping a rotational satellite. First, the dynamics of the kinematically redundant space robot after grasping the target is presented, which lays the foundation for the coordination controller design. Subsequently, optimal detumbling and motion planning strategy for the post-capture phase is proposed based on the quartic Bézier curves and adaptive differential evolution (DE) algorithm subject to the specific constraints. Both detumbling time and control torques are taken into account for the generation of the optimal detumbling strategy. Furthermore, a coordination control scheme is presented to track the designed reference path while regulating the attitude of the chaser to a desired value, which successfully dumps the initial angular velocity of the rotational satellite and controls the base attitude synchronously. Simulation results are presented for detumbling a target with rotational motion using a 7 degree-of-freedom (DOF) redundant space manipulator, which demonstrates the effectiveness of the proposed method.
Heuristic for Critical-Machine-Based Lot Streaming for a Two-Stage Hybrid Production Environment
NASA Astrophysics Data System (ADS)
Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.
2017-03-01
Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper presents a heuristic approach to lot streaming based on critical-machine considerations for a two-stage hybrid flowshop in which the first stage has two identical parallel machines and the second stage has a single machine. The second-stage machine is considered critical, for valid reasons; this class of problems is known to be NP-hard. A mathematical model was developed for the selected problem, and simulation modelling and analysis were carried out in Extend V6 software. A heuristic was developed for obtaining the optimal lot streaming schedule, and eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments: all possible lot streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded the optimal schedule consistently in all eleven cases. A procedure for identifying the best lot streaming strategy was also suggested.
Hybrid glowworm swarm optimization for task scheduling in the cloud environment
NASA Astrophysics Data System (ADS)
Zhou, Jing; Dong, Shoubin
2018-06-01
In recent years many heuristic algorithms have been proposed to solve task scheduling problems in the cloud environment owing to their optimization capability. This article proposes a hybrid glowworm swarm optimization (HGSO) algorithm based on glowworm swarm optimization (GSO), which combines evolutionary computation techniques, a quantum-behaviour strategy based on the neighbourhood principle, offspring production and random walk, to achieve more efficient scheduling at reasonable scheduling cost. The proposed HGSO reduces redundant computation and the dependence on the initialization of GSO, accelerates convergence and escapes more easily from local optima. The conducted experiments and statistical analysis showed that in most cases the proposed HGSO algorithm outperformed previous heuristic algorithms in dealing with independent tasks.
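For readers unfamiliar with the base algorithm, here is a stripped-down GSO on a continuous test function. It keeps a fixed sensing radius and omits the adaptive decision range, and none of the HGSO-specific hybrid machinery is included; the objective and parameters are toy choices.

```python
import numpy as np

def gso_minimize(f, dim=2, n=30, iters=100, seed=0,
                 rho=0.4, gamma=0.6, step=0.03, r_s=1.0):
    """Stripped-down glowworm swarm optimization (minimising f via -f).
    Each worm carries a luciferin level that tracks its fitness and moves
    probabilistically toward a brighter neighbour within sensing range."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2, 2, (n, dim))            # worm positions
    luc = np.full(n, 5.0)                       # luciferin levels
    for _ in range(iters):
        luc = (1 - rho) * luc + gamma * (-np.apply_along_axis(f, 1, x))
        for i in range(n):
            d = np.linalg.norm(x - x[i], axis=1)
            nbrs = np.where((d < r_s) & (luc > luc[i]))[0]
            if len(nbrs) == 0:
                continue                        # brightest worm stays put
            p = (luc[nbrs] - luc[i]) / (luc[nbrs] - luc[i]).sum()
            j = rng.choice(nbrs, p=p)
            dirn = (x[j] - x[i]) / (np.linalg.norm(x[j] - x[i]) + 1e-12)
            x[i] = x[i] + step * dirn
    return x[np.argmin(np.apply_along_axis(f, 1, x))]

sphere = lambda v: float(np.sum(v ** 2))
best = gso_minimize(sphere)
```

Applying GSO to task scheduling additionally requires encoding a discrete task-to-machine assignment into the worms' positions, which is part of what a hybrid like HGSO addresses.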
Optimal phase estimation with arbitrary a priori knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demkowicz-Dobrzanski, Rafal
2011-06-15
The optimal phase estimation strategy is derived when partial a priori knowledge of the estimated phase is available. The solution is found with the help of the most famous result from entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject, which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.
ERIC Educational Resources Information Center
Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Dorwaldt, Anne L.; Connolly, Scott W.; Ashikaga, Takamaru
2007-01-01
Mass media interventions are among the strategies recommended for youth cigarette smoking prevention, but little is known about optimal methods for reaching diverse youth audiences. Grades 4 through 12 samples of youth from four states (n = 1,230) rated smoking-prevention messages in classroom settings. Similar proportions of African American,…
Salehi, Mojtaba; Bahreininejad, Ardeshir
2011-08-01
Optimization of process planning is considered the key technology for computer-aided process planning, which is a rather complex and difficult procedure. A good process plan for a part is built on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, process planning is divided into preliminary planning and secondary/detailed planning. In the preliminary stage, the feasible sequences are generated based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing, using an intelligent search strategy. Then, in the detailed planning stage, a genetic algorithm that prunes the initial feasible sequences is used to obtain the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation, based on optimization constraints as an additive constraint aggregation. The main contribution of this work is the simultaneous optimization of the operation sequence and of the machine, cutting tool and TAD selection for each operation, using intelligent search and a genetic algorithm.
Salehi, Mojtaba
2010-01-01
Optimization of process planning is considered the key technology for computer-aided process planning, which is a rather complex and difficult procedure. A good process plan for a part is built on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, process planning is divided into preliminary planning and secondary/detailed planning. In the preliminary stage, the feasible sequences are generated based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing, using an intelligent search strategy. Then, in the detailed planning stage, a genetic algorithm that prunes the initial feasible sequences is used to obtain the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation, based on optimization constraints as an additive constraint aggregation. The main contribution of this work is the simultaneous optimization of the operation sequence and of the machine, cutting tool and TAD selection for each operation, using intelligent search and a genetic algorithm. PMID:21845020
Is there an optimal preoperative management strategy for phaeochromocytoma/paraganglioma?
Challis, B G; Casey, R T; Simpson, H L; Gurnell, M
2017-02-01
Phaeochromocytomas and paragangliomas (PPGLs) are catecholamine-secreting neuroendocrine tumours that predispose to haemodynamic instability. Currently, surgery is the only available curative treatment, but carries potential risks including hypertensive and hypotensive crises, cardiac arrhythmias, myocardial infarction and stroke, due to tumoral release of catecholamines during anaesthetic induction and tumour manipulation. The mortality associated with surgical resection of PPGL has significantly improved from 20-45% in the early 20th century (Apgar & Papper, AMA Archives of Surgery, 1951, 62, 634) to 0-2·9% in the early 21st century (Kinney et al. Journal of Cardiothoracic and Vascular Anesthesia, 2002, 16, 359), largely due to the availability of effective pharmacological agents and advances in surgical and anaesthetic practice. However, surgical resection of PPGL still poses significant clinical management challenges. Preoperatively, alpha-adrenoceptor blockade is the mainstay of management, although various pharmacological strategies have been proposed, based largely on reports derived from retrospective data sets. To date, no consensus has been reached regarding the 'ideal' preoperative strategy due, in part, to a paucity of data from high-quality evidence-based studies comparing different treatment regimens. Here, based on the available literature, we address the Clinical Question: Is there an optimal preoperative management strategy for PPGL? © 2016 John Wiley & Sons Ltd.
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm.
Amoshahy, Mohammad Javad; Shamsi, Mousa; Sedaaghi, Mohammad Hossein
2016-01-01
Particle swarm optimization (PSO) is an evolutionary computing method based on the intelligent collective behavior of some animals. It is easy to implement and has few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate parameter selection strategies for fine-tuning its parameters. Inertia weight (IW) is one of PSO's parameters, used to strike a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy because, for each problem, an increasing or decreasing inertia weight schedule can be constructed through suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis show that FEIW improves search performance in terms of solution quality as well as convergence rate.
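The role of the inertia weight is easy to see in code: it scales the previous velocity in the PSO update, so a schedule that decays over iterations shifts the swarm from exploration toward exploitation. The exact FEIW formula is not reproduced in the abstract, so the sketch below uses a generic exponential decay `w(t) = w_end + (w_start - w_end) * exp(-alpha * t / T)` as an illustrative stand-in; all parameter values are assumptions.

```python
import math
import random

def pso_exp_inertia(f, bounds, n=25, iters=150, w_start=0.9, w_end=0.4,
                    alpha=4.0, c1=2.0, c2=2.0, seed=1):
    """PSO minimization with an exponentially decaying inertia weight."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [list(p) for p in x]
    pval = [f(p) for p in x]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for t in range(iters):
        # illustrative stand-in for the FEIW schedule
        w = w_end + (w_start - w_end) * math.exp(-alpha * t / iters)
        for i in range(n):
            for k in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia term + cognitive pull + social pull
                v[i][k] = (w * v[i][k]
                           + c1 * r1 * (pbest[i][k] - x[i][k])
                           + c2 * r2 * (gbest[k] - x[i][k]))
                lo, hi = bounds[k]
                x[i][k] = min(max(x[i][k] + v[i][k], lo), hi)
            fi = f(x[i])
            if fi < pval[i]:
                pval[i], pbest[i] = fi, list(x[i])
                if fi < gval:
                    gval, gbest = fi, list(x[i])
    return gbest, gval
```

Replacing the line that computes `w` with a different schedule (linear, adaptive, random) is all it takes to compare inertia strategies, which is essentially the experimental design the paper describes.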
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm
Shamsi, Mousa; Sedaaghi, Mohammad Hossein
2016-01-01
Particle swarm optimization (PSO) is an evolutionary computing method based on the intelligent collective behavior of some animals. It is easy to implement and has few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate parameter selection strategies for fine-tuning its parameters. Inertia weight (IW) is one of PSO's parameters, used to strike a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy because, for each problem, an increasing or decreasing inertia weight schedule can be constructed through suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis show that FEIW improves search performance in terms of solution quality as well as convergence rate. PMID:27560945
Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam
2015-01-01
The current high-profile debate with regard to data storage and its growth has become a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm intelligence based optimization approaches to find optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node: a hybrid particle swarm optimization algorithm finds suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, with the clustering problem solved by the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches.
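The clustering step mentioned above, fuzzy C-means, alternates two closed-form updates: soft memberships computed from inverse relative distances, then centres recomputed as membership-weighted means. A dependency-free sketch (initial centres are taken deterministically from the data, an assumption made here for reproducibility):

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """Minimal fuzzy C-means clustering over lists of coordinate tuples."""
    dim = len(points[0])
    # deterministic initial centres spread across the data (assumption)
    centres = [list(points[i * (len(points) - 1) // (c - 1)]) for i in range(c)]

    def d2(a, b):
        # squared Euclidean distance, regularised to avoid division by zero
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) + 1e-12

    u = []
    for _ in range(iters):
        # membership of each point in each cluster (rows sum to 1)
        u = [[1.0 / sum((d2(p, centres[i]) / d2(p, centres[k])) ** (1.0 / (m - 1))
                        for k in range(c))
              for i in range(c)] for p in points]
        # centres as membership-weighted means
        for i in range(c):
            w = [row[i] ** m for row in u]
            tot = sum(w)
            centres[i] = [sum(wj * p[k] for wj, p in zip(w, points)) / tot
                          for k in range(dim)]
    return centres, u
```

In the paper's setting the "points" would be candidate storage-node locations weighted by producer/consumer traffic; here two synthetic blobs suffice to show the alternating updates converging.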
Optimization Control of the Color-Coating Production Process for Model Uncertainty
He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong
2016-01-01
Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
Optimization Control of the Color-Coating Production Process for Model Uncertainty.
He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong
2016-01-01
Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results.
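The iterative learning control idea used above, repeating a batch and correcting the input profile with the previous batch's tracking error, can be shown on a toy first-order plant. This is a generic P-type ILC sketch, not the paper's constrained formulation; the plant coefficients, learning gain and trial count are illustrative assumptions chosen so the iteration provably contracts.

```python
def run_batch(u, a=0.8, b=0.5, y0=0.0):
    """Simulate one batch of the first-order plant y[t+1] = a*y[t] + b*u[t]."""
    y, out = y0, []
    for ut in u:
        y = a * y + b * ut
        out.append(y)
    return out

def ilc(ref, trials=80, gain=0.6):
    """P-type iterative learning control: after each batch, correct the
    whole input profile with that batch's error, u <- u + gain * (ref - y)."""
    u = [0.0] * len(ref)
    for _ in range(trials):
        y = run_batch(u)
        u = [ui + gain * (r - yi) for ui, r, yi in zip(u, ref, y)]
    y = run_batch(u)
    return u, max(abs(r - yi) for r, yi in zip(ref, y))
```

Each trial reuses the full error trajectory rather than just the current-time error, which is what lets the scheme drive a repetitive batch process (like a coating run) toward its reference over successive batches.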
Hogiri, Tomoharu; Tamashima, Hiroshi; Nishizawa, Akitoshi; Okamoto, Masahiro
2018-02-01
To optimize monoclonal antibody (mAb) production in Chinese hamster ovary (CHO) cell cultures, culture pH should be temporally controlled with high resolution. In this study, we propose a new pH-dependent dynamic model represented by simultaneous differential equations including a minimum of six system components, depending on pH value. All kinetic parameters in the dynamic model were estimated using an evolutionary numerical optimization (real-coded genetic algorithm) method based on experimental time-course data obtained at different pH values ranging from 6.6 to 7.2. We determined an optimal pH-shift schedule theoretically and validated it experimentally: mAb production increased by approximately 40% with this schedule. Throughout this study, it was suggested that the culture pH-shift optimization strategy using a pH-dependent dynamic model is suitable for optimizing the pH-shift schedule for CHO cell lines used in mAb production projects. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Optimal fault-tolerant control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2017-10-01
For solid oxide fuel cell (SOFC) development, load tracking, heat management, air excess ratio constraint, high efficiency, low cost and fault diagnosis are six key issues. However, no published work has studied control techniques that combine optimization and fault diagnosis for the SOFC system. An optimal fault-tolerant control strategy is presented in this paper, which involves four parts: a fault diagnosis module, a switching module, two backup optimizers and a control loop. The fault diagnosis part identifies the current fault type of the SOFC, and the switching module selects the appropriate backup optimizer based on the diagnosis result. NSGA-II and TOPSIS are employed to design the two backup optimizers under normal and air compressor fault states. A PID algorithm is used to design the control loop, which includes a power tracking controller, an anode inlet temperature controller, a cathode inlet temperature controller and an air excess ratio controller. The simulation results show that the proposed optimal fault-tolerant control method can track the power, temperature and air excess ratio at the desired values, simultaneously achieving the maximum efficiency and the minimum unit cost under normal SOFC operation and even under an air compressor fault.
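TOPSIS, used above to pick a single operating point once NSGA-II has produced a Pareto set, ranks alternatives by their relative closeness to an ideal solution. A minimal sketch with vector normalisation (the criteria and numbers in the usage example are invented for illustration):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix[i][j]: score of alternative i on criterion j.
    benefit[j]:   True if larger is better for criterion j, False for costs.
    Returns a closeness score per alternative; higher = closer to ideal.
    """
    m, n = len(matrix), len(matrix[0])
    # vector normalisation, then weighting
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # ideal and anti-ideal points per criterion
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

In the SOFC setting the criteria would be efficiency (benefit) and unit cost (cost); an alternative that dominates on both ends up with closeness 1.0.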
Optimal control strategy for electricity production at an isolated site
NASA Astrophysics Data System (ADS)
Barris, Nicolas
Hydro-Quebec manages more than 20 isolated power grids all over the province. The grids are located in small villages where the electricity demand is rather small. Those villages being far away from each other and from the main electricity production facilities, energy is produced locally using diesel generators. Electricity production costs at the isolated power grids are very high due to elevated diesel prices and transportation costs. However, the price of electricity is the same for the entire province, regardless of the production costs of the electricity consumed. These two factors combined result in yearly exploitation losses for Hydro-Quebec. For any given village, several diesel generators are required to satisfy the demand. When the load increases, it becomes necessary to increase the capacity either by adding a generator to the production or by switching to a more powerful generator. The same thing happens when the load decreases. Every decision regarding changes in the production is included in the control strategy, which is based on predetermined parameters. These parameters were specified according to empirical studies and the knowledge base of the engineers managing the isolated power grids, but without any optimization approach. The objective of the presented work is to minimize the diesel consumption by optimizing the parameters included in the control strategy. Its impact would be to limit the exploitation losses generated by the isolated power grids and the CO2-equivalent emissions without adding new equipment or completely changing the nature of the strategy. To satisfy this objective, the isolated power grid simulator OPERA is used along with the optimization library NOMAD and the data of three villages in northern Quebec. The preliminary optimization instance for the first village showed that some modifications to the existing control strategy must be made to better achieve the minimization objective.
The main optimization processes consist of three different optimization approaches: the optimization of one set of parameters for all the villages, the optimization of one set of parameters per village, and the optimization of one set of parameters per diesel generator configuration per village. In the first scenario, the optimization of one set of parameters for all the villages leads to compromises for all three villages without allowing a full potential reduction for any village. Therefore, it is proven that applying one set of parameters to all the villages is not suitable for finding an optimal solution. In the second scenario, the optimization of one set of parameters per village allows an improvement over the previous results. At this point, it is shown that it is crucial to remove from the production the less efficient configurations when they are next to more efficient configurations. In the third scenario, the optimization of one set of parameters per configuration per village requires a very large number of function evaluations but does not result in any satisfying solution. In order to improve the performance of the optimization, it has been decided that the problem structure would be used. Two different approaches are considered: optimizing one set of parameters at a time and optimizing different rules included in the control strategy one at a time. In both cases, results are similar but calculation costs differ, the second method being much more cost efficient. The optimal values of the ultimate rules parameters can be directly linked to the efficient transition points that favor an efficient operation of the isolated power grids. Indeed, these transition points are defined in such a way that the high efficiency zone of every configuration is used. Therefore, it seems possible to directly identify on the graphs these optimal transition points and define the parameters in the control strategy without even having to run any optimization process. 
The diesel consumption reduction for all three villages is about 1.9%. Considering elevated diesel costs and the existence of about 20 other isolated power grids, the use of the developed methods together with a calibration of OPERA would allow a substantial reduction of Hydro-Quebec's annual deficit. Also, since one of the developed methods is very cost effective and produces equivalent results, it could be possible to use it during other processes; for example, when buying new equipment for the grid it could be possible to assess its full potential, under an optimized control strategy, and improve the net present value.
Multicriteria approaches for a private equity fund
NASA Astrophysics Data System (ADS)
Tammer, Christiane; Tannert, Johannes
2012-09-01
We develop a new model for a Private Equity Fund based on stochastic differential equations. In order to find efficient strategies for the fund manager, we formulate a multicriteria optimization problem for a Private Equity Fund and solve it using the ε-constraint method. Furthermore, a genetic algorithm is applied in order to obtain an approximation of the efficient frontier.
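The ε-constraint method scalarises a biobjective problem by minimising one objective while bounding the other, then sweeping the bound to trace out the efficient frontier. A discrete toy sketch (the candidate set and ε grid in the usage example are illustrative assumptions, not the fund model):

```python
def epsilon_constraint(candidates, f1, f2, eps_values):
    """Approximate the efficient frontier of (f1, f2):
    for each bound eps, minimise f1 over candidates with f2(x) <= eps."""
    frontier = []
    for eps in eps_values:
        feasible = [x for x in candidates if f2(x) <= eps]
        if feasible:
            best = min(feasible, key=f1)
            point = (f1(best), f2(best))
            if point not in frontier:  # skip duplicate frontier points
                frontier.append(point)
    return frontier
```

Tightening ε forces f2 down at the expense of f1, so the collected points trade one objective against the other, which is exactly the manager's efficiency trade-off the paper is after.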
A Passion for Learning: The Theory and Practice of Optimal Match at the University of Washington
ERIC Educational Resources Information Center
Noble, Kathleen D.; Childers, Sarah A.
2008-01-01
Early entrance from secondary school to university, based on the principle of optimal match, is a rare but highly effective educational strategy for many gifted students. The University of Washington offers two early entrance options for gifted adolescents: the Early Entrance Program for students prior to age 15, and the UW Academy for Young…
Optimized Free Energies from Bidirectional Single-Molecule Force Spectroscopy
NASA Astrophysics Data System (ADS)
Minh, David D. L.; Adib, Artur B.
2008-05-01
An optimized method for estimating path-ensemble averages using data from processes driven in opposite directions is presented. Based on this estimator, bidirectional expressions for reconstructing free energies and potentials of mean force from single-molecule force spectroscopy—valid for biasing potentials of arbitrary stiffness—are developed. Numerical simulations on a model potential indicate that these methods perform better than unidirectional strategies.
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.
2016-12-01
This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the ε^k-global minimization of a bound constrained optimization subproblem, where ε^k → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.
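The augmented Lagrangian idea, solving a sequence of easier subproblems while updating multipliers from the constraint violation, can be sketched with the classical quadratic penalty in place of the paper's shifted hyperbolic one, and plain gradient descent in place of the fish-swarm metaheuristic (both substitutions are simplifications for illustration; all step sizes are assumptions):

```python
def augmented_lagrangian(f_grad, h, h_grad, x0, lam=0.0, mu=10.0,
                         outer=20, inner=500, lr=0.01):
    """Classical augmented Lagrangian for one equality constraint h(x) = 0.

    Inner subproblems minimise f(x) + lam*h(x) + (mu/2)*h(x)^2 by
    plain gradient descent; lam is updated from the residual violation.
    """
    x = list(x0)
    for _ in range(outer):
        for _ in range(inner):
            hv = h(x)
            # gradient of the augmented Lagrangian: grad f + (lam + mu*h) * grad h
            g = [fg + (lam + mu * hv) * hg
                 for fg, hg in zip(f_grad(x), h_grad(x))]
            x = [xi - lr * gi for xi, gi in zip(x, g)]
        lam += mu * h(x)  # first-order multiplier update
    return x, lam
```

On the toy problem "minimise x + y on the unit circle" the iterates approach (-1/√2, -1/√2) with multiplier near 1/√2, and the constraint violation shrinks as the multiplier converges.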
Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham
2015-01-01
Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and in turn allowing larger mapping data sets to be analyzed using global optimization criteria instead of local ones. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and verifying it using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high-quality, ultra-dense consensus maps with many thousands of markers per genome. This algorithm utilizes "potentially good orders" both in the initial solution and in the new mutation procedures that generate trial solutions, enabling a consensus order to be obtained in reasonable time. The developed algorithm, tested on a wide range of simulated data and real-world data (Arabidopsis), outperformed two tested state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943
Assessment of Medical Risks and Optimization of their Management using Integrated Medical Model
NASA Technical Reports Server (NTRS)
Fitts, Mary A.; Madurai, Siram; Butler, Doug; Kerstman, Eric; Risin, Diana
2008-01-01
The Integrated Medical Model (IMM) Project is a software-based technique that will identify and quantify the medical needs and health risks of exploration crew members during space flight and evaluate the effectiveness of potential mitigation strategies. The IMM Project employs an evidence-based approach that will quantify the probability and consequences of defined in-flight medical risks, mitigation strategies, and tactics to optimize crew member health. Using stochastic techniques, the IMM will ultimately inform decision makers at both programmatic and institutional levels and will enable objective assessment of crew health and optimization of mission success using data from relevant cohort populations and from the astronaut population. The objectives of the project include: 1) identification and documentation of conditions that may occur during exploration missions (Baseline Medical Conditions List (BMCL)), 2) assessment of the likelihood of conditions in the BMCL occurring during exploration missions (incidence rate), 3) determination of the risk associated with these conditions, quantified in terms of end states (Loss of Crew, Loss of Mission, Evacuation), 4) optimization of in-flight hardware mass, volume, power, bandwidth and cost for a given level of risk or uncertainty, and 5) validation of the methodologies used.
Box, Simon
2014-01-01
Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human ‘player’ to control the traffic light settings on the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers can be used to capture human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance over delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable. PMID:26064570
Box, Simon
2014-12-01
Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human 'player' to control the traffic light settings on the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers can be used to capture human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world and also to a temporal difference learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and variance over delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable.
NASA Astrophysics Data System (ADS)
Tang, Jiafu; Liu, Yang; Fung, Richard; Luo, Xinggang
2008-12-01
Manufacturers have a legal accountability to deal with industrial waste generated from their production processes in order to avoid pollution. Along with advances in waste recovery techniques, manufacturers may adopt various recycling strategies in dealing with industrial waste. With reuse strategies and technologies, byproducts or wastes will be returned to production processes in the iron and steel industry, and some waste can be recycled back to base material for reuse in other industries. This article focuses on a recovery strategies optimization problem for a typical class of industrial waste recycling process in order to maximize profit. There are multiple strategies for waste recycling available to generate multiple byproducts; these byproducts are then further transformed into several types of chemical products via different production patterns. A mixed integer programming model is developed to determine which recycling strategy and which production pattern should be selected with what quantity of chemical products corresponding to this strategy and pattern in order to yield maximum marginal profits. The sales profits of chemical products and the set-up costs of these strategies, patterns and operation costs of production are considered. A simulated annealing (SA) based heuristic algorithm is developed to solve the problem. Finally, an experiment is designed to verify the effectiveness and feasibility of the proposed method. By comparing a single strategy to multiple strategies in an example, it is shown that the total sales profit of chemical products can be increased by around 25% through the simultaneous use of multiple strategies. This illustrates the superiority of combinatorial multiple strategies. Furthermore, the effects of the model parameters on profit are discussed to help manufacturers organize their waste recycling network.
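The SA heuristic for strategy selection can be sketched over binary on/off vectors with a single-flip neighbourhood and geometric cooling. The profit function, neighbourhood and cooling schedule below are illustrative assumptions, not the paper's exact model (which also couples strategies to production patterns and product quantities):

```python
import math
import random

def simulated_annealing(profit, n, iters=5000, t0=1.0, cooling=0.999, seed=0):
    """Generic SA over binary strategy-selection vectors, maximising `profit`."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    cur_val = profit(x)
    best, best_val = list(x), cur_val
    t = t0
    for _ in range(iters):
        i = rng.randrange(n)
        x[i] ^= 1  # flip one strategy on/off
        val = profit(x)
        # accept improvements always; accept worsening moves with
        # probability exp(delta / t), the Metropolis criterion
        if val >= cur_val or rng.random() < math.exp((val - cur_val) / t):
            cur_val = val
            if val > best_val:
                best_val, best = val, list(x)
        else:
            x[i] ^= 1  # reject: undo the flip
        t *= cooling
    return best, best_val
```

With a separable toy profit (revenue minus set-up cost per strategy) the search settles on exactly the strategies with positive net profit, which mirrors the paper's finding that combining several profitable strategies beats any single one.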
NASA Astrophysics Data System (ADS)
Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong
2018-06-01
The development of fuel cell electric vehicles can, to a certain extent, alleviate worldwide energy and environmental issues. Because a single energy management strategy cannot cope with the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving condition recognition technology, which contains a pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving pattern recognizer according to a vehicle's driving information. The multi-mode strategy can automatically switch to the genetic-algorithm-optimized thermostat strategy under specific driving conditions, in light of the differences in condition recognition results. Simulation experiments were carried out after the model's validity was verified on a dynamometer test bench. Simulation results show that the proposed strategy obtains better economic performance than the single-mode thermostat strategy under dynamic driving conditions.
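LVQ is a prototype-based classifier that suits driving pattern recognition: labelled codebook vectors are pulled toward samples of their own class and pushed away from samples of other classes. A minimal LVQ1 sketch; the two driving descriptors in the usage example (mean speed, stop rate) are invented for illustration, as the paper's recognizer uses richer driving information:

```python
def lvq1_train(data, labels, prototypes, proto_labels, lr=0.1, epochs=30):
    """LVQ1: move the nearest prototype toward same-class samples,
    away from other-class samples."""
    for _ in range(epochs):
        for x, y in zip(data, labels):
            # nearest prototype by squared Euclidean distance
            i = min(range(len(prototypes)),
                    key=lambda k: sum((a - b) ** 2
                                      for a, b in zip(x, prototypes[k])))
            sign = 1.0 if proto_labels[i] == y else -1.0
            prototypes[i] = [p + sign * lr * (a - p)
                             for p, a in zip(prototypes[i], x)]
    return prototypes

def lvq1_predict(x, prototypes, proto_labels):
    """Classify by the label of the nearest prototype."""
    i = min(range(len(prototypes)),
            key=lambda k: sum((a - b) ** 2 for a, b in zip(x, prototypes[k])))
    return proto_labels[i]
```

Once the recognizer labels the current driving condition, the controller can switch to the energy management mode tuned for that condition, which is the multi-mode switching idea described above.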
Computer-aided diagnostic strategy selection.
Greenes, R A
1986-03-01
Determination of the optimal diagnostic work-up strategy for the patient is becoming a major concern for the practicing physician. Overlap of the indications for various diagnostic procedures, differences in their invasiveness or risk, and high costs have made physicians aware of the need to consider the choice of procedure carefully, as well as its relation to management actions available. In this article, the author discusses research approaches that aim toward development of formal decision analytic methods to allow the physician to determine optimal strategy; clinical algorithms or rules as guides to physician decisions; improved measures for characterizing the performance of diagnostic tests; educational tools for increasing the familiarity of physicians with the concepts underlying these measures and analytic procedures; and computer-based aids for facilitating the employment of these resources in actual clinical practice.
An update on the use of massive transfusion protocols in obstetrics.
Pacheco, Luis D; Saade, George R; Costantine, Maged M; Clark, Steven L; Hankins, Gary D V
2016-03-01
Obstetrical hemorrhage remains a leading cause of maternal mortality worldwide. New concepts involving the pathophysiology of hemorrhage have been described and include early activation of both the protein C and fibrinolytic pathways. New strategies in hemorrhage treatment include the use of hemostatic resuscitation, although the optimal ratio to administer the various blood products is still unknown. Massive transfusion protocols involve the early utilization of blood products and limit the traditional approach of early massive crystalloid-based resuscitation. The evidence behind hemostatic resuscitation has changed in the last few years, and debate is ongoing regarding optimal transfusion strategies. The use of tranexamic acid, fibrinogen concentrates, and prothrombin complex concentrates has emerged as new potential alternative treatment strategies with improved safety profiles. Copyright © 2016 Elsevier Inc. All rights reserved.
Parameters optimization for the energy management system of hybrid electric vehicle
NASA Astrophysics Data System (ADS)
Tseng, Chyuan-Yow; Hung, Yi-Hsuan; Tsai, Chien-Hsiung; Huang, Yu-Jen
2007-12-01
Hybrid electric vehicles (HEVs) have been widely studied recently due to their high potential for reducing fuel consumption, exhaust emissions, and noise. Because it comprises two power sources, an HEV requires an energy management system (EMS) to distribute power optimally among the sources under various driving conditions. The ITRI in Taiwan has developed an HEV consisting of a 2.2 L internal combustion engine (ICE), an 18 kW motor/generator (M/G), a 288 V battery pack, and a continuously variable transmission (CVT). The task of the present study is to design an energy management strategy for the EMS of this HEV. Owing to the nonlinear nature of the system and the absence of a known system model, a simplex-method-based energy management strategy is proposed for the HEV system. The simplex method is an optimization strategy generally used to find optimal parameters for un-modeled systems. The way to apply the simplex method to the design of the EMS is presented. The feasibility of the proposed method was verified by performing numerical simulations on the FTP-75 drive cycle.
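The simplex method referred to here is the Nelder-Mead direct search, which needs no analytical system model. A minimal sketch follows; the two EMS parameters (an engine-on SOC threshold and a power-split ratio) and the smooth surrogate cost are assumptions for illustration, not the ITRI vehicle model.

```python
# Hypothetical surrogate for fuel consumption over a drive cycle, as a function
# of two EMS parameters; its minimum is placed at (0.4, 0.7) for illustration.
def cost(p):
    soc_on, split = p
    return (soc_on - 0.4) ** 2 + (split - 0.7) ** 2 + 1.0

def nelder_mead(f, x0, step=0.2, iters=200):
    # Minimal Nelder-Mead simplex: reflection, expansion, contraction, shrink.
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        v = list(x0); v[i] += step; simplex.append(v)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        refl = [c + (c - w) for c, w in zip(centroid, worst)]
        if f(refl) < f(best):
            exp = [c + 2 * (c - w) for c, w in zip(centroid, worst)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [c + 0.5 * (w - c) for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink every vertex toward the best one
                simplex = [best] + [[(b + vi) / 2 for b, vi in zip(best, v)]
                                    for v in simplex[1:]]
    return min(simplex, key=f)

opt = nelder_mead(cost, [0.1, 0.1])
```

In practice each `cost` evaluation would be one vehicle simulation (or dynamometer run) over the drive cycle, which is exactly why a derivative-free method fits.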
Simulation based optimized beam velocity in additive manufacturing
NASA Astrophysics Data System (ADS)
Vignat, Frédéric; Béraud, Nicolas; Villeneuve, François
2017-08-01
Manufacturing good parts with additive technologies relies on melt pool dimensions and temperature, which are controlled by manufacturing strategies often decided on the machine side. Strategies are built on the beam path and a variable energy input. Beam paths are often a mix of contour and hatching strategies filling the contours at each slice. The energy input depends on beam intensity and speed and is determined from simple thermal models so as to control melt pool dimensions and temperature and ensure porosity-free material. These models take into account variations in the thermal environment, such as overhanging surfaces or back-and-forth hatching paths. However, not all situations are correctly handled, and precision is limited. This paper proposes a new method to determine the energy input from a full build-chamber 3D thermal simulation. Using the results of the simulation, the energy input is modified to keep the melt pool temperature in a predetermined range. The paper first presents an experimental method to determine the optimal temperature range. In a second part, the method to optimize the beam speed from the simulation results is presented. Finally, the optimized beam path is tested in the EBM machine, and built parts are compared with parts built with an ordinary beam path.
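The core idea, adjusting beam speed until the simulated melt-pool temperature falls inside the optimal range, can be sketched as a simple iterative correction. The surrogate temperature model, the temperature bounds, and the local thermal factors below are all hypothetical stand-ins for the full 3D thermal simulation.

```python
T_MIN, T_MAX = 1900.0, 2100.0   # hypothetical optimal melt-pool range (K)

def melt_pool_temp(speed, local_factor):
    # Surrogate for the thermal simulation: hotter when the beam is slow
    # or the local environment (e.g. an overhang) retains heat.
    return 1500.0 + local_factor * 600.0 / speed

def optimize_speed(speeds, local_factors, gain=0.1, sweeps=200):
    # Repeatedly nudge each path segment's speed until its predicted
    # melt-pool temperature lies in [T_MIN, T_MAX].
    for _ in range(sweeps):
        for i, lf in enumerate(local_factors):
            t = melt_pool_temp(speeds[i], lf)
            if t > T_MAX:      # too hot -> speed the beam up
                speeds[i] *= 1 + gain
            elif t < T_MIN:    # too cold -> slow the beam down
                speeds[i] *= 1 - gain
    return speeds

factors = [0.8, 1.0, 1.4]      # nominal, nominal, overhang-like segment
speeds = optimize_speed([1.0, 1.0, 1.0], factors)
temps = [melt_pool_temp(s, f) for s, f in zip(speeds, factors)]
```

The real method replaces the surrogate with the full built-chamber simulation, so each correction accounts for heat accumulated from neighboring tracks and layers.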
An optimization framework for workplace charging strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yongxi; Zhou, Yan
2015-03-01
Workplace charging (WPC) has recently been recognized as the most important secondary charging option after residential charging for plug-in electric vehicles (PEVs). Current WPC practice is ad hoc and grants every PEV a designated charger, which may not be practical or economical when a large number of PEVs are present at the workplace. This study is the first to develop an optimization framework for WPC strategies that satisfies all charging demand while explicitly addressing the different eligible levels of charging technology and employees' demographic distributions. The optimization model minimizes the lifetime cost of equipment, installations, and operations, and is formulated as an integer program. We demonstrate the applicability of the model using numerical examples based on national average data. The results indicate that the proposed optimization model can reduce the total cost of running a WPC system by up to 70% compared to current practice. The WPC strategies are sensitive to the time windows and installation costs, and dominated by the PEV population size. WPC has also been identified as an alternative sustainable transportation program to public transit subsidy programs, with both economic and environmental advantages.
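A toy version of such an integer program is small enough to solve by enumeration. The charger rates, lifetime costs, and aggregate demand below are hypothetical, and this sketch ignores the time windows and demographic distributions the paper accounts for.

```python
from itertools import product

# Hypothetical instance: choose how many Level-1 and Level-2 chargers to
# install so the fleet's daily energy demand is met at minimum lifetime cost.
DEMAND = 40.0                          # total kWh needed across the workday
RATE = {"L1": 1.4, "L2": 6.6}          # kW delivered per charger
HOURS = 8                              # hours of availability per day
COST = {"L1": 600.0, "L2": 3200.0}     # lifetime cost per charger (equip+install)

best = None
for n1, n2 in product(range(11), range(11)):   # enumerate small integer counts
    if n1 * RATE["L1"] * HOURS + n2 * RATE["L2"] * HOURS >= DEMAND:
        c = n1 * COST["L1"] + n2 * COST["L2"]
        if best is None or c < best[0]:
            best = (c, n1, n2)

cost, n_l1, n_l2 = best
```

Real instances with per-employee arrival times and charger assignment need an integer programming solver rather than brute force, but the objective and covering constraint have the same shape.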
Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-01-01
Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
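The maxi-min objective can be illustrated with a toy (1+1)-evolution strategy standing in for CMA-ES: mutate the fluence parameters, renormalize to a fixed fluence budget, and accept only mutations that raise the minimum detectability. The two fluence weights and the linear detectability model are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical: two fluence-pattern weights, three sample locations in the
# image volume; detectability() is an illustrative stand-in for model-based d'.
def detectability(w):
    w1, w2 = w
    return [0.5 * w1 + 0.2 * w2, 0.3 * w1 + 0.6 * w2, 0.4 * w1 + 0.4 * w2]

def maximin(w):
    # the maxi-min objective: the worst-case d' across sample locations
    return min(detectability(w))

def one_plus_one_es(w, sigma=0.1, iters=500, budget=2.0):
    for _ in range(iters):
        cand = [max(0.0, wi + random.gauss(0, sigma)) for wi in w]
        s = sum(cand)
        if s <= 0:
            continue
        cand = [budget * c / s for c in cand]   # fixed total fluence (dose budget)
        if maximin(cand) > maximin(w):          # greedy acceptance
            w = cand
    return w

w_opt = one_plus_one_es([1.0, 1.0])
```

CMA-ES additionally adapts the mutation covariance, which matters once the FFM is parameterized by many Gaussian basis coefficients instead of two weights.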
Decoupled CFD-based optimization of efficiency and cavitation performance of a double-suction pump
NASA Astrophysics Data System (ADS)
Škerlavaj, A.; Morgut, M.; Jošt, D.; Nobile, E.
2017-04-01
In this study the impeller geometry of a double-suction pump ensuring the best performance in terms of hydraulic efficiency and resistance to cavitation is determined using an optimization strategy driven by the modeFRONTIER optimization platform. The different impeller shapes (designs) are modified according to the optimization parameters and tested with computational fluid dynamics (CFD) software, namely ANSYS CFX. The simulations are performed using a decoupled approach, in which only the impeller domain is numerically investigated, for computational convenience. The flow losses in the volute are estimated on the basis of the velocity distribution at the impeller outlet. The best designs are then validated using the computationally more expensive full-geometry CFD model. The overall results show that the proposed approach is suitable for quick impeller shape optimization.
A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2001-01-01
An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.
Trophic Strategies of Unicellular Plankton.
Chakraborty, Subhendu; Nielsen, Lasse Tor; Andersen, Ken H
2017-04-01
Unicellular plankton employ trophic strategies ranging from pure photoautotrophy through mixotrophy to obligate heterotrophy (phagotrophy), with cell sizes from 10^-8 to 1 μg C. A full understanding of how trophic strategy and cell size depend on resource environment and predation is lacking. To this end, we develop and calibrate a trait-based model for unicellular planktonic organisms characterized by four traits: cell size and investments in phototrophy, nutrient uptake, and phagotrophy. We use the model to predict how optimal trophic strategies depend on cell size under various environmental conditions, including seasonal succession. We identify two mixotrophic strategies: generalist mixotrophs investing in all three investment traits and obligate mixotrophs investing only in phototrophy and phagotrophy. We formulate two conjectures: (1) most cells are limited by organic carbon; however, small unicellulars are colimited by organic carbon and nutrients, and only large photoautotrophs and smaller mixotrophs are nutrient limited; (2) trophic strategy is bottom-up selected by the environment, while optimal size is top-down selected by predation. The focus on cell size and trophic strategies facilitates general insights into the strategies of a broad class of organisms in the size range from micrometers to millimeters that dominate the primary and secondary production of the world's oceans.
AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.
Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou
2017-01-01
In this report, AI-BL1.0, an open-source Labview-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and 1W1B beamline of the Beijing Synchrotron Radiation Facility.
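A bare-bones differential evolution loop (DE/rand/1/bin), the second of the two algorithms the program uses, might look as follows; the two "motor positions" and the quadratic flux surrogate are assumptions, not the beamline model.

```python
import random

random.seed(2)

# Surrogate beamline objective: photon flux peaks when both (hypothetical)
# motor positions are tuned; we minimize the negative flux.
def neg_flux(x):
    return (x[0] - 1.2) ** 2 + (x[1] + 0.5) ** 2

def differential_evolution(f, bounds, np_=20, cr=0.9, f_=0.8, gens=100):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            # pick three distinct donors, none equal to the target vector
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)
            # mutation a + F*(b - c) combined with binomial crossover
            trial = [a[k] + f_ * (b[k] - c[k])
                     if (random.random() < cr or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):    # greedy replacement
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(neg_flux, [(-2.0, 2.0), (-2.0, 2.0)])
```

On a real beamline each objective evaluation is a detector reading after moving the motors, which is what the report's "Observer Mode" strategy is designed to economize.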
The Preventive Control of a Dengue Disease Using the Pontryagin Minimum Principle
NASA Astrophysics Data System (ADS)
Ratna Sari, Eminugroho; Insani, Nur; Lestari, Dwi
2017-06-01
Behaviour analysis of the host-vector model of dengue disease without control is based on the value of the basic reproduction number, obtained using next-generation matrices. The model is then further developed to involve a preventive control that minimizes the contact between host and vector. The purpose is to obtain an optimal preventive strategy with minimal cost. The Pontryagin Minimum Principle is used to find the optimal control analytically. The derived optimality system is then solved numerically to investigate the control effort required to reduce the infected class.
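Pontryagin-based optimality systems of this kind are typically solved numerically with a forward-backward sweep: integrate the state forward, the adjoint backward, and update the control from the optimality condition. The sketch below applies the scheme to a toy SIS-type model standing in for the host-vector dynamics; all parameter values are assumptions.

```python
# Toy problem (illustrative): minimize J = integral of (I + c*u^2) dt subject to
# I' = beta*(1-u)*I*(1-I) - gamma*I, where u in [0,1] reduces transmission.
# Adjoint: lam' = -(1 + lam*(beta*(1-u)*(1-2I) - gamma)), lam(T) = 0.
# Optimality: u* = clamp(lam*beta*I*(1-I)/(2c), 0, 1).
N, T = 200, 20.0
h = T / N
beta, gamma, c = 0.6, 0.2, 0.5

I = [0.1] * (N + 1)
lam = [0.0] * (N + 1)
u = [0.0] * (N + 1)

for _ in range(50):                       # sweep until (approximate) convergence
    for k in range(N):                    # forward state integration (Euler)
        I[k + 1] = I[k] + h * (beta * (1 - u[k]) * I[k] * (1 - I[k]) - gamma * I[k])
    lam[N] = 0.0
    for k in range(N, 0, -1):             # backward adjoint integration
        dH_dI = 1.0 + lam[k] * (beta * (1 - u[k]) * (1 - 2 * I[k]) - gamma)
        lam[k - 1] = lam[k] + h * dH_dI
    for k in range(N + 1):                # control update, relaxed for stability
        u_new = min(1.0, max(0.0, lam[k] * beta * I[k] * (1 - I[k]) / (2 * c)))
        u[k] = 0.5 * u[k] + 0.5 * u_new

cost = h * sum(I[k] + c * u[k] ** 2 for k in range(N))
```

The dengue model replaces this scalar state with the host and vector compartments, but the sweep structure is the same.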
Dynamic malware containment under an epidemic model with alert
NASA Astrophysics Data System (ADS)
Zhang, Tianrui; Yang, Lu-Xing; Yang, Xiaofan; Wu, Yingbo; Tang, Yuan Yan
2017-03-01
Alerting at the early stage of malware invasion turns out to be an important complement to malware detection and elimination. This paper addresses the issue of how to dynamically contain the prevalence of malware at a lower cost, provided alerting is feasible. A controlled epidemic model with alert is established, and an optimal control problem based on the epidemic model is formulated. The optimality system for the optimal control problem is derived. The structure of an optimal control for the proposed optimal control problem is characterized under some conditions. Numerical examples show that the cost-efficiency of an optimal control strategy can be enhanced by adjusting the upper and lower bounds on admissible controls.
Analysis of a Two-Dimensional Thermal Cloaking Problem on the Basis of Optimization
NASA Astrophysics Data System (ADS)
Alekseev, G. V.
2018-04-01
For a two-dimensional model of thermal scattering, inverse problems arising in the development of tools for cloaking material bodies on the basis of a mixed thermal cloaking strategy are considered. By applying the optimization approach, these problems are reduced to optimization ones in which the role of controls is played by variable parameters of the medium occupying the cloaking shell and by the heat flux through a boundary segment of the basic domain. The solvability of the direct and optimization problems is proved, and an optimality system is derived. Based on its analysis, sufficient conditions on the input data are established that ensure the uniqueness and stability of optimal solutions.
Research reactor loading pattern optimization using estimation of distribution algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, S.; Ziver, K.; AMCG Group, RM Consultants, Abingdon
2006-07-01
A new evolutionary-search-based approach for solving nuclear reactor loading pattern optimization problems is presented, based on Estimation of Distribution Algorithms (EDAs). The optimization technique developed is then applied to the maximization of the effective multiplication factor (K_eff) of the Imperial College CONSORT research reactor (the last remaining civilian research reactor in the United Kingdom). A new elitism-guided searching strategy has been developed and applied to improve local convergence, together with some problem-dependent information based on stand-alone K_eff with fuel coupling calculations. A comparison study between the EDAs and a Genetic Algorithm with a Heuristic Tie Breaking Crossover operator has shown that the new algorithm is efficient and robust. (authors)
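An estimation of distribution algorithm such as UMDA can be sketched on a toy loading problem: choose 4 of 8 fuel positions to maximize a surrogate "k_eff". The position weights, the feasibility penalty, and the elitist selection rule are illustrative, not the CONSORT model.

```python
import random

random.seed(3)

# Toy surrogate: per-position "k_eff" contribution weights (hypothetical).
WEIGHTS = [0.9, 0.4, 0.7, 0.1, 0.8, 0.3, 0.6, 0.2]

def k_eff(pattern):
    if sum(pattern) != 4:            # infeasible loading -> heavy penalty
        return -1.0
    return sum(w for w, bit in zip(WEIGHTS, pattern) if bit)

def umda(pop_size=60, elite=15, gens=40, n=8):
    p = [0.5] * n                    # probability vector: the "distribution"
    for _ in range(gens):
        pop = [[1 if random.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
        pop.sort(key=k_eff, reverse=True)
        sel = pop[:elite]            # elitist (truncation) selection
        # re-estimate each bit's marginal from the selected individuals,
        # clamped away from 0/1 to keep exploration alive
        p = [max(0.02, min(0.98, sum(ind[i] for ind in sel) / elite))
             for i in range(n)]
    return max(pop, key=k_eff)

best = umda()
```

Unlike a GA, the EDA carries no explicit population between generations beyond this marginal distribution, which is what makes elitism-style guidance (as in the paper) a useful addition.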
OPTIMIZATION OF INTEGRATED URBAN WET-WEATHER CONTROL STRATEGIES
An optimization method for urban wet weather control (WWC) strategies is presented. The developed optimization model can be used to determine the most cost-effective strategies for the combination of centralized storage-release systems and distributed on-site WWC alternatives. T...
An adaptive response surface method for crashworthiness optimization
NASA Astrophysics Data System (ADS)
Shi, Lei; Yang, Ren-Jye; Zhu, Ping
2013-11-01
Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
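Selecting among candidate response surfaces with a Bayesian-flavored metric can be approximated with BIC, as in this sketch: polynomial surfaces of degree 1 to 3 are fitted to noisy samples of a hidden quadratic "crash response" (all data synthetic, not a crashworthiness model).

```python
import math
import random

random.seed(4)

# Noisy samples of a hidden quadratic response (illustrative design data)
xs = [i / 10 for i in range(21)]
ys = [2.0 - 3.0 * x + 1.5 * x * x + random.gauss(0, 0.05) for x in xs]

def fit_poly(deg):
    # Least squares via the normal equations, solved by Gaussian elimination.
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                     # elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):           # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def bic(deg):
    # BIC trades goodness of fit against model complexity
    coef = fit_poly(deg)
    rss = sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
              for x, y in zip(xs, ys))
    m = len(xs)
    return m * math.log(rss / m) + (deg + 1) * math.log(m)

best_deg = min([1, 2, 3], key=bic)
```

The adaptive strategy in the paper adds points where they most improve the chosen surface, so the selection step above would be repeated as the design set grows.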
Energy-saving management modelling and optimization for lead-acid battery formation process
NASA Astrophysics Data System (ADS)
Wang, T.; Chen, Z.; Xu, J. Y.; Wang, F. Y.; Liu, H. M.
2017-11-01
In this paper, a typical lead-acid battery production process is introduced. Based on the formation process, an efficiency management method is proposed. An optimization model is established with the objective of minimizing the formation electricity cost in a single period. This optimization model considers several related constraints, together with two influencing factors: the transformation efficiency of the IGBT charge-and-discharge machine and the time-of-use price. An example simulation using the PSO algorithm to solve this mathematical model shows that the proposed optimization strategy is effective and transferable for energy saving and efficiency optimization in battery production industries.
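A small particle swarm optimization (PSO) sketch of the cost model: allocate a fixed formation energy over hours with time-of-use prices, enforcing the energy balance with a penalty term. The tariff, energy demand, and per-hour power cap are hypothetical, and the efficiency constraints of the paper are omitted.

```python
import random

random.seed(5)

PRICES = [0.3, 0.3, 0.1, 0.1, 0.2, 0.5]   # hypothetical time-of-use tariff
ENERGY, P_MAX = 8.0, 3.0                   # total kWh to deliver, per-hour cap

def cost(p):
    pen = 100.0 * abs(sum(p) - ENERGY)     # penalty enforces the energy balance
    return sum(pr * pi for pr, pi in zip(PRICES, p)) + pen

def pso(n=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    dim = len(PRICES)
    pos = [[random.uniform(0, P_MAX) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [list(p) for p in pos]          # personal bests
    gbest = list(min(pos, key=cost))        # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(P_MAX, max(0.0, pos[i][d] + vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = list(pos[i])
                if cost(pos[i]) < cost(gbest):
                    gbest = list(pos[i])
    return gbest

plan = pso()
```

As expected, the swarm pushes the charging power toward the cheap off-peak hours while keeping the total delivered energy near the target.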
Optimisation of strain selection in evolutionary continuous culture
NASA Astrophysics Data System (ADS)
Bayen, T.; Mairet, F.
2017-12-01
In this work, we study a minimal-time control problem for a perfectly mixed continuous culture with n ≥ 2 species and one limiting resource. The model that we consider includes a mutation factor for the microorganisms. Our aim is to provide optimal feedback control laws to optimise the selection of the species of interest. Thanks to Pontryagin's Principle, we derive optimality conditions on optimal controls and introduce a sub-optimal control law based on a most rapid approach to a singular arc that depends on the initial condition. Using adaptive dynamics theory, we also study a simplified version of this model which allows us to introduce a near-optimal strategy.
Intelligent reservoir operation system based on evolving artificial neural networks
NASA Astrophysics Data System (ADS)
Chaves, Paulo; Chang, Fi-John
2008-06-01
We propose a novel intelligent reservoir operation system based on an evolving artificial neural network (ANN). Evolving means that the parameters of the ANN model are identified by the GA evolutionary optimization technique; accordingly, the ANN model represents the operational strategies of reservoir operation. The main advantages of the Evolving ANN Intelligent System (ENNIS) are as follows: (i) only a small number of parameters need to be optimized, even for long optimization horizons; (ii) multiple decision variables are easy to handle; and (iii) the operation model can be combined straightforwardly with other prediction models. The developed intelligent system was applied to the operation of the Shihmen Reservoir in North Taiwan to investigate its applicability and practicability. The proposed method is first applied to a simple formulation for the operation of the Shihmen Reservoir, with a single objective and a single decision variable. Its results were compared to those obtained by dynamic programming. The constructed network proved to be a good operational strategy. The method was then applied to the reservoir with multiple (five) decision variables. The results demonstrated that the evolving neural networks improved the operation performance of the reservoir compared to its current operational strategy. The system was capable of successfully handling various decision variables simultaneously and provided reasonable and suitable decisions.
Progression to multi-scale models and the application to food system intervention strategies.
Gröhn, Yrjö T
2015-02-01
The aim of this article is to discuss how the systems science approach can be used to optimize intervention strategies in food animal systems. It advocates the idea that the challenges of maintaining a safe food supply are best addressed by integrating modeling and mathematics with biological studies critical to formulation of public policy to address these challenges. Much information on the biology and epidemiology of food animal systems has been characterized through single-discipline methods, but until now this information has not been thoroughly utilized in a fully integrated manner. The examples are drawn from our current research. The first, explained in depth, uses clinical mastitis to introduce the concept of dynamic programming to optimize management decisions in dairy cows (also introducing the curse of dimensionality problem). In the second example, a compartmental epidemic model for Johne's disease with different intervention strategies is optimized. The goal of the optimization strategy depends on whether there is a relationship between Johne's and Crohn's disease. If so, optimization is based on eradication of infection; if not, it is based on the cow's performance only (i.e., economic optimization, similar to the mastitis example). The third example focuses on food safety to introduce risk assessment using Listeria monocytogenes and Salmonella Typhimurium. The last example, practical interventions to effectively manage antibiotic resistance in beef and dairy cattle systems, introduces meta-population modeling that accounts for bacterial growth not only in the host (cow), but also in the cow's feed, drinking water and the housing environment. Each example stresses the need to progress toward multi-scale modeling. The article ends with examples of multi-scale systems, from food supply systems to Johne's disease. 
Reducing the consequences of foodborne illnesses (i.e., minimizing disease occurrence and associated costs) can only occur through an understanding of the system as a whole, including all its complexities. Thus the goal of future research should be to merge disciplines such as molecular biology, applied mathematics and social sciences to gain a better understanding of complex systems such as the food supply chain. Copyright © 2014 Elsevier B.V. All rights reserved.
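The dynamic programming idea behind the mastitis example can be sketched as a finite-horizon keep-or-replace problem solved by backward induction; the revenue figures, transition probability, and replacement cost are invented for illustration (a real model tracks lactation stage, parity, and more states, which is where the curse of dimensionality bites).

```python
# Finite-horizon DP: each month, decide whether to keep or replace a cow whose
# expected net revenue declines with a (hypothetical) mastitis state 0..3.
REVENUE = [300.0, 220.0, 120.0, 20.0]        # monthly net revenue by state
P_WORSEN = 0.3                                # chance the state degrades
REPLACE_COST = 400.0                          # heifer cost minus cull value
HORIZON = 24                                  # months

def solve():
    V = [0.0] * 4                             # terminal values
    policy = []
    for _ in range(HORIZON):                  # backward induction over months
        newV, act = [], []
        for s in range(4):
            worse = min(s + 1, 3)
            keep = REVENUE[s] + (1 - P_WORSEN) * V[s] + P_WORSEN * V[worse]
            # replacement restarts the herd slot in the healthy state
            repl = (-REPLACE_COST + REVENUE[0]
                    + (1 - P_WORSEN) * V[0] + P_WORSEN * V[1])
            newV.append(max(keep, repl))
            act.append("keep" if keep >= repl else "replace")
        V, policy = newV, act
    return V, policy

values, policy = solve()
```

The resulting policy keeps healthy cows and replaces severely affected ones, which is exactly the threshold structure such replacement models are used to locate.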
NASA Astrophysics Data System (ADS)
Sun, Xinyao; Wang, Xue; Wu, Jiangwei; Liu, Youda
2014-05-01
Cyber-physical systems (CPS) have recently emerged as a new technology that can provide promising approaches to demand-side management (DSM), an important capability in industrial power systems. The manufacturing center is a typical industrial power subsystem with dozens of high-energy-consumption devices that have complex physical dynamics. DSM, integrated with CPS, is an effective methodology for solving energy optimization problems in the manufacturing center. This paper presents a prediction-based self-adaptive energy optimization method for demand-side management of a manufacturing center in cyber-physical systems. To gain prior knowledge of DSM operating results, a sparse-Bayesian-learning-based componential forecasting method is introduced to predict 24-hour electric load levels for specific industrial areas in China. From these data, a pricing strategy is designed based on the short-term load forecasting results. To minimize total energy costs while guaranteeing manufacturing center service quality, an adaptive demand-side energy optimization algorithm is presented. The proposed scheme is tested in a machining center energy optimization experiment. An AMI sensing system is used to measure the demand-side energy consumption of the manufacturing center. Based on the data collected from the sensing system, the load-prediction-based energy optimization scheme is implemented. By employing both the PSO and the CPSO methods, the DSM problem in the manufacturing center is solved. The results of the experiment show that the self-adaptive CPSO energy optimization method improves optimization by 5% compared with the traditional PSO method.
Computing border bases using mutant strategies
NASA Astrophysics Data System (ADS)
Ullah, E.; Abbas Khan, S.
2014-01-01
Border bases, a generalization of Gröbner bases, have been actively studied in recent years due to their applicability to industrial problems. In cryptography and coding theory, a useful application of border bases is to solve zero-dimensional systems of polynomial equations over finite fields, which motivates the development of optimizations of the algorithms that compute border bases. In 2006, Kehrein and Kreuzer formulated the Border Basis Algorithm (BBA), an algorithm which allows the computation of border bases that relate to a degree-compatible term ordering. In 2007, J. Ding et al. introduced mutant strategies based on finding special lower-degree polynomials in the ideal. The mutant strategies aim to distinguish special lower-degree polynomials (mutants) from the other polynomials and give them priority in the process of generating new polynomials in the ideal. In this paper we develop hybrid algorithms that use the ideas of J. Ding et al. involving the concept of mutants to optimize the Border Basis Algorithm for solving systems of polynomial equations over finite fields. In particular, we recall a version of the Border Basis Algorithm called the Improved Border Basis Algorithm and propose two hybrid algorithms, called MBBA and IMBBA. The new mutant variants provide both space efficiency and time efficiency. The efficiency of these newly developed hybrid algorithms is discussed using standard cryptographic examples.
Guthier, Christian V; Damato, Antonio L; Hesser, Juergen W; Viswanathan, Akila N; Cormack, Robert A
2017-12-01
Interstitial high-dose rate (HDR) brachytherapy is an important therapeutic strategy for the treatment of locally advanced gynecologic (GYN) cancers. The outcome of this therapy is determined by the quality of the dose distribution achieved. This paper focuses on a novel yet simple heuristic for catheter selection in GYN HDR brachytherapy and its comparison against state-of-the-art optimization strategies. The proposed technique is intended to act as a decision-supporting tool for selecting a favorable needle configuration. The presented heuristic for catheter optimization is based on a shrinkage-type algorithm (SACO). It is compared against state-of-the-art planning in a retrospective study of 20 patients who previously received image-guided interstitial HDR brachytherapy using a Syed Neblett template. From those plans, template orientation and position are estimated via a rigid registration of the template with the actual catheter trajectories. All potential straight trajectories intersecting the contoured clinical target volume (CTV) are considered for catheter optimization. Retrospectively generated plans and clinical plans are compared with respect to dosimetric performance and optimization time. All plans were generated with a single run of the optimizer lasting 0.6-97.4 s. Compared to manual optimization, SACO yields a statistically significant (P ≤ 0.05) improvement in target coverage while at the same time fulfilling all dosimetric constraints for organs at risk (OARs). Comparing inverse planning strategies, dosimetric evaluation for SACO and "hybrid inverse planning and optimization" (HIPO), as the gold standard, shows no statistically significant difference (P > 0.05). However, SACO provides the potential to reduce the number of catheters used without compromising plan quality. The proposed heuristic for needle selection provides fast catheter selection, with optimization times suited for intraoperative treatment planning. 
Compared to manual optimization, the proposed methodology results in fewer catheters without a clinically significant loss in plan quality. The proposed approach can be used as a decision support tool that guides the user to find the ideal number and configuration of catheters. © 2017 American Association of Physicists in Medicine.
A computational approach to compare regression modelling strategies in prediction research.
Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H
2016-08-25
It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
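Strategy comparison via the Brier score can be sketched as follows: a well-specified risk model is scored against a prevalence-only ("null") model on synthetic outcomes. The data-generating logistic model and sample size are assumptions; the paper's wrapper approach compares full fitting pipelines by likelihood rather than two fixed predictors.

```python
import math
import random

random.seed(6)

def brier(probs, outcomes):
    # mean squared difference between predicted risk and observed 0/1 outcome
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

# Synthetic validation set: true risk depends on one covariate (illustrative)
xs = [random.gauss(0, 1) for _ in range(2000)]
ys = [1 if random.random() < 1 / (1 + math.exp(-1.5 * x)) else 0 for x in xs]

# Strategy A: a well-specified risk model; Strategy B: prevalence-only model
p_model = [1 / (1 + math.exp(-1.5 * x)) for x in xs]
prevalence = sum(ys) / len(ys)
p_null = [prevalence] * len(ys)

score_a, score_b = brier(p_model, ys), brier(p_null, ys)
```

Lower Brier scores are better; the gap between the two strategies is the kind of quantity the paper's a priori comparisons estimate before committing to a final modelling approach.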
The optimal location of piezoelectric actuators and sensors for vibration control of plates
NASA Astrophysics Data System (ADS)
Kumar, K. Ramesh; Narayanan, S.
2007-12-01
This paper considers the optimal placement of collocated piezoelectric actuator-sensor pairs on a thin plate using a model-based linear quadratic regulator (LQR) controller. LQR performance is taken as the objective for finding the optimal locations of the sensor-actuator pairs. The problem is formulated using the finite element method (FEM) as a multi-input multi-output (MIMO) control problem. The discrete optimal sensor and actuator location problem is formulated in the framework of a zero-one optimization problem, which is solved using a genetic algorithm (GA). Different classical control strategies, such as direct proportional feedback, constant-gain negative velocity feedback and the LQR optimal control scheme, are applied to study the control effectiveness.
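The zero-one encoding with a GA can be sketched as follows. The per-site scores below stand in for the LQR performance index that a real FEM model would supply; the site count, pair count, and GA settings are all invented for illustration:

```python
import random

random.seed(1)

N_SITES, N_PAIRS = 12, 3          # candidate locations, pairs to place
# Hypothetical per-site "control authority" scores standing in for the
# LQR performance index a real FEM model would supply.
gain = [random.random() for _ in range(N_SITES)]

def fitness(bits):
    # Penalize layouts that do not use exactly N_PAIRS actuator-sensor pairs.
    if sum(bits) != N_PAIRS:
        return -1.0
    return sum(g for g, b in zip(gain, bits) if b)

def random_layout():
    bits = [0] * N_SITES
    for i in random.sample(range(N_SITES), N_PAIRS):
        bits[i] = 1
    return bits

def crossover(a, b):
    cut = random.randrange(1, N_SITES)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.1):
    return [b ^ (random.random() < rate) for b in bits]

# Simple elitist GA over the zero-one placement vectors.
pop = [random_layout() for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]

best = max(pop, key=fitness)
print(best, round(fitness(best), 3))
```

In the paper the fitness evaluation is the expensive part (an LQR solve on the FEM model per candidate layout), which is why a GA that needs only function evaluations is a natural fit for the zero-one formulation.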
Nonstationary decision model for flood risk decision scaling
NASA Astrophysics Data System (ADS)
Spence, Caitlin M.; Brown, Casey M.
2016-11-01
Hydroclimatic stationarity is increasingly questioned as a default assumption in flood risk management (FRM), but successor methods are not yet established. Some potential successors depend on estimates of future flood quantiles, but methods for estimating future design storms are subject to high levels of uncertainty. Here we apply a Nonstationary Decision Model (NDM) to flood risk planning within the decision scaling framework. The NDM combines a nonstationary probability distribution of annual peak flow with optimal selection of flood management alternatives using robustness measures. The NDM incorporates structural and nonstructural FRM interventions and valuation of flows supporting ecosystem services to calculate expected cost of a given FRM strategy. A search for the minimum-cost strategy under incrementally varied representative scenarios extending across the plausible range of flood trend and value of the natural flow regime discovers candidate FRM strategies that are evaluated and compared through a decision scaling analysis (DSA). The DSA selects a management strategy that is optimal or close to optimal across the broadest range of scenarios or across the set of scenarios deemed most likely to occur according to estimates of future flood hazard. We illustrate the decision framework using a stylized example flood management decision based on the Iowa City flood management system, which has experienced recent unprecedented high flow episodes. The DSA indicates a preference for combining infrastructural and nonstructural adaptation measures to manage flood risk and makes clear that options-based approaches cannot be assumed to be "no" or "low regret."
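The scenario-sweep selection step of a decision scaling analysis can be sketched as follows. The strategies, costs, and robustness tolerance are invented for illustration and are not the Iowa City data; the point is only the mechanic of preferring a strategy that stays near-optimal across the widest scenario range:

```python
# Stylized decision-scaling analysis: hypothetical total costs of three FRM
# strategies under five scenarios spanning a range of flood-trend assumptions.
# costs[strategy][scenario]
costs = {
    "levee_only":        [10, 12, 18, 30, 45],
    "buyouts_only":      [22, 22, 23, 24, 25],
    "levee_plus_buyout": [14, 15, 16, 18, 21],
}

def robustness(strategy, tol=1.10):
    """Fraction of scenarios where the strategy is within tol of the
    scenario-best cost (one simple robustness measure)."""
    n = len(next(iter(costs.values())))
    hits = 0
    for s in range(n):
        best = min(c[s] for c in costs.values())
        if costs[strategy][s] <= tol * best:
            hits += 1
    return hits / n

scores = {k: robustness(k) for k in costs}
preferred = max(scores, key=scores.get)
print(scores, preferred)   # the combined strategy is near-optimal most often
```

With these invented numbers the combined infrastructural-plus-nonstructural strategy is preferred, mirroring the qualitative finding of the DSA; a purely structural strategy is cheapest under mild scenarios but far from optimal under strong flood trends.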
Fractal Profit Landscape of the Stock Market
Grönlund, Andreas; Yi, Il Gu; Kim, Beom Jun
2012-01-01
We investigate the structure of the profit landscape obtained from the most basic, fluctuation-based trading strategy applied to daily stock price data. The strategy is parameterized by only two variables, p and q: stocks are sold and bought if the log return is bigger than p and less than -q, respectively. Repetition of this simple strategy for a long time gives the profit defined in the underlying two-dimensional parameter space of p and q. It is revealed that the local maxima in the profit landscape are spread in the form of a fractal structure. The fractal structure implies that successful strategies are neither localized to any region of the profit landscape nor spaced evenly throughout it, which makes the optimization notoriously hard and hypersensitive to partial or limited information. The concrete implication of this property is demonstrated by showing that optimizing the parameters on one stock and applying them to future values or to other stocks yields worse profit than a strategy that ignores fluctuations, i.e., a long-term buy-and-hold strategy. PMID:22558079
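The two-parameter strategy and its profit landscape can be sketched on synthetic prices. This is an illustration only: the random-walk price series, the (p, q) grid, and the all-in/all-out position rule are assumptions, not the paper's data or exact protocol:

```python
import math
import random

random.seed(7)

# Synthetic daily prices via a random walk in log space (illustration only).
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * math.exp(random.gauss(0.0005, 0.02)))

def profit(p, q, prices, cash=1.0):
    """Buy when the daily log return drops below -q, sell when it exceeds p;
    return final wealth relative to unit starting capital."""
    shares = 0.0
    for prev, cur in zip(prices, prices[1:]):
        r = math.log(cur / prev)
        if shares == 0.0 and r < -q:
            shares, cash = cash / cur, 0.0
        elif shares > 0.0 and r > p:
            cash, shares = shares * cur, 0.0
    return cash + shares * prices[-1]

# Scan the (p, q) parameter plane, as in the profit-landscape analysis.
grid = [(p / 100, q / 100) for p in range(1, 6) for q in range(1, 6)]
landscape = {pq: profit(*pq, prices) for pq in grid}
best_pq = max(landscape, key=landscape.get)
buy_and_hold = prices[-1] / prices[0]
print(best_pq, round(landscape[best_pq], 3), round(buy_and_hold, 3))
```

Refining the grid around `best_pq` and plotting the landscape is how the fractal clustering of local maxima becomes visible; the paper's point is that `best_pq` found in-sample generalizes poorly out-of-sample.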
NASA Astrophysics Data System (ADS)
Cohen, J. S.; McGarity, A. E.
2017-12-01
The ability of mass-deployed green stormwater infrastructure (GSI) to intercept significant amounts of urban runoff has the potential to reduce the frequency of a city's combined sewer overflows (CSOs). This study was performed to aid the Overbrook Environmental Education Center's vision of applying this concept to create a Green Commercial Corridor in Philadelphia's Overbrook Neighborhood, which lies in the Mill Creek Sewershed. To add physical and social realism to previous work that used simulation-optimization techniques to produce GSI deployment strategies (McGarity et al., 2016), this study's models incorporated land-use types and a specific neighborhood in the sewershed. The low impact development (LID) feature in EPA's Storm Water Management Model (SWMM) was used to simulate various geographic configurations of GSI in Overbrook. The results from these simulations were used to obtain formulas describing the annual CSO reduction in the sewershed based on the deployed GSI practices. These non-linear hydrologic response formulas were then implemented in the Storm Water Investment Strategy Evaluation (StormWISE) model (McGarity, 2012), a constrained optimization model used to develop optimal stormwater management practices on the watershed scale. By saturating the avenue with GSI, not only will CSOs from the sewershed into the Schuylkill River be reduced, but ancillary social and economic benefits of GSI will also be achieved. The effectiveness of these ancillary benefits varies with the type of GSI practice and the type of land use in which the GSI is implemented. Thus, the simulation and optimization processes were repeated while delimiting GSI deployment by land use (residential, commercial, industrial, and transportation).
The results give a GSI deployment strategy that achieves desired annual CSO reductions at a minimum cost based on the locations of tree trenches, rain gardens, and rain barrels in specified land use types.
Renton, Michael
2011-01-01
Background and aims Simulations that integrate sub-models of important biological processes can be used to ask questions about optimal management strategies in agricultural and ecological systems. Building sub-models with more detail and aiming for greater accuracy and realism may seem attractive, but is likely to be more expensive and time-consuming and result in more complicated models that lack transparency. This paper illustrates a general integrated approach for constructing models of agricultural and ecological systems that is based on the principle of starting simple and then directly testing for the need to add additional detail and complexity. Methodology The approach is demonstrated using LUSO (Land Use Sequence Optimizer), an agricultural system analysis framework based on simulation and optimization. A simple sensitivity analysis and a functional perturbation analysis are used to test to what extent LUSO's crop–weed competition sub-model affects the answers to a number of questions at the scale of the whole farming system regarding optimal land-use sequencing strategies and resulting profitability. Principal results The need for accuracy in the crop–weed competition sub-model within LUSO depended to a small extent on the parameter being varied, but more importantly and interestingly on the type of question being addressed with the model. Only a small part of the crop–weed competition model actually affects the answers to these questions. Conclusions This study illustrates an example application of the proposed integrated approach for constructing models of agricultural and ecological systems based on testing whether complexity needs to be added to address particular questions of interest. We conclude that this example clearly demonstrates the potential value of the general approach.
Advantages of this approach include minimizing costs and resources required for model construction, keeping models transparent and easy to analyse, and ensuring the model is well suited to address the question of interest. PMID:22476477
NASA Astrophysics Data System (ADS)
Saavedra, Juan Alejandro
Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls to the decision-making process of reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology for including rework stations in the manufacturing process by identifying the number and location of these workstations. The factors considered to optimize these stations are cost, cycle time, reworkability and rework benefit. The goal is to minimize the cost and cycle time of the process while increasing the reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology to identify the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost. The model allows the user to calculate product direct cost as the quality sigma level of the process changes. This provides a benefit because a complete cost estimation calculation does not need to be performed every time the process yield changes. This cost estimation model is then used for the QC strategy optimization process. In order to propose a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening Design of Experiments (DOE) was performed on seven initial factors and identified three significant factors; it also showed that one response variable was not required for the optimization process. A full factorial DOE was then performed to verify the significant factors obtained previously. The QC strategy optimization is performed through a Genetic Algorithm (GA), which allows the evaluation of many candidate solutions in order to obtain feasible optimal solutions.
The GA evaluates possible solutions based on cost, cycle time, reworkability and rework benefit. Because this is a multi-objective optimization problem, it provides several possible solutions. The solutions are presented as chromosomes that clearly state the number and location of the rework stations. The user analyzes these solutions and selects one by deciding which of the four factors is most important for the product being manufactured or the company's objective. The major contribution of this study is to provide the user with a methodology to identify an effective and optimal QC strategy that incorporates the number and location of rework substations in order to minimize direct product cost and cycle time, and maximize reworkability and rework benefit.
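The multi-objective selection step can be illustrated by extracting the non-dominated (Pareto) set from candidate solutions, which is the set a multi-objective GA presents to the user. The candidate layouts and their scores below are invented:

```python
# Hypothetical candidate rework-station layouts scored on the four factors.
# Cost and cycle time are minimized; reworkability and rework benefit are
# maximized, so they are negated to give a pure minimization problem.
# candidates[name] = (cost, cycle_time, -reworkability, -rework_benefit)
candidates = {
    "A": (100, 50, -0.8, -0.6),
    "B": (120, 45, -0.9, -0.7),
    "C": (100, 55, -0.7, -0.6),   # dominated by A on every objective
    "D": ( 90, 60, -0.6, -0.5),
}

def dominates(u, v):
    """u dominates v if it is no worse everywhere and better somewhere."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

pareto = [k for k, u in candidates.items()
          if not any(dominates(v, u) for j, v in candidates.items() if j != k)]
print(sorted(pareto))
```

Layout C is dominated by A (same cost, worse cycle time and reworkability) and is discarded; A, B and D remain on the Pareto front, and the user trades off among them exactly as the abstract describes.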
Cost-Effectiveness of Screening Individuals With Cystic Fibrosis for Colorectal Cancer.
Gini, Andrea; Zauber, Ann G; Cenin, Dayna R; Omidvari, Amir-Houshang; Hempstead, Sarah E; Fink, Aliza K; Lowenfels, Albert B; Lansdorp-Vogelaar, Iris
2017-12-27
Individuals with cystic fibrosis are at increased risk of colorectal cancer (CRC) compared to the general population, and risk is higher among those who received an organ transplant. We performed a cost-effectiveness analysis to determine optimal CRC screening strategies for patients with cystic fibrosis. We adjusted the existing Microsimulation Screening Analysis-Colon microsimulation model to reflect increased CRC risk and lower life expectancy in patients with cystic fibrosis. Modeling was performed separately for individuals who never received an organ transplant and patients who had received an organ transplant. We modeled 76 colonoscopy screening strategies that varied the age range and screening interval. The optimal screening strategy was determined based on a willingness to pay threshold of $100,000 per life-year gained. Sensitivity and supplementary analyses were performed, including fecal immunochemical test (FIT) as an alternative test, earlier ages of transplantation, and increased rates of colonoscopy complications, to assess whether optimal screening strategies would change. Colonoscopy every 5 years, starting at age 40 years, was the optimal colonoscopy strategy for patients with cystic fibrosis who never received an organ transplant; this strategy prevented 79% of deaths from CRC. Among patients with cystic fibrosis who had received an organ transplant, optimal colonoscopy screening should start at an age of 30 or 35 years, depending on the patient's age at time of transplantation. Annual FIT screening was predicted to be cost-effective for patients with cystic fibrosis. However, the level of accuracy of the FIT in this population is not clear. Using a Microsimulation Screening Analysis-Colon microsimulation model, we found screening of patients with cystic fibrosis for CRC to be cost-effective. Because of the higher risk of CRC in these patients, screening should start at an earlier age with a shorter screening interval.
The findings of this study (especially those on FIT screening) may be limited by restricted evidence available for patients with cystic fibrosis. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.
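The threshold-based selection among screening strategies can be illustrated with a net-monetary-benefit calculation at the stated willingness-to-pay level. The strategy names, per-person costs, and life-year gains below are invented stand-ins, not the paper's microsimulation outputs:

```python
WTP = 100_000  # willingness to pay per life-year gained, as in the abstract

# Hypothetical screening strategies: (cost per person, life-years gained).
# All numbers are invented for illustration only.
strategies = {
    "no_screening":      (0,     0.000),
    "colonoscopy_10y":   (2_000, 0.030),
    "colonoscopy_5y_40": (3_500, 0.050),
    "colonoscopy_5y_30": (5_500, 0.058),
}

def net_monetary_benefit(cost, life_years):
    """A strategy is worth adopting over another if its NMB is higher;
    maximizing NMB is equivalent to ICER-based selection at this WTP."""
    return life_years * WTP - cost

nmb = {name: net_monetary_benefit(*v) for name, v in strategies.items()}
optimal = max(nmb, key=nmb.get)
print(optimal, nmb)
```

With these invented inputs the intermediate-intensity strategy wins: the most intensive option buys extra life-years at more than $100,000 each, so intensifying past the optimum lowers net benefit even though it adds health gain.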
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daley, R.; Ahdieh, N.; Bentley, J.
2014-01-01
A comprehensive Federal Fleet Management Handbook that builds upon the "Guidance for Federal Agencies on E.O. 13514 Section 12-Federal Fleet Management" and provides information to help fleet managers select optimal greenhouse gas and petroleum reduction strategies for each location; meet or exceed related fleet requirements; acquire vehicles to support these strategies while minimizing fleet size and vehicle miles traveled; and refine strategies based on agency performance.
Timing Game-Based Practice in a Reading Comprehension Strategy Tutor
ERIC Educational Resources Information Center
Jacovina, Matthew E.; Jackson, G. Tanner; Snow, Erica L.; McNamara, Danielle S.
2016-01-01
Game-based practice within Intelligent Tutoring Systems (ITSs) can be optimized by examining how properties of practice activities influence learning outcomes and motivation. In the current study, we manipulated when game-based practice was available to students. All students (n = 149) first completed lesson videos in iSTART-2, an ITS focusing on…
Optimal design of geodesically stiffened composite cylindrical shells
NASA Technical Reports Server (NTRS)
Gendron, G.; Guerdal, Z.
1992-01-01
An optimization system based on the finite element code Computational Structural Mechanics (CSM) Testbed and the optimization program Automated Design Synthesis (ADS) is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thicknesses, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stringers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model, which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.
Slaughter, Susan E; Bampton, Erin; Erin, Daniel F; Ickert, Carla; Jones, C Allyson; Estabrooks, Carole A
2017-06-01
Innovative approaches are required to facilitate the adoption and sustainability of evidence-based care practices. We propose a novel implementation strategy, a peer reminder role, which involves offering a brief formal reminder to peers during structured unit meetings. This study aims to (a) identify healthcare aide (HCA) perceptions of a peer reminder role for HCAs, and (b) develop a conceptual framework for the role based on these perceptions. In 2013, a qualitative focus group study was conducted in five purposively sampled residential care facilities in western Canada. A convenience sample of 24 HCAs agreed to participate in five focus groups. Concurrent with data collection, two researchers coded the transcripts and identified themes by consensus. They jointly determined when saturation was achieved and took steps to optimize the trustworthiness of the findings. Five HCAs from the original focus groups commented on the resulting conceptual framework. HCAs were cautious about accepting a role that might alienate them from their co-workers. They emphasized feeling comfortable with the peer reminder role and identified circumstances that would optimize their comfort including: effective implementation strategies, perceptions of the role, role credibility and a supportive context. These intersecting themes formed a peer reminder conceptual framework. We identified HCAs' perspectives of a new peer reminder role designed specifically for them. Based on their perceptions, a conceptual framework was developed to guide the implementation of a peer reminder role for HCAs. This role may be a strategic implementation strategy to optimize the sustainability of new practices in residential care settings, and the related framework could offer guidance on how to implement this role. © 2017 Sigma Theta Tau International.
NASA Astrophysics Data System (ADS)
Sun, Dongye; Lin, Xinyou; Qin, Datong; Deng, Tao
2012-11-01
Energy management (EM) is a core technology for hybrid electric buses (HEBs), essential for optimizing fuel economy, and is unique to each configuration. Existing control-strategy algorithms seldom consider battery power management together with internal combustion engine power management. In this paper, a power-balancing instantaneous optimization (PBIO) energy management control strategy is proposed for a novel series-parallel hybrid electric bus. Based on the characteristics of the novel series-parallel architecture, the switching boundary condition between series and parallel modes as well as the control rules of the power-balancing strategy are developed. An equivalent fuel model of the battery is implemented and combined with the engine fuel model to form the objective function, which minimizes fuel consumption at each sampled time and coordinates the power distribution between the engine and battery in real time. To validate the proposed strategy, a forward model is built in Matlab/Simulink for simulation, and a dSPACE AutoBox is applied as the controller for hardware-in-the-loop bench testing. Both simulation and hardware-in-the-loop results demonstrate that the proposed strategy not only sustains the battery SOC within its operational range and keeps the engine operating near its peak-efficiency region, but also improves the fuel economy of the series-parallel hybrid electric bus (SPHEB) by up to 30.73% compared with the prototype bus; relative to a rule-based strategy, the PBIO strategy reduces fuel consumption by up to 12.38%. The proposed research ensures that the PBIO algorithm is applicable in real time, improves the efficiency of the SPHEB system, and suits complicated configurations well.
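A single step of an instantaneous power-balancing optimization can be sketched as follows. The fuel map, equivalence factor, power limits, and SOC rules are invented stand-ins for the paper's models; the point is the per-sample minimization of engine fuel plus battery "equivalent fuel":

```python
# Minimal sketch of one instantaneous optimization step: split the demanded
# power between engine and battery so that engine fuel plus the equivalent
# fuel value of battery energy is minimized, respecting SOC limits.
ENGINE_MAX = 120.0   # kW
BATT_MAX = 40.0      # kW discharge limit (negative values mean charging)
S_EQ = 0.08          # g of fuel per kJ of battery energy (invented factor)

def engine_fuel_rate(p_eng):
    """Convex fuel-map stand-in (g/s): idle burn plus load-dependent terms."""
    return 0.5 + 0.05 * p_eng + 0.0008 * p_eng ** 2

def split(p_demand, soc, soc_lo=0.4, soc_hi=0.8, step=1.0):
    """Grid-search the battery power that minimizes instantaneous
    equivalent fuel consumption at this sample."""
    best = None
    p_batt = -BATT_MAX
    while p_batt <= BATT_MAX:
        p_eng = p_demand - p_batt
        if 0.0 <= p_eng <= ENGINE_MAX:
            # Forbid discharging at the low SOC bound and charging at the high one.
            if not (soc <= soc_lo and p_batt > 0) and not (soc >= soc_hi and p_batt < 0):
                cost = engine_fuel_rate(p_eng) + S_EQ * p_batt
                if best is None or cost < best[0]:
                    best = (cost, p_eng, p_batt)
        p_batt += step
    return best

cost, p_eng, p_batt = split(p_demand=80.0, soc=0.6)
print(round(p_eng, 1), round(p_batt, 1), round(cost, 3))
```

With this convex fuel map the optimizer discharges the battery up to its limit at high demand; repeating the step at every sample, with the SOC guard bands steering the equivalent-cost trade, is the essence of charge-sustaining instantaneous optimization.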
Cross layer optimization for cloud-based radio over optical fiber networks
NASA Astrophysics Data System (ADS)
Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong; Yang, Hui; Meng, Luoming
2016-07-01
To adapt to 5G communication, the cloud radio access network is a paradigm introduced by operators that aggregates all base stations' computational resources into a cloud BBU pool. Interactions between RRHs and BBUs, and resource scheduling among BBUs in the cloud, have become more frequent and complex with growing system scale and user requirements. This increases the networking demand among RRHs and BBUs and calls for elastic optical fiber switching and networking. In such a network, multiple strata of resources (radio, optical and BBU processing) are interwoven with each other. In this paper, we propose a novel multiple stratum optimization (MSO) architecture for cloud-based radio over optical fiber networks (C-RoFN) with software-defined networking. Additionally, a global evaluation strategy (GES) is introduced in the proposed architecture. MSO can enhance responsiveness to end-to-end user demands and globally optimize radio frequency, optical spectrum and BBU processing resources to maximize radio coverage. The feasibility and efficiency of the proposed architecture with the GES strategy are experimentally verified on an OpenFlow-enabled testbed in terms of resource occupation and path-provisioning latency.
Design optimization of PVDF-based piezoelectric energy harvesters.
Song, Jundong; Zhao, Guanxing; Li, Bo; Wang, Jin
2017-09-01
Energy harvesting is a promising technology that powers electronic devices by scavenging ambient energy. Piezoelectric energy harvesters have attracted considerable interest for their high conversion efficiency and easy fabrication in miniaturized sensors and transducers. To improve the output capability of energy harvesters, the properties of the piezoelectric materials are an influential factor, but the potential of the material is unlikely to be fully exploited without an optimized configuration. In this paper, an optimization strategy for PVDF-based cantilever-type energy harvesters is proposed to achieve the highest output power density for a given frequency and acceleration of the vibration source. It is shown that the maximum output power density depends only on the maximum allowable stress of the beam and the working frequency of the device, and these two factors can be tuned by adjusting the geometry of the piezoelectric layers. The strategy is validated by coupled finite-element-circuit simulation and a practical device. The fabricated device, within a volume of 13.1 mm³, shows an output power of 112.8 μW, which is comparable to that of the best-performing piezoceramic-based energy harvesters of similar volume reported so far.
Review of design optimization methods for turbomachinery aerodynamics
NASA Astrophysics Data System (ADS)
Li, Zhihui; Zheng, Xinqian
2017-08-01
In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and "greener", but also need to be developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization to solve real-world aerodynamic problems, especially for compressors and turbines. The review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods, (2) stochastic optimization combined with blade parameterization methods and design-of-experiment methods, (3) gradient-based optimization methods for compressors and turbines, and (4) data mining techniques for Pareto fronts. We also present our own insights regarding current research trends and the future of turbomachinery design optimization.
An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning
Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco
2015-01-01
Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130
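The core idea of combining a cost-to-come tree from the start with a cost-to-go tree from the goal can be illustrated with a toy grid example. This sketches bidirectional search in general, not BFMT* itself (no sampling, no lazy dynamic programming); the grid size and wall are invented:

```python
from collections import deque

# Toy two-tree illustration: one tree of cost-to-come values rooted at the
# start, one of cost-to-go values rooted at the goal, joined at the best
# meeting node. Grid, obstacle and endpoints are invented for the example.
SIZE = 20
WALLS = {(10, y) for y in range(0, 15)}   # a wall with a gap at the top

def neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        q = (x + dx, y + dy)
        if 0 <= q[0] < SIZE and 0 <= q[1] < SIZE and q not in WALLS:
            yield q

def bfs_costs(source):
    """Breadth-first distances from source (unit edge costs)."""
    dist, dq = {source: 0}, deque([source])
    while dq:
        p = dq.popleft()
        for q in neighbors(p):
            if q not in dist:
                dist[q] = dist[p] + 1
                dq.append(q)
    return dist

cost_to_come = bfs_costs((0, 0))    # tree rooted at the start
cost_to_go = bfs_costs((19, 0))     # tree rooted at the goal
meet = min(cost_to_come.keys() & cost_to_go.keys(),
           key=lambda p: cost_to_come[p] + cost_to_go[p])
path_len = cost_to_come[meet] + cost_to_go[meet]
print(meet, path_len)
```

The optimal path must detour through the gap at the top of the wall, and the meeting node minimizing the sum of the two cost fields lies on that detour; BFMT* applies the same two-tree principle over randomly drawn samples with early termination when the trees connect.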
Mandic, Radivoj; Knezevic, Olivera M; Mirkov, Dragan M; Jaric, Slobodan
2016-09-01
The aim of the present study was to explore the control strategy of maximum countermovement jumps regarding the preferred countermovement depth preceding the concentric jump phase. Elite basketball players and physically active non-athletes were tested on jumps performed with and without an arm swing, while the countermovement depth was varied within an interval of almost 30 cm around its preferred value. The results consistently revealed a 5.1-11.2 cm smaller preferred countermovement depth than the optimum one, and this difference was more prominent in non-athletes. In addition, although these differences had a marked effect on the recorded force and power output, they reduced jump height by only 0.1-1.2 cm. Therefore, the studied control strategy may not be based solely on the countermovement depth that maximizes jump height. In addition, the comparison of the two groups does not support the concept of a dual-task strategy based on a trade-off between maximizing jump height and minimizing jumping quickness, which should be more prominent in athletes who routinely need to jump quickly. Further research could explore whether the observed phenomenon is based on other optimization principles, such as the minimization of effort and energy expenditure. Nevertheless, future routine testing procedures should take into account that the control strategy of maximum countermovement jumps is not fully based on maximizing jump height, while the countermovement depth markedly confounds the relationship between the jump height and the assessed force and power output of leg muscles.
Self-deployable mobile sensor networks for on-demand surveillance
NASA Astrophysics Data System (ADS)
Miao, Lidan; Qi, Hairong; Wang, Feiyi
2005-05-01
This paper studies two interconnected problems in mobile sensor network deployment, the optimal placement of heterogeneous mobile sensor platforms for cost-efficient and reliable coverage purposes, and the self-organizable deployment. We first develop an optimal placement algorithm based on a "mosaicked technology" such that different types of mobile sensors form a mosaicked pattern uniquely determined by the popularity of different types of sensor nodes. The initial state is assumed to be random. In order to converge to the optimal state, we investigate the swarm intelligence (SI)-based sensor movement strategy, through which the randomly deployed sensors can self-organize themselves to reach the optimal placement state. The proposed algorithm is compared with the random movement and the centralized method using performance metrics such as network coverage, convergence time, and energy consumption. Simulation results are presented to demonstrate the effectiveness of the mosaic placement and the SI-based movement.
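The self-organizing movement idea can be sketched with a toy repulsion rule, in which randomly dropped nodes move away from their nearest neighbor to spread out over the area. This is a stand-in illustration, not the paper's swarm-intelligence strategy or its mosaicked placement pattern; all parameters are invented:

```python
import random

random.seed(3)

def spread(nodes, steps=200, push=0.01):
    """Each node repeatedly takes a small step away from its nearest
    neighbor, clamped to the unit square (toy self-deployment rule)."""
    nodes = [list(p) for p in nodes]
    for _ in range(steps):
        for i, p in enumerate(nodes):
            j = min((k for k in range(len(nodes)) if k != i),
                    key=lambda k: (nodes[k][0] - p[0]) ** 2 + (nodes[k][1] - p[1]) ** 2)
            dx, dy = p[0] - nodes[j][0], p[1] - nodes[j][1]
            norm = (dx * dx + dy * dy) ** 0.5 or 1.0
            p[0] = min(1.0, max(0.0, p[0] + push * dx / norm))
            p[1] = min(1.0, max(0.0, p[1] + push * dy / norm))
    return [tuple(p) for p in nodes]

def min_pairwise_dist(nodes):
    return min(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
               for i, a in enumerate(nodes) for b in nodes[i + 1:])

initial = [(random.random(), random.random()) for _ in range(10)]
final = spread(initial)
print(round(min_pairwise_dist(initial), 3), "->", round(min_pairwise_dist(final), 3))
```

The minimum pairwise distance grows as the randomly deployed nodes disperse, which is the qualitative behavior the SI-based movement strategy aims for; the paper's version additionally steers different sensor types toward their target mosaicked pattern.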
Optimal house elevation for reducing flood-related losses
NASA Astrophysics Data System (ADS)
Xian, Siyuan; Lin, Ning; Kunreuther, Howard
2017-05-01
FEMA recommends that houses in coastal flood zones be elevated to at least 1 foot above the base flood elevation (BFE). However, this guideline is not specific and ignores characteristics of houses that affect their vulnerability. An economically optimal elevation level (OEL) is proposed that minimizes the combined cost of elevation and cumulative insurance premiums over the lifespan of the house. As an illustration, analysis is performed for various coastal houses in Ortley Beach, NJ. Compared with the strategy of raising houses to 1 foot above BFE, the strategy of raising houses to their OELs is much more economical for the homeowners. Elevating to the OELs also significantly reduces government spending on subsidizing low-income homeowners through, for example, a voucher program, to mitigate flood risk. These results suggest that policy makers should consider vulnerability factors in developing risk-reduction strategies. FEMA may recommend OELs to homeowners based on their flood hazards as well as house characteristics or at least providing more information and tools to homeowners to assist them in making more economical decisions. The OEL strategy can also be coupled with a voucher program to make the program more cost-effective.
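The optimal-elevation idea can be sketched as choosing the elevation (in feet above BFE) that minimizes up-front elevation cost plus discounted insurance premiums over the house lifetime. All cost numbers, the premium decay, and the discount rate below are invented for illustration:

```python
# Sketch of the OEL computation: trade one-time elevation cost against the
# present value of flood-insurance premiums, which fall as the house rises.
LIFETIME, DISCOUNT = 30, 0.03   # years, annual discount rate (invented)

def elevation_cost(e):
    """One-time cost: fixed lift cost plus a per-foot charge (invented)."""
    return 0 if e == 0 else 20_000 + 5_000 * e

def annual_premium(e):
    """Premium falls geometrically with elevation (invented schedule)."""
    return 6_000 * 0.6 ** e

def lifetime_cost(e):
    pv = sum(annual_premium(e) / (1 + DISCOUNT) ** t
             for t in range(1, LIFETIME + 1))
    return elevation_cost(e) + pv

oel = min(range(0, 9), key=lifetime_cost)
print(oel, round(lifetime_cost(oel)))
```

With these numbers the minimum lies well above FEMA's 1-foot guideline: below the OEL the premium savings outweigh construction cost, above it each extra foot costs more than the shrinking premium it removes. In the paper this trade-off is evaluated per house, since vulnerability characteristics shift the premium curve.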
NASA Astrophysics Data System (ADS)
Peng, Qi; Guan, Weipeng; Wu, Yuxiang; Cai, Ye; Xie, Canyu; Wang, Pengfei
2018-01-01
This paper proposes a three-dimensional (3-D) high-precision indoor positioning strategy using Tabu search based on visible light communication. Tabu search is a powerful global optimization algorithm, and 3-D indoor positioning can be transformed into an optimal-solution problem. Therefore, in 3-D indoor positioning, the optimal receiver coordinate can be obtained by the Tabu search algorithm. To the best of our knowledge, this is the first time the Tabu search algorithm has been applied to visible light positioning. Each light-emitting diode (LED) in the system broadcasts a unique identity (ID) and transmits the ID information. When the receiver detects optical signals with ID information from different LEDs, using the global optimization of the Tabu search algorithm, 3-D high-precision indoor positioning can be realized once the fitness value meets certain conditions. Simulation results show that the average positioning error is 0.79 cm, and the maximum error is 5.88 cm. An extended trajectory-tracking experiment also shows that 95.05% of positioning errors are below 1.428 cm. It can be concluded from these data that 3-D indoor positioning based on the Tabu search algorithm meets the requirements of centimeter-level indoor positioning. The algorithm is effective and practical and is superior to other existing methods for visible light indoor positioning.
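A toy version of the Tabu-search positioning step might look like this: LED anchors at known coordinates, simulated range measurements, and a neighborhood search over candidate 3-D positions with a tabu list of recently visited points. The LED layout, noise-free measurements, fitness function, and move set are assumptions, not the paper's system:

```python
import random

random.seed(0)

# Invented setup: four ceiling LEDs and a true receiver position.
LEDS = [(0, 0, 3), (4, 0, 3), (0, 4, 3), (4, 4, 3)]
TRUE = (1.2, 2.7, 0.8)

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

MEASURED = [dist(TRUE, led) for led in LEDS]   # noise-free for simplicity

def fitness(p):
    """Sum of squared range residuals; zero at the true position."""
    return sum((dist(p, led) - m) ** 2 for led, m in zip(LEDS, MEASURED))

def tabu_search(start, step=0.05, iters=2000, tabu_len=50):
    current = best = start
    tabu = []
    moves = [(dx, dy, dz) for dx in (-step, 0, step)
             for dy in (-step, 0, step) for dz in (-step, 0, step)
             if (dx, dy, dz) != (0, 0, 0)]
    for _ in range(iters):
        neighbors = [tuple(round(c + d, 4) for c, d in zip(current, m))
                     for m in moves]
        # Forbid recently visited points; fall back if everything is tabu.
        allowed = [p for p in neighbors if p not in tabu] or neighbors
        current = min(allowed, key=fitness)
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if fitness(current) < fitness(best):
            best = current
    return best

est = tabu_search((3.0, 1.0, 2.0))
print(est, round(fitness(est), 6))
```

The tabu list lets the search accept non-improving moves without cycling, which is what distinguishes it from plain hill climbing; the estimate converges toward the true coordinates even from a poor start.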
NASA Astrophysics Data System (ADS)
Schöttl, Peter; Bern, Gregor; van Rooyen, De Wet; Heimsath, Anna; Fluri, Thomas; Nitz, Peter
2017-06-01
A transient simulation methodology for cavity receivers for Solar Tower Central Receiver Systems with molten salt as the heat transfer fluid is described. Absorbed solar radiation is modeled with ray tracing and a sky discretization approach to reduce computational effort. Solar radiation redistribution in the cavity as well as thermal radiation exchange are modeled based on view factors, which are also calculated with ray tracing. An analytical approach is used to represent convective heat transfer in the cavity. Heat transfer fluid flow is simulated with a discrete tube model, where the boundary conditions at the outer tube surface mainly depend on inputs from the previously mentioned modeling aspects. A specific focus is put on the integration of the optical and thermo-hydraulic models. Furthermore, aiming point and control strategies are described, which are used during the transient performance assessment. Finally, the developed simulation methodology is used for the optimization of the aperture opening size of a PS10-like reference scenario with a cavity receiver and heliostat field. The objective function is based on the cumulative gain of one representative day. Results include the optimized aperture opening size, transient receiver characteristics, and the benefits of the implemented aiming point strategy compared to a single-aiming-point approach. Future work will include annual simulations, cost assessment, and optimization of a larger range of receiver parameters.
NASA Astrophysics Data System (ADS)
Hopmann, Ch.; Windeck, C.; Kurth, K.; Behr, M.; Siegbert, R.; Elgeti, S.
2014-05-01
The rheological design of profile extrusion dies is one of the most challenging tasks in die design. As no analytical solution is available, the quality of a new design and its development time depend strongly on the empirical knowledge of the die manufacturer. Usually, prior to starting production, several time-consuming, iterative running-in trials need to be performed to check the profile accuracy, and the die geometry is reworked accordingly. An alternative is numerical flow simulation. Such simulations make it possible to calculate the melt flow through a die so that the quality of the flow distribution can be analyzed. The objective of a current research project is to improve the automated optimization of profile extrusion dies. Special emphasis is put on choosing a convenient starting geometry and parameterization that allow for the necessary deformations. In this work, three commonly used design features are examined with regard to their influence on the optimization results. Based on the results, a strategy is derived to select the most relevant areas of the flow channels for the optimization. For these characteristic areas, recommendations are given concerning an efficient parameterization setup that still enables adequate deformations of the flow channel geometry. As an example, this approach is applied to an L-shaped profile with different wall thicknesses. The die is optimized automatically, and the simulation results are qualitatively compared with experimental results. Furthermore, the strategy is applied to a complex extrusion die for a floor skirting profile to prove its universal adaptability.
Neurocomputing strategies in decomposition based structural design
NASA Technical Reports Server (NTRS)
Szewczyk, Z.; Hajela, P.
1993-01-01
The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.
Research of converter transformer fault diagnosis based on improved PSO-BP algorithm
NASA Astrophysics Data System (ADS)
Long, Qi; Guo, Shuyong; Li, Qing; Sun, Yong; Li, Yi; Fan, Youping
2017-09-01
The BP (Back Propagation) neural network combined with conventional Particle Swarm Optimization (PSO) tends to converge prematurely toward the global best particle in the early stage, is easily trapped in local optima, and yields low diagnosis accuracy when applied to converter transformer fault diagnosis. To overcome these disadvantages, we propose an improved PSO-BP neural network to raise the accuracy rate. The algorithm improves the inertia weight equation by using an attenuation strategy based on a concave function to avoid premature convergence of the PSO algorithm, and adopts a Time-Varying Acceleration Coefficient (TVAC) strategy to balance local and global search ability. Finally, simulation results show that the proposed approach performs better in optimizing the BP neural network in terms of network output error, global searching performance, and diagnosis accuracy.
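The two PSO modifications the abstract describes, a concave attenuation of the inertia weight and time-varying acceleration coefficients, can be sketched on a toy sphere function. All constants, the schedule shapes, and the test function here are illustrative choices, not the paper's tuned values.

```python
import random

def concave_w(t, T, w_max=0.9, w_min=0.4):
    """Concave inertia-weight decay: stays high early, drops faster late."""
    return w_min + (w_max - w_min) * (1 - (t / T) ** 2)

def tvac(t, T, c_start, c_end):
    """Linear time-varying acceleration coefficient."""
    return c_start + (c_end - c_start) * t / T

def pso(f, dim=2, n=20, T=200, lo=-5.0, hi=5.0):
    random.seed(1)
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for t in range(T):
        w = concave_w(t, T)
        c1 = tvac(t, T, 2.5, 0.5)   # cognitive: large -> small
        c2 = tvac(t, T, 0.5, 2.5)   # social:    small -> large
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - xs[i][d])
                            + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i][:]
    return gbest

sphere = lambda x: sum(c * c for c in x)
best = pso(sphere)
```

Early on, the high inertia weight and large cognitive coefficient keep particles exploring; late in the run the growing social coefficient pulls the swarm toward the global best, which is the local/global balance TVAC is meant to provide.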
Windsor, Liliane Cambraia; Benoit, Ellen; Smith, Douglas; Pinto, Rogério M; Kugler, Kari C
2018-04-27
Rates of alcohol and illicit drug use (AIDU) are consistently similar across racial groups (Windsor and Negi, J Addict Dis 28:258-68, 2009; Keyes et al. Soc Sci Med 124:132-41, 2015). Yet AIDU has significantly greater consequences for residents of distressed communities with concentrations of African Americans (DCAA, i.e., localities with high rates of poverty and crime), who also have considerably less access to effective treatment of substance use disorders (SUD). This project is optimizing Community Wise, an innovative multi-level behavioral-health intervention created in partnership with service providers and residents of distressed communities with histories of SUD and incarceration, to reduce health inequalities related to AIDU. Grounded in critical consciousness theory, community-based participatory research (CBPR) principles, and the multiphase optimization strategy (MOST), this study employs a 2 × 2 × 2 × 2 factorial design to engineer the most efficient, effective, and scalable version of Community Wise that can be delivered for US$250 per person or less. The study is fully powered to detect change in AIDU in a sample of 528 men with histories of SUD and incarceration, residing in Newark, NJ, in the United States. A community collaborative board oversees recruitment using a variety of strategies including indigenous field worker sampling, facility-based sampling, community advertisement through fliers, and street outreach. Participants are randomly assigned to one of 16 conditions that include a combination of the following candidate intervention components: peer or licensed facilitator, group dialogue, personal goal development, and community organizing. All participants receive a core critical-thinking component. Data are collected at baseline plus five post-baseline monthly follow-ups. Once the optimized Community Wise intervention is identified, it will be evaluated against an existing standard of care in a future randomized clinical trial.
This paper describes the protocol of the first ever study using CBPR and MOST to optimize a substance use intervention targeting a marginalized population. Data from this study will culminate in an optimized Community Wise manual; enhanced methodological strategies to develop multi-component scalable interventions using MOST and CBPR; and a better understanding of the application of critical consciousness theory to the field of health inequalities related to AIDU. ClinicalTrials.gov, NCT02951455 . Registered on 1 November 2016.
NASA Astrophysics Data System (ADS)
Longting, M.; Ye, S.; Wu, J.
2014-12-01
Identifying and removing DNAPL sources in an aquifer system is vital to successful remediation and to lowering remediation time and cost. Our work applies an optimal search strategy introduced by Dokou and Pinder [1], with some modifications, to a field site in Nanjing City, China, to define the strength and location of DNAPL sources using the fewest samples. The overall strategy uses Monte Carlo stochastic groundwater flow and transport modeling, incorporates existing sampling data into the search, and determines optimal sampling locations that are selected according to the reduction in overall uncertainty of the field and the proximity to the source locations. After a sample is taken, the plume is updated using a Kalman filter. The updated plume is then compared to the concentration fields that emanate from each individual potential source using a fuzzy set technique. This comparison provides weights that reflect the degree of truth regarding the location of the source. The above steps are repeated until the optimal source characteristics are determined. For our site, some specific modifications have been made as follows. Hydraulic conductivity (K) random fields are generated after fitting the measured K data to a variogram model. The potential sources, which are given initial weights, are targeted based on the field survey, with multiple potential source locations around the workshops and a wastewater basin. Considering the short history (1999-2010) of manufacturing the optical brightener PF at the site, and the existing sampling data, a preliminary source strength is estimated, which will later be optimized by the simplex method or a genetic algorithm. The whole algorithm will then guide optimal sampling and updating as the investigation proceeds, until the weights finally stabilize. Reference [1] Dokou, Zoi, and George F. Pinder. "Optimal search strategy for the definition of a DNAPL source." Journal of Hydrology 376.3 (2009): 542-556.
Acknowledgement: Funding support from the National Natural Science Foundation of China (Nos. 41030746 and 40872155) and the DuPont Company is gratefully acknowledged.
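The Kalman-filter plume update at the heart of the search loop reduces, at a single location, to the familiar scalar update: the model prediction is corrected toward the field measurement, weighted by the relative uncertainties. The numbers below are illustrative, not site data.

```python
def kalman_update(prior_mean, prior_var, measurement, meas_var):
    """Blend a model prediction with a field sample; returns (mean, var)."""
    gain = prior_var / (prior_var + meas_var)          # Kalman gain
    mean = prior_mean + gain * (measurement - prior_mean)
    var = (1.0 - gain) * prior_var                     # uncertainty shrinks
    return mean, var

# model predicts 50 ug/L with high uncertainty; a lab sample says 80 ug/L
mean, var = kalman_update(50.0, 400.0, 80.0, 100.0)
```

Because the prior variance here is four times the measurement variance, the gain is 0.8 and the update lands much closer to the sample (74 ug/L) while cutting the variance to 80, which is exactly the "reduction in overall uncertainty" the sampling locations are chosen to maximize.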
Flight style optimization in ski jumping on normal, large, and ski flying hills.
Jung, Alexander; Staat, Manfred; Müller, Wolfram
2014-02-07
In V-style ski jumping, aerodynamic forces are predominant performance factors, and athletes have to solve difficult optimization problems in fractions of a second in order to obtain their maximum jump length and to keep the flight stable. Here, a comprehensive set of wind tunnel data was used for optimization studies based on Pontryagin's minimum principle, with both the angle of attack α and the body-ski angle β as controls. Various combinations of the constraints αmax and βmin(t) were analyzed in order to compare different optimization strategies. For the computer simulation studies, the Olympic hill profiles in Esto-Sadok, Russia (HS 106 m, HS 140 m), and in Harrachov, Czech Republic, host of the Ski Flying World Championships 2014 (HS 205 m), were used. It is of high importance for ski jumping practice that various aerodynamic strategies, i.e. combinations of α- and β-time courses, can lead to similar jump lengths, which enables athletes to win competitions using individual aerodynamic strategies. Optimization results also show that aerodynamic behavior has to differ at different hill sizes (HS). Optimized time courses of α and β using reduced drag and lift areas, in order to mimic recent equipment regulations, differed only in a negligible way. This indicates that the optimization results presented here are not very sensitive to minor changes of the aerodynamic equipment features when similar jump lengths are obtained by using adequately higher in-run velocities. However, wind tunnel measurements with athletes, including take-off and transition to stabilized flight, flight, and landing behavior, would enable a more detailed understanding of individual flight style optimization. © 2013 Published by Elsevier Ltd.
Li, Mingjie; Zhou, Ping; Wang, Hong; ...
2017-09-19
As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, saving energy, and reducing emissions in its operation processes. In this correspondence, optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at a set-point tracking objective for pulp quality, an economic objective, and a specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times is employed to construct the subprocess model of the state process model for the HC refining system; the Wiener-type model is then obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structure is determined based on the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes the set-point tracking objective of pulp quality and SE consumption is proposed, using the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective of pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods enable the HC refining system to provide better set-point tracking of pulp quality when these predictive controllers are employed. In addition, when the optimal predictive controllers are oriented toward the comprehensive economic objective and the SE consumption objective, they significantly reduce energy consumption.
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of the literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models.
The study also illustrates that this relative improvement decreases with increasing number of sample points and input parameter dimensions. Since the computational time and efforts for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
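The distinction the study examines can be made concrete: both initial designs place exactly one sample per stratum in each dimension and differ only in where the point sits inside its stratum. A minimal sketch (stratum permutations and seed are arbitrary):

```python
import random

def lhs(n, dim=2, midpoint=False, seed=0):
    """Latin hypercube design on [0,1)^dim: one sample per stratum per
    dimension; 'midpoint' uses stratum centres, else a random point."""
    rng = random.Random(seed)
    cols = [rng.sample(range(n), n) for _ in range(dim)]  # stratum permutations
    design = []
    for i in range(n):
        pt = []
        for d in range(dim):
            k = cols[d][i]                     # stratum index in [0, n)
            u = 0.5 if midpoint else rng.random()
            pt.append((k + u) / n)             # point inside stratum k
        design.append(tuple(pt))
    return design

mid = lhs(8, midpoint=True)
rnd = lhs(8, midpoint=False)
```

Either design can then be handed to an OLHS optimizer as the initial design; the study's finding is that seeding with the midpoint variant yields better space-filling after optimization at identical generation cost.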
RTDS implementation of an improved sliding mode based inverter controller for PV system.
Islam, Gazi; Muyeen, S M; Al-Durra, Ahmed; Hasanien, Hany M
2016-05-01
This paper proposes a novel approach for testing the dynamics and control aspects of a large-scale photovoltaic (PV) system in real time, along with resolving design hindrances of controller parameters, using a Real Time Digital Simulator (RTDS). In general, the harmonic profile of a fast controller has a wide distribution due to the large bandwidth of the controller. The major contribution of this paper is that the proposed control strategy gives an improved voltage harmonic profile and distributes it more around the switching frequency, along with a fast transient response; filter design thus becomes easier. The implementation of a control strategy with high bandwidth in the small time steps of an RTDS is not straightforward. This paper presents a practical methodology for implementing such a control scheme in RTDS. As a part of the industrial process, the controller parameters are optimized using the particle swarm optimization (PSO) technique to improve the low voltage ride through (LVRT) performance under network disturbance. The response surface methodology (RSM) is well adapted to build analytical models for the recovery time (Rt), maximum percentage overshoot (MPOS), settling time (Ts), and steady-state error (Ess) of the voltage profile immediately after the inverter under disturbance. A systematic approach to controller parameter optimization is detailed. The transient performance of the PSO-based optimization method applied to the proposed sliding mode controlled PV inverter is compared with the results from a genetic algorithm (GA) based optimization technique. The reported real-time implementation challenges and controller optimization procedure are applicable to other control applications in the field of renewable and distributed generation systems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
IDH mutation assessment of glioma using texture features of multimodal MR images
NASA Astrophysics Data System (ADS)
Zhang, Xi; Tian, Qiang; Wu, Yu-Xia; Xu, Xiao-Pan; Li, Bao-Juan; Liu, Yi-Xiong; Liu, Yang; Lu, Hong-Bing
2017-03-01
Purpose: To 1) find effective texture features from multimodal MRI that can distinguish IDH-mutant from IDH-wild status, and 2) propose a radiomic strategy for preoperatively detecting IDH mutation in patients with glioma. Materials and Methods: 152 patients with glioma were retrospectively included from the Cancer Genome Atlas. The corresponding T1-weighted images before and after contrast, T2-weighted images, and fluid-attenuation inversion recovery images from the Cancer Imaging Archive were analyzed. Specific statistical tests were applied to analyze the different kinds of baseline information of the LrGG patients. Finally, 168 texture features were derived from the multimodal MRI of each patient. The support vector machine-based recursive feature elimination (SVM-RFE) and classification strategy was then adopted to find the optimal feature subset and build the identification models for detecting IDH mutation. Results: Among the 152 patients, 92 and 60 were confirmed to be IDH-wild and IDH-mutant, respectively. Statistical analysis showed that the patients without IDH mutation were significantly older than the patients with IDH mutation (p<0.01), and the distribution of some histological subtypes was significantly different between the IDH-wild and IDH-mutant groups (p<0.01). After SVM-RFE, 15 optimal features were determined for IDH mutation detection. The accuracy, sensitivity, specificity, and AUC after SVM-RFE and parameter optimization were 82.2%, 85.0%, 78.3%, and 0.841, respectively. Conclusion: This study presented a radiomic strategy for noninvasively discriminating IDH mutation in patients with glioma. It effectively incorporated various texture features from multimodal MRI and an SVM-based classification strategy. The results suggest that features selected by SVM-RFE have more potential for identifying IDH mutation. The proposed radiomics strategy could facilitate clinical decision making for patients with glioma.
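The SVM-RFE step can be sketched as repeatedly fitting a linear model and discarding the feature with the smallest weight magnitude. In this sketch a tiny gradient-descent logistic regression stands in for the linear SVM (the elimination criterion is the same), and the five-feature data set is synthetic, with only the first two features carrying signal.

```python
import math
import random

def fit_linear(X, y, epochs=200, lr=0.1):
    """Plain stochastic-gradient logistic regression; returns weights."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wd * xd for wd, xd in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
            g = p - yi
            w = [wd - lr * g * xd for wd, xd in zip(w, xi)]
    return w

def rfe(X, y, keep):
    """Recursive feature elimination down to `keep` features."""
    active = list(range(len(X[0])))
    while len(active) > keep:
        w = fit_linear([[x[j] for j in active] for x in X], y)
        worst = min(range(len(active)), key=lambda j: abs(w[j]))
        active.pop(worst)          # drop the least informative feature
    return active

# synthetic data: only features 0 and 1 determine the label
rng = random.Random(0)
X = [[rng.uniform(-1, 1) for _ in range(5)] for _ in range(200)]
y = [1 if x[0] + x[1] > 0 else 0 for x in X]
selected = rfe(X, y, keep=2)
```

In the study the same loop runs over the 168 texture features with an SVM as the ranker, shrinking the set to the 15 features used for IDH detection.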
SVM-Based Synthetic Fingerprint Discrimination Algorithm and Quantitative Optimization Strategy
Chen, Suhang; Chang, Sheng; Huang, Qijun; He, Jin; Wang, Hao; Huang, Qiangui
2014-01-01
Synthetic fingerprints are a potential threat to automatic fingerprint identification systems (AFISs). In this paper, we propose an algorithm to discriminate synthetic fingerprints from real ones. First, four typical characteristic factors—the ridge distance features, global gray features, frequency features and Harris corner features—are extracted. Then, a support vector machine (SVM) is used to distinguish synthetic fingerprints from real fingerprints. The experiments demonstrate that this method can achieve a recognition accuracy rate of over 98% for two separate synthetic fingerprint databases as well as a mixed database. Furthermore, a performance factor that can evaluate the SVM's accuracy and efficiency is presented, and a quantitative optimization strategy is established for the first time. After optimization for our synthetic fingerprint discrimination task, the polynomial kernel with a training sample proportion of 5% is the optimal choice when the minimum accuracy requirement is 95%. The radial basis function (RBF) kernel with a training sample proportion of 15% is a more suitable choice when the minimum accuracy requirement is 98%. PMID:25347063
Detection of MDR1 mRNA expression with optimized gold nanoparticle beacon
NASA Astrophysics Data System (ADS)
Zhou, Qiumei; Qian, Zhiyu; Gu, Yueqing
2016-03-01
MDR1 (multidrug resistance gene) mRNA expression is a promising biomarker for predicting doxorubicin resistance in the clinic. However, the traditional clinical workflow is complicated and cannot detect mRNA in real time in single living cells. In this study, the expression of MDR1 mRNA was analyzed in tumor cells based on an optimized gold nanoparticle beacon. First, gold nanoparticles (AuNPs) were modified with thiol-PEG, and the MDR1 beacon sequence was screened and optimized using a BLAST bioinformatics strategy. The optimized MDR1 molecular beacons were then characterized by transmission electron microscopy and UV-vis and fluorescence spectroscopies. The cytotoxicity of the MDR1 molecular beacon on L-02, K562, and K562/Adr cells was investigated by MTT assay, suggesting that the MDR1 molecular beacon has low inherent cytotoxicity. Dark-field microscopy was used to investigate the cellular uptake of the hDAuNP beacon assisted by ultrasound. Finally, laser scanning confocal microscope images showed a significant difference in MDR1 mRNA expression between K562 and K562/Adr cells, which was consistent with the results of q-PCR measurement. In summary, the optimized MDR1 molecular beacon designed in this study is a reliable strategy for detecting MDR1 mRNA expression in living tumor cells and a promising strategy for guiding patient treatment and management in individualized medication.
NASA Astrophysics Data System (ADS)
Fan, Xiao-Ning; Zhi, Bo
2017-07-01
Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures suffer from low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered with conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety at about one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective approach for the design of metallic structures of cranes.
Adjacency Matrix-Based Transmit Power Allocation Strategies in Wireless Sensor Networks
Consolini, Luca; Medagliani, Paolo; Ferrari, Gianluigi
2009-01-01
In this paper, we present an innovative transmit power control scheme, based on optimization theory, for wireless sensor networks (WSNs) which use carrier sense multiple access (CSMA) with collision avoidance (CA) as the medium access control (MAC) protocol. In particular, we focus on schemes where several remote nodes send data directly to a common access point (AP). Under the assumption of finite overall network transmit power and low traffic load, we derive the optimal transmit power allocation strategy that minimizes the packet error rate (PER) at the AP. This approach is based on modeling the CSMA/CA MAC protocol through a finite state machine and takes into account the network adjacency matrix, which depends on the transmit power distribution and determines the network connectivity. It is then shown that the transmit power allocation problem reduces to a convex constrained minimization problem. Our results show that, under the assumption of low traffic load, the power allocation strategy which guarantees minimal delay requires the maximization of network connectivity, which can be equivalently interpreted as the maximization of the number of non-zero entries of the adjacency matrix. The obtained theoretical results are confirmed by simulations for unslotted Zigbee WSNs. PMID:22346705
Mixed-Strategy Chance Constrained Optimal Control
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.
2013-01-01
This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.
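Why mixing at most two deterministic actions can beat any single deterministic action is visible in a toy two-action problem: one action is cheap but too risky, the other safe but expensive, and randomizing between them holds the expected violation probability exactly at the threshold while lowering expected cost. The costs, risks, and threshold below are invented for illustration.

```python
# Toy mixed control strategy: choose p = P(aggressive) so that the
# expected constraint-violation probability equals the threshold delta,
# then compare the expected cost against the only feasible deterministic
# choice (the cautious action).

actions = {"aggressive": {"cost": 1.0, "risk": 0.20},
           "cautious":   {"cost": 3.0, "risk": 0.01}}
delta = 0.10   # allowed probability of constraint violation

a, c = actions["aggressive"], actions["cautious"]
# solve p * risk_a + (1 - p) * risk_c == delta for the mixing weight p
p = (delta - c["risk"]) / (a["risk"] - c["risk"])
mixed_risk = p * a["risk"] + (1 - p) * c["risk"]
mixed_cost = p * a["cost"] + (1 - p) * c["cost"]
```

Deterministically, only the cautious action is feasible (risk 0.01 ≤ 0.10) at cost 3.0; the mixture meets the chance constraint with equality at a strictly lower expected cost, mirroring the paper's convexification argument.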
Steyer, Benjamin; Carlson-Stevermer, Jared; Angenent-Mari, Nicolas; Khalil, Andrew; Harkness, Ty; Saha, Krishanu
2016-04-01
Non-viral gene-editing of human cells using the CRISPR-Cas9 system requires optimized delivery of multiple components. Both the Cas9 endonuclease and a single guide RNA, which defines the genomic target, need to be present and co-localized within the nucleus for efficient gene-editing to occur. This work describes a new high-throughput screening platform for the optimization of CRISPR-Cas9 delivery strategies. By exploiting high content image analysis and microcontact printed plates, multi-parametric gene-editing outcome data from hundreds to thousands of isolated cell populations can be screened simultaneously. Employing this platform, we systematically screened four commercially available cationic lipid transfection materials with a range of RNAs encoding the CRISPR-Cas9 system. Cas9 expression and editing of a fluorescent mCherry reporter transgene within human embryonic kidney cells were monitored over several days after transfection. Design of experiments analysis enabled rigorous evaluation of delivery materials and RNA concentration conditions. The results of this analysis indicated that the concentration and identity of the transfection material have a significantly greater effect on gene-editing than the ratio or total amount of RNA. Cell subpopulation analysis on microcontact printed plates further revealed that low cell number and high Cas9 expression, 24 h after CRISPR-Cas9 delivery, were strong predictors of gene-editing outcomes. These results suggest design principles for the development of materials and transfection strategies with lipid-based materials. This platform could be applied to rapidly optimize materials for gene-editing in a variety of cell/tissue types in order to advance genomic medicine, regenerative biology and drug discovery. CRISPR-Cas9 is a new gene-editing technology for "genome surgery" that is anticipated to treat genetic diseases.
This technology uses multiple components of the Cas9 system to cut out disease-causing mutations in the human genome and precisely suture in therapeutic sequences. Biomaterials based delivery strategies could help transition these technologies to the clinic. The design space for materials based delivery strategies is vast and optimization is essential to ensuring the safety and efficacy of these treatments. Therefore, new methods are required to rapidly and systematically screen gene-editing efficacy in human cells. This work utilizes an innovative platform to generate and screen many formulations of synthetic biomaterials and components of the CRISPR-Cas9 system in parallel. On this platform, we watch genome surgery in action using high content image analysis. These capabilities enabled us to identify formulation parameters for Cas9-material complexes that can optimize gene-editing in a specific human cell type. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Power plant maintenance scheduling using ant colony optimization: an improved formulation
NASA Astrophysics Data System (ADS)
Foong, Wai Kuan; Maier, Holger; Simpson, Angus
2008-04-01
It is common practice in the hydropower industry to either shorten the maintenance duration or to postpone maintenance tasks in a hydropower system when there is expected unserved energy based on current water storage levels and forecast storage inflows. It is therefore essential that a maintenance scheduling optimizer can incorporate the options of shortening the maintenance duration and/or deferring maintenance tasks in the search for practical maintenance schedules. In this article, an improved ant colony optimization-power plant maintenance scheduling optimization (ACO-PPMSO) formulation that considers such options in the optimization process is introduced. As a result, both the optimum commencement time and the optimum outage duration are determined for each of the maintenance tasks that need to be scheduled. In addition, a local search strategy is presented in this article to boost the robustness of the algorithm. When tested on a five-station hydropower system problem, the improved formulation is shown to be capable of allowing shortening of maintenance duration in the event of expected demand shortfalls. In addition, the new local search strategy is also shown to have significantly improved the optimization ability of the ACO-PPMSO algorithm.
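The core ACO mechanics described above (ants sampling commencement times from pheromone trails, with the best schedule reinforcing its choices) can be sketched on a toy instance. The tasks, durations, and peak-outage objective below are illustrative assumptions, not the paper's five-station hydropower system or the full ACO-PPMSO formulation:

```python
import random

random.seed(42)

# Toy instance (hypothetical numbers): each maintenance task takes one
# unit of capacity offline for its duration within a fixed horizon.
durations = [2, 3, 2, 4, 1]   # outage duration per task (weeks)
horizon = 10                  # scheduling horizon (weeks)
n_tasks = len(durations)

def peak_outage(starts):
    """Objective: peak number of simultaneous outages over the horizon."""
    load = [0] * horizon
    for s, d in zip(starts, durations):
        for w in range(s, min(s + d, horizon)):
            load[w] += 1
    return max(load)

tau = [[1.0] * horizon for _ in range(n_tasks)]  # pheromone per (task, start)

best, best_cost = None, float("inf")
for _ in range(200):                       # iterations
    for _ in range(10):                    # ants per iteration
        starts = []
        for t in range(n_tasks):
            opts = list(range(horizon - durations[t] + 1))  # feasible starts
            weights = [tau[t][s] for s in opts]
            starts.append(random.choices(opts, weights)[0])
        cost = peak_outage(starts)
        if cost < best_cost:
            best, best_cost = starts, cost
    # evaporate, then let the best-so-far schedule deposit pheromone
    tau = [[0.9 * v for v in row] for row in tau]
    for t, s in enumerate(best):
        tau[t][s] += 1.0 / best_cost

print(best, best_cost)
```

The same loop structure extends naturally to the paper's richer decisions (shortened durations, deferral) by widening each task's option set.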
Yang, Jin; Liu, Fagui; Cao, Jianneng; Wang, Liangming
2016-07-14
Mobile sinks can achieve load balancing and energy-consumption balancing across wireless sensor networks (WSNs). However, the frequent change of the paths between source nodes and the sinks caused by sink mobility introduces significant overhead in terms of energy and packet delays. To enhance the network performance of WSNs with mobile sinks (MWSNs), we present an efficient routing strategy, which is formulated as an optimization problem and employs the particle swarm optimization (PSO) algorithm to build the optimal routing paths. However, conventional PSO is insufficient for solving discrete routing optimization problems. Therefore, a novel greedy discrete particle swarm optimization with memory (GMDPSO) is put forward to address this problem. In the GMDPSO, the particle position and velocity of traditional PSO are redefined for the discrete MWSN scenario, and the particle updating rule is reformulated based on the subnetwork topology of MWSNs. Besides, by improving greedy forwarding routing, a greedy search strategy is designed to drive particles to find a better position quickly. Furthermore, search history is memorized to accelerate convergence. Simulation results demonstrate that our new protocol significantly improves robustness and adapts to rapid topological changes with multiple mobile sinks, while efficiently reducing the communication overhead and the energy consumption.
NASA Astrophysics Data System (ADS)
Vo, Thanh Tu; Chen, Xiaopeng; Shen, Weixiang; Kapoor, Ajay
2015-01-01
In this paper, a new charging strategy for lithium-polymer batteries (LiPBs) is proposed based on the integration of the Taguchi method (TM) and state of charge (SOC) estimation. The TM is applied to search for an optimal charging current pattern. An adaptive switching gain sliding mode observer (ASGSMO) is adopted to estimate the SOC, which controls and terminates the charging process. The experimental results demonstrate that the proposed charging strategy can successfully charge the same types of LiPBs with different capacities and cycle lives. The proposed charging strategy also provides a much shorter charging time, narrower temperature variation and slightly higher energy efficiency than the equivalent constant current constant voltage charging method.
Analysis of Product Distribution Strategy in Digital Publishing Industry Based on Game-Theory
NASA Astrophysics Data System (ADS)
Xu, Li-ping; Chen, Haiyan
2017-04-01
Digital publishing output has increased significantly year by year. It has become the most vigorous point of economic growth and is increasingly important to the press and publication industry. Its distribution channels have diversified, in contrast to the traditional industry. A deep study of the digital publishing industry was conducted to clarify the constitution of the industry chain and to establish a model of it. The cooperative and competitive relationships between different distribution channels are analyzed using game theory. By comparing the distribution quantity and the market size between the static distribution strategy and the dynamic distribution strategy, we obtain theoretical evidence on how to choose the distribution strategy that yields the optimal benefit.
Optimal vaccination strategies and rational behaviour in seasonal epidemics.
Doutor, Paulo; Rodrigues, Paula; Soares, Maria do Céu; Chalub, Fabio A C C
2016-12-01
We consider a SIRS model with time dependent transmission rate. We assume time dependent vaccination which confers the same immunity as natural infection. We study two types of vaccination strategies: (i) optimal vaccination, in the sense that it minimizes the effort of vaccination in the set of vaccination strategies for which, for any sufficiently small perturbation of the disease free state, the number of infectious individuals is monotonically decreasing; (ii) Nash-equilibria strategies where all individuals simultaneously minimize the joint risk of vaccination versus the risk of the disease. The former case corresponds to an optimal solution for mandatory vaccinations, while the second corresponds to the equilibrium to be expected if vaccination is fully voluntary. We are able to show the existence of both optimal and Nash strategies in a general setting. In general, these strategies will not be functions but Radon measures. For specific forms of the transmission rate, we provide explicit formulas for the optimal and the Nash vaccination strategies.
Optimal Keno Strategies and the Central Limit Theorem
ERIC Educational Resources Information Center
Johnson, Roger W.
2006-01-01
For the casino game Keno we determine optimal playing strategies. To decide such optimal strategies, both exact (hypergeometric) and approximate probability calculations are used. The approximate calculations are obtained via the Central Limit Theorem and simulation, and an important lesson about the application of the Central Limit Theorem is…
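The exact calculations mentioned above follow the hypergeometric distribution: with 20 numbers drawn from a pool of 80, the chance of catching m of k picked spots is C(k,m)·C(80-k,20-m)/C(80,20). A minimal sketch follows; the 8-spot pay table is invented for illustration and is not any real casino's:

```python
from math import comb

def keno_pmf(spots, draws=20, pool=80):
    """Exact (hypergeometric) probability of catching m of `spots` picks
    when `draws` numbers are drawn without replacement from `pool`."""
    return [comb(spots, m) * comb(pool - spots, draws - m) / comb(pool, draws)
            for m in range(spots + 1)]

# Example: an 8-spot ticket with a hypothetical pay table (payoff per catch).
pmf = keno_pmf(8)
paytable = {5: 9, 6: 90, 7: 1500, 8: 25000}   # illustrative values only
expected_payoff = sum(paytable.get(m, 0) * p for m, p in enumerate(pmf))
print(f"P(catch 5 of 8) = {pmf[5]:.6f}, expected payoff = {expected_payoff:.3f}")
```

Comparing such expected payoffs across spot counts is exactly how an optimal ticket choice can be decided.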
Albini, Fabio; Xiaoqiu Liu; Torlasco, Camilla; Soranna, Davide; Faini, Andrea; Ciminaghi, Renata; Celsi, Ada; Benedetti, Matteo; Zambon, Antonella; di Rienzo, Marco; Parati, Gianfranco
2016-08-01
Uncontrolled hypertension is largely attributed to unsatisfactory doctors' engagement in its optimal management and to poor patients' compliance with therapeutic interventions. ICT and mobile Health solutions might improve these conditions, being widely available and providing highly effective communication strategies. The aim was to evaluate whether ICT and mobile Health tools are able to improve hypertension control by improving doctors' engagement and by increasing patients' education and involvement, and their compliance with lifestyle modification and prescribed drug therapy. In a pilot study, we included 690 treated hypertensive patients with uncontrolled office blood pressure (BP), consecutively recruited by 9 general practitioners over 3 months. Patients were alternately assigned to routine management based on repeated office visits or to an integrated ICT-based Patients Optimal Strategy for Treatment (POST) system including Home BP monitoring teletransmission, a dedicated web-based platform for patients' management by physicians (Misuriamo platform), and a smartphone mobile application (Eurohypertension APP, E-APP), over a follow-up of 6 months. BP values and demographic and clinical data were collected at baseline and at all follow-up visits (at least two). BP control and cardiovascular risk level were evaluated at the beginning and at the end of the study. Overall, 89 patients did not complete the follow-up, so data analysis was carried out in 601 of them (303 patients in the POST group and 298 in the control group). Office BP control (<149/90 mmHg) was 40.0% in the control group and 72.3% in the POST group at the 6-month follow-up. At the same time, Home BP control (<135/85 mmHg, average of 6 days) in the POST group was 87.5%.
This pilot study suggests that ICT-based tools might be effective in improving hypertension management, promoting positive patient involvement with better adherence to treatment prescriptions and providing physicians with dynamic control of patients' home BP measurements, resulting in less clinical inertia.
Emergence of an optimal search strategy from a simple random walk
Sakiyama, Tomoko; Gunji, Yukio-Pegio
2013-01-01
In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths. PMID:23804445
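As a baseline for the comparison made above, the simple random walk the authors start from (a fixed unit step with a uniformly random direction each time step) exhibits normal diffusion, with mean squared displacement growing linearly in time. A minimal sketch, with trial counts chosen arbitrarily for illustration:

```python
import math
import random

random.seed(0)

def random_walk_msd(steps, trials=200):
    """Mean squared displacement of a 2D random walk with unit step length
    and a uniformly random direction at each step (the baseline algorithm
    the authors modify)."""
    total = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(steps):
            theta = random.uniform(0, 2 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total += x * x + y * y
    return total / trials

# For normal diffusion, MSD grows linearly: MSD(t) is about t for unit steps.
msd100 = random_walk_msd(100)
print(msd100)
```

Super-diffusion, by contrast, would show MSD growing faster than linearly in t, which is what the authors' directionally adaptive rule achieves despite uniform step lengths.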
Schrader, Andrew J; Tribble, David R; Riddle, Mark S
2017-12-01
To inform policy and decision makers, a cost-effectiveness model was developed to predict the cost-effectiveness of implementing two hypothetical management strategies separately and concurrently on the mitigation of deployment-associated travelers' diarrhea (TD) burden. The first management strategy aimed to increase the likelihood that a deployed service member with TD will seek medical care earlier in the disease course compared with current patterns; the second strategy aimed to optimize provider treatment practices through the implementation of a Department of Defense Clinical Practice Guideline. Outcome measures selected to compare management strategies were duty days lost averted (DDL-averted) and a cost-effectiveness ratio (CER) of cost per DDL-averted (USD/DDL-averted). Increasing health care-seeking, so that care is sought more often and earlier in the disease course, as a stand-alone management strategy produced more DDL (a worse outcome) than the base case (up to 8,898 DDL-gained per year) at an increased cost to the Department of Defense (CER $193). Increasing provider use of an optimal evidence-based treatment algorithm through Clinical Practice Guidelines prevented 5,299 DDL per year with overall cost savings (CER -$74). A combination of both strategies produced the greatest gain in DDL-averted (6,887) with a modest cost increase (CER $118). The application of this model demonstrates that changes in TD management during deployment can be implemented to reduce DDL with likely favorable impacts on mission capability and individual health readiness. The hypothetical combination strategy evaluated prevents the most DDL compared with current practice and is associated with a modest cost increase.
2011-01-01
Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task. PMID:21867520
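The recasting step can be illustrated on the simplest saturable rate law: a Michaelis-Menten term is not a power law in the substrate S, but introducing an auxiliary variable for the denominator turns it into an exact product of power laws, which is the GMA form. The parameters below are arbitrary; this is a sketch of the recasting idea, not the authors' algorithm:

```python
# Recasting a saturable (Michaelis-Menten) rate into GMA power-law form.
# v = Vmax*S/(Km+S) is not a power law in S, but with the auxiliary
# variable W = Km + S it becomes the exact GMA product v = Vmax * S**1 * W**-1.
Vmax, Km = 2.0, 0.5          # illustrative kinetic parameters

def v_mm(S):
    """Original saturable rate law."""
    return Vmax * S / (Km + S)

def v_gma(S):
    """Recast rate: a pure product of power laws in (S, W)."""
    W = Km + S               # auxiliary variable; in an ODE model, dW/dt = dS/dt
    return Vmax * S**1 * W**-1

for S in (0.1, 1.0, 10.0):
    assert abs(v_mm(S) - v_gma(S)) < 1e-12
print("recast GMA rate matches the original kinetics")
```

The recast model has more variables but a canonical structure, which is what the specialized global optimization algorithms exploit.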
Wasko, Michael J; Pellegrene, Kendy A; Madura, Jeffry D; Surratt, Christopher K
2015-01-01
Hundreds of millions of U.S. dollars are invested in the research and development of a single drug. Lead compound development is an area ripe for new design strategies. Therapeutic lead candidates have been traditionally found using high-throughput in vitro pharmacological screening, a costly method for assaying thousands of compounds. This approach has recently been augmented by virtual screening (VS), which employs computer models of the target protein to narrow the search for possible leads. A variant of VS is fragment-based drug design (FBDD), an emerging in silico lead discovery method that introduces low-molecular weight fragments, rather than intact compounds, into the binding pocket of the receptor model. These fragments serve as starting points for "growing" the lead candidate. Current efforts in virtual FBDD within central nervous system (CNS) targets are reviewed, as is a recent rule-based optimization strategy in which new molecules are generated within a 3D receptor-binding pocket using the fragment as a scaffold. This process not only places special emphasis on creating synthesizable molecules but also exposes computational questions worth addressing. Fragment-based methods provide a viable, relatively low-cost alternative for therapeutic lead discovery and optimization that can be applied to CNS targets to augment current design strategies.
Symbiosis-Based Alternative Learning Multi-Swarm Particle Swarm Optimization.
Niu, Ben; Huang, Huali; Tan, Lijing; Duan, Qiqi
2017-01-01
Inspired by the mutual cooperation of symbiosis in natural ecosystems, this paper proposes a new variant of PSO, named Symbiosis-based Alternative Learning Multi-swarm Particle Swarm Optimization (SALMPSO). A learning probability is used to select one exemplar from among the center positions, the local best position, and the historical best position, incorporating the experience of internal and external swarms, in order to maintain population diversity. Two different levels of social interaction, within and between multiple swarms, are proposed. In the search process, particles not only exchange social experience with others from their own sub-swarms, but are also influenced by the experience of particles from fellow sub-swarms. According to the different exemplars and learning strategies, this model is instantiated as four variants of SALMPSO, and experiments on a set of 15 test functions in 10, 30 and 50 dimensions are conducted to compare them with several existing PSO variants. Experimental results demonstrate that the alternative learning strategy in each SALMPSO version exhibits better performance in terms of convergence speed and optimal values on most multimodal functions in our simulation.
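The alternative learning idea, a probability of borrowing another swarm's best position as the exemplar, can be sketched with a stripped-down two-swarm PSO on the sphere function. The parameter values and the simple exemplar rule below are illustrative simplifications, not the paper's four SALMPSO variants:

```python
import random

random.seed(1)

def sphere(x):
    """Toy objective to minimize."""
    return sum(xi * xi for xi in x)

dim, n_swarms, size = 5, 2, 10   # n_swarms is fixed at 2 for the 1-k indexing
w, c = 0.7, 1.5                  # inertia and acceleration coefficients
P_learn = 0.1                    # chance of learning from the other swarm (assumed)

swarms = [[{"x": [random.uniform(-5, 5) for _ in range(dim)],
            "v": [0.0] * dim} for _ in range(size)] for _ in range(n_swarms)]
for sw in swarms:
    for p in sw:
        p["pbest"] = p["x"][:]
gbest = [min(sw, key=lambda p: sphere(p["pbest"]))["pbest"][:] for sw in swarms]

for _ in range(300):
    for k, sw in enumerate(swarms):
        for p in sw:
            # with probability P_learn, use the fellow swarm's best as exemplar
            ex = gbest[1 - k] if random.random() < P_learn else gbest[k]
            for d in range(dim):
                p["v"][d] = (w * p["v"][d]
                             + c * random.random() * (p["pbest"][d] - p["x"][d])
                             + c * random.random() * (ex[d] - p["x"][d]))
                p["x"][d] += p["v"][d]
            if sphere(p["x"]) < sphere(p["pbest"]):
                p["pbest"] = p["x"][:]
                if sphere(p["pbest"]) < sphere(gbest[k]):
                    gbest[k] = p["pbest"][:]

best_val = min(sphere(g) for g in gbest)
print(best_val)  # typically near zero
```

The cross-swarm exemplar injects outside experience occasionally, which is the diversity-preserving mechanism the abstract describes.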
Enhanced and Conventional Project-Based Learning in an Engineering Design Module
ERIC Educational Resources Information Center
Chua, K. J.; Yang, W. M.; Leo, H. L.
2014-01-01
Engineering education focuses chiefly on students' ability to solve problems. While most engineering students are proficient in solving paper questions, they may not be proficient at providing optimal solutions to pragmatic project-based problems that require systematic learning strategy, innovation, problem-solving, and execution. The…
Practical synchronization on complex dynamical networks via optimal pinning control
NASA Astrophysics Data System (ADS)
Li, Kezan; Sun, Weigang; Small, Michael; Fu, Xinchu
2015-07-01
We consider practical synchronization on complex dynamical networks under linear feedback control designed by optimal control theory. The control goal is to minimize the global synchronization error and control strength over a given finite time interval, as well as the synchronization error at the terminal time. By utilizing Pontryagin's minimum principle, and based on a general complex dynamical network, we obtain an optimality system that achieves the control goal. The result is verified by performing numerical simulations on star networks, Watts-Strogatz networks, and Barabási-Albert networks. Moreover, by combining optimal control and traditional pinning control, we propose an optimal pinning control strategy that depends on the network's topological structure. The obtained results show that optimal pinning control is very effective for synchronization control in real applications.
Microgrids and distributed generation systems: Control, operation, coordination and planning
NASA Astrophysics Data System (ADS)
Che, Liang
Distributed Energy Resources (DERs), which include distributed generations (DGs), distributed energy storage systems, and adjustable loads, are key components in microgrid operations. A microgrid is a small electric power system integrated with on-site DERs to serve all or some portion of the local load and connected to the utility grid through the point of common coupling (PCC). Microgrids can operate in both grid-connected mode and island mode. The structure and components of hierarchical control for a microgrid at Illinois Institute of Technology (IIT) are discussed and analyzed. Case studies address the reliable and economic operation of the IIT microgrid. The simulation results of IIT microgrid operation demonstrate that hierarchical control and the coordination strategy for distributed energy resources (DERs) are an effective way of optimizing the economic operation and the reliability of microgrids. The benefits and challenges of DC microgrids are addressed with a DC model of the IIT microgrid. We present a hierarchical control strategy, including primary, secondary, and tertiary controls, for the economic operation and resilience of a DC microgrid. The simulation results verify that the proposed coordinated strategy is an effective way of ensuring the resilient response of DC microgrids to emergencies and optimizing their economic operation at steady state. The concept and prototype of a community microgrid interconnecting multiple microgrids in a community are proposed, and two studies are conducted. For coordination, a novel three-level hierarchical coordination strategy to coordinate the optimal power exchanges among neighboring microgrids is proposed. For planning, a multi-microgrid interconnection planning framework using a probabilistic minimal cut-set (MCS) based iterative methodology is proposed for enhancing the economic, resilience, and reliability signals in multi-microgrid operations.
The implementation of high-reliability microgrids requires proper protection schemes that effectively function in both grid-connected and island modes. This chapter presents a communication-assisted four-level hierarchical protection strategy for high-reliability microgrids, and tests the proposed protection strategy based on a loop structured microgrid. The simulation results demonstrate the proposed strategy to be an effective and efficient option for microgrid protection. Additionally, microgrid topology ought to be optimally planned. To address the microgrid topology planning, a graph-partitioning and integer-programming integrated methodology is proposed. This work is not included in the dissertation. Interested readers can refer to our related publication.
Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam
2015-01-01
The current high-profile debate regarding data storage and its growth has made data storage a strategic task in the world of networking. It depends mainly on sensor nodes acting as producers, on base stations, and on consumers (users and sensor nodes) to retrieve and use the data. The main concern addressed here is finding an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm-intelligence-based optimization approaches to find optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm is used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, with the clustering problem solved using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based optimal data storage (ODS) strategy is more effective than earlier approaches. PMID:25734182
An improved NSGA - II algorithm for mixed model assembly line balancing
NASA Astrophysics Data System (ADS)
Wu, Yongming; Xu, Yanxia; Luo, Lifei; Zhang, Han; Zhao, Xudong
2018-05-01
Aiming at the problems of assembly line balancing and path optimization for material vehicles in a mixed model manufacturing system, a multi-objective mixed model assembly line (MMAL) model, based on optimization objectives, influencing factors and constraints, is established. For this situation, an improved NSGA-II algorithm based on an ecological evolution strategy is designed, which adopts an environment self-detecting operator to detect whether the environment changes. Finally, the effectiveness of the proposed model and algorithm is verified by examples from a concrete mixing system.
Optimal control for Malaria disease through vaccination
NASA Astrophysics Data System (ADS)
Munzir, Said; Nasir, Muhammad; Ramli, Marwan
2018-01-01
Malaria is a disease caused by single-celled Plasmodium parasites, for which the Anopheles mosquito serves as the carrier. This study examines the optimal control problem of malaria disease spread based on the SIR-type model of Aron and May (1982) and seeks an optimal solution that minimizes the spread of malaria through preventive vaccination. The aim is to investigate optimal control strategies for preventing the spread of malaria by vaccination. The problem in this research is solved using an analytical approach. The analytical method uses the Pontryagin Minimum Principle with the symbolic help of MATLAB software to obtain the optimal control result and to analyse the spread of malaria under vaccination control.
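For intuition on why a vaccination control suppresses an outbreak in an SIR setting, a minimal Euler-discretized simulation with a constant vaccination rate u can be compared against the uncontrolled case. The model, parameters and constant control below are generic illustrative assumptions, not the Aron-May model or the Pontryagin-derived optimal control of the paper:

```python
def simulate(u, beta=0.5, gamma=0.1, dt=0.1, T=200):
    """Euler-integrate SIR dynamics with vaccination moving susceptibles
    directly to the removed class at rate u; return the epidemic peak."""
    S, I, R = 0.99, 0.01, 0.0
    peak = I
    for _ in range(int(T / dt)):
        dS = -beta * S * I - u * S          # infection + vaccination outflow
        dI = beta * S * I - gamma * I       # new infections - recoveries
        dR = gamma * I + u * S              # recovered + vaccinated
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
        peak = max(peak, I)
    return peak

peak_no_control = simulate(u=0.0)
peak_vaccinated = simulate(u=0.1)
print(peak_no_control, peak_vaccinated)
```

An optimal control would replace the constant u with a time-varying profile chosen via the Pontryagin conditions to trade off vaccination cost against infections.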
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2005-01-01
We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow boundedly rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.
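The k-sat problem used in the experiments can itself be framed as the optimization the agents solve: each Boolean variable is a decision, and the objective is to drive the number of unsatisfied clauses to zero. A brute-force sketch on a tiny invented instance (this shows only the objective, not the probability-collectives algorithm itself):

```python
from itertools import product

# Tiny 3-SAT instance (hypothetical): each clause is a list of literals
# given as (variable index, negated?) pairs. Minimizing the count of
# unsatisfied clauses to zero solves the constraint satisfaction problem.
clauses = [[(0, False), (1, True), (2, False)],
           [(0, True), (2, True), (3, False)],
           [(1, False), (2, True), (3, True)],
           [(0, False), (1, False), (3, True)]]

def unsatisfied(assignment):
    """Objective: number of clauses with no true literal."""
    return sum(not any(assignment[v] != neg for v, neg in clause)
               for clause in clauses)

best = min(product([False, True], repeat=4), key=unsatisfied)
print(best, unsatisfied(best))  # a satisfying assignment has cost 0
```

In the paper's framework, each agent instead maintains a probability distribution over its variable's values, and the joint Lagrangian minimization anneals those distributions toward such a zero-cost pure strategy.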
GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.
Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd
2018-01-01
In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
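The evolution strategy machinery, Gaussian mutation plus selection with step-size adaptation, can be sketched on a toy 2D superposition problem: recovering the rotation that aligns one point cloud with another. This is a simple (1+1)-ES with a 1/5th-success-style step-size rule, far simpler than the paper's GPU-parallel large-population strategy, and the point clouds are invented:

```python
import math
import random

random.seed(3)

# Reference cloud and a copy rotated by an "unknown" angle: an illustrative
# 2D stand-in for the paper's labeled 3D point cloud superpositioning.
cloud = [(1.0, 0.0), (0.0, 2.0), (-1.5, -0.5), (2.0, 1.0)]
true_angle = 0.8

def rotate(points, a):
    return [(x * math.cos(a) - y * math.sin(a),
             x * math.sin(a) + y * math.cos(a)) for x, y in points]

target = rotate(cloud, true_angle)

def rmsd(a):
    """Root-mean-square deviation between the rotated cloud and the target."""
    moved = rotate(cloud, a)
    return math.sqrt(sum((px - qx) ** 2 + (py - qy) ** 2
                         for (px, py), (qx, qy) in zip(moved, target)) / len(cloud))

# (1+1)-evolution strategy: mutate, keep the child only if it improves,
# and adapt the mutation step size based on success/failure.
angle, sigma = 0.0, 0.5
for _ in range(400):
    child = angle + random.gauss(0, sigma)
    if rmsd(child) < rmsd(angle):
        angle, sigma = child, sigma * 1.1   # success: widen the search
    else:
        sigma *= 0.98                       # failure: narrow the search

print(round(angle % (2 * math.pi), 3))  # close to the true angle 0.8
```

The paper's large populations serve the same purpose at scale: many parallel candidates make it less likely that the search stalls in a local optimum of the multimodal superposition objective.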
Design of optimal groundwater remediation systems under flexible environmental-standard constraints.
Fan, Xing; He, Li; Lu, Hong-Wei; Li, Jing
2015-01-01
In developing optimal groundwater remediation strategies, limited effort has been devoted to handling uncertainty in environmental quality standards. When such uncertainty is not considered, either over-optimistic or over-pessimistic optimization strategies may be developed, probably leading to the formulation of rigid remediation strategies. This study advances a mathematical programming modeling approach for optimizing groundwater remediation design. This approach not only prevents the formulation of over-optimistic and over-pessimistic optimization strategies but also provides a satisfaction level that indicates the degree to which the environmental quality standard is satisfied. The approach may therefore be expected to be significantly more acceptable to decision makers than approaches that do not consider standard uncertainty. The proposed approach is applied to a petroleum-contaminated site in western Canada. Results from the case study show that (1) the peak benzene concentrations can always satisfy the environmental standard under the optimal strategy, (2) the pumping rates of all wells decrease under a relaxed standard or long-term remediation approach, (3) the pumping rates are less affected by environmental quality constraints under short-term remediation, and (4) increasingly flexible environmental standards have a reduced effect on the optimal remediation strategy.
NASA Astrophysics Data System (ADS)
Liang, Juhua; Tang, Sanyi; Cheke, Robert A.
2016-07-01
Pest resistance to pesticides is usually managed by switching between different types of pesticides. The optimal switching time, which depends on the dynamics of the pest population and on the evolution of the pesticide resistance, is critical. Here we address how the dynamic complexity of the pest population, the development of resistance and the spraying frequency of pulsed chemical control affect optimal switching strategies given different control aims. To do this, we developed novel discrete pest population growth models with both impulsive chemical control and the evolution of pesticide resistance. Strong and weak threshold conditions which guarantee the extinction of the pest population, based on the threshold values of the analytical formula for the optimal switching time, were derived. Further, we addressed switching strategies in the light of chosen economic injury levels. Moreover, the effects of the complex dynamical behaviour of the pest population on the pesticide switching times were also studied. The pesticide application period, the evolution of pesticide resistance and the dynamic complexity of the pest population may result in complex outbreak patterns, with consequent effects on the pesticide switching strategies.
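The interplay the authors analyze, impulsive chemical control, efficacy decaying as resistance evolves, and threshold-based pesticide switching, can be caricatured in a few lines. The Ricker growth law, decay factor, and switching threshold below are illustrative assumptions, not the paper's models or derived thresholds:

```python
import math

# Illustrative parameters (assumed): intrinsic growth, initial kill fraction,
# per-application resistance decay of efficacy, and the switching threshold.
r, k0, decay, switch_at = 1.5, 0.8, 0.85, 0.4

x, kill, switches = 0.5, k0, 0
history = []
for t in range(60):
    x = x * math.exp(r * (1 - x))      # Ricker growth between pulses
    x *= (1 - kill)                    # impulsive chemical control
    kill *= decay                      # resistance evolves: efficacy decays
    if kill < switch_at:               # switch to a fresh pesticide
        kill, switches = k0, switches + 1
    history.append(x)

print(switches, max(history))
```

Here a switch is triggered every fifth application; in the paper the switching time instead follows from analytical threshold conditions tied to the pest dynamics and the chosen control aim.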
NASA Astrophysics Data System (ADS)
Huo, Xianxu; Li, Guodong; Jiang, Ling; Wang, Xudong
2017-08-01
With the development of the electricity market, distributed generation (DG) technology and related policies, regional energy suppliers are encouraged to build DG. Against this background, the concept of the active distribution network (ADN) has been put forward. In this paper, a bi-level model of intermittent DG considering the benefit of regional energy suppliers is proposed. The objective of the upper level is the maximization of the benefit of regional energy suppliers. On this basis, the lower level is optimized for each scenario. The uncertainties of DG output and user load, as well as four active management measures, which include demand-side management, curtailing the output power of DG, regulating reactive power compensation capacity and regulating the on-load tap changer, are considered. The harmony search algorithm and particle swarm optimization are combined as a hybrid strategy to solve the model. The model and strategy are tested on the IEEE 33-node system, and the results of the case study indicate that they successfully increase the capacity of DG and the benefit of regional energy suppliers.
NASA Astrophysics Data System (ADS)
Enzenhöfer, R.; Geiges, A.; Nowak, W.
2011-12-01
Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Considering the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and make rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate the uncertainty of contaminant location from the actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte-Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation. Only in post-processing do we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is optimal positioning of sampling locations and optimal choice of data types to best reduce the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative safety margins.
In order to allow an optimal choice in sampling strategies, we compare the trade-off in monitoring versus the delineation costs by accounting for ill-delineated fractions of protection zones. Within an illustrative simplified 2D synthetic test case, we demonstrate our concept, involving synthetic transmissivity and head measurements for conditioning. We demonstrate the worth of optimally collected data in the context of protection zone delineation by assessing the reduced areal demand of delineated area at user-specified risk acceptance level. Results indicate that, thanks to optimally collected data, risk-aware delineation can be made at low to moderate additional costs compared to conventional delineation strategies.
Gang, G J; Siewerdsen, J H; Stayman, J W
2016-02-01
This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction was also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information of the imaging task to optimize imaging performance in terms of detectability index (d'). This framework leverages a theoretical model based on the implicit function theorem and Fourier approximations to predict local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolution strategy (CMA-ES) algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.
Pek, Han Bin; Klement, Maximilian; Ang, Kok Siong; Chung, Bevan Kai-Sheng; Ow, Dave Siak-Wei; Lee, Dong-Yup
2015-01-01
Various isoforms of invertases from prokaryotes, fungi, and higher plants have been expressed in Escherichia coli, and codon optimization is a widely adopted strategy for improving heterologous enzyme expression. Successful synthetic gene design for recombinant protein expression can be achieved by matching the translational elongation rate to the heterologous host organism via codon optimization. Amongst the various design parameters considered for gene synthesis, codon context bias has been relatively overlooked compared to individual codon usage, which is commonly adopted in most codon optimization tools. In addition, matching the rates of transcription and translation based on secondary structure may lead to enhanced protein folding. In this study, we evaluated codon context fitness as a design criterion for improving the expression of a thermostable invertase from Thermotoga maritima in Escherichia coli and explored the relevance of secondary structure regions for folding and expression. We designed three coding sequences by using (1) a commercial vendor's optimized gene algorithm, (2) codon context for the whole gene, and (3) codon context based on the secondary structure regions. The codon-optimized sequences were then transformed and expressed in E. coli. From the resultant enzyme activities and protein yield data, the codon-context design gave the highest activity compared to the wild-type control and the other criteria, while the secondary-structure-based strategy was comparable to the control. Codon context bias was thus shown to be a relevant parameter for enhancing enzyme production in Escherichia coli by codon optimization, and synthetic genes for heterologous host organisms can be effectively designed using this criterion. Copyright © 2015 Elsevier Inc. All rights reserved.
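As a toy illustration of codon-context optimization, the sketch below exhaustively picks synonymous codons that maximize a codon-pair score. The synonym table and pair scores are invented for the example and are not E. coli data:

```python
from itertools import product

SYNONYMS = {                       # hypothetical synonymous-codon table
    "F": ["TTT", "TTC"],
    "L": ["CTG", "TTA"],
}
PAIR_SCORE = {                     # hypothetical codon-pair preferences
    ("TTC", "CTG"): 1.0, ("TTT", "CTG"): 0.4,
    ("TTC", "TTA"): 0.2, ("TTT", "TTA"): 0.1,
    ("CTG", "TTC"): 0.9, ("CTG", "TTT"): 0.3,
    ("TTA", "TTC"): 0.2, ("TTA", "TTT"): 0.1,
}

def best_coding_sequence(protein):
    """Exhaustively choose the synonymous codons whose summed
    codon-pair score is highest (feasible only for short peptides;
    real tools use dynamic programming or heuristics)."""
    choices = [SYNONYMS[aa] for aa in protein]
    def score(codons):
        return sum(PAIR_SCORE.get(p, 0.0) for p in zip(codons, codons[1:]))
    return max(product(*choices), key=score)
```

The key point is that the objective scores adjacent codon *pairs* rather than individual codon frequencies, which is what distinguishes codon-context bias from conventional codon-usage optimization.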
Albietz, Julie M; Lenton, Lee M
2004-01-01
To identify evidence-based, best practice strategies for managing the ocular surface and tear film before, during, and after laser in situ keratomileusis (LASIK). After a comprehensive review of relevant published literature, evidence-based recommendations for best practice management strategies are presented. Symptoms of ocular irritation and signs of dysfunction of the integrated lacrimal gland/ocular surface functional gland unit are common before and after LASIK. The status of the ocular surface and tear film before LASIK can impact surgical outcomes in terms of potential complications during and after surgery, refractive outcome, optical quality, patient satisfaction, and the severity and duration of dry eye after LASIK. Before LASIK, the health of the ocular surface should be optimized and patients selected appropriately. Dry eye before surgery and female gender are risk factors for developing chronic dry eye after LASIK. Management of the ocular surface during LASIK can minimize ocular surface damage and the risk of adverse outcomes. Long-term management of the tear film and ocular surface after LASIK can reduce the severity and duration of dry eye symptoms and signs. Strategies to manage the integrated ocular surface/lacrimal gland functional unit before, during, and after LASIK can optimize outcomes. As problems with the ocular surface and tear film are relatively common, attention should focus on the use and improvement of evidence-based management strategies.
Free terminal time optimal control problem of an HIV model based on a conjugate gradient method.
Jang, Taesoo; Kwon, Hee-Dae; Lee, Jeehyun
2011-10-01
The minimum duration of treatment periods and the optimal multidrug therapy for human immunodeficiency virus (HIV) type 1 infection are considered. We formulate an optimal tracking problem, attempting to drive the states of the model to a "healthy" steady state in which the viral load is low and the immune response is strong. We study an optimal time frame as well as HIV therapeutic strategies by analyzing the free terminal time optimal tracking control problem. The minimum duration of treatment periods and the optimal multidrug therapy are found by solving the corresponding optimality systems with the additional transversality condition for the terminal time. We demonstrate by numerical simulations that the optimal dynamic multidrug therapy can lead to the long-term control of HIV by the strong immune response after discontinuation of therapy.
NASA Astrophysics Data System (ADS)
Setyaningsih, S.
2018-03-01
Lesson Study for Learning Community is a system for building the lecturer profession through collaborative and continuous study of learning, based on the principles of openness, collegiality, and mutual learning, with the aim of forming a professional learning community. To achieve this, a strategy and a learning method with appropriate techniques are needed. This paper describes how the quality of learning in the field of science can be improved by implementing suitable strategies and methods, namely by applying Lesson Study for Learning Community optimally. Initially this research focused on the study of instructional techniques. The learning models used are Contextual Teaching and Learning (CTL) and Problem Based Learning (PBL). The results showed a significant increase in competence, attitudes, and psychomotor skills in the four study programs that served as models. It can therefore be concluded that implementing learning strategies within Lesson Study for Learning Community is needed to improve the competence, attitudes, and psychomotor skills of science students.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gildersleeve, C.W.
An interdisciplinary analysis of the post-Cold War world to determine the optimal strategy to attain the national interests of the United States, and the requisite logistic structure to support that strategy. The optimal solution is found to be a strategy based on multinational defense centered on a permanent force of United Nations garrison port complexes. This multilateral force would be augmented by as small a national defense force as necessary to ensure national security. The thesis endeavors to reconnect the cultural and philosophical past of the United States with its immediate future. National interests are identified through examination of American Pragmatism and the philosophies of John Locke and Jean-Jacques Rousseau. To determine the current status of common defense, an analysis of current data, based upon the Foreign Military Sales system, is accomplished. Future threats to the United States are examined with special emphasis on nuclear terrorism. The ability of Islamic nations in North Africa and the Middle East to produce significant quantities of uranium is demonstrated. The grave political as well as ongoing environmental consequences of this recent capability are discussed in detail.
The study on the control strategy of micro grid considering the economy of energy storage operation
NASA Astrophysics Data System (ADS)
Ma, Zhiwei; Liu, Yiqun; Wang, Xin; Li, Bei; Zeng, Ming
2017-08-01
To optimize the operation of the microgrid, guarantee the balance of electricity supply and demand, and promote the utilization of renewable energy, the control strategy of the microgrid energy storage system is studied. Firstly, a mixed integer linear programming model is established based on receding horizon control. Secondly, a modified cuckoo search algorithm is proposed to solve the model. Finally, a case study is carried out to examine the signal characteristics of the microgrid and batteries under the optimal control strategy, and the convergence of the modified cuckoo search algorithm is compared with other algorithms to verify the validity of the proposed model and method. The results show that different microgrid operating targets affect the control strategy of the energy storage system, which in turn affects the signal characteristics of the microgrid. Meanwhile, the convergence speed, computing time, and economy of the modified cuckoo search algorithm are improved compared with the traditional cuckoo search algorithm and the differential evolution algorithm.
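A bare-bones continuous cuckoo search (not the paper's modified variant) can be sketched as follows; the heavy-tailed step rule, the sphere objective, and all constants are illustrative:

```python
import random

def cuckoo_search(f, dim=2, n_nests=15, iters=300, pa=0.25, seed=7):
    """Minimal cuckoo search: heavy-tailed random steps propose new
    solutions; a fraction pa of the worst nests may be abandoned."""
    rng = random.Random(seed)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        i = rng.randrange(n_nests)             # cuckoo from a random nest
        new = [xi + 0.3 * rng.gauss(0, 1) /
               max(abs(rng.gauss(0, 1)), 1e-9) ** 0.5 for xi in nests[i]]
        j = rng.randrange(n_nests)
        if f(new) < fit[j]:                    # replace a random worse nest
            nests[j], fit[j] = new, f(new)
        worst = sorted(range(n_nests), key=fit.__getitem__, reverse=True)
        for j in worst[: int(pa * n_nests)]:   # abandon some of the worst
            if rng.random() < pa:
                nests[j] = [rng.uniform(-5, 5) for _ in range(dim)]
                fit[j] = f(nests[j])
    best = min(range(n_nests), key=fit.__getitem__)
    return nests[best], fit[best]

sphere = lambda x: sum(v * v for v in x)       # stand-in objective
sol, val = cuckoo_search(sphere)
```

Because abandonment only touches the worst nests and replacement is greedy, the best solution found never degrades, which is the property the abstract's convergence comparison relies on.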
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
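The information-theoretic idea behind such test sequencing can be illustrated with a greedy single-fault step: pick the test with the largest expected entropy reduction over the surviving fault candidates. The diagnostic dictionary below is a made-up toy system, not one of the paper's real-world examples:

```python
import math

# D[t][s] = 1 if test t fails when fault source s is present
# (hypothetical diagnostic dictionary for a 3-test, 4-fault toy system)
D = {
    "t1": {"s1": 1, "s2": 0, "s3": 0, "s4": 0},
    "t2": {"s1": 1, "s2": 1, "s3": 0, "s4": 0},
    "t3": {"s1": 1, "s2": 1, "s3": 1, "s4": 0},
}

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def next_test(candidates, tests):
    """Greedy step: choose the test whose pass/fail split of the
    remaining fault candidates maximizes the expected entropy
    reduction (faults assumed equiprobable)."""
    def gain(t):
        fail = [s for s in candidates if D[t][s]]
        ok = [s for s in candidates if not D[t][s]]
        h_before = entropy([1 / len(candidates)] * len(candidates))
        h_after = sum(len(g) / len(candidates) *
                      entropy([1 / len(g)] * len(g))
                      for g in (fail, ok) if g)
        return h_before - h_after
    return max(tests, key=gain)
```

Here `t2` splits the four candidates evenly and is chosen first. The multiple-fault problem replaces single candidates with candidate *sets*, which is why its complexity becomes super-exponential and Lagrangian relaxation is needed.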
Redundancy allocation problem for k-out-of-n systems with a choice of redundancy strategies
NASA Astrophysics Data System (ADS)
Aghaei, Mahsa; Zeinal Hamadani, Ali; Abouei Ardakan, Mostafa
2017-03-01
To increase the reliability of a system, using redundant components is a common approach, known as the redundancy allocation problem (RAP). Some RAP studies have focused on k-out-of-n systems. However, all of these studies assumed a predetermined active or standby strategy for each subsystem. In this paper, for the first time, we propose a k-out-of-n system with a choice of redundancy strategies. Therefore, a k-out-of-n series-parallel system is considered in which the redundancy strategy can be chosen for each subsystem. In other words, in the proposed model the redundancy strategy is an additional decision variable, and an exact method based on integer programming is used to obtain the optimal solution of the problem. As the optimization of RAP belongs to the NP-hard class of problems, a modified version of the genetic algorithm (GA) is also developed. The exact method and the proposed GA are applied to a well-known test problem, and the results demonstrate the efficiency of the new approach compared with previous studies.
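For reference, the reliability of a k-out-of-n:G subsystem with independent, identical, active-redundant components is a binomial tail sum. This standard building block (not the paper's full RAP formulation) can be written as:

```python
from math import comb

def k_out_of_n_active(k, n, p):
    """Reliability of a k-out-of-n:G subsystem: at least k of n i.i.d.
    active components, each surviving with probability p, must work."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))
```

For example, a 2-out-of-3 subsystem with p = 0.9 has reliability 0.972. A standby strategy replaces this expression with one that depends on the component lifetime distribution and switch reliability, which is why treating the strategy itself as a decision variable changes the optimization problem.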
An optical fusion gate for W-states
NASA Astrophysics Data System (ADS)
Özdemir, Ş. K.; Matsunaga, E.; Tashima, T.; Yamamoto, T.; Koashi, M.; Imoto, N.
2011-10-01
We introduce a simple optical gate to fuse arbitrary-size polarization-entangled W-states to prepare larger W-states. The gate requires a polarizing beam splitter (PBS), a half-wave plate (HWP) and two photon detectors. We study, numerically and analytically, the necessary resource consumption for preparing larger W-states by fusing smaller ones with the proposed fusion gate. We show analytically that the resource requirement scales at most sub-exponentially with the increasing size of the state to be prepared. We numerically determine the resource cost for fusion without recycling, where W-states of arbitrary size can be optimally prepared. Moreover, we introduce another strategy that is based on recycling and outperforms the optimal strategy for the non-recycling case.
NASA Astrophysics Data System (ADS)
Yin, Chuancun; Wang, Chunwei
2009-11-01
The optimal dividend problem proposed in de Finetti [1] is to find the dividend-payment strategy that maximizes the expected discounted value of dividends which are paid to the shareholders until the company is ruined. Avram et al. [9] studied the case when the risk process is modelled by a general spectrally negative Lévy process and Loeffen [10] gave sufficient conditions under which the optimal strategy is of the barrier type. Recently Kyprianou et al. [11] strengthened the result of Loeffen [10] which established a larger class of Lévy processes for which the barrier strategy is optimal among all admissible ones. In this paper we use an analytical argument to re-investigate the optimality of barrier dividend strategies considered in the three recent papers.
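The barrier strategy itself is easy to state: pay out all premium income whenever the surplus sits at a barrier b. A Monte-Carlo sketch of its expected discounted value under a compound Poisson (Cramér-Lundberg) surplus process, with purely illustrative parameters and a finite simulation horizon, is:

```python
import math
import random

def discounted_dividends(barrier, premium=1.5, lam=1.0, mean_claim=1.0,
                         x0=5.0, delta=0.05, horizon=200.0,
                         n_paths=2000, seed=11):
    """Average discounted dividends of a barrier strategy: premiums
    received while the surplus is at the barrier are paid out; payments
    stop at ruin (or at the simulation horizon)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, t, pv = x0, 0.0, 0.0
        while t < horizon:
            w = rng.expovariate(lam)                   # time to next claim
            reach = max(0.0, (barrier - x) / premium)  # time to hit barrier
            if w > reach:
                pay = min(w, horizon - t) - reach      # dividend-paying time
                if pay > 0:
                    s = t + reach
                    pv += premium * (math.exp(-delta * s) -
                                     math.exp(-delta * (s + pay))) / delta
                x = barrier
            else:
                x += premium * w
            t += w
            if t >= horizon:
                break
            x -= rng.expovariate(1.0 / mean_claim)     # claim size
            if x < 0:
                break                                  # ruin ends dividends
        total += pv
    return total / n_paths

value = discounted_dividends(barrier=5.0)
```

Sweeping `barrier` over a grid and comparing the resulting values is a crude numerical counterpart to the analytical optimality arguments discussed above.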
Long, Yi; Du, Zhi-jiang; Wang, Wei-dong; Dong, Wei
2016-01-01
A lower limb assistive exoskeleton is designed to help operators walk or carry payloads. The exoskeleton is required to shadow human motion intent accurately and compliantly to prevent incoordination. If the user's intention is estimated accurately, a precise position control strategy will improve collaboration between the user and the exoskeleton. In this paper, a hybrid position control scheme, combining sliding mode control (SMC) with a cerebellar model articulation controller (CMAC) neural network, is proposed to control the exoskeleton to react appropriately to human motion intent. A genetic algorithm (GA) is utilized to determine the optimal sliding surface and the sliding control law to improve performance of SMC. The proposed control strategy (SMC_GA_CMAC) is compared with three other types of approaches, that is, conventional SMC without optimization, optimal SMC with GA (SMC_GA), and SMC with CMAC compensation (SMC_CMAC), all of which are employed to track the desired joint angular position which is deduced from Clinical Gait Analysis (CGA) data. Position tracking performance is investigated with cosimulation using ADAMS and MATLAB/SIMULINK in two cases, of which the first case is without disturbances while the second case is with a bounded disturbance. The cosimulation results show the effectiveness of the proposed control strategy which can be employed in similar exoskeleton systems. PMID:27069353
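The SMC core of such a scheme (without the GA tuning or CMAC compensation) can be sketched for one joint modeled as a double integrator. The gains, desired trajectory, and disturbance below are illustrative assumptions, not the paper's GA-optimized values:

```python
import math

def simulate_smc(lam=8.0, eta=6.0, phi=0.05, dt=0.001, T=3.0):
    """Track xd(t) = 0.5*sin(2t) with sliding surface s = de + lam*e and
    a saturated switching term (boundary layer phi) to limit chattering,
    under a bounded disturbance |d| <= 0.5."""
    x, v, t, err = 0.0, 0.0, 0.0, 0.0
    while t < T:
        xd = 0.5 * math.sin(2.0 * t)          # desired angle (e.g. from CGA)
        vd = 1.0 * math.cos(2.0 * t)
        ad = -2.0 * math.sin(2.0 * t)
        e, de = x - xd, v - vd
        s = de + lam * e                      # sliding surface
        sat = max(-1.0, min(1.0, s / phi))    # boundary-layer saturation
        u = ad - lam * de - eta * sat         # equivalent + switching control
        d = 0.5 * math.sin(10.0 * t)          # bounded disturbance
        v += (u + d) * dt                     # double-integrator dynamics
        x += v * dt
        t += dt
        err = abs(e)
    return err

final_err = simulate_smc()
```

Because the switching gain `eta` exceeds the disturbance bound, the state is driven into the boundary layer and the tracking error stays small; tuning `lam` and `eta` is precisely the role the paper assigns to the GA.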
Congestion Pricing for Aircraft Pushback Slot Allocation.
Liu, Lihua; Zhang, Yaping; Liu, Lan; Xing, Zhiwei
2017-01-01
In order to optimize aircraft pushback management during rush hour, aircraft pushback slot allocation based on congestion pricing is explored while considering monetary compensation based on the quality of the surface operations. First, the concept of the "external cost of surface congestion" is proposed, and a quantitative study on the external cost is performed. Then, an aircraft pushback slot allocation model for minimizing the total surface cost is established. An improved discrete differential evolution algorithm is also designed. Finally, a simulation is performed on Xinzheng International Airport using the proposed model. By comparing the pushback slot control strategy based on congestion pricing with other strategies, the advantages of the proposed model and algorithm are highlighted. In addition to reducing delays and optimizing the delay distribution, the model and algorithm are better suited for use in actual aircraft pushback management during rush hour. Further, it is also observed that they do not result in significant increases in the surface cost. These results confirm the effectiveness and suitability of the proposed model and algorithm.
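A plain continuous DE/rand/1/bin loop conveys the optimizer's structure (the abstract's improved discrete variant would add slot rounding and repair on top). The quadratic stand-in for the surface-cost function is hypothetical:

```python
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.7, CR=0.9,
                           gens=150, seed=3):
    """Classic DE/rand/1/bin with greedy selection and bound clipping."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            if cost(trial) <= fit[i]:          # greedy one-to-one selection
                pop[i], fit[i] = trial, cost(trial)
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

surface_cost = lambda x: sum((v - 2.0) ** 2 for v in x)  # toy surrogate
sol, val = differential_evolution(surface_cost, [(-10.0, 10.0)] * 3)
```

The greedy one-to-one selection guarantees that each individual's cost never worsens, so the best total cost found is monotone non-increasing over generations.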
Coupled attitude-orbit dynamics and control for an electric sail in a heliocentric transfer mission.
Huo, Mingying; Zhao, Jun; Xie, Shaobiao; Qi, Naiming
2015-01-01
The paper discusses the coupled attitude-orbit dynamics and control of an electric-sail-based spacecraft in a heliocentric transfer mission. The mathematical model characterizing the propulsive thrust is first described as a function of the orbital radius and the sail angle. Since the solar wind dynamic pressure acceleration is induced by the sail attitude, the orbital and attitude dynamics of electric sails are coupled, and are discussed together. Based on the coupled equations, the flight control is investigated, wherein the orbital control is studied in an optimal framework via a hybrid optimization method and the attitude controller is designed based on feedback linearization control. To verify the effectiveness of the proposed control strategy, a transfer problem from Earth to Mars is considered. The numerical results show that the proposed strategy can control the coupled system very well, and a small control torque can control both the attitude and orbit. The study in this paper will contribute to the theoretical study and application of electric sails.
Yu, Minghao; Lin, Dun; Feng, Haobin; Zeng, Yinxiang; Tong, Yexiang; Lu, Xihong
2017-05-08
The voltage of carbon-based aqueous supercapacitors is limited by the water splitting reaction occurring at one electrode, generally leaving a promising but unused potential range at the other electrode. Exploiting this unused potential range provides the possibility of further boosting their energy density. An efficient surface charge control strategy was developed to remarkably enhance the energy density of multiscale porous carbon (MSPC) based aqueous symmetric supercapacitors (SSCs) by controllably tuning the operating potential range of the MSPC electrodes. The operating voltage of the SSCs with neutral electrolyte was significantly expanded from 1.4 V to 1.8 V after this simple adjustment, enabling the energy density of the optimized SSCs to reach twice that of the original devices. Such a facile strategy was also demonstrated for aqueous SSCs with acidic and alkaline electrolytes, and is believed to bring insight into the design of aqueous supercapacitors. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Klepiszewski, K; Schmitt, T G
2002-01-01
While conventional rule-based, real-time flow control of sewer systems is in common use, control systems based on fuzzy logic have been used only rarely, but successfully. The intention of this study is to compare a conventional rule-based control of a combined sewer system with a fuzzy logic control by using hydrodynamic simulation. The objective of both control strategies is to reduce the combined sewer overflow volume by optimizing the utilized storage capacities of four combined sewer overflow tanks. The control systems affect the outflow of the four combined sewer overflow tanks depending on the water levels inside the structures. Both systems use an identical rule base. The developed control systems are tested and optimized for a single storm event with heterogeneous hydraulic load conditions and local discharge. Finally, the efficiencies of the two control systems are compared for two further storm events. The results indicate that the conventional rule-based control and the fuzzy control reach the objective of the control strategy equally well. In spite of the higher expense of designing the fuzzy control system, its use provides no advantages in this case.
Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon
2015-06-01
Predicting the future burden of cancer is a key issue for health services planning, where a method for selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and then predictions were made for 37 and 33 single years in these periods, respectively. The performance of three different fixed prediction bases (last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected through the GoF-optimal method reached the highest COV and the lowest DR and (ii) the best alternative strategy to GoF-optimal was the one using a 5-year prediction base. The GoF-optimal approach can be used as a selection criterion in order to find an adequate prediction base. Copyright © 2015 Elsevier Ltd. All rights reserved.
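The selection idea can be sketched by comparing log-linear trend fits over candidate prediction bases and keeping the best-fitting one. The synthetic counts and the mean-squared-residual criterion below are illustrative stand-ins for the paper's GoF measure, not its Poisson-model implementation:

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def choose_base(counts, candidates=(5, 10, 20)):
    """Return the historical window (in years) whose log-linear fit
    has the smallest mean squared residual."""
    best, best_err = None, float("inf")
    for m in candidates:
        ys = [math.log(c) for c in counts[-m:]]
        xs = list(range(m))
        a, b = fit_line(xs, ys)
        err = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / m
        if err < best_err:
            best, best_err = m, err
    return best

# synthetic series: 15 flat years, then a 10%/year trend change
counts = [100.0] * 15 + [100.0 * 1.1 ** i for i in range(1, 6)]
base = choose_base(counts)
```

With a recent trend change, the short 5-year base fits best, which matches the intuition behind preferring a data-driven base over a fixed long one.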
Optimal management strategies in variable environments: Stochastic optimal control methods
Williams, B.K.
1985-01-01
Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. 
Similarities could be seen in the influence of both the discount rate and the climatic patterns on optimal harvest strategies. In general, decreases in either the discount rate or in the frequency of favorable weather patterns led to a more conservative defoliation policy. This did not hold, however, for plants in states of low vigor. Optimal control for shadscale and winterfat tended to stabilize on a policy of heavy defoliation stress, followed by one or more seasons of rest. Big sagebrush required a policy of heavy summer defoliation when sufficient active shoot material is present at the beginning of the growing season. The comparison of fixed and optimal strategies indicated considerable improvement in defoliation yields when optimal strategies are followed. The superior performance was attributable to increased defoliation of plants in states of high vigor. Improvements were found for both discounted and undiscounted yields.
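The decision framework described above (finite-state, finite-action, infinite-horizon Markov decision process) can be illustrated with value iteration on a two-state toy version of the defoliation problem. The transition probabilities and yields are invented for the example, not taken from the production model:

```python
STATES = ["low", "high"]            # plant vigor classes
ACTIONS = ["rest", "heavy"]         # defoliation decisions
# P[s][a]: list of (next_state, probability); R[s][a]: expected yield
P = {
    "low":  {"rest":  [("high", 0.7), ("low", 0.3)],
             "heavy": [("low", 1.0)]},
    "high": {"rest":  [("high", 1.0)],
             "heavy": [("low", 0.6), ("high", 0.4)]},
}
R = {"low": {"rest": 0.0, "heavy": 2.0},
     "high": {"rest": 0.0, "heavy": 10.0}}

def value_iteration(gamma=0.9, tol=1e-9):
    """Standard value iteration for discounted expected yield."""
    V = {s: 0.0 for s in STATES}
    while True:
        V2 = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                     for a in ACTIONS) for s in STATES}
        if max(abs(V2[s] - V[s]) for s in STATES) < tol:
            break
        V = V2
    policy = {s: max(ACTIONS, key=lambda a: R[s][a] +
                     gamma * sum(p * V[t] for t, p in P[s][a]))
              for s in STATES}
    return V, policy

V, policy = value_iteration()
```

With these numbers the optimal policy rests low-vigor plants and defoliates high-vigor ones heavily, mirroring the qualitative pattern reported for shadscale and winterfat.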
Modeling human decision making behavior in supervisory control
NASA Technical Reports Server (NTRS)
Tulga, M. K.; Sheridan, T. B.
1977-01-01
An optimal decision control model was developed, based primarily on a dynamic programming algorithm that looks at all the available task possibilities, charts an optimal trajectory, commits itself to the first step (i.e., follows the optimal trajectory during the next time period), and then iterates the calculation. A Bayesian estimator was included which estimates the tasks that might occur in the immediate future and provides this information to the dynamic programming routine. Preliminary trials comparing the human subject's performance to that of the optimal model show a great similarity, but indicate that the human skips certain movements that require a quick change in strategy.
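The commit-first-step logic described above (plan over a horizon, execute only the first action, then re-plan) is the receding-horizon pattern. A minimal deterministic sketch, with a brute-force search standing in for the paper's dynamic programming and no Bayesian task estimator:

```python
from itertools import product

def _rollout(step, reward, state, seq):
    """Total reward of executing an action sequence from `state`."""
    total = 0.0
    for a in seq:
        total += reward(state, a)
        state = step(state, a)
    return total

def receding_horizon(step, reward, state, actions, horizon, n_steps):
    """Commit-first-step control: at each tick, search all action
    sequences over `horizon`, take the first action of the best one,
    then re-plan from the new state.  `step` is an assumed deterministic
    transition model; the paper couples this with a Bayesian estimator
    of upcoming tasks."""
    trajectory = []
    for _ in range(n_steps):
        best = max(product(actions, repeat=horizon),
                   key=lambda seq: _rollout(step, reward, state, seq))
        state = step(state, best[0])   # commit only the first step
        trajectory.append(best[0])
    return trajectory

# Toy example: integer state, actions nudge it toward a target of 5.
traj = receding_horizon(step=lambda s, a: s + a,
                        reward=lambda s, a: -abs(s + a - 5),
                        state=0, actions=(-1, 0, 1), horizon=3, n_steps=6)
```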
From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation
Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...
2013-01-01
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
Tire-road friction estimation and traction control strategy for motorized electric vehicle.
Jin, Li-Qiang; Ling, Mingze; Yue, Weiqiang
2017-01-01
In this paper, an optimal longitudinal slip ratio system for real-time identification in an electric vehicle (EV) with motorized wheels is proposed, based on the adhesion between tire and road surface. First, the optimal longitudinal slip ratio for torque control is identified in real time by calculating the derivative of the adhesion coefficient with respect to the slip ratio. Second, a vehicle speed estimation method is presented. Third, an ideal vehicle simulation model is used to verify the algorithm, and the simulations show that the identified slip ratio tracks the adhesion limit in real time. Finally, the proposed strategy is applied to a traction control system (TCS). The results show that the method can effectively identify the state of the wheel and calculate the optimal slip ratio without a wheel speed sensor; at the same time, it improves the acceleration stability of an electric vehicle equipped with TCS.
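A common way to realize the derivative-based identification described above is to locate the point where the slope of the adhesion coefficient versus slip ratio curve falls to zero, i.e. the adhesion peak. The sketch below uses a Burckhardt-style tire curve with textbook dry-asphalt coefficients; the curve and its parameters are illustrative, not the paper's:

```python
import numpy as np

def optimal_slip(mu_curve, slips):
    """Return the slip ratio at the adhesion peak: the first point
    where d(mu)/d(slip) stops being positive."""
    dmu = np.gradient(mu_curve, slips)
    idx = np.argmax(dmu <= 0)      # first index where the slope turns
    return slips[idx]

# Burckhardt-style adhesion curve, textbook dry-asphalt coefficients
# (illustrative only): mu = c1*(1 - exp(-c2*lam)) - c3*lam
lam = np.linspace(0.001, 1.0, 1000)
mu = 1.28 * (1.0 - np.exp(-23.99 * lam)) - 0.52 * lam
lam_opt = optimal_slip(mu, lam)    # peak lies near a slip of ~0.17
```

A TCS would then regulate wheel torque so the measured slip stays at or below `lam_opt`.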
Further developments in the controlled growth approach for optimal structural synthesis
NASA Technical Reports Server (NTRS)
Hajela, P.
1982-01-01
It is pointed out that the use of nonlinear programming methods in conjunction with finite element and other discrete analysis techniques has provided a powerful tool in the domain of optimal structural synthesis. The present investigation is concerned with new strategies that extend the controlled growth method of Hajela and Sobieski-Sobieszczanski (1981). That method replaced the standard nonlinear programming (NLP) approach of working with a very large number of design variables with a sequence of smaller optimization cycles, each involving a single 'dominant' variable. The current investigation outlines some new features. Attention is given to a modified cumulative constraint representation which is defined in both the feasible and infeasible domains of the design space. Other new features are related to the evaluation of the 'effectiveness measure' on which the choice of the dominant variable and the linking strategy are based.
Optimal ventilation of the anesthetized pediatric patient.
Feldman, Jeffrey M
2015-01-01
Mechanical ventilation of the pediatric patient is challenging because small changes in delivered volume can be a significant fraction of the intended tidal volume. Anesthesia ventilators have traditionally been poorly suited to delivering small tidal volumes accurately, and pressure-controlled ventilation has come into common use when caring for pediatric patients. Modern anesthesia ventilators are designed to deliver small volumes accurately to the patient's airway by compensating for the compliance of the breathing system and delivering tidal volume independent of fresh gas flow. These technology advances provide the opportunity to implement a lung-protective ventilation strategy in the operating room based upon control of tidal volume. This review will describe the capabilities of the modern anesthesia ventilator and the current understanding of lung-protective ventilation. An optimal approach to mechanical ventilation for the pediatric patient is described, emphasizing the importance of using bedside monitors to optimize the ventilation strategy for the individual patient.
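The compliance compensation mentioned above matters because the volume compressed in the breathing circuit scales with the pressure swing, and in small patients that loss can exceed the intended breath. A back-of-the-envelope sketch (all numbers illustrative):

```python
def delivered_tidal_volume(set_vt_ml, circuit_compliance_ml_per_cmh2o,
                           pip_cmh2o, peep_cmh2o):
    """Volume actually reaching the airway without compensation:
    the set volume minus what is compressed in the circuit over the
    pressure swing (no clamping at zero, to make the deficit visible)."""
    compressed = circuit_compliance_ml_per_cmh2o * (pip_cmh2o - peep_cmh2o)
    return set_vt_ml - compressed

# A 3 kg neonate at 6 mL/kg needs an 18 mL breath.  With a typical
# circuit compliance of ~3 mL/cmH2O and pressures of 20/5 cmH2O,
# 45 mL is lost to circuit compression: more than twice the intended
# tidal volume, which is why compensation is essential.
lost = 18 - delivered_tidal_volume(18, 3.0, 20, 5)
```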
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization based on quasi-analytical sensitivities has been extended for practical three-dimensional aerodynamic applications. The flow analysis has been rendered by a fully implicit, finite-volume formulation of the Euler and Thin-Layer Navier-Stokes (TLNS) equations. Initially, the viscous laminar flow analysis for a wing has been compared with an independent computational fluid dynamics (CFD) code which has been extensively validated. The new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4 with coarse- and fine-grid based computations performed with Euler and TLNS equations. The influence of the initial constraints on the geometry and aerodynamics of the optimized shape has been explored. Various final shapes, generated for an identical initial problem formulation but with different optimization path options (coarse or fine grid, Euler or TLNS), have been aerodynamically evaluated via a common fine-grid TLNS-based analysis. The initial constraint conditions show significant bearing on the optimization results. Also, the results demonstrate that to produce an aerodynamically efficient design, it is imperative to include the viscous physics in the optimization procedure with the proper resolution. Based upon the present results, to better utilize the scarce computational resources, it is recommended that a number of viscous coarse-grid cases, using either a preconditioned bi-conjugate gradient (PbCG) or an alternating-direction-implicit (ADI) method, be employed initially to improve the optimization problem definition, the design space, and the initial shape. Optimized shapes should subsequently be analyzed using a high fidelity (viscous with fine-grid resolution) flow analysis to evaluate their true performance potential. Finally, a viscous fine-grid-based shape optimization should be conducted, using an ADI method, to accurately obtain the final optimized shape.
Sensor-Based Optimized Control of the Full Load Instability in Large Hydraulic Turbines
Presas, Alexandre; Valero, Carme; Egusquiza, Eduard
2018-01-01
Hydropower plants are of paramount importance for the integration of intermittent renewable energy sources in the power grid. In order to match the energy generated and consumed, large hydraulic turbines have to work under off-design conditions, which may lead to dangerous unstable operating points involving the hydraulic, mechanical and electrical system. Under these conditions, the stability of the grid and the safety of the power plant itself can be compromised. For many Francis Turbines one of these critical points, which usually limits the maximum output power, is the full load instability. Therefore, these machines usually work far away from this unstable point, reducing the effective operating range of the unit. In order to extend the operating range of the machine, working closer to this point with a reasonable safety margin, it is of paramount importance to monitor and to control relevant parameters of the unit, which have to be obtained with an accurate sensor acquisition strategy. Within the framework of a large EU project, field tests in a large Francis Turbine located in Canada (rated power of 444 MW) have been performed. Many different sensors were used to monitor several working parameters of the unit over all its operating range. For these tests, more than 80 signals, including ten different types of sensors and several operating signals that define the operating point of the unit, were simultaneously acquired. The present study focuses on the optimization of the acquisition strategy, which includes the type, number, location and acquisition frequency of the sensors and the corresponding signal analysis to detect the full load instability and to prevent the unit from reaching this point. A systematic approach to determine this strategy has been followed. It has been found that some indicators obtained with different types of sensors are linearly correlated with the oscillating power. 
The optimized strategy has been determined based on the correlation characteristics (linearity, sensitivity and reactivity), the simplicity of the installation and the necessary acquisition frequency. Finally, an economical and easily implementable protection system based on the resulting optimized acquisition strategy is proposed. This system, which can be used in a generic Francis turbine with a similar full load instability, permits one to extend the operating range of the unit by working close to the instability with a reasonable safety margin. PMID:29601512
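The linear-correlation screening described above can be sketched as ranking candidate indicators by their absolute Pearson correlation with the oscillating power. The two synthetic indicators below are invented stand-ins for the paper's sensor signals:

```python
import numpy as np

def rank_indicators(indicators, oscillating_power):
    """Rank sensor indicators by |Pearson r| against the oscillating
    power, best first.  A real strategy would also weigh sensitivity,
    reactivity, installation cost and sampling rate, as the study does."""
    scores = {name: abs(np.corrcoef(series, oscillating_power)[0, 1])
              for name, series in indicators.items()}
    return sorted(scores, key=scores.get, reverse=True)

rng = np.random.default_rng(0)
power = np.linspace(0.0, 1.0, 200) + 0.05 * rng.standard_normal(200)
indicators = {
    # invented signals: one tracks the power, one is pure noise
    "draft_tube_pressure": 2.0 * power + 0.1 * rng.standard_normal(200),
    "bearing_vibration": rng.standard_normal(200),
}
ranking = rank_indicators(indicators, power)
```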
Predicting Short-Term Remembering as Boundedly Optimal Strategy Choice.
Howes, Andrew; Duggan, Geoffrey B; Kalidindi, Kiran; Tseng, Yuan-Chi; Lewis, Richard L
2016-07-01
It is known that, on average, people adapt their choice of memory strategy to the subjective utility of interaction. What is not known is whether an individual's choices are boundedly optimal. Two experiments are reported that test the hypothesis that an individual's decisions about the distribution of remembering between internal and external resources are boundedly optimal where optimality is defined relative to experience, cognitive constraints, and reward. The theory makes predictions that are tested against data, not fitted to it. The experiments use a no-choice/choice utility learning paradigm where the no-choice phase is used to elicit a profile of each participant's performance across the strategy space and the choice phase is used to test predicted choices within this space. They show that the majority of individuals select strategies that are boundedly optimal. Further, individual differences in what people choose to do are successfully predicted by the analysis. Two issues are discussed: (a) the performance of the minority of participants who did not find boundedly optimal adaptations, and (b) the possibility that individuals anticipate what, with practice, will become a bounded optimal strategy, rather than what is boundedly optimal during training. Copyright © 2015 Cognitive Science Society, Inc.
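One way to operationalize the bounded-optimality test described above is to predict that each individual selects the strategy maximizing expected utility over their own measured performance profile from the no-choice phase. A hedged sketch; the profile fields and the utility form are assumptions, not the paper's model:

```python
def boundedly_optimal_choice(profile, reward, time_cost):
    """Predict the strategy a boundedly optimal agent selects, given a
    per-strategy profile of (accuracy, mean seconds) measured in a
    no-choice phase.  Utility = accuracy * reward - seconds * time_cost
    (an illustrative form; field names are invented)."""
    def utility(strategy):
        accuracy, seconds = profile[strategy]
        return accuracy * reward - seconds * time_cost
    return max(profile, key=utility)

# Internal memory is fast but error-prone; external notes are slow but exact.
profile = {"internal": (0.70, 2.0), "external": (0.98, 6.0)}
low_stakes = boundedly_optimal_choice(profile, reward=10, time_cost=1.0)
high_stakes = boundedly_optimal_choice(profile, reward=100, time_cost=1.0)
```

Raising the reward shifts the predicted choice from internal remembering to the slower but more reliable external resource, the kind of utility-driven shift the experiments test.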
Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun
2018-07-01
This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this problem class, and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. In order to coordinate with the property of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach. Through this, feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure, and problem-related heuristic information is introduced in the constructive approach for efficiency. In order to address the multiobjective optimization issues, the decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.
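The decomposition step described above converts the multiobjective problem into single-objective subproblems through a set of weight vectors. A minimal weighted-sum sketch for two objectives (MS-PSO/D's exact scalarization may differ):

```python
import numpy as np

def weighted_sum_subproblems(n):
    """Evenly spaced weight vectors for a 2-objective decomposition,
    each defining one single-objective subproblem (MOEA/D-style
    weighted sum; an illustrative choice of scalarization)."""
    ws = [(w, 1.0 - w) for w in np.linspace(0.0, 1.0, n)]
    return ws, [lambda f, w=w: w[0] * f[0] + w[1] * f[1] for w in ws]

weights, subproblems = weighted_sum_subproblems(5)
# Each subproblem scores a candidate objective vector f = (f1, f2);
# the middle subproblem weights both objectives equally.
score = subproblems[2]((4.0, 2.0))
```

In the full algorithm, each particle (or subswarm) would minimize one of these scalarized subproblems while constructing permutations element by element.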
Protein construct storage: Bayesian variable selection and prediction with mixtures.
Clyde, M A; Parmigiani, G
1998-07-01
Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.
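The model-averaged prediction described above weights each candidate model's prediction by its posterior model probability, rather than committing to a single selected model. A minimal sketch with invented numbers:

```python
import numpy as np

def bma_predict(predictions, posterior_probs):
    """Bayesian model averaging: the predictive mean is the sum over
    models of (posterior model probability) x (model's prediction)."""
    p = np.asarray(posterior_probs, dtype=float)
    p = p / p.sum()                     # normalise, in case of rounding
    return float(p @ np.asarray(predictions, dtype=float))

# Three candidate linear models predict protein activity at one
# candidate storage condition (all numbers illustrative):
pred = bma_predict(predictions=[0.82, 0.90, 0.75],
                   posterior_probs=[0.5, 0.3, 0.2])
```

Averaging over the model family in this way propagates model uncertainty into the decision about which storage conditions are optimal.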
Ishihara, Tsukasa; Koga, Yuji; Iwatsuki, Yoshiyuki; Hirayama, Fukushi
2015-01-15
Anticoagulant agents have emerged as a promising class of therapeutic drugs for the treatment and prevention of arterial and venous thrombosis. We investigated a series of novel orally active factor Xa inhibitors designed using our previously reported conjugation strategy to boost oral anticoagulant effect. Structural optimization of anthranilamide derivative 3 as a lead compound with installation of phenolic hydroxyl group and extensive exploration of the P1 binding element led to the identification of 5-chloro-N-(5-chloro-2-pyridyl)-3-hydroxy-2-{[4-(4-methyl-1,4-diazepan-1-yl)benzoyl]amino}benzamide (33, AS1468240) as a potent factor Xa inhibitor with significant oral anticoagulant activity. We also reported a newly developed Free-Wilson-like fragment recommender system based on the integration of R-group decomposition with collaborative filtering for the structural optimization process. Copyright © 2014 Elsevier Ltd. All rights reserved.
Matuszak, Martha M; Steers, Jennifer M; Long, Troy; McShan, Daniel L; Fraass, Benedick A; Romeijn, H Edwin; Ten Haken, Randall K
2013-07-01
To introduce a hybrid volumetric modulated arc therapy/intensity modulated radiation therapy (VMAT/IMRT) optimization strategy called FusionArc that combines the delivery efficiency of single-arc VMAT with the potentially desirable intensity modulation possible with IMRT. A beamlet-based inverse planning system was enhanced to combine the advantages of VMAT and IMRT into one comprehensive technique. In the hybrid strategy, baseline single-arc VMAT plans are optimized, and then the current cost function gradients with respect to the beamlets are used to define a metric for predicting which beam angles would benefit from further intensity modulation. Beams with the highest metric values (called the gradient factor) are converted from VMAT apertures to IMRT fluence, and the optimization proceeds with the mixed variable set until convergence or until additional beams are selected for conversion. One phantom and two clinical cases were used to validate the gradient factor and characterize the FusionArc strategy. Comparisons were made between standard IMRT, single-arc VMAT, and FusionArc plans with one to five IMRT/hybrid beams. The gradient factor was found to be highly predictive of the VMAT angles that would benefit plan quality the most from beam modulation. Over the three cases studied, a FusionArc plan with three converted beams achieved superior dosimetric quality, with reductions in final cost ranging from 26.4% to 48.1% compared to single-arc VMAT. Additionally, the three-beam FusionArc plans required 22.4%-43.7% fewer MU/Gy than a seven-beam IMRT plan. While the FusionArc plans with five converted beams offer larger reductions in final cost (32.9%-55.2% compared to single-arc VMAT), the decrease in MU/Gy relative to IMRT was noticeably smaller, at 12.2%-18.5%. A hybrid VMAT/IMRT strategy was implemented to find a high-quality compromise between gantry-angle and intensity-based degrees of freedom. 
This optimization method will allow patients to be simultaneously planned for dosimetric quality and delivery efficiency without switching between delivery techniques. Example phantom and clinical cases suggest that the conversion of only three VMAT segments to modulated beams may result in a good combination of quality and efficiency.
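The gradient factor described above can be sketched as ranking beam angles by the size of the cost-function gradient over their beamlets; the exact metric in the paper may differ from this norm-based stand-in:

```python
import numpy as np

def rank_beams_by_gradient(beam_gradients):
    """Rank VMAT beam angles by a gradient-factor-like score: the
    Euclidean norm of the cost-function gradient with respect to each
    beam's beamlet intensities (an assumed form of the metric).
    Angles whose gradients are largest have the most to gain from
    conversion to modulated IMRT fluence."""
    return sorted(beam_gradients,
                  key=lambda angle: np.linalg.norm(beam_gradients[angle]),
                  reverse=True)

# Invented per-angle beamlet gradients for three candidate angles:
grads = {0: np.array([0.1, 0.2]),
         90: np.array([1.5, 2.0]),
         180: np.array([0.4, 0.1])}
to_convert = rank_beams_by_gradient(grads)[:1]   # convert the top beam
```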
Hardt, Oliver; Nadel, Lynn
2009-01-01
Cognitive map theory suggested that exploring an environment and attending to a stimulus should lead to its integration into an allocentric environmental representation. We here report that directed attention in the form of exploration serves to gather information needed to determine an optimal spatial strategy, given task demands and characteristics of the environment. Attended environmental features may integrate into spatial representations if they meet the requirements of the optimal spatial strategy: when learning involves a cognitive mapping strategy, cues with high codability (e.g., concrete objects) will be incorporated into a map, but cues with low codability (e.g., abstract paintings) will not. However, instructions encouraging map learning can lead to the incorporation of cues with low codability. On the other hand, if spatial learning is not map-based, abstract cues can and will be used to encode locations. Since exploration appears to determine what strategy to apply and whether or not to encode a cue, recognition memory for environmental features is independent of whether or not a cue is part of a spatial representation. In fact, when abstract cues were used in a way that was not map-based, or when they were not used for spatial navigation at all, they were nevertheless recognized as familiar. Thus, the relation between exploratory activity on the one hand and spatial strategy and memory on the other appears more complex than initially suggested by cognitive map theory.
Contribution to the Optimization of Strategy of Maintenance by Lean Six Sigma
NASA Astrophysics Data System (ADS)
Youssouf, Ayadi; Rachid, Chaib; Ion, Verzea
The efficiency of maintenance in industrial systems is a major economic stake for any business. The main difficulties, and the sources of ineffectiveness, lie in the choice of maintenance actions, especially when a machine plays a vital role in the production process. Algeria has embarked on major infrastructure projects in transport, housing, automobiles, manufacturing and construction (factories, housing, highways, subways, trams, etc.), with new implications for maintenance strategies that must meet the requirements imposed by operation. Given the importance of maintenance in the economic market and its impact on the performance of installations, optimization methods have been developed. To ensure the survival of businesses and remain credible, contributing and competitive in the market, maintenance services must continually adapt to technical, technological and organizational progress, and even help maintenance managers construct or modify maintenance strategies, which is the objective of this work. Our contribution focuses on the optimization of maintenance for industrial systems through the use of Lean Six Sigma. Lean Six Sigma is a method for improving quality and profitability based on statistical process control, and it is also a management style built on a highly structured organization dedicated to project management. The method rests on five main steps summarized in the acronym DMAIC: Define, Measure, Analyze, Improve and Control. Applying the method to maintenance processes, using maintenance methods during its five phases, will help reduce costs and losses in order to strive for optimum results in terms of profit and quality.
Improving knowledge of garlic paste greening through the design of an experimental strategy.
Aguilar, Miguel; Rincón, Francisco
2007-12-12
The furthering of scientific knowledge depends in part upon the reproducibility of experimental results. When experimental conditions are not set with sufficient precision, the resulting background noise often leads to poorly reproduced and even faulty experiments. An example of the catastrophic consequences of this background noise can be found in the design of strategies for the development of solutions aimed at preventing garlic paste greening, where reported results are contradictory. To avoid such consequences, this paper presents a two-step strategy based on the concept of experimental design. In the first step, the critical factors inherent to the problem are identified, using a 2^(7-4) resolution III Plackett-Burman experimental design, from a list of seven apparent critical factors (ACF); subsequently, the critical factors thus identified are considered as the factors to be optimized (FO), and optimization is performed using a Box and Wilson experimental design to identify the stationary point of the system. Optimal conditions for preventing garlic greening are examined after analysis of the complex process of green-pigment development, which involves both chemical and enzymatic reactions and is strongly influenced by pH, with an overall pH optimum of 4.5. The critical step in the greening process is the synthesis of thiosulfinates (allicin) from cysteine sulfoxides (alliin). Cysteine inhibits the greening process at this critical stage; no greening precursors are formed in the presence of around 1% cysteine. However, the optimal conditions for greening prevention are very sensitive both to the type of garlic and to manufacturing conditions. This suggests that optimal solutions for garlic greening prevention should be sought on a case-by-case basis, using the strategy presented here.
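The screening design mentioned above, a 2^(7-4) resolution III fraction accommodating seven factors in eight runs, can be constructed from a full factorial in three base factors plus four generator columns. A sketch of the construction:

```python
import itertools
import numpy as np

def fractional_factorial_7_4():
    """Build the 8-run, 7-factor resolution III design 2^(7-4):
    a full factorial in A, B, C, plus the generated columns
    D = AB, E = AC, F = BC, G = ABC (one standard set of generators;
    equivalent to an 8-run Plackett-Burman design up to relabeling)."""
    base = np.array(list(itertools.product([-1, 1], repeat=3)))
    A, B, C = base[:, 0], base[:, 1], base[:, 2]
    return np.column_stack([A, B, C, A * B, A * C, B * C, A * B * C])

design = fractional_factorial_7_4()   # rows = runs, columns = factors
```

Each column is balanced and mutually orthogonal, which is what lets seven apparent critical factors be screened in only eight experiments.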
Are strategies in physics discrete? A remote controlled investigation
NASA Astrophysics Data System (ADS)
Heck, Robert; Sherson, Jacob F.; www.scienceathome.org Team; players Team
2017-04-01
In science, strategies are formulated based on observations, calculations, or physical insight. For any given physical process, several distinct strategies are often identified. Are these truly distinct, or simply low-dimensional representations of a high-dimensional continuum of solutions? Our online citizen science platform www.scienceathome.org, used by more than 150,000 people, recently enabled finding solutions to fast 1D single-atom transport [Nature2016]. Surprisingly, player trajectories bunched into discrete solution strategies (clans), yielding clear, distinct physical insight. Introducing a multi-dimensional vector in the direction of other local maxima, we locate narrow, high-yield "bridges" connecting the clans. This demonstrates for this problem that a continuum of solutions with no clear physical interpretation does in fact exist. Next, four distinct strategies for creating Bose-Einstein condensates were investigated experimentally: hybrid and crossed dipole trap configurations in combination with either large-volume or dimple loading from a magnetic trap. We find that although each conventional strategy appears locally optimal, "bridges" can be identified. In a novel approach, the problem was gamified, allowing 750 citizen scientists to contribute to the experimental optimization and yielding nearly a factor of two improvement in atom number.
Multidimensional optimal droop control for wind resources in DC microgrids
NASA Astrophysics Data System (ADS)
Bunker, Kaitlyn J.
Two important and upcoming technologies, microgrids and electricity generation from wind resources, are increasingly being combined. Various control strategies can be implemented, and droop control provides a simple option without requiring communication between microgrid components. Eliminating the single source of potential failure around the communication system is especially important in remote, islanded microgrids, which are considered in this work. However, traditional droop control does not allow the microgrid to utilize much of the power available from the wind. This dissertation presents a novel droop control strategy, which implements a droop surface in higher dimension than the traditional strategy. The droop control relationship then depends on two variables: the dc microgrid bus voltage, and the wind speed at the current time. An approach for optimizing this droop control surface in order to meet a given objective, for example utilizing all of the power available from a wind resource, is proposed and demonstrated. Various cases are used to test the proposed optimal high dimension droop control method, and demonstrate its function. First, the use of linear multidimensional droop control without optimization is demonstrated through simulation. Next, an optimal high dimension droop control surface is implemented with a simple dc microgrid containing two sources and one load. Various cases for changing load and wind speed are investigated using simulation and hardware-in-the-loop techniques. Optimal multidimensional droop control is demonstrated with a wind resource in a full dc microgrid example, containing an energy storage device as well as multiple sources and loads. Finally, the optimal high dimension droop control method is applied with a solar resource, and using a load model developed for a military patrol base application. The operation of the proposed control is again investigated using simulation and hardware-in-the-loop techniques.
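The higher-dimension droop relation described above makes the injected power a function of both the dc bus voltage and the current wind speed, rather than voltage alone. A planar toy sketch of such a surface; the gains, ratings and wind power curve are all invented:

```python
def droop_surface(v_bus, wind_speed, v_nom=380.0, k_v=50.0, k_w=8.0):
    """Two-variable droop: power injected by the wind source depends on
    the dc bus voltage (classic droop term) and on the wind speed
    (available-power term).  A linear sketch of the optimized surface;
    all constants are illustrative, not from the dissertation."""
    available = k_w * wind_speed ** 3 / 100.0   # crude cubic power curve, kW
    p = available - k_v * (v_bus - v_nom)       # droop on bus voltage
    return max(0.0, min(p, available))          # clamp to [0, available]

# Bus above nominal -> the source backs off; below nominal -> it
# delivers everything the wind currently offers.
p_high_bus = droop_surface(v_bus=384.0, wind_speed=10.0)
p_low_bus = droop_surface(v_bus=376.0, wind_speed=10.0)
```

Optimizing the surface then amounts to shaping this power-vs-(voltage, wind speed) map so the wind resource is fully utilized whenever the loads and storage can absorb it.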
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use Agent-Based Models (ABM) to optimize the large-scale network handling capabilities of large system inventories and to implement strategies for reducing capital expenses. The models used in this paper rely on computational algorithms and procedures implemented in Matlab, combining a principal programming language with mathematical theory, to simulate agent-based behavior; the simulations run on computing clusters, which provide the high-performance parallel computation needed to execute the program. In both cases, a model is defined as a compilation of structures and processes assumed to underlie the behavior of a network system.
Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A
2014-09-22
We present a numerical strategy to design fiber based dual pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented in a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
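The genetic-algorithm optimization described above searches over the input pulse parameters (wavelength, temporal width, peak power). A minimal real-coded GA sketch, with a simple stand-in fitness function in place of the supercontinuum simulation that would score the output spectrum against the two target peaks:

```python
import random

def genetic_optimize(fitness, bounds, pop_size=20, generations=40, seed=1):
    """Minimal real-coded GA (minimization): keep the best half as
    elites, breed children by averaging two elites plus Gaussian
    mutation, clamp to bounds.  Operators and settings are illustrative,
    not the paper's configuration."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # elitism preserves the best
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.1 * (hi - lo))
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            children.append([min(max(g, lo), hi)
                             for g, (lo, hi) in zip(child, bounds)])
        pop = elite + children
    return min(pop, key=fitness)

# Stand-in fitness: squared distance of the normalized parameter vector
# from a known optimum; a real run would simulate the fiber propagation.
target = (0.8, 0.1, 0.5)
best = genetic_optimize(
    lambda p: sum((g - t) ** 2 for g, t in zip(p, target)),
    bounds=[(0.0, 1.0)] * 3)
```

In the paper's setting, each fitness evaluation is an expensive pulse-propagation simulation, which is why the evaluations are distributed over a Grid platform.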
An automated model-based aim point distribution system for solar towers
NASA Astrophysics Data System (ADS)
Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen
2016-05-01
Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.
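A much-simplified sketch of the aim-point distribution task (ignoring optics and the measured mirror quality) reduces it to balancing per-heliostat flux contributions across receiver panels so the peak is flattened; the greedy rule and flux values below are illustrative assumptions:

```python
def distribute(heliostat_fluxes, n_panels, flux_limit=None):
    """Assign each heliostat to the panel where it least raises the peak
    flux; heliostats that would exceed the limit stay defocused (None)."""
    panel_flux = [0.0] * n_panels
    assignment = [None] * len(heliostat_fluxes)
    # Place the largest contributors first, as in bin-balancing heuristics.
    order = sorted(range(len(heliostat_fluxes)),
                   key=lambda i: heliostat_fluxes[i], reverse=True)
    for i in order:
        j = min(range(n_panels), key=lambda p: panel_flux[p])
        if flux_limit is not None and panel_flux[j] + heliostat_fluxes[i] > flux_limit:
            continue
        panel_flux[j] += heliostat_fluxes[i]
        assignment[i] = j
    return assignment, panel_flux
```

The model-based method in the paper replaces the fixed per-heliostat flux numbers with images predicted from the deflectometric mirror-quality data, re-optimizing online as the sun moves.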
Community development strategy based on local resources
NASA Astrophysics Data System (ADS)
Meirinawati; Prabawati, I.; Pradana, G. W.
2018-01-01
The problems of developing regions are largely economic and are often caused by the regions' inability to respond to changes in economic conditions, so community development programs are needed to solve them. Community development efforts must be matched to the real conditions and needs of each region. Community development based on local resources is very important, because it builds human-resource capacity for the optimal utilization of local resource potential. A strategy is therefore needed for community development based on local resources. The community development strategy is as follows: (1) the "Eight Line Equalization Plus" program, which explains the urgency of rural industrialization; (2) village development is more successful when strategies are combined and tailored to regional conditions; (3) the facilitators position themselves as planners, supervisors, information providers, motivators, facilitators, connectors, and evaluators.
FPGA Techniques Based New Hybrid Modulation Strategies for Voltage Source Inverters
Sudha, L. U.; Baskaran, J.; Elankurisil, S. A.
2015-01-01
This paper corroborates three different hybrid modulation strategies suitable for a single-phase voltage source inverter. The proposed method is formulated using fundamental switching and carrier-based pulse width modulation methods. The main goal of the proposed method is to optimize a specific performance criterion, such as minimization of the total harmonic distortion (THD), lower-order harmonics, switching losses, and heat losses. Thus, the harmonic pollution in the power system will be reduced and the power quality will be augmented, with a better harmonic profile for a target fundamental output voltage. The proposed modulation strategies are simulated in MATLAB R2010a and implemented in a Xilinx Spartan 3E-500 FG 320 FPGA processor. The feasibility of these modulation strategies is authenticated through simulation and experimental results. PMID:25821852
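The THD criterion that such modulation strategies minimize can be computed from one sampled period of the inverter output. The block below uses a direct DFT, with an ideal square wave (the unmodulated two-level limiting case) only as a worked example; the actual waveforms would come from the PWM strategies:

```python
import cmath
import math

def harmonics(samples, n_harm):
    """Amplitudes of harmonics 1..n_harm of one sampled period (direct DFT)."""
    n = len(samples)
    return [2 * abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) / n
            for k in range(1, n_harm + 1)]

def thd(amps):
    """Total harmonic distortion: RMS of harmonics 2..N over the fundamental."""
    return math.sqrt(sum(a * a for a in amps[1:])) / amps[0]

# Worked example: an ideal square wave has only odd harmonics at 1/k of the
# fundamental (amplitude 4/pi), giving a THD in the vicinity of 48%.
square = [1.0] * 256 + [-1.0] * 256
amps = harmonics(square, 25)
```

Evaluating `thd` on the candidate switching patterns is the kind of performance criterion the hybrid strategies trade off against switching and heat losses.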
Hussain, Lal
2018-06-01
Epilepsy is a neurological disorder caused by abnormal excitability of neurons in the brain. Research shows that brain activity can be monitored through the electroencephalogram (EEG) of patients suffering from seizures in order to detect epileptic seizures. The performance of EEG-based epilepsy detection depends on the feature extraction strategies employed. In this research, we extracted features using a variety of strategies based on time- and frequency-domain characteristics, nonlinear measures, wavelet-based entropy, and several statistical features. A deeper study was undertaken using novel machine learning classifiers while considering multiple factors. The support vector machine kernels were evaluated based on multiclass kernel and box constraint level. Likewise, for K-nearest neighbors (KNN), we evaluated different distance metrics, neighbor weights, and numbers of neighbors. Similarly, for decision trees we tuned the parameters based on maximum splits and split criteria, and ensemble classifiers were evaluated based on different ensemble methods and learning rates. Tenfold cross-validation was employed for training and testing, and performance was evaluated in terms of TPR, NPR, PPV, accuracy, and AUC. In this research, a deeper analysis was performed using diverse feature extraction strategies and robust machine learning classifiers with more advanced optimal options. The support vector machine with a linear kernel and KNN with the city block distance metric gave the overall highest accuracy of 99.5%, which was higher than that obtained using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM. Additionally, KNN with inverse squared distance weighting gave higher performance for different numbers of neighbors. Moreover, in distinguishing postictal heart rate oscillations from those of epileptic ictal subjects, the highest performance of 100% was obtained using different machine learning classifiers.
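The city-block KNN rule that performed best can be sketched in a few lines; the toy feature vectors and class labels below are placeholders for the actual EEG features, not data from the study:

```python
from collections import Counter

def cityblock(a, b):
    """L1 (city block) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_predict(train_X, train_y, x, k=3, metric=cityblock):
    """Label x by majority vote among its k nearest training samples."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: metric(train_X[i], x))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]
```

Swapping `metric` (Euclidean, city block, ...) and `k` is exactly the hyperparameter sweep the study performed, with the city block metric winning on these data.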
Barlow, P.M.; Wagner, B.J.; Belitz, K.
1996-01-01
The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.
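The simulation-optimization loop can be caricatured with a linear response matrix standing in for the ground-water flow simulation and an exhaustive search standing in for the optimizer. All coefficients below are illustrative, not values from the San Joaquin Valley model:

```python
from itertools import product

# Drawdown (m) in subarea i per unit pumping in well j -- the "simulation".
RESPONSE = [[0.8, 0.2, 0.1],
            [0.2, 0.9, 0.3],
            [0.1, 0.3, 0.7]]
DEPTH_TO_WATER = [1.0, 0.5, 1.5]   # current depth below land surface (m)
TARGET = 2.0                        # depth required to control the water table

def shallow_area(pumping):
    """Count subareas where the water table remains above the target depth."""
    return sum(1 for i, row in enumerate(RESPONSE)
               if DEPTH_TO_WATER[i] + sum(r * q for r, q in zip(row, pumping)) < TARGET)

def optimize(budget=4, step=1):
    """Exhaustive search: minimize shallow area first, total pumping second."""
    rates = range(0, budget + 1, step)
    best = min((p for p in product(rates, repeat=3) if sum(p) <= budget),
               key=lambda p: (shallow_area(p), sum(p)))
    return best, shallow_area(best)
```

Real simulation-optimization replaces the brute-force search with linear or quadratic programming against the flow model's response matrix, which is what lets it discriminate areally among subareas as their number grows.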
Optimizing Tactics for Use of the U.S. Antiviral Strategic National Stockpile for Pandemic Influenza
Dimitrov, Nedialko B.; Goll, Sebastian; Hupert, Nathaniel; Pourbohloul, Babak; Meyers, Lauren Ancel
2011-01-01
In 2009, public health agencies across the globe worked to mitigate the impact of the swine-origin influenza A (pH1N1) virus. These efforts included intensified surveillance, social distancing, hygiene measures, and the targeted use of antiviral medications to prevent infection (prophylaxis). In addition, aggressive antiviral treatment was recommended for certain patient subgroups to reduce the severity and duration of symptoms. To help States and other localities meet these needs, the U.S. Government distributed a quarter of the antiviral medications in the Strategic National Stockpile within weeks of the pandemic's start. However, there are no quantitative models guiding the geo-temporal distribution of the remainder of the Stockpile in relation to pandemic spread or severity. We present a tactical optimization model for distributing this stockpile for treatment of infected cases during the early stages of a pandemic like 2009 pH1N1, prior to the wide availability of a strain-specific vaccine. Our optimization method efficiently searches large sets of intervention strategies applied to a stochastic network model of pandemic influenza transmission within and among U.S. cities. The resulting optimized strategies depend on the transmissibility of the virus and postulated rates of antiviral uptake and wastage (through misallocation or loss). Our results suggest that an aggressive community-based antiviral treatment strategy involving early, widespread, pro-rata distribution of antivirals to States can contribute to slowing the transmission of mildly transmissible strains, like pH1N1. For more highly transmissible strains, outcomes of antiviral use are more heavily impacted by choice of distribution intervals, quantities per shipment, and timing of shipments in relation to pandemic spread.
This study supports previous modeling results suggesting that appropriate antiviral treatment may be an effective mitigation strategy during the early stages of future influenza pandemics, increasing the need for systematic efforts to optimize distribution strategies and provide tactical guidance for public health policy-makers. PMID:21283514
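The pro-rata distribution rule found effective for mildly transmissible strains amounts to splitting courses in proportion to state population. A minimal sketch follows; the largest-remainder rounding of fractional courses is an assumption for illustration, not the paper's rule:

```python
def pro_rata_allocation(stockpile, state_populations):
    """Split a stockpile of antiviral courses across states in proportion to
    population, handing out remainders by largest fractional share."""
    total = sum(state_populations.values())
    shares = {s: stockpile * p / total for s, p in state_populations.items()}
    alloc = {s: int(v) for s, v in shares.items()}
    leftover = stockpile - sum(alloc.values())
    for s in sorted(shares, key=lambda s: shares[s] - alloc[s], reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc
```

The tactical model in the paper goes further by optimizing shipment timing and quantities against the simulated epidemic, which matters most for highly transmissible strains.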
Rate-Based Model Predictive Control of Turbofan Engine Clearance
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan A.
2006-01-01
An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.
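The predict-then-optimize structure described above can be sketched for a scalar clearance model x_dot = a*x + b*u; the dynamics coefficients, minimum-clearance value, and coarse search over constant commands are illustrative assumptions, not the engine model or the paper's optimizer:

```python
A, B, DT = -0.5, 1.0, 0.05   # assumed scalar clearance dynamics and step (s)
X_MIN = 0.2                  # assumed minimum allowed blade-tip clearance

def predict(x, u, steps=20):
    """Propagate the state derivative across the horizon (forward Euler)."""
    traj = []
    for _ in range(steps):
        x = x + DT * (A * x + B * u)
        traj.append(x)
    return traj

def mpc_step(x, x_ref, candidates=None):
    """Pick the admissible constant command with the lowest tracking cost,
    rejecting any trajectory that would rub the shroud (x < X_MIN)."""
    if candidates is None:
        candidates = [i / 50.0 for i in range(-50, 51)]  # u in [-1, 1]
    def cost(u):
        traj = predict(x, u)
        if min(traj) < X_MIN:
            return float("inf")
        return sum((xi - x_ref) ** 2 for xi in traj)
    return min(candidates, key=cost)
```

At each control tick, `mpc_step` is re-solved from the latest state and only the first command is applied; the paper's rate-based formulation additionally propagates a linear parameter-varying model of the state derivatives so the prediction stays valid in transients.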