Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens
2009-11-01
In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units, as follows: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that despite the very similar solutions obtained with both strategies, the problems found with the strategy using the deterministic model, such as lack of convergence and high computational time, make the use of the optimization strategy with the statistical model, which showed to be robust and fast, more suitable for the flash fermentation process, being recommended for real-time applications coupling optimization and control.
NASA Astrophysics Data System (ADS)
Yin, Chuancun; Wang, Chunwei
2009-11-01
The optimal dividend problem proposed in de Finetti [1] is to find the dividend-payment strategy that maximizes the expected discounted value of dividends which are paid to the shareholders until the company is ruined. Avram et al. [9] studied the case when the risk process is modelled by a general spectrally negative Lévy process and Loeffen [10] gave sufficient conditions under which the optimal strategy is of the barrier type. Recently Kyprianou et al. [11] strengthened the result of Loeffen [10] which established a larger class of Lévy processes for which the barrier strategy is optimal among all admissible ones. In this paper we use an analytical argument to re-investigate the optimality of barrier dividend strategies considered in the three recent papers.
Yan, Bin-Jun; Guo, Zheng-Tai; Qu, Hai-Bin; Zhao, Bu-Chang; Zhao, Tao
2013-06-01
In this work, a feedforward control strategy basing on the concept of quality by design was established for the manufacturing process of traditional Chinese medicine to reduce the impact of the quality variation of raw materials on drug. In the research, the ethanol precipitation process of Danhong injection was taken as an application case of the method established. Box-Behnken design of experiments was conducted. Mathematical models relating the attributes of the concentrate, the process parameters and the quality of the supernatants produced were established. Then an optimization model for calculating the best process parameters basing on the attributes of the concentrate was built. The quality of the supernatants produced by ethanol precipitation with optimized and non-optimized process parameters were compared. The results showed that using the feedforward control strategy for process parameters optimization can control the quality of the supernatants effectively. The feedforward control strategy proposed can enhance the batch-to-batch consistency of the supernatants produced by ethanol precipitation.
Optimal teaching strategy in periodic impulsive knowledge dissemination system.
Liu, Dan-Qing; Wu, Zhen-Qiang; Wang, Yu-Xin; Guo, Qiang; Liu, Jian-Guo
2017-01-01
Accurately describing the knowledge dissemination process is significant to enhance the performance of personalized education. In this study, considering the effect of periodic teaching activities on the learning process, we propose a periodic impulsive knowledge dissemination system to regenerate the knowledge dissemination process. Meanwhile, we put forward learning effectiveness which is an outcome of a trade-off between the benefits and costs raised by knowledge dissemination as objective function. Further, we investigate the optimal teaching strategy which can maximize learning effectiveness, to obtain the optimal effect of knowledge dissemination affected by the teaching activities. We solve this dynamic optimization problem by optimal control theory and get the optimization system. At last we numerically solve this system in several practical examples to make the conclusions intuitive and specific. The optimal teaching strategy proposed in this paper can be applied widely in the optimization problem of personal education and beneficial for enhancing the effect of knowledge dissemination.
Optimal teaching strategy in periodic impulsive knowledge dissemination system
Liu, Dan-Qing; Wu, Zhen-Qiang; Wang, Yu-Xin; Guo, Qiang
2017-01-01
Accurately describing the knowledge dissemination process is significant to enhance the performance of personalized education. In this study, considering the effect of periodic teaching activities on the learning process, we propose a periodic impulsive knowledge dissemination system to regenerate the knowledge dissemination process. Meanwhile, we put forward learning effectiveness which is an outcome of a trade-off between the benefits and costs raised by knowledge dissemination as objective function. Further, we investigate the optimal teaching strategy which can maximize learning effectiveness, to obtain the optimal effect of knowledge dissemination affected by the teaching activities. We solve this dynamic optimization problem by optimal control theory and get the optimization system. At last we numerically solve this system in several practical examples to make the conclusions intuitive and specific. The optimal teaching strategy proposed in this paper can be applied widely in the optimization problem of personal education and beneficial for enhancing the effect of knowledge dissemination. PMID:28665961
Validation of optimization strategies using the linear structured production chains
NASA Astrophysics Data System (ADS)
Kusiak, Jan; Morkisz, Paweł; Oprocha, Piotr; Pietrucha, Wojciech; Sztangret, Łukasz
2017-06-01
Different optimization strategies applied to sequence of several stages of production chains were validated in this paper. Two benchmark problems described by ordinary differential equations (ODEs) were considered. A water tank and a passive CR-RC filter were used as the exemplary objects described by the first and the second order differential equations, respectively. Considered in the work optimization problems serve as the validators of strategies elaborated by the Authors. However, the main goal of research is selection of the best strategy for optimization of two real metallurgical processes which will be investigated in an on-going projects. The first problem will be the oxidizing roasting process of zinc sulphide concentrate where the sulphur from the input concentrate should be eliminated and the minimal concentration of sulphide sulphur in the roasted products has to be achieved. Second problem will be the lead refining process consisting of three stages: roasting to the oxide, oxide reduction to metal and the oxidizing refining. Strategies, which appear the most effective in considered benchmark problems will be candidates for optimization of the mentioned above industrial processes.
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D
2002-01-01
This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely, genetic programming (GP), is used to develop a process model solely from the historic process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz., genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data without invoking the detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.
Multiobjective optimization of temporal processes.
Song, Zhe; Kusiak, Andrew
2010-06-01
This paper presents a dynamic predictive-optimization framework of a nonlinear temporal process. Data-mining (DM) and evolutionary strategy algorithms are integrated in the framework for solving the optimization model. DM algorithms learn dynamic equations from the process data. An evolutionary strategy algorithm is then applied to solve the optimization problem guided by the knowledge extracted by the DM algorithm. The concept presented in this paper is illustrated with the data from a power plant, where the goal is to maximize the boiler efficiency and minimize the limestone consumption. This multiobjective optimization problem can be either transformed into a single-objective optimization problem through preference aggregation approaches or into a Pareto-optimal optimization problem. The computational results have shown the effectiveness of the proposed optimization framework.
A new strategy of glucose supply in a microbial fermentation model
NASA Astrophysics Data System (ADS)
Kasbawati, Gunawan, A. Y.; Sidarto, K. A.; Hertadi, R.
2015-09-01
Strategy of glucose supply to achieve an optimal productivity of ethanol production of a yeast cell is one of the main features in a microbial fermentation process. Beside a known continuous glucose supply, in this study we consider a new supply strategy so called the on-off supply. An optimal control theory is applied to the fermentation system to find the optimal rate of glucose supply and time of supply. The optimization problem is solved numerically using Differential Evolutionary algorithm. We find two alternative solutions that we can choose to get the similar result: either long period process with low supply or short period process with high glucose supply.
ERIC Educational Resources Information Center
Lee, Seong-Soo
1982-01-01
Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…
A neural network strategy for end-point optimization of batch processes.
Krothapally, M; Palanki, S
1999-01-01
The traditional way of operating batch processes has been to utilize an open-loop "golden recipe". However, there can be substantial batch to batch variation in process conditions and this open-loop strategy can lead to non-optimal operation. In this paper, a new approach is presented for end-point optimization of batch processes by utilizing neural networks. This strategy involves the training of two neural networks; one to predict switching times and the other to predict the input profile in the singular region. This approach alleviates the computational problems associated with the classical Pontryagin's approach and the nonlinear programming approach. The efficacy of this scheme is illustrated via simulation of a fed-batch fermentation.
Emergency strategy optimization for the environmental control system in manned spacecraft
NASA Astrophysics Data System (ADS)
Li, Guoxiang; Pang, Liping; Liu, Meng; Fang, Yufeng; Zhang, Helin
2018-02-01
It is very important for a manned environmental control system (ECS) to be able to reconfigure its operation strategy in emergency conditions. In this article, a multi-objective optimization is established to design the optimal emergency strategy for an ECS in an insufficient power supply condition. The maximum ECS lifetime and the minimum power consumption are chosen as the optimization objectives. Some adjustable key variables are chosen as the optimization variables, which finally represent the reconfigured emergency strategy. The non-dominated sorting genetic algorithm-II is adopted to solve this multi-objective optimization problem. Optimization processes are conducted at four different carbon dioxide partial pressure control levels. The study results show that the Pareto-optimal frontiers obtained from this multi-objective optimization can represent the relationship between the lifetime and the power consumption of the ECS. Hence, the preferred emergency operation strategy can be recommended for situations when there is suddenly insufficient power.
Methods for Maximizing the Learning Process: A Theoretical and Experimental Analysis.
ERIC Educational Resources Information Center
Atkinson, Richard C.
This research deals with optimizing the instructional process. The approach adopted was to limit consideration to simple learning tasks for which adequate mathematical models could be developed. Optimal or suitable suboptimal instructional strategies were developed for the models. The basic idea was to solve for strategies that either maximize the…
Capital dissipation minimization for a class of complex irreversible resource exchange processes
NASA Astrophysics Data System (ADS)
Xia, Shaojun; Chen, Lingen
2017-05-01
A model of a class of irreversible resource exchange processes (REPes) between a firm and a producer with commodity flow leakage from the producer to a competitive market is established in this paper. The REPes are assumed to obey the linear commodity transfer law (LCTL). Optimal price paths for capital dissipation minimization (CDM) (it can measure economic process irreversibility) are obtained. The averaged optimal control theory is used. The optimal REP strategy is also compared with other strategies, such as constant-firm-price operation and constant-commodity-flow operation, and effects of the amount of commodity transferred and the commodity flow leakage on the optimal REP strategy are also analyzed. The commodity prices of both the producer and the firm for the CDM of the REPes with commodity flow leakage change with the time exponentially.
Convexity of Ruin Probability and Optimal Dividend Strategies for a General Lévy Process
Yuen, Kam Chuen; Shen, Ying
2015-01-01
We consider the optimal dividends problem for a company whose cash reserves follow a general Lévy process with certain positive jumps and arbitrary negative jumps. The objective is to find a policy which maximizes the expected discounted dividends until the time of ruin. Under appropriate conditions, we use some recent results in the theory of potential analysis of subordinators to obtain the convexity properties of probability of ruin. We present conditions under which the optimal dividend strategy, among all admissible ones, takes the form of a barrier strategy. PMID:26351655
NASA Astrophysics Data System (ADS)
Saavedra, Juan Alejandro
Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls, decision-making process of reworking, repairing, or scraping defective product. This study proposes an optimal QC methodology in order to include rework stations during the manufacturing process by identifying the amount and location of these workstations. The factors that are considered to optimize these stations are cost, cycle time, reworkability and rework benefit. The goal is to minimize the cost and cycle time of the process, but increase the reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology to identify quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost. The cost estimation model developed allows the user to calculate product direct cost as the quality sigma level of the process changes. This provides a benefit because a complete cost estimation calculation does not need to be performed every time the processes yield changes. This cost estimation model is then used for the QC strategy optimization process. In order to propose a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening Design of Experiments (DOE) was performed on seven initial factors and identified 3 significant factors. It reflected that one response variable was not required for the optimization process. A full factorial DOE was estimated in order to verify the significant factors obtained previously. The QC strategy optimization is performed through a Genetic Algorithm (GA) which allows the evaluation of several solutions in order to obtain feasible optimal solutions. The GA evaluates possible solutions based on cost, cycle time, reworkability and rework benefit. Finally it provides several possible solutions because this is a multi-objective optimization problem. The solutions are presented as chromosomes that clearly state the amount and location of the rework stations. The user analyzes these solutions in order to select one by deciding which of the four factors considered is most important depending on the product being manufactured or the company's objective. The major contribution of this study is to provide the user with a methodology used to identify an effective and optimal QC strategy that incorporates the number and location of rework substations in order to minimize direct product cost, and cycle time, and maximize reworkability, and rework benefit.
An optimizing start-up strategy for a bio-methanator.
Sbarciog, Mihaela; Loccufier, Mia; Vande Wouwer, Alain
2012-05-01
This paper presents an optimizing start-up strategy for a bio-methanator. The goal of the control strategy is to maximize the outflow rate of methane in anaerobic digestion processes, which can be described by a two-population model. The methodology relies on a thorough analysis of the system dynamics and involves the solution of two optimization problems: steady-state optimization for determining the optimal operating point and transient optimization. The latter is a classical optimal control problem, which can be solved using the maximum principle of Pontryagin. The proposed control law is of the bang-bang type. The process is driven from an initial state to a small neighborhood of the optimal steady state by switching the manipulated variable (dilution rate) from the minimum to the maximum value at a certain time instant. Then the dilution rate is set to the optimal value and the system settles down in the optimal steady state. This control law ensures the convergence of the system to the optimal steady state and substantially increases its stability region. The region of attraction of the steady state corresponding to maximum production of methane is considerably enlarged. In some cases, which are related to the possibility of selecting the minimum dilution rate below a certain level, the stability region of the optimal steady state equals the interior of the state space. Aside its efficiency, which is evaluated not only in terms of biogas production but also from the perspective of treatment of the organic load, the strategy is also characterized by simplicity, being thus appropriate for implementation in real-life systems. Another important advantage is its generality: this technique may be applied to any anaerobic digestion process, for which the acidogenesis and methanogenesis are, respectively, characterized by Monod and Haldane kinetics.
Interrupted monitoring of a stochastic process
NASA Technical Reports Server (NTRS)
Palmer, E.
1977-01-01
Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and the other tasks affect the optimal strategies. The optimum strategy is also compared to the strategies used by subjects in a pilot experiment.
Smith, David R.; McRae, Sarah E.; Augspurger, Tom; Ratcliffe, Judith A.; Nichols, Robert B.; Eads, Chris B.; Savidge, Tim; Bogan, Arthur E.
2015-01-01
We used a structured decision-making process to develop conservation strategies to increase persistence of Dwarf Wedgemussel (Alasmidonta heterodon) in North Carolina, USA, while accounting for uncertainty in management effectiveness and considering costs. Alternative conservation strategies were portfolios of management actions that differed by location of management actions on the landscape. Objectives of the conservation strategy were to maximize species persistence, maintain genetic diversity, maximize public support, and minimize management costs. We compared 4 conservation strategies: 1) the ‘status quo’ strategy represented current management, 2) the ‘protect the best’ strategy focused on protecting the best populations in the Tar River basin, 3) the ‘expand the distribution’ strategy focused on management of extant populations and establishment of new populations in the Neuse River basin, and 4) the ‘hybrid’ strategy combined elements of each strategy to balance conservation in the Tar and Neuse River basins. A population model informed requirements for population management, and experts projected performance of alternative strategies over a 20-y period. The optimal strategy depended on the relative value placed on competing objectives, which can vary among stakeholders. The protect the best and hybrid strategies were optimal across a wide range of relative values with 2 exceptions: 1) if minimizing management cost was of overriding concern, then status quo was optimal, or 2) if maximizing population persistence in the Neuse River basin was emphasized, then expand the distribution strategy was optimal. The optimal strategy was robust to uncertainty in management effectiveness. Overall, the structured decision process can help identify the most promising strategies for endangered species conservation that maximize conservation benefit given the constraint of limited funding.
2016-09-01
PUBLIC SECTOR RESEARCH & DEVELOPMENT PORTFOLIO SELECTION PROCESS: A CASE STUDY OF QUANTITATIVE SELECTION AND OPTIMIZATION by Jason A. Schwartz...PUBLIC SECTOR RESEARCH & DEVELOPMENT PORTFOLIO SELECTION PROCESS: A CASE STUDY OF QUANTITATIVE SELECTION AND OPTIMIZATION 5. FUNDING NUMBERS 6...describing how public sector organizations can implement a research and development (R&D) portfolio optimization strategy to maximize the cost
Optimization Control of the Color-Coating Production Process for Model Uncertainty
He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong
2016-01-01
Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
Optimization Control of the Color-Coating Production Process for Model Uncertainty.
He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong
2016-01-01
Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results.
Stochastic optimization algorithms for barrier dividend strategies
NASA Astrophysics Data System (ADS)
Yin, G.; Song, Q. S.; Yang, H.
2009-01-01
This work focuses on finding optimal barrier policy for an insurance risk model when the dividends are paid to the share holders according to a barrier strategy. A new approach based on stochastic optimization methods is developed. Compared with the existing results in the literature, more general surplus processes are considered. Precise models of the surplus need not be known; only noise-corrupted observations of the dividends are used. Using barrier-type strategies, a class of stochastic optimization algorithms are developed. Convergence of the algorithm is analyzed; rate of convergence is also provided. Numerical results are reported to demonstrate the performance of the algorithm.
Hierarchical optimal control of large-scale nonlinear chemical processes.
Ramezani, Mohammad Hossein; Sadati, Nasser
2009-01-01
In this paper, a new approach is presented for optimal control of large-scale chemical processes. In this approach, the chemical process is decomposed into smaller sub-systems at the first level, and a coordinator at the second level, for which a two-level hierarchical control strategy is designed. For this purpose, each sub-system in the first level can be solved separately, by using any conventional optimization algorithm. In the second level, the solutions obtained from the first level are coordinated using a new gradient-type strategy, which is updated by the error of the coordination vector. The proposed algorithm is used to solve the optimal control problem of a complex nonlinear chemical stirred tank reactor (CSTR), where its solution is also compared with the ones obtained using the centralized approach. The simulation results show the efficiency and the capability of the proposed hierarchical approach, in finding the optimal solution, over the centralized method.
Long-Run Savings and Investment Strategy Optimization
Gerrard, Russell; Guillén, Montserrat; Pérez-Marín, Ana M.
2014-01-01
We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained, which follows from constant rather than relative absolute risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on downside risk-adjusted equivalence that is used in our illustration. PMID:24711728
Long-run savings and investment strategy optimization.
Gerrard, Russell; Guillén, Montserrat; Nielsen, Jens Perch; Pérez-Marín, Ana M
2014-01-01
We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained, which follows from constant rather than relative absolute risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on downside risk-adjusted equivalence that is used in our illustration.
Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process
NASA Astrophysics Data System (ADS)
Yan, Wei; Chang, Yuwen
2016-12-01
Considering the stochastic exchange rate, this paper is concerned with the dynamic portfolio selection in financial market. The optimal investment problem is formulated as a continuous-time mathematical model under mean-variance criterion. These processes follow jump-diffusion processes (Weiner process and Poisson process). Then the corresponding Hamilton-Jacobi-Bellman(HJB) equation of the problem is presented and its efferent frontier is obtained. Moreover, the optimal strategy is also derived under safety-first criterion.
NASA Astrophysics Data System (ADS)
Tan, Yang; Srinivasan, Vasudevan; Nakamura, Toshio; Sampath, Sanjay; Bertrand, Pierre; Bertrand, Ghislaine
2012-09-01
The properties and performance of plasma-sprayed thermal barrier coatings (TBCs) are strongly dependent on the microstructural defects, which are affected by starting powder morphology and processing conditions. Of particular interest is the use of hollow powders which not only allow for efficient melting of zirconia ceramics but also produce lower conductivity and more compliant coatings. Typical industrial hollow spray powders have an assortment of densities resulting in masking potential advantages of the hollow morphology. In this study, we have conducted process mapping strategies using a novel uniform shell thickness hollow powder to control the defect microstructure and properties. Correlations among coating properties, microstructure, and processing reveal feasibility to produce highly compliant and low conductivity TBC through a combination of optimized feedstock and processing conditions. The results are presented through the framework of process maps establishing correlations among process, microstructure, and properties and providing opportunities for optimization of TBCs.
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Datta, Bithin
2011-07-01
Overexploitation of the coastal aquifers results in saltwater intrusion. Once saltwater intrusion occurs, it involves huge cost and long-term remediation measures to remediate these contaminated aquifers. Hence, it is important to have strategies for the sustainable use of coastal aquifers. This study develops a methodology for the optimal management of saltwater intrusion prone aquifers. A linked simulation-optimization-based management strategy is developed. The methodology uses genetic-programming-based models for simulating the aquifer processes, which is then linked to a multi-objective genetic algorithm to obtain optimal management strategies in terms of groundwater extraction from potential well locations in the aquifer.
An interval programming model for continuous improvement in micro-manufacturing
NASA Astrophysics Data System (ADS)
Ouyang, Linhan; Ma, Yizhong; Wang, Jianjun; Tu, Yiliu; Byun, Jai-Hyun
2018-03-01
Continuous quality improvement in micro-manufacturing processes relies on optimization strategies that relate an output performance to a set of machining parameters. However, when determining the optimal machining parameters in a micro-manufacturing process, the economics of continuous quality improvement and decision makers' preference information are typically neglected. This article proposes an economic continuous improvement strategy based on an interval programming model. The proposed strategy differs from previous studies in two ways. First, an interval programming model is proposed to measure the quality level, where decision makers' preference information is considered in order to determine the weight of location and dispersion effects. Second, the proposed strategy is a more flexible approach since it considers the trade-off between the quality level and the associated costs, and leaves engineers a larger decision space through adjusting the quality level. The proposed strategy is compared with its conventional counterparts using an Nd:YLF laser beam micro-drilling process.
Zhang, Litao; Cvijic, Mary Ellen; Lippy, Jonathan; Myslik, James; Brenner, Stephen L; Binnie, Alastair; Houston, John G
2012-07-01
In this paper, we review the key solutions that enabled evolution of the lead optimization screening support process at Bristol-Myers Squibb (BMS) between 2004 and 2009. During this time, technology infrastructure investment and scientific expertise integration laid the foundations to build and tailor lead optimization screening support models across all therapeutic groups at BMS. Together, harnessing advanced screening technology platforms and expanding panel screening strategy led to a paradigm shift at BMS in supporting lead optimization screening capability. Parallel SAR and structure liability relationship (SLR) screening approaches were first and broadly introduced to empower more-rapid and -informed decisions about chemical synthesis strategy and to broaden options for identifying high-quality drug candidates during lead optimization. Copyright © 2012 Elsevier Ltd. All rights reserved.
Multi-objective optimization of chromatographic rare earth element separation.
Knutson, Hans-Kristian; Holmqvist, Anders; Nilsson, Bernt
2015-10-16
The importance of rare earth elements in modern technological industry grows, and as a result the interest for developing separation processes increases. This work is a part of developing chromatography as a rare earth element processing method. Process optimization is an important step in process development, and there are several competing objectives that need to be considered in a chromatographic separation process. Most studies are limited to evaluating the two competing objectives productivity and yield, and studies of scenarios with tri-objective optimizations are scarce. Tri-objective optimizations are much needed when evaluating the chromatographic separation of rare earth elements due to the importance of product pool concentration along with productivity and yield as process objectives. In this work, a multi-objective optimization strategy considering productivity, yield and pool concentration is proposed. This was carried out in the frame of a model based optimization study on a batch chromatography separation of the rare earth elements samarium, europium and gadolinium. The findings from the multi-objective optimization were used to provide with a general strategy for achieving desirable operation points, resulting in a productivity ranging between 0.61 and 0.75 kgEu/mcolumn(3), h(-1) and a pool concentration between 0.52 and 0.79 kgEu/m(3), while maintaining a purity above 99% and never falling below an 80% yield for the main target component europium. Copyright © 2015 Elsevier B.V. All rights reserved.
A Competitive and Experiential Assignment in Search Engine Optimization Strategy
ERIC Educational Resources Information Center
Clarke, Theresa B.; Clarke, Irvine, III
2014-01-01
Despite an increase in ad spending and demand for employees with expertise in search engine optimization (SEO), methods for teaching this important marketing strategy have received little coverage in the literature. Using Bloom's cognitive goals hierarchy as a framework, this experiential assignment provides a process for educators who may be new…
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.
Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method
NASA Astrophysics Data System (ADS)
Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen
2008-03-01
The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.
Modeling joint restoration strategies for interdependent infrastructure systems.
Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing structure of interdependent infrastructure system and (ii) their interaction process, are presented. Both models are considering the failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model for determining an optimal joint restoration strategy at infrastructure component level by minimizing the economic loss from the infrastructure failures, is proposed. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high level failures in interdependent systems; the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers to understand the mechanisms of infrastructure interactions and search for optimal joint restoration strategy, which can significantly enhance safety of infrastructure systems.
Ghose, Sanchayita; Nagrath, Deepak; Hubbard, Brian; Brooks, Clayton; Cramer, Steven M
2004-01-01
The effect of an alternate strategy employing two different flowrates during loading was explored as a means of increasing system productivity in Protein-A chromatography. The effect of such a loading strategy was evaluated using a chromatographic model that was able to accurately predict experimental breakthrough curves for this Protein-A system. A gradient-based optimization routine is carried out to establish the optimal loading conditions (initial and final flowrates and switching time). The two-step loading strategy (using a higher flowrate during the initial stages followed by a lower flowrate) was evaluated for an Fc-fusion protein and was found to result in significant improvements in process throughput. In an extension of this optimization routine, dynamic loading capacity and productivity were simultaneously optimized using a weighted objective function, and this result was compared to that obtained with the single flowrate. Again, the dual-flowrate strategy was found to be superior.
Rand, Miya K; Shimansky, Yury P
2013-03-01
A quantitative model of optimal transport-aperture coordination (TAC) during reach-to-grasp movements has been developed in our previous studies. The utilization of that model for data analysis allowed, for the first time, to examine the phase dependence of the precision demand specified by the CNS for neurocomputational information processing during an ongoing movement. It was shown that the CNS utilizes a two-phase strategy for movement control. That strategy consists of reducing the precision demand for neural computations during the initial phase, which decreases the cost of information processing at the expense of lower extent of control optimality. To successfully grasp the target object, the CNS increases precision demand during the final phase, resulting in higher extent of control optimality. In the present study, we generalized the model of optimal TAC to a model of optimal coordination between X and Y components of point-to-point planar movements (XYC). We investigated whether the CNS uses the two-phase control strategy for controlling those movements, and how the strategy parameters depend on the prescribed movement speed, movement amplitude and the size of the target area. The results indeed revealed a substantial similarity between the CNS's regulation of TAC and XYC. First, the variability of XYC within individual trials was minimal, meaning that execution noise during the movement was insignificant. Second, the inter-trial variability of XYC was considerable during the majority of the movement time, meaning that the precision demand for information processing was lowered, which is characteristic for the initial phase. That variability significantly decreased, indicating higher extent of control optimality, during the shorter final movement phase. The final phase was the longest (shortest) under the most (least) challenging combination of speed and accuracy requirements, fully consistent with the concept of the two-phase control strategy. This paper further discussed the relationship between motor variability and XYC variability.
GMOtrack: generator of cost-effective GMO testing strategies.
Novak, Petra Krau; Gruden, Kristina; Morisset, Dany; Lavrac, Nada; Stebih, Dejan; Rotter, Ana; Zel, Jana
2009-01-01
Commercialization of numerous genetically modified organisms (GMOs) has already been approved worldwide, and several additional GMOs are in the approval process. Many countries have adopted legislation to deal with GMO-related issues such as food safety, environmental concerns, and consumers' right of choice, making GMO traceability a necessity. The growing extent of GMO testing makes it important to study optimal GMO detection and identification strategies. This paper formally defines the problem of routine laboratory-level GMO tracking as a cost optimization problem, thus proposing a shift from "the same strategy for all samples" to "sample-centered GMO testing strategies." An algorithm (GMOtrack) for finding optimal two-phase (screening-identification) testing strategies is proposed. The advantages of cost optimization with increasing GMO presence on the market are demonstrated, showing that optimization approaches to analytic GMO traceability can result in major cost reductions. The optimal testing strategies are laboratory-dependent, as the costs depend on prior probabilities of local GMO presence, which are exemplified on food and feed samples. The proposed GMOtrack approach, publicly available under the terms of the General Public License, can be extended to other domains where complex testing is involved, such as safety and quality assurance in the food supply chain.
Transaction fees and optimal rebalancing in the growth-optimal portfolio
NASA Astrophysics Data System (ADS)
Feng, Yu; Medo, Matúš; Zhang, Liang; Zhang, Yi-Cheng
2011-05-01
The growth-optimal portfolio optimization strategy pioneered by Kelly is based on constant portfolio rebalancing which makes it sensitive to transaction fees. We examine the effect of fees on an example of a risky asset with a binary return distribution and show that the fees may give rise to an optimal period of portfolio rebalancing. The optimal period is found analytically in the case of lognormal returns. This result is consequently generalized and numerically verified for broad return distributions and returns generated by a GARCH process. Finally we study the case when investment is rebalanced only partially and show that this strategy can improve the investment long-term growth rate more than optimization of the rebalancing period.
Availability Control for Means of Transport in Decisive Semi-Markov Models of Exploitation Process
NASA Astrophysics Data System (ADS)
Migawa, Klaudiusz
2012-12-01
The issues presented in this research paper refer to problems connected with the control process for exploitation implemented in the complex systems of exploitation for technical objects. The article presents the description of the method concerning the control availability for technical objects (means of transport) on the basis of the mathematical model of the exploitation process with the implementation of the decisive processes by semi-Markov. The presented method means focused on the preparing the decisive for the exploitation process for technical objects (semi-Markov model) and after that specifying the best control strategy (optimal strategy) from among possible decisive variants in accordance with the approved criterion (criteria) of the activity evaluation of the system of exploitation for technical objects. In the presented method specifying the optimal strategy for control availability in the technical objects means a choice of a sequence of control decisions made in individual states of modelled exploitation process for which the function being a criterion of evaluation reaches the extreme value. In order to choose the optimal control strategy the implementation of the genetic algorithm was chosen. The opinions were presented on the example of the exploitation process of the means of transport implemented in the real system of the bus municipal transport. The model of the exploitation process for the means of transports was prepared on the basis of the results implemented in the real transport system. The mathematical model of the exploitation process was built taking into consideration the fact that the model of the process constitutes the homogenous semi-Markov process.
Influence of signal processing strategy in auditory abilities.
Melo, Tatiana Mendes de; Bevilacqua, Maria Cecília; Costa, Orozimbo Alves; Moret, Adriane Lima Mortari
2013-01-01
The signal processing strategy is a parameter that may influence the auditory performance of cochlear implant and is important to optimize this parameter to provide better speech perception, especially in difficult listening situations. To evaluate the individual's auditory performance using two different signal processing strategy. Prospective study with 11 prelingually deafened children with open-set speech recognition. A within-subjects design was used to compare performance with standard HiRes and HiRes 120 in three different moments. During test sessions, subject's performance was evaluated by warble-tone sound-field thresholds, speech perception evaluation, in quiet and in noise. In the silence, children S1, S4, S5, S7 showed better performance with the HiRes 120 strategy and children S2, S9, S11 showed better performance with the HiRes strategy. In the noise was also observed that some children performed better using the HiRes 120 strategy and other with HiRes. Not all children presented the same pattern of response to the different strategies used in this study, which reinforces the need to look at optimizing cochlear implant clinical programming.
Improving care coordination in the specialty referral process between primary and specialty care.
Lin, Caroline Y
2012-01-01
There is growing evidence of sub-optimal care coordination in the US. Care coordination includes the specialty referral process, which involves referral decision-making and information transfer between primary and specialty care. This article summarizes the evidence of sub-optimal care coordination in this process, as well as potential strategies to improve it.
A study of palm biomass processing strategy in Sarawak
NASA Astrophysics Data System (ADS)
Lee, S. J. Y.; Ng, W. P. Q.; Law, K. H.
2017-06-01
In the past decades, palm industry is booming due to its profitable nature. An environmental concern regarding on the palm industry is the enormous amount of waste produced from palm industry. The waste produced or palm biomass is one significant renewable energy source and raw material for value-added products like fiber mats, activated carbon, dried fiber, bio-fertilizer and et cetera in Malaysia. There is a need to establish the palm biomass industry for the recovery of palm biomass for efficient utilization and waste reduction. The development of the industry is strongly depending on the two reasons, the availability and supply consistency of palm biomass as well as the availability of palm biomass processing facilities. In Malaysia, the development of palm biomass industry is lagging due to the lack of mature commercial technology and difficult logistic planning as a result of scattered locality of palm oil mill, where palm biomass is generated. Two main studies have been carried out in this research work: i) industrial study of the feasibility of decentralized and centralized palm biomass processing in Sarawak and ii) development of a systematic and optimized palm biomass processing planning for the development of palm biomass industry in Sarawak, Malaysia. Mathematical optimization technique is used in this work to model the above case scenario for biomass processing to achieve maximum economic potential and resource feasibility. An industrial study of palm biomass processing strategy in Sarawak has been carried out to evaluate the optimality of centralized processing and decentralize processing of the local biomass industry. An optimal biomass processing strategy is achieved.
Carius, Lisa; Rumschinski, Philipp; Faulwasser, Timm; Flockerzi, Dietrich; Grammel, Hartmut; Findeisen, Rolf
2014-04-01
Microaerobic (oxygen-limited) conditions are critical for inducing many important microbial processes in industrial or environmental applications. At very low oxygen concentrations, however, the process performance often suffers from technical limitations. Available dissolved oxygen measurement techniques are not sensitive enough and thus control techniques, that can reliable handle these conditions, are lacking. Recently, we proposed a microaerobic process control strategy, which overcomes these restrictions and allows to assess different degrees of oxygen limitation in bioreactor batch cultivations. Here, we focus on the design of a control strategy for the automation of oxygen-limited continuous cultures using the microaerobic formation of photosynthetic membranes (PM) in Rhodospirillum rubrum as model phenomenon. We draw upon R. rubrum since the considered phenomenon depends on the optimal availability of mixed-carbon sources, hence on boundary conditions which make the process performance challenging. Empirically assessing these specific microaerobic conditions is scarcely practicable as such a process reacts highly sensitive to changes in the substrate composition and the oxygen availability in the culture broth. Therefore, we propose a model-based process control strategy which allows to stabilize steady-states of cultures grown under these conditions. As designing the appropriate strategy requires a detailed knowledge of the system behavior, we begin by deriving and validating an unstructured process model. This model is used to optimize the experimental conditions, and identify properties of the system which are critical for process performance. The derived model facilitates the good process performance via the proposed optimal control strategy. In summary the presented model-based control strategy allows to access and maintain microaerobic steady-states of interest and to precisely and efficiently transfer the culture from one stable microaerobic steady-state into another. Therefore, the presented approach is a valuable tool to study regulatory mechanisms of microaerobic phenomena in response to oxygen limitation alone. Biotechnol. Bioeng. 2014;111: 734-747. © 2013 Wiley Periodicals, Inc. © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Bellingeri, Michele; Agliari, Elena; Cassi, Davide
2015-10-01
The best strategy to immunize a complex network is usually evaluated in terms of the percolation threshold, i.e. the number of vaccine doses which make the largest connected cluster (LCC) vanish. The strategy inducing the minimum percolation threshold represents the optimal way to immunize the network. Here we show that the efficacy of the immunization strategies can change during the immunization process. This means that, if the number of doses is limited, the best strategy is not necessarily the one leading to the smallest percolation threshold. This outcome should warn about the adoption of global measures in order to evaluate the best immunization strategy.
[Imaging center - optimization of the imaging process].
Busch, H-P
2013-04-01
Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success but also of the costs of treatment. In routine work an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of the capacity without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided, only if in addition to the optimization of single exams (efficiency) there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.
Optimal policy for value-based decision-making.
Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre
2016-08-18
For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
Optimal policy for value-based decision-making
Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre
2016-01-01
For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down. PMID:27535638
Optimizing diffusion of an online computer tailored lifestyle program: a study protocol.
Schneider, Francine; van Osch, Liesbeth A D M; Kremers, Stef P J; Schulz, Daniela N; van Adrichem, Mathieu J G; de Vries, Hein
2011-06-20
Although the Internet is a promising medium to offer lifestyle interventions to large amounts of people at relatively low costs and effort, actual exposure rates of these interventions fail to meet the high expectations. Since public health impact of interventions is determined by intervention efficacy and level of exposure to the intervention, it is imperative to put effort in optimal dissemination. The present project attempts to optimize the dissemination process of a new online computer tailored generic lifestyle program by carefully studying the adoption process and developing a strategy to achieve sustained use of the program. A prospective study will be conducted to yield relevant information concerning the adoption process by studying the level of adoption of the program, determinants involved in adoption and characteristics of adopters and non-adopters as well as satisfied and unsatisfied users. Furthermore, a randomized control trial will be conducted to the test the effectiveness of a proactive strategy using periodic e-mail prompts in optimizing sustained use of the new program. Closely mapping the adoption process will gain insight in characteristics of adopters and non-adopters and satisfied and unsatisfied users. This insight can be used to further optimize the program by making it more suitable for a wider range of users, or to develop adjusted interventions to attract subgroups of users that are not reached or satisfied with the initial intervention. Furthermore, by studying the effect of a proactive strategy using period prompts compared to a reactive strategy to stimulate sustained use of the intervention and, possibly, behaviour change, specific recommendations on the use and the application of prompts in online lifestyle interventions can be developed. Dutch Trial Register NTR1786 and Medical Ethics Committee of Maastricht University and the University Hospital Maastricht (NL2723506809/MEC0903016).
Pittig, Andre; van den Berg, Linda; Vervliet, Bram
2016-01-01
Extinction learning is a major mechanism for fear reduction by means of exposure. Current research targets innovative strategies to enhance fear extinction and thereby optimize exposure-based treatments for anxiety disorders. This selective review updates novel behavioral strategies that may provide cutting-edge clinical implications. Recent studies provide further support for two types of enhancement strategies. Procedural enhancement strategies implemented during extinction training translate to how exposure exercises may be conducted to optimize fear extinction. These strategies mostly focus on a maximized violation of dysfunctional threat expectancies and on reducing context and stimulus specificity of extinction learning. Flanking enhancement strategies target periods before and after extinction training and inform optimal preparation and post-processing of exposure exercises. These flanking strategies focus on the enhancement of learning in general, memory (re-)consolidation, and memory retrieval. Behavioral strategies to enhance fear extinction may provide powerful clinical applications to further maximize the efficacy of exposure-based interventions. However, future replications, mechanistic examinations, and translational studies are warranted to verify long-term effects and naturalistic utility. Future directions also comprise the interplay of optimized fear extinction with (avoidance) behavior and motivational antecedents of exposure.
Energy-saving management modelling and optimization for lead-acid battery formation process
NASA Astrophysics Data System (ADS)
Wang, T.; Chen, Z.; Xu, J. Y.; Wang, F. Y.; Liu, H. M.
2017-11-01
In this context, a typical lead-acid battery producing process is introduced. Based on the formation process, an efficiency management method is proposed. An optimization model with the objective to minimize the formation electricity cost in a single period is established. This optimization model considers several related constraints, together with two influencing factors including the transformation efficiency of IGBT charge-and-discharge machine and the time-of-use price. An example simulation is shown using PSO algorithm to solve this mathematic model, and the proposed optimization strategy is proved to be effective and learnable for energy-saving and efficiency optimization in battery producing industries.
Modeling joint restoration strategies for interdependent infrastructure systems
Simonovic, Slobodan P.
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing structure of interdependent infrastructure system and (ii) their interaction process, are presented. Both models are considering the failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model for determining an optimal joint restoration strategy at infrastructure component level by minimizing the economic loss from the infrastructure failures, is proposed. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high level failures in interdependent systems; the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers to understand the mechanisms of infrastructure interactions and search for optimal joint restoration strategy, which can significantly enhance safety of infrastructure systems. PMID:29649300
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiebenga, J. H.; Atzema, E. H.; Boogaard, A. H. van den
Robust design of forming processes using numerical simulations is gaining attention throughout the industry. In this work, it is demonstrated how robust optimization can assist in further stretching the limits of metal forming processes. A deterministic and a robust optimization study are performed, considering a stretch-drawing process of a hemispherical cup product. For the robust optimization study, both the effect of material and process scatter are taken into account. For quantifying the material scatter, samples of 41 coils of a drawing quality forming steel have been collected. The stochastic material behavior is obtained by a hybrid approach, combining mechanical testingmore » and texture analysis, and efficiently implemented in a metamodel based optimization strategy. The deterministic and robust optimization results are subsequently presented and compared, demonstrating an increased process robustness and decreased number of product rejects by application of the robust optimization approach.« less
Bare-Bones Teaching-Learning-Based Optimization
Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye
2014-01-01
Teaching-learning-based optimization (TLBO) algorithm which simulates the teaching-learning process of the class room is one of the recently proposed swarm intelligent (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve the global optimization problems. In this method, each learner of teacher phase employs an interactive learning strategy, which is the hybridization of the learning strategy of teacher phase in the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner of learner phase employs the learning strategy of learner phase in the standard TLBO or the new neighborhood search strategy. To verify the performance of our approaches, 20 benchmark functions and two real-world problems are utilized. Conducted experiments can been observed that the BBTLBO performs significantly better than, or at least comparable to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive to some other optimization algorithms. PMID:25013844
Bare-bones teaching-learning-based optimization.
Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye
2014-01-01
Teaching-learning-based optimization (TLBO) algorithm which simulates the teaching-learning process of the class room is one of the recently proposed swarm intelligent (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve the global optimization problems. In this method, each learner of teacher phase employs an interactive learning strategy, which is the hybridization of the learning strategy of teacher phase in the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner of learner phase employs the learning strategy of learner phase in the standard TLBO or the new neighborhood search strategy. To verify the performance of our approaches, 20 benchmark functions and two real-world problems are utilized. Conducted experiments can been observed that the BBTLBO performs significantly better than, or at least comparable to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive to some other optimization algorithms.
Establishment of an immortalized mouse dermal papilla cell strain with optimized culture strategy.
Guo, Haiying; Xing, Yizhan; Zhang, Yiming; He, Long; Deng, Fang; Ma, Xiaogen; Li, Yuhong
2018-01-01
Dermal papilla (DP) plays important roles in hair follicle regeneration. Long-term culture of mouse DP cells can provide enough cells for research and application of DP cells. We optimized the culture strategy for DP cells from three dimensions: stepwise dissection, collagen I coating, and optimized culture medium. Based on the optimized culture strategy, we immortalized primary DP cells with SV40 large T antigen, and established several immortalized DP cell strains. By comparing molecular expression and morphologic characteristics with primary DP cells, we found one cell strain named iDP6 was similar with primary DP cells. Further identifications illustrate that iDP6 expresses FGF7 and α-SMA, and has activity of alkaline phosphatase. During the process of characterization of immortalized DP cell strains, we also found that cells in DP were heterogeneous. We successfully optimized culture strategy for DP cells, and established an immortalized DP cell strain suitable for research and application of DP cells.
Establishment of an immortalized mouse dermal papilla cell strain with optimized culture strategy
Zhang, Yiming; He, Long; Deng, Fang; Ma, Xiaogen
2018-01-01
Dermal papilla (DP) plays important roles in hair follicle regeneration. Long-term culture of mouse DP cells can provide enough cells for research and application of DP cells. We optimized the culture strategy for DP cells from three dimensions: stepwise dissection, collagen I coating, and optimized culture medium. Based on the optimized culture strategy, we immortalized primary DP cells with SV40 large T antigen, and established several immortalized DP cell strains. By comparing molecular expression and morphologic characteristics with primary DP cells, we found one cell strain named iDP6 was similar with primary DP cells. Further identifications illustrate that iDP6 expresses FGF7 and α-SMA, and has activity of alkaline phosphatase. During the process of characterization of immortalized DP cell strains, we also found that cells in DP were heterogeneous. We successfully optimized culture strategy for DP cells, and established an immortalized DP cell strain suitable for research and application of DP cells. PMID:29383288
Quantitative learning strategies based on word networks
NASA Astrophysics Data System (ADS)
Zhao, Yue-Tian-Yi; Jia, Zi-Yang; Tang, Yong; Xiong, Jason Jie; Zhang, Yi-Cheng
2018-02-01
Learning English requires a considerable effort, but the way that vocabulary is introduced in textbooks is not optimized for learning efficiency. With the increasing population of English learners, learning process optimization will have significant impact and improvement towards English learning and teaching. The recent developments of big data analysis and complex network science provide additional opportunities to design and further investigate the strategies in English learning. In this paper, quantitative English learning strategies based on word network and word usage information are proposed. The strategies integrate the words frequency with topological structural information. By analyzing the influence of connected learned words, the learning weights for the unlearned words and dynamically updating of the network are studied and analyzed. The results suggest that quantitative strategies significantly improve learning efficiency while maintaining effectiveness. Especially, the optimized-weight-first strategy and segmented strategies outperform other strategies. The results provide opportunities for researchers and practitioners to reconsider the way of English teaching and designing vocabularies quantitatively by balancing the efficiency and learning costs based on the word network.
NASA Astrophysics Data System (ADS)
Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong
2017-06-01
In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to self-adaptively decide which hierarchy of the structure should be equivalent according to the critical buckling mode rapidly predicted by NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model is highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, the illustrative examples of the multilevel framework is carried out to demonstrate its efficiency and effectiveness to search for the global optimum result by contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy is indicated by compared with the single equivalent strategy.
Exploring the quantum speed limit with computer games
NASA Astrophysics Data System (ADS)
Sørensen, Jens Jakob W. H.; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F.
2016-04-01
Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. ‘Gamification’—the application of game elements in a non-game context—is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.
Exploring the quantum speed limit with computer games.
Sørensen, Jens Jakob W H; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F
2016-04-14
Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. 'Gamification'--the application of game elements in a non-game context--is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.
Peng, Ting; Sun, Xiaochun; Mumm, Rita H
2014-01-01
Multiple trait integration (MTI) is a multi-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) through backcross breeding. From a breeding standpoint, MTI involves four steps: single event introgression, event pyramiding, trait fixation, and version testing. This study explores the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events in the light of the overall goal of MTI of recovering equivalent performance in the finished hybrid conversion along with reliable expression of the value-added traits. Using the results to optimize single event introgression (Peng et al. Optimized breeding strategies for multiple trait integration: I. Minimizing linkage drag in single event introgression. Mol Breed, 2013) which produced single event conversions of recurrent parents (RPs) with ≤8 cM of residual non-recurrent parent (NRP) germplasm with ~1 cM of NRP germplasm in the 20 cM regions flanking the event, this study focused on optimizing process efficiency in the second and third steps in MTI: event pyramiding and trait fixation. Using computer simulation and probability theory, we aimed to (1) fit an optimal breeding strategy for pyramiding of eight events into the female RP and seven in the male RP, and (2) identify optimal breeding strategies for trait fixation to create a 'finished' conversion of each RP homozygous for all events. In addition, next-generation seed needs were taken into account for a practical approach to process efficiency. Building on work by Ishii and Yonezawa (Optimization of the marker-based procedures for pyramiding genes from multiple donor lines: I. Schedule of crossing between the donor lines. Crop Sci 47:537-546, 2007a), a symmetric crossing schedule for event pyramiding was devised for stacking eight (seven) events in a given RP. Options for trait fixation breeding strategies considered selfing and doubled haploid approaches to achieve homozygosity as well as seed chipping and tissue sampling approaches to facilitate genotyping. With selfing approaches, two generations of selfing rather than one for trait fixation (i.e. 'F2 enrichment' as per Bonnett et al. in Strategies for efficient implementation of molecular markers in wheat breeding. Mol Breed 15:75-85, 2005) were utilized to eliminate bottlenecking due to extremely low frequencies of desired genotypes in the population. The efficiency indicators such as total number of plants grown across generations, total number of marker data points, total number of generations, number of seeds sampled by seed chipping, number of plants requiring tissue sampling, and number of pollinations (i.e. selfing and crossing) were considered in comparisons of breeding strategies. A breeding strategy involving seed chipping and a two-generation selfing approach (SC + SELF) was determined to be the most efficient breeding strategy in terms of time to market and resource requirements. Doubled haploidy may have limited utility in trait fixation for MTI under the defined breeding scenario. This outcome paves the way for optimizing the last step in the MTI process, version testing, which involves hybridization of female and male RP conversions to create versions of the converted hybrid for performance evaluation and possible commercial release.
Validation of the procedures. [integrated multidisciplinary optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Mantay, Wayne R.
1989-01-01
Validation strategies are described for procedures aimed at improving the rotor blade design process through a multidisciplinary optimization approach. Validation of the basic rotor environment prediction tools and the overall rotor design are discussed.
Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S
2014-10-01
This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
Closed-loop optimization of chromatography column sizing strategies in biopharmaceutical manufacture
Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S
2014-01-01
BACKGROUND This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. RESULTS An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. CONCLUSION This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry. PMID:25506115
Modeling and Advanced Control for Sustainable Process Systems
This book chapter introduces a novel process systems engineering framework that integrates process control with sustainability assessment tools for the simultaneous evaluation and optimization of process operations. The implemented control strategy consists of a biologically-insp...
Optimization of cooling strategy and seeding by FBRM analysis of batch crystallization
NASA Astrophysics Data System (ADS)
Zhang, Dejiang; Liu, Lande; Xu, Shijie; Du, Shichao; Dong, Weibing; Gong, Junbo
2018-03-01
A method is presented for optimizing the cooling strategy and seed loading simultaneously. Focused beam reflectance measurement (FBRM) was used to determine the approximating optimal cooling profile. Using these results in conjunction with constant growth rate assumption, modified Mullin-Nyvlt trajectory could be calculated. This trajectory could suppress secondary nucleation and has the potential to control product's polymorph distribution. Comparing with linear and two step cooling, modified Mullin-Nyvlt trajectory have a larger size distribution and a better morphology. Based on the calculating results, the optimized seed loading policy was also developed. This policy could be useful for guiding the batch crystallization process.
Wang, Jie-sheng; Li, Shu-xia; Gao, Jie
2014-01-01
For meeting the real-time fault diagnosis and the optimization monitoring requirements of the polymerization kettle in the polyvinyl chloride resin (PVC) production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. Firstly, a mapping between the polymerization process data and the fault pattern is established by analyzing the production technology of polymerization kettle equipment. The particle swarm optimization (PSO) algorithm with a new dynamical adjustment method of inertial weights is adopted to optimize the structural parameters of SOM neural network. The fault pattern classification of the polymerization kettle equipment is to realize the nonlinear mapping from symptom set to fault set according to the given symptom set. Finally, the simulation experiments of fault diagnosis are conducted by combining with the industrial on-site historical data of the polymerization kettle and the simulation results show that the proposed PSO-SOM fault diagnosis strategy is effective.
Optimal Design of Material and Process Parameters in Powder Injection Molding
NASA Astrophysics Data System (ADS)
Ayad, G.; Barriere, T.; Gelin, J. C.; Song, J.; Liu, B.
2007-04-01
The paper is concerned with optimization and parametric identification for the different stages in Powder Injection Molding process that consists first in injection of powder mixture with polymer binder and then to the sintering of the resulting powders part by solid state diffusion. In the first part, one describes an original methodology to optimize the process and geometry parameters in injection stage based on the combination of design of experiments and an adaptive Response Surface Modeling. Then the second part of the paper describes the identification strategy that one proposes for the sintering stage, using the identification of sintering parameters from dilatometeric curves followed by the optimization of the sintering process. The proposed approaches are applied to the optimization of material and process parameters for manufacturing a ceramic femoral implant. One demonstrates that the proposed approach give satisfactory results.
Near-optimal integration of facial form and motion.
Dobs, Katharina; Ma, Wei Ji; Reddy, Leila
2017-09-08
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues proportional to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
Improving knowledge of garlic paste greening through the design of an experimental strategy.
Aguilar, Miguel; Rincón, Francisco
2007-12-12
The furthering of scientific knowledge depends in part upon the reproducibility of experimental results. When experimental conditions are not set with sufficient precision, the resulting background noise often leads to poorly reproduced and even faulty experiments. An example of the catastrophic consequences of this background noise can be found in the design of strategies for the development of solutions aimed at preventing garlic paste greening, where reported results are contradictory. To avoid such consequences, this paper presents a two-step strategy based on the concept of experimental design. In the first step, the critical factors inherent to the problem are identified, using a 2(III)(7-4) Plackett-Burman experimental design, from a list of seven apparent critical factors (ACF); subsequently, the critical factors thus identified are considered as the factors to be optimized (FO), and optimization is performed using a Box and Wilson experimental design to identify the stationary point of the system. Optimal conditions for preventing garlic greening are examined after analysis of the complex process of green-pigment development, which involves both chemical and enzymatic reactions and is strongly influenced by pH, with an overall pH optimum of 4.5. The critical step in the greening process is the synthesis of thiosulfinates (allicin) from cysteine sulfoxides (alliin). Cysteine inhibits the greening process at this critical stage; no greening precursors are formed in the presence of around 1% cysteine. However, the optimal conditions for greening prevention are very sensitive both to the type of garlic and to manufacturing conditions. This suggests that optimal solutions for garlic greening prevention should be sought on a case-by-case basis, using the strategy presented here.
Optimal structural design of the midship of a VLCC based on the strategy integrating SVM and GA
NASA Astrophysics Data System (ADS)
Sun, Li; Wang, Deyu
2012-03-01
In this paper a hybrid process of modeling and optimization, which integrates a support vector machine (SVM) and genetic algorithm (GA), was introduced to reduce the high time cost in structural optimization of ships. SVM, which is rooted in statistical learning theory and an approximate implementation of the method of structural risk minimization, can provide a good generalization performance in metamodeling the input-output relationship of real problems and consequently cuts down on high time cost in the analysis of real problems, such as FEM analysis. The GA, as a powerful optimization technique, possesses remarkable advantages for the problems that can hardly be optimized with common gradient-based optimization methods, which makes it suitable for optimizing models built by SVM. Based on the SVM-GA strategy, optimization of structural scantlings in the midship of a very large crude carrier (VLCC) ship was carried out according to the direct strength assessment method in common structural rules (CSR), which eventually demonstrates the high efficiency of SVM-GA in optimizing the ship structural scantlings under heavy computational complexity. The time cost of this optimization with SVM-GA has been sharply reduced, many more loops have been processed within a small amount of time and the design has been improved remarkably.
Comparison of Two Multidisciplinary Optimization Strategies for Launch-Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, R. D.; Powell, R. W.; Lepsch, R. A.; Stanley, D. O.; Kroo, I. M.
1995-01-01
The investigation focuses on development of a rapid multidisciplinary analysis and optimization capability for launch-vehicle design. Two multidisciplinary optimization strategies in which the analyses are integrated in different manners are implemented and evaluated for solution of a single-stage-to-orbit launch-vehicle design problem. Weights and sizing, propulsion, and trajectory issues are directly addressed in each optimization process. Additionally, the need to maintain a consistent vehicle model across the disciplines is discussed. Both solution strategies were shown to obtain similar solutions from two different starting points. These solutions suggests that a dual-fuel, single-stage-to-orbit vehicle with a dry weight of approximately 1.927 x 10(exp 5)lb, gross liftoff weight of 2.165 x 10(exp 6)lb, and length of 181 ft is attainable. A comparison of the two approaches demonstrates that treatment or disciplinary coupling has a direct effect on optimization convergence and the required computational effort. In comparison with the first solution strategy, which is of the general form typically used within the launch vehicle design community at present, the second optimization approach is shown to he 3-4 times more computationally efficient.
NASA Astrophysics Data System (ADS)
Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta
2016-06-01
With the increased trend in automation of modern manufacturing industry, the human intervention in routine, repetitive and data specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce the human intervention in selection of optimal cutting tool and process parameters for metal cutting applications, using Artificial Intelligence techniques. Generally, the selection of appropriate cutting tool and parameters in metal cutting is carried out by experienced technician/cutting tool expert based on his knowledge base or extensive search from huge cutting tool database. The present proposed approach replaces the existing practice of physical search for tools from the databooks/tool catalogues with intelligent knowledge-based selection system. This system employs artificial intelligence based techniques such as artificial neural networks, fuzzy logic and genetic algorithm for decision making and optimization. This intelligence based optimal tool selection strategy is developed using Mathworks Matlab Version 7.11.0 and implemented. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail, the methodology and strategies employed for selection of appropriate cutting tool and optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.
Optimal control solutions to sodic soil reclamation
NASA Astrophysics Data System (ADS)
Mau, Yair; Porporato, Amilcare
2016-05-01
We study the reclamation process of a sodic soil by irrigation with water amended with calcium cations. In order to explore the entire range of time-dependent strategies, this task is framed as an optimal control problem, where the amendment rate is the control and the total rehabilitation time is the quantity to be minimized. We use a minimalist model of vertically averaged soil salinity and sodicity, in which the main feedback controlling the dynamics is the nonlinear coupling of soil water and exchange complex, given by the Gapon equation. We show that the optimal solution is a bang-bang control strategy, where the amendment rate is discontinuously switched along the process from a maximum value to zero. The solution enables a reduction in remediation time of about 50%, compared with the continuous use of good-quality irrigation water. Because of its general structure, the bang-bang solution is also shown to work for the reclamation of other soil conditions, such as saline-sodic soils. The novelty in our modeling approach is the capability of searching the entire "strategy space" for optimal time-dependent protocols. The optimal solutions found for the minimalist model can be then fine-tuned by experiments and numerical simulations, applicable to realistic conditions that include spatial variability and heterogeneities.
NASA Astrophysics Data System (ADS)
Monica, Z.; Sękala, A.; Gwiazda, A.; Banaś, W.
2016-08-01
Nowadays a key issue is to reduce the energy consumption of road vehicles. In particular solution one could find different strategies of energy optimization. The most popular but not sophisticated is so called eco-driving. In this strategy emphasized is particular behavior of drivers. In more sophisticated solution behavior of drivers is supported by control system measuring driving parameters and suggesting proper operation of the driver. The other strategy is concerned with application of different engineering solutions that aid optimization the process of energy consumption. Such systems take into consideration different parameters measured in real time and next take proper action according to procedures loaded to the control computer of a vehicle. The third strategy bases on optimization of the designed vehicle taking into account especially main sub-systems of a technical mean. In this approach the optimal level of energy consumption by a vehicle is obtained by synergetic results of individual optimization of particular constructional sub-systems of a vehicle. It is possible to distinguish three main sub-systems: the structural one the drive one and the control one. In the case of the structural sub-system optimization of the energy consumption level is related with the optimization or the weight parameter and optimization the aerodynamic parameter. The result is optimized body of a vehicle. Regarding the drive sub-system the optimization of the energy consumption level is related with the fuel or power consumption using the previously elaborated physical models. Finally the optimization of the control sub-system consists in determining optimal control parameters.
Wang, Mingyu
2006-04-01
An innovative management strategy is proposed for optimized and integrated environmental management for regional or national groundwater contamination prevention and restoration allied with consideration of sustainable development. This management strategy accounts for availability of limited resources, human health and ecological risks from groundwater contamination, costs for groundwater protection measures, beneficial uses and values from groundwater protection, and sustainable development. Six different categories of costs are identified with regard to groundwater prevention and restoration. In addition, different environmental impacts from groundwater contamination including human health and ecological risks are individually taken into account. System optimization principles are implemented to accomplish decision-makings on the optimal resources allocations of the available resources or budgets to different existing contaminated sites and projected contamination sites for a maximal risk reduction. Established management constraints such as budget limitations under different categories of costs are satisfied at the optimal solution. A stepwise optimization process is proposed in which the first step is to select optimally a limited number of sites where remediation or prevention measures will be taken, from all the existing contaminated and projected contamination sites, based on a total regionally or nationally available budget in a certain time frame such as 10 years. Then, several optimization steps determined year-by-year optimal distributions of the available yearly budgets for those selected sites. A hypothetical case study is presented to demonstrate a practical implementation of the management strategy. Several issues pertaining to groundwater contamination exposure and risk assessments and remediation cost evaluations are briefly discussed for adequately understanding implementations of the management strategy.
Optimal management strategies in variable environments: Stochastic optimal control methods
Williams, B.K.
1985-01-01
Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both the discount rate and the climatic patterns on optimal harvest strategics. In general, decreases in either the discount rate or in the frequency of favorable weather patterns lcd to a more conservative defoliation policy. This did not hold, however, for plants in states of low vigor. Optimal control for shadscale and winterfat tended to stabilize on a policy of heavy defoliation stress, followed by one or more seasons of rest. Big sagebrush required a policy of heavy summer defoliation when sufficient active shoot material is present at the beginning of the growing season. The comparison of fixed and optimal strategies indicated considerable improvement in defoliation yields when optimal strategies are followed. The superior performance was attributable to increased defoliation of plants in states of high vigor. Improvements were found for both discounted and undiscounted yields.
Leulliot, Nicolas; Trésaugues, Lionel; Bremang, Michael; Sorel, Isabelle; Ulryck, Nathalie; Graille, Marc; Aboulfath, Ilham; Poupon, Anne; Liger, Dominique; Quevillon-Cheruel, Sophie; Janin, Joël; van Tilbeurgh, Herman
2005-06-01
Crystallization has long been regarded as one of the major bottlenecks in high-throughput structural determination by X-ray crystallography. Structural genomics projects have addressed this issue by using robots to set up automated crystal screens using nanodrop technology. This has moved the bottleneck from obtaining the first crystal hit to obtaining diffraction-quality crystals, as crystal optimization is a notoriously slow process that is difficult to automatize. This article describes the high-throughput optimization strategies used in the Yeast Structural Genomics project, with selected successful examples.
Bertsimas, Dimitris; Silberholz, John; Trikalinos, Thomas
2018-03-01
Important decisions related to human health, such as screening strategies for cancer, need to be made without a satisfactory understanding of the underlying biological and other processes. Rather, they are often informed by mathematical models that approximate reality. Often multiple models have been made to study the same phenomenon, which may lead to conflicting decisions. It is natural to seek a decision making process that identifies decisions that all models find to be effective, and we propose such a framework in this work. We apply the framework in prostate cancer screening to identify prostate-specific antigen (PSA)-based strategies that perform well under all considered models. We use heuristic search to identify strategies that trade off between optimizing the average across all models' assessments and being "conservative" by optimizing the most pessimistic model assessment. We identified three recently published mathematical models that can estimate quality-adjusted life expectancy (QALE) of PSA-based screening strategies and identified 64 strategies that trade off between maximizing the average and the most pessimistic model assessments. All prescribe PSA thresholds that increase with age, and 57 involve biennial screening. Strategies with higher assessments with the pessimistic model start screening later, stop screening earlier, and use higher PSA thresholds at earlier ages. The 64 strategies outperform 22 previously published expert-generated strategies. The 41 most "conservative" ones remained better than no screening with all models in extensive sensitivity analyses. We augment current comparative modeling approaches by identifying strategies that perform well under all models, for various degrees of decision makers' conservativeness.
Build infrastructure in publishing scientific journals to benefit medical scientists
Dai, Ni; Xu, Dingyao; Zhong, Xiyao; Li, Li; Ling, Qibo
2014-01-01
There is urgent need for medical journals to optimize their publishing processes and strategies to satisfy the huge need for medical scientists to publish their articles, and then obtain better prestige and impact in scientific and research community. These strategies include optimizing the process of peer-review, utilizing open-access publishing models actively, finding ways of saving costs and getting revenue, smartly dealing with research fraud or misconduct, maintaining sound relationship with pharmaceutical companies, and managing to provide relevant and useful information for clinical practitioners and researchers. Scientists, publishers, societies and organizations need to work together to publish internationally renowned medical journals. PMID:24653634
Build infrastructure in publishing scientific journals to benefit medical scientists.
Dai, Ni; Xu, Dingyao; Zhong, Xiyao; Li, Li; Ling, Qibo; Bu, Zhaode
2014-02-01
There is urgent need for medical journals to optimize their publishing processes and strategies to satisfy the huge need for medical scientists to publish their articles, and then obtain better prestige and impact in scientific and research community. These strategies include optimizing the process of peer-review, utilizing open-access publishing models actively, finding ways of saving costs and getting revenue, smartly dealing with research fraud or misconduct, maintaining sound relationship with pharmaceutical companies, and managing to provide relevant and useful information for clinical practitioners and researchers. Scientists, publishers, societies and organizations need to work together to publish internationally renowned medical journals.
A three-level support method for smooth switching of the micro-grid operation model
NASA Astrophysics Data System (ADS)
Zong, Yuanyang; Gong, Dongliang; Zhang, Jianzhou; Liu, Bin; Wang, Yun
2018-01-01
Smooth switching of micro-grid between the grid-connected operation mode and off-grid operation mode is one of the key technologies to ensure it runs flexible and efficiently. The basic control strategy and the switching principle of micro-grid are analyzed in this paper. The reasons for the fluctuations of the voltage and the frequency in the switching process are analyzed from views of power balance and control strategy, and the operation mode switching strategy has been improved targeted. From the three aspects of controller’s current inner loop reference signal, voltage outer loop control strategy optimization and micro-grid energy balance management, a three-level security strategy for smooth switching of micro-grid operation mode is proposed. From the three aspects of controller’s current inner loop reference signal tracking, voltage outer loop control strategy optimization and micro-grid energy balance management, a three-level strategy for smooth switching of micro-grid operation mode is proposed. At last, it is proved by simulation that the proposed control strategy can make the switching process smooth and stable, the fluctuation problem of the voltage and frequency has been effectively improved.
Optimal robust control strategy of a solid oxide fuel cell system
NASA Astrophysics Data System (ADS)
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure system safe operation with a high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, the existed methods ignore the impact of parameter uncertainty on system instantaneous performance. In real SOFC systems, several parameters may vary with the variation of operation conditions and can not be identified exactly, such as load current. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure the SOFC safe operation, two feed-forward controllers and a higher-order robust sliding mode controller are presented to control fuel utilization ratio, air excess ratio and stack temperature afterwards. The results show the proposed optimal robust control method can maintain the SOFC system safe operation with a maximum efficiency under load and uncertainty variations.
Human Information Processing and Supervisory Control.
1980-05-01
interpretation of information .............. 16 Sampling strategies .............................. 17 Speed-accuracy tradeoff ................... 23...operator is usually highly trained, and largely controls the tasks, being allowed to use what strategies he will.. Risk is incurred in ways which can...his search less than optimally effective. Hence from matters of tactics and strategy which will be discussed below, straightforward questions of
NASA Astrophysics Data System (ADS)
Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David
2017-04-01
We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speedup calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control on the evolution path, suppression of departures from the truncated model subspace, as well as minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
Cooperative optimization of reconfigurable machine tool configurations and production process plan
NASA Astrophysics Data System (ADS)
Xie, Nan; Li, Aiping; Xue, Wei
2012-09-01
The production process plan design and configurations of reconfigurable machine tool (RMT) interact with each other. Reasonable process plans with suitable configurations of RMT help to improve product quality and reduce production cost. Therefore, a cooperative strategy is needed to concurrently solve the above issue. In this paper, the cooperative optimization model for RMT configurations and production process plan is presented. Its objectives take into account both impacts of process and configuration. Moreover, a novel genetic algorithm is also developed to provide optimal or near-optimal solutions: firstly, its chromosome is redesigned which is composed of three parts, operations, process plan and configurations of RMTs, respectively; secondly, its new selection, crossover and mutation operators are also developed to deal with the process constraints from operation processes (OP) graph, otherwise these operators could generate illegal solutions violating the limits; eventually the optimal configurations for RMT under optimal process plan design can be obtained. At last, a manufacturing line case is applied which is composed of three RMTs. It is shown from the case that the optimal process plan and configurations of RMT are concurrently obtained, and the production cost decreases 6.28% and nonmonetary performance increases 22%. The proposed method can figure out both RMT configurations and production process, improve production capacity, functions and equipment utilization for RMT.
Kumar, Aditya; Shi, Ruijie; Kumar, Rajeeva; Dokucu, Mustafa
2013-04-09
Control system and method for controlling an integrated gasification combined cycle (IGCC) plant are provided. The system may include a controller coupled to a dynamic model of the plant to process a prediction of plant performance and determine a control strategy for the IGCC plant over a time horizon subject to plant constraints. The control strategy may include control functionality to meet a tracking objective and control functionality to meet an optimization objective. The control strategy may be configured to prioritize the tracking objective over the optimization objective based on a coordinate transformation, such as an orthogonal or quasi-orthogonal projection. A plurality of plant control knobs may be set in accordance with the control strategy to generate a sequence of coordinated multivariable control inputs to meet the tracking objective and the optimization objective subject to the prioritization resulting from the coordinate transformation.
The Effects of Two Study Methods on Memory
1987-07-01
hypothesis that optimal processing of individual words, qua individual words is sufficient to support good recall" (Craik & Tulving , 1975, p. 27...Likewise, mnemonic strategies, such as creating a story (Bel- lezza, Cheesman, & Reddy, 19/7) or categories (Mandler, Pearlstone , & Koopmans, 1969) for...that optimal retrieval per- formance is produced by extensive semantic processing for the individual item (Craik & Tulving , 1975; Hyde & Jenkins, 1973
Dynamics and control of DNA sequence amplification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marimuthu, Karthikeyan; Chakrabarti, Raj, E-mail: raj@pmc-group.com, E-mail: rajc@andrew.cmu.edu; Division of Fundamental Research, PMC Advanced Technology, Mount Laurel, New Jersey 08054
2014-10-28
DNA amplification is the process of replication of a specified DNA sequence in vitro through time-dependent manipulation of its external environment. A theoretical framework for determination of the optimal dynamic operating conditions of DNA amplification reactions, for any specified amplification objective, is presented based on first-principles biophysical modeling and control theory. Amplification of DNA is formulated as a problem in control theory with optimal solutions that can differ considerably from strategies typically used in practice. Using the Polymerase Chain Reaction as an example, sequence-dependent biophysical models for DNA amplification are cast as control systems, wherein the dynamics of the reactionmore » are controlled by a manipulated input variable. Using these control systems, we demonstrate that there exists an optimal temperature cycling strategy for geometric amplification of any DNA sequence and formulate optimal control problems that can be used to derive the optimal temperature profile. Strategies for the optimal synthesis of the DNA amplification control trajectory are proposed. Analogous methods can be used to formulate control problems for more advanced amplification objectives corresponding to the design of new types of DNA amplification reactions.« less
NASA Astrophysics Data System (ADS)
Ayad, G.; Song, J.; Barriere, T.; Liu, B.; Gelin, J. C.
2007-05-01
The paper is concerned with optimization and parametric identification of Powder Injection Molding process that consists first in injection of powder mixture with polymer binder and then to the sintering of the resulting powders parts by solid state diffusion. In the first part, one describes an original methodology to optimize the injection stage based on the combination of Design Of Experiments and an adaptive Response Surface Modeling. Then the second part of the paper describes the identification strategy that one proposes for the sintering stage, using the identification of sintering parameters from dilatometer curves followed by the optimization of the sintering process. The proposed approaches are applied to the optimization for manufacturing of a ceramic femoral implant. One demonstrates that the proposed approach give satisfactory results.
Maximizing the efficiency of multienzyme process by stoichiometry optimization.
Dvorak, Pavel; Kurumbang, Nagendra P; Bendl, Jaroslav; Brezovsky, Jan; Prokop, Zbynek; Damborsky, Jiri
2014-09-05
Multienzyme processes represent an important area of biocatalysis. Their efficiency can be enhanced by optimization of the stoichiometry of the biocatalysts. Here we present a workflow for maximizing the efficiency of a three-enzyme system catalyzing a five-step chemical conversion. Kinetic models of pathways with wild-type or engineered enzymes were built, and the enzyme stoichiometry of each pathway was optimized. Mathematical modeling and one-pot multienzyme experiments provided detailed insights into pathway dynamics, enabled the selection of a suitable engineered enzyme, and afforded high efficiency while minimizing biocatalyst loadings. Optimizing the stoichiometry in a pathway with an engineered enzyme reduced the total biocatalyst load by an impressive 56 %. Our new workflow represents a broadly applicable strategy for optimizing multienzyme processes. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Ahmadi, Mohammad H.; Amin Nabakhteh, Mohammad; Ahmadi, Mohammad-Ali; Pourfayaz, Fathollah; Bidi, Mokhtar
2017-10-01
The motivation behind this work is to explore a nanoscale irreversible Stirling refrigerator with respect to size impacts and shows two novel thermo-ecological criteria. Two distinct strategies were suggested in the optimization process and the consequences of every strategy were examined independently. In the primary strategy, with the purpose of maximizing the energetic sustainability index and modified the ecological coefficient of performance (MECOP) and minimizing the dimensionless Ecological function, a multi-objective optimization algorithm (MOEA) was used. In the second strategy, with the purpose of maximizing the ECOP and MECOP and minimizing the dimensionless Ecological function, a MOEA was used. To conclude the final solution from each strategy, three proficient decision makers were utilized. Additionally, to quantify the deviation of the results gained from each decision makers, two different statistical error indexes were employed. Finally, based on the comparison between the results achieved from proposed scenarios reveals that by maximizing the MECOP the maximum values of ESI, ECOP, and a minimum of ecfare achieved.
Advanced Information Technology in Simulation Based Life Cycle Design
NASA Technical Reports Server (NTRS)
Renaud, John E.
2003-01-01
In this research a Collaborative Optimization (CO) approach for multidisciplinary systems design is used to develop a decision based design framework for non-deterministic optimization. To date CO strategies have been developed for use in application to deterministic systems design problems. In this research the decision based design (DBD) framework proposed by Hazelrigg is modified for use in a collaborative optimization framework. The Hazelrigg framework as originally proposed provides a single level optimization strategy that combines engineering decisions with business decisions in a single level optimization. By transforming this framework for use in collaborative optimization one can decompose the business and engineering decision making processes. In the new multilevel framework of Decision Based Collaborative Optimization (DBCO) the business decisions are made at the system level. These business decisions result in a set of engineering performance targets that disciplinary engineering design teams seek to satisfy as part of subspace optimizations. The Decision Based Collaborative Optimization framework more accurately models the existing relationship between business and engineering in multidisciplinary systems design.
Salehi, Mojtaba; Bahreininejad, Ardeshir
2011-08-01
Optimization of process planning is considered as the key technology for computer-aided process planning which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning, and secondary/detailed planning. In the preliminary stage, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing and using an intelligent searching strategy, the feasible sequences are generated. Then, in the detailed planning stage, using the genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation based on optimization constraints as an additive constraint aggregation are obtained. The main contribution of this work is the optimization of sequence of the operations of the part, and optimization of machine selection, cutting tool and TAD for each operation using the intelligent search and genetic algorithm simultaneously.
Salehi, Mojtaba
2010-01-01
Optimization of process planning is considered as the key technology for computer-aided process planning which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning, and secondary/detailed planning. In the preliminary stage, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing and using an intelligent searching strategy, the feasible sequences are generated. Then, in the detailed planning stage, using the genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation based on optimization constraints as an additive constraint aggregation are obtained. The main contribution of this work is the optimization of sequence of the operations of the part, and optimization of machine selection, cutting tool and TAD for each operation using the intelligent search and genetic algorithm simultaneously. PMID:21845020
Solving NP-Hard Problems with Physarum-Based Ant Colony System.
Liu, Yuxin; Gao, Chao; Zhang, Zili; Lu, Yuxiao; Chen, Shi; Liang, Mingxin; Tao, Li
2017-01-01
NP-hard problems exist in many real world applications. Ant colony optimization (ACO) algorithms can provide approximate solutions for those NP-hard problems, but the performance of ACO algorithms is significantly reduced due to premature convergence and weak robustness, etc. With these observations in mind, this paper proposes a Physarum-based pheromone matrix optimization strategy in ant colony system (ACS) for solving NP-hard problems such as traveling salesman problem (TSP) and 0/1 knapsack problem (0/1 KP). In the Physarum-inspired mathematical model, one of the unique characteristics is that critical tubes can be reserved in the process of network evolution. The optimized updating strategy employs the unique feature and accelerates the positive feedback process in ACS, which contributes to the quick convergence of the optimal solution. Some experiments were conducted using both benchmark and real datasets. The experimental results show that the optimized ACS outperforms other meta-heuristic algorithms in accuracy and robustness for solving TSPs. Meanwhile, the convergence rate and robustness for solving 0/1 KPs are better than those of classical ACS.
[Cost-effectiveness of breast cancer screening policies in Mexico].
Valencia-Mendoza, Atanacio; Sánchez-González, Gilberto; Bautista-Arredondo, Sergio; Torres-Mejía, Gabriela; Bertozzi, Stefano M
2009-01-01
Generate cost-effectiveness information to allow policy makers optimize breast cancer (BC) policy in Mexico. We constructed a Markov model that incorporates four interrelated processes of the disease: the natural history; detection using mammography; treatment; and other competing-causes mortality, according to which 13 different strategies were modeled. Strategies (starting age, % of coverage, frequency in years)= (48, 25, 2), (40, 50, 2) and (40, 50, 1) constituted the optimal method for expanding the BC program, yielding 75.3, 116.4 and 171.1 thousand pesos per life-year saved, respectively. The strategies included in the optimal method for expanding the program produce a cost per life-year saved of less than two times the GNP per capita and hence are cost-effective according to WHO Commission on Macroeconomics and Health criteria.
He, Jianlong; Zhang, Wenbo; Liu, Xiaoyan; Xu, Ning; Xiong, Peng
2016-11-01
Ethanol is a very important industrial chemical. In order to improve ethanol productivity using Saccharomyces cerevisiae in fermentation from furfural process residue, we developed a process of simultaneous saccharification and fermentation (SSF) of furfural process residue, optimizing prehydrolysis cellulase loading concentration, prehydrolysis time, and substrate feeding strategy. The ethanol concentration obtained from the optimized process was 19.3 g/L, corresponding 76.5% ethanol yield, achieved by running SSF for 48 h from 10% furfural process residue with prehydrolysis at 50°C for 4 h and cellulase loading of 15 FPU/g furfural process residue. For higher ethanol concentrations, fed-batch fermentation was performed. The optimized fed-batch process increased the ethanol concentration to 37.6 g/L, 74.5% yield, obtained from 10% furfural process residue with two additions of 5% substrate at 12 and 24 h. Copyright © 2016 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
An optimal routing strategy on scale-free networks
NASA Astrophysics Data System (ADS)
Yang, Yibo; Zhao, Honglin; Ma, Jinlong; Qi, Zhaohui; Zhao, Yongbin
Traffic is one of the most fundamental dynamical processes in networked systems. With the traditional shortest path routing (SPR) protocol, traffic congestion is likely to occur on the hub nodes on scale-free networks. In this paper, we propose an improved optimal routing (IOR) strategy which is based on the betweenness centrality and the degree centrality of nodes in the scale-free networks. With the proposed strategy, the routing paths can accurately bypass hub nodes in the network to enhance the transport efficiency. Simulation results show that the traffic capacity as well as some other indexes reflecting transportation efficiency are further improved with the IOR strategy. Owing to the significantly improved traffic performance, this study is helpful to design more efficient routing strategies in communication or transportation systems.
Suárez Riveiro, José Manuel
2014-01-01
In addition to cognitive and behavioral strategies, students can also use affective-motivational strategies to facilitate their learning process. In this way, the strategies of defensive-pessimism and generation of positive expectations have been widely related to conceptual models of pessimism-optimism. The aim of this study was to describe the use of these strategies in 1753 secondary school students, and to study the motivational and strategic characteristics which differentiated between the student typologies identified as a result of their use. The results indicated a higher use of the generation of positive expectations strategy (optimism) (M = 3.40, SD = .78) than the use of the defensive pessimism strategy (M = 3.00, SD = .78); a positive and significant correlation between the two strategies (r = .372, p = .001); their relationship with adequate academic motivation and with the use of learning strategies. Furthermore, four student typologies were identified based on the use of both strategies. Lastly, we propose a new approach for future work in this line of research.
On optimal dividends: From reflection to refraction
NASA Astrophysics Data System (ADS)
Gerber, Hans U.; Shiu, Elias S. W.
2006-02-01
The problem goes back to a paper that Bruno de Finetti presented to the International Congress of Actuaries in New York (1957). In a stock company that is involved in risky business, what is the optimal dividend strategy, that is, what is the strategy that maximizes the expectation of the discounted dividends (until possible ruin) to the shareholders? Jeanblanc-Picque and Shiryaev [Russian Math. Surveys 20 (1995) 257-277] and Asmussen and Taksar [Insurance: Math. Econom. 20 (1997) 1-15] solved the problem by modeling the income process of the company by a Wiener process and imposing the condition of a bounded dividend rate. Here, we present some down-to-earth calculations in this context.
Tool path strategy and cutting process monitoring in intelligent machining
NASA Astrophysics Data System (ADS)
Chen, Ming; Wang, Chengdong; An, Qinglong; Ming, Weiwei
2018-06-01
Intelligent machining is a current focus in advanced manufacturing technology, and is characterized by high accuracy and efficiency. A central technology of intelligent machining—the cutting process online monitoring and optimization—is urgently needed for mass production. In this research, the cutting process online monitoring and optimization in jet engine impeller machining, cranio-maxillofacial surgery, and hydraulic servo valve deburring are introduced as examples of intelligent machining. Results show that intelligent tool path optimization and cutting process online monitoring are efficient techniques for improving the efficiency, quality, and reliability of machining.
77 FR 26736 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-07
... an Internet Push methodology, in an effort to obtain early response rate indicators for the 2020... contact strategies involving optimizing the Internet push strategy are proposed, such as implementing... reducing and/or eliminating back-end processing. Affected Public: Individuals or households. Frequency: One...
Ott, Denise; Kralisch, Dana; Denčić, Ivana; Hessel, Volker; Laribi, Yosra; Perrichon, Philippe D; Berguerand, Charline; Kiwi-Minsker, Lioubov; Loeb, Patrick
2014-12-01
As the demand for new drugs is rising, the pharmaceutical industry faces the quest of shortening development time, and thus, reducing the time to market. Environmental aspects typically still play a minor role within the early phase of process development. Nevertheless, it is highly promising to rethink, redesign, and optimize process strategies as early as possible in active pharmaceutical ingredient (API) process development, rather than later at the stage of already established processes. The study presented herein deals with a holistic life-cycle-based process optimization and intensification of a pharmaceutical production process targeting a low-volume, high-value API. Striving for process intensification by transfer from batch to continuous processing, as well as an alternative catalytic system, different process options are evaluated with regard to their environmental impact to identify bottlenecks and improvement potentials for further process development activities. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu
2018-01-01
Current retinal prostheses can only generate low-resolution visual percepts constituted of limited phosphenes which are elicited by an electrode array and with uncontrollable color and restricted grayscale. Under this visual perception, prosthetic recipients can just complete some simple visual tasks, but more complex tasks like face identification/object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive-iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) The results showed by verification of psychophysical experiments that under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under two strategies was tied to the segmentation results and was affected positively by the paired-interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance towards recipients implanted a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.
SPECT System Optimization Against A Discrete Parameter Space
Meng, L. J.; Li, N.
2013-01-01
In this paper, we present an analytical approach for optimizing the design of a static SPECT system or optimizing the sampling strategy with a variable/adaptive SPECT imaging hardware against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced an artificial concept of virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of the virtual detectors, one can convert the task of system optimization into a process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Thirdly, the optimization problem (finding the optimum ITD) could be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resultant optimum ITD could provide a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and help to identify the system configuration or sampling strategy that leads to an optimum imaging performance. Although we are using SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609
Optimal strategies for electric energy contract decision making
NASA Astrophysics Data System (ADS)
Song, Haili
2000-10-01
The power industry restructuring in various countries in recent years has created an environment where trading of electric energy is conducted in a market environment. In such an environment, electric power companies compete for the market share through spot and bilateral markets. Being profit driven, electric power companies need to make decisions on spot market bidding, contract evaluation, and risk management. New methods and software tools are required to meet these upcoming needs. In this research, bidding strategy and contract pricing are studied from a market participant's viewpoint; new methods are developed to guide a market participant in spot and bilateral market operation. A supplier's spot market bidding decision is studied. Stochastic optimization is formulated to calculate a supplier's optimal bids in a single time period. This decision making problem is also formulated as a Markov Decision Process. All the competitors are represented by their bidding parameters with corresponding probabilities. A systematic method is developed to calculate transition probabilities and rewards. The optimal strategy is calculated to maximize the expected reward over a planning horizon. Besides the spot market, a power producer can also trade in the bilateral markets. Bidding strategies in a bilateral market are studied with game theory techniques. Necessary and sufficient conditions of Nash Equilibrium (NE) bidding strategy are derived based on the generators' cost and the loads' willingness to pay. The study shows that in any NE, market efficiency is achieved. Furthermore, all Nash equilibria are revenue equivalent for the generators. The pricing of "Flexible" contracts, which allow delivery flexibility over a period of time with a fixed total amount of electricity to be delivered, is analyzed based on the no-arbitrage pricing principle. The proposed algorithm calculates the price based on the optimality condition of the stochastic optimization formulation. Simulation examples illustrate the tradeoffs between prices and scheduling flexibility. Spot bidding and contract pricing are not independent decision processes. The interaction between spot bidding and contract evaluation is demonstrated with game theory equilibrium model and market simulation results. It leads to the conclusion that a market participant's contract decision making needs to be further investigated as an integrated optimization formulation.
Anderson, D.R.
1974-01-01
Optimal exploitation strategies were studied for an animal population in a stochastic, serially correlated environment. This is a general case and encompasses a number of important cases as simplifications. Data on the mallard (Anas platyrhynchos) were used to explore the exploitation strategies and test several hypotheses because relatively much is known concerning the life history and general ecology of this species and extensive empirical data are available for analysis. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. Desirable properties of an optimal exploitation strategy were defined. A mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. Both the literature and the analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, alternative hypotheses were formulated: (1) exploitation mortality represents a largely additive form of mortality, or (2 ) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. Assuming that exploitation is largely an additive force of mortality, optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Optimal exploitation under this hypothesis tends to reduce the variance of the size of the population. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the breeding population. Environmental variables may be somewhat more important than the size of the breeding population to the production of young mallards. In contrast, the size of the breeding population appears to be more important in the exploitation process than is the state of the environment. The form of the exploitation strategy appears to be relatively insensitive to small changes in the production rate. In general, the relative importance of the size of the breeding population may decrease as fecundity increases. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, harvest rate, or designed to maintain a constant breeding population size is inefficient.
NASA Astrophysics Data System (ADS)
Cai, Xiaohui; Liu, Yang; Ren, Zhiming
2018-06-01
Reverse-time migration (RTM) is a powerful tool for imaging geologically complex structures such as steep-dip and subsalt. However, its implementation is quite computationally expensive. Recently, as a low-cost solution, the graphic processing unit (GPU) was introduced to improve the efficiency of RTM. In the paper, we develop three ameliorative strategies to implement RTM on GPU card. First, given the high accuracy and efficiency of the adaptive optimal finite-difference (FD) method based on least squares (LS) on central processing unit (CPU), we study the optimal LS-based FD method on GPU. Second, we develop the CPU-based hybrid absorbing boundary condition (ABC) to the GPU-based one by addressing two issues of the former when introduced to GPU card: time-consuming and chaotic threads. Third, for large-scale data, the combinatorial strategy for optimal checkpointing and efficient boundary storage is introduced for the trade-off between memory and recomputation. To save the time of communication between host and disk, the portable operating system interface (POSIX) thread is utilized to create the other CPU core at the checkpoints. Applications of the three strategies on GPU with the compute unified device architecture (CUDA) programming language in RTM demonstrate their efficiency and validity.
NASA Astrophysics Data System (ADS)
Tang, Jiafu; Liu, Yang; Fung, Richard; Luo, Xinggang
2008-12-01
Manufacturers have a legal accountability to deal with industrial waste generated from their production processes in order to avoid pollution. Along with advances in waste recovery techniques, manufacturers may adopt various recycling strategies in dealing with industrial waste. With reuse strategies and technologies, byproducts or wastes will be returned to production processes in the iron and steel industry, and some waste can be recycled back to base material for reuse in other industries. This article focuses on a recovery strategies optimization problem for a typical class of industrial waste recycling process in order to maximize profit. There are multiple strategies for waste recycling available to generate multiple byproducts; these byproducts are then further transformed into several types of chemical products via different production patterns. A mixed integer programming model is developed to determine which recycling strategy and which production pattern should be selected with what quantity of chemical products corresponding to this strategy and pattern in order to yield maximum marginal profits. The sales profits of chemical products and the set-up costs of these strategies, patterns and operation costs of production are considered. A simulated annealing (SA) based heuristic algorithm is developed to solve the problem. Finally, an experiment is designed to verify the effectiveness and feasibility of the proposed method. By comparing a single strategy to multiple strategies in an example, it is shown that the total sales profit of chemical products can be increased by around 25% through the simultaneous use of multiple strategies. This illustrates the superiority of combinatorial multiple strategies. Furthermore, the effects of the model parameters on profit are discussed to help manufacturers organize their waste recycling network.
Xu, Sen; Hoshan, Linda; Chen, Hao
2016-11-01
In this study, we discussed the development and optimization of an intensified CHO culture process, highlighting medium and control strategies to improve lactate metabolism. A few strategies, including supplementing glucose with other sugars (fructose, maltose, and galactose), controlling glucose level at <0.2 mM, and supplementing medium with copper sulfate, were found to be effective in reducing lactate accumulation. Among them, copper sulfate supplementation was found to be critical for process optimization when glucose was in excess. When copper sulfate was supplemented in the new process, two-fold increase in cell density (66.5 ± 8.4 × 10(6) cells/mL) and titer (11.9 ± 0.6 g/L) was achieved. Productivity and product quality attributes differences between batch, fed-batch, and concentrated fed-batch cultures were discussed. The importance of process and cell metabolism understanding when adapting the existing process to a new operational mode was demonstrated in the study.
Li, Zheng; Qi, Rong; Wang, Bo; Zou, Zhe; Wei, Guohong; Yang, Min
2013-01-01
A full-scale oxidation ditch process for treating sewage was simulated with the ASM2d model and optimized for minimal cost with acceptable performance in terms of ammonium and phosphorus removal. A unified index was introduced by integrating operational costs (aeration energy and sludge production) with effluent violations for performance evaluation. Scenario analysis showed that, in comparison with the baseline (all of the 9 aerators activated), the strategy of activating 5 aerators could save aeration energy significantly with an ammonium violation below 10%. Sludge discharge scenario analysis showed that a sludge discharge flow of 250-300 m3/day (solid retention time (SRT), 13-15 days) was appropriate for the enhancement of phosphorus removal without excessive sludge production. The proposed optimal control strategy was: activating 5 rotating disks operated with a mode of "111100100" ("1" represents activation and "0" represents inactivation) for aeration and sludge discharge flow of 200 m3/day (SRT, 19 days). Compared with the baseline, this strategy could achieve ammonium violation below 10% and TP violation below 30% with substantial reduction of aeration energy cost (46%) and minimal increment of sludge production (< 2%). This study provides a useful approach for the optimization of process operation and control.
Computer-aided design for metabolic engineering.
Fernández-Castané, Alfred; Fehér, Tamás; Carbonell, Pablo; Pauthenier, Cyrille; Faulon, Jean-Loup
2014-12-20
The development and application of biotechnology-based strategies has had a great socio-economical impact and is likely to play a crucial role in the foundation of more sustainable and efficient industrial processes. Within biotechnology, metabolic engineering aims at the directed improvement of cellular properties, often with the goal of synthesizing a target chemical compound. The use of computer-aided design (CAD) tools, along with the continuously emerging advanced genetic engineering techniques have allowed metabolic engineering to broaden and streamline the process of heterologous compound-production. In this work, we review the CAD tools available for metabolic engineering with an emphasis, on retrosynthesis methodologies. Recent advances in genetic engineering strategies for pathway implementation and optimization are also reviewed as well as a range of bionalytical tools to validate in silico predictions. A case study applying retrosynthesis is presented as an experimental verification of the output from Retropath, the first complete automated computational pipeline applicable to metabolic engineering. Applying this CAD pipeline, together with genetic reassembly and optimization of culture conditions led to improved production of the plant flavonoid pinocembrin. Coupling CAD tools with advanced genetic engineering strategies and bioprocess optimization is crucial for enhanced product yields and will be of great value for the development of non-natural products through sustainable biotechnological processes. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Khamidullin, R. I.
2018-05-01
The paper is devoted to milestones of the optimal mathematical model for a business process related to cost estimate documentation compiled during construction and reconstruction of oil and gas facilities. It describes the study and analysis of fundamental issues in petroleum industry, which are caused by economic instability and deterioration of a business strategy. Business process management is presented as business process modeling aimed at the improvement of the studied business process, namely main criteria of optimization and recommendations for the improvement of the above-mentioned business model.
Wang, Xi-fen; Zhou, Huai-chun
2005-01-01
The control of 3-D temperature distribution in a utility boiler furnace is essential for the safe, economic and clean operation of pc-fired furnace with multi-burner system. The development of the visualization of 3-D temperature distributions in pc-fired furnaces makes it possible for a new combustion control strategy directly with the furnace temperature as its goal to improve the control quality for the combustion processes. Studied in this paper is such a new strategy that the whole furnace is divided into several parts in the vertical direction, and the average temperature and its bias from the center in every cross section can be extracted from the visualization results of the 3-D temperature distributions. In the simulation stage, a computational fluid dynamics (CFD) code served to calculate the 3-D temperature distributions in a furnace, then a linear model was set up to relate the features of the temperature distributions with the input of the combustion processes, such as the flow rates of fuel and air fed into the furnaces through all the burners. The adaptive genetic algorithm was adopted to find the optimal combination of the whole input parameters which ensure to form an optimal 3-D temperature field in the furnace desired for the operation of boiler. Simulation results showed that the strategy could soon find the factors making the temperature distribution apart from the optimal state and give correct adjusting suggestions.
NASA Astrophysics Data System (ADS)
Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun
2016-05-01
In this paper, an integral design that combines optical system with image processing is introduced to obtain high resolution images, and the performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and imaging processing, resulting in the failures in efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented to combine the merit function during optical design together with the constraint conditions of image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals which are indispensable for imaging processing, while the ultimate goal is to obtain high resolution images from the final system. In order to optimize the global performance, the optimization function of ZEMAX software is utilized and the number of optimization cycles is controlled. Then Wiener filter algorithm is adopted to process the image simulation and mean squared error (MSE) is taken as evaluation criterion. The results show that, although the optical figures of merit for the optical imaging systems is not the best, it can provide image signals that are more suitable for image processing. In conclusion. The integral design of optical system and image processing can search out the overall optimal solution which is missed by the traditional design methods. Especially, when designing some complex optical system, this integral design strategy has obvious advantages to simplify structure and reduce cost, as well as to gain high resolution images simultaneously, which has a promising perspective of industrial application.
A framework for designing and analyzing binary decision-making strategies in cellular systems†
Porter, Joshua R.; Andrews, Burton W.; Iglesias, Pablo A.
2015-01-01
Cells make many binary (all-or-nothing) decisions based on noisy signals gathered from their environment and processed through noisy decision-making pathways. Reducing the effect of noise to improve the fidelity of decision-making comes at the expense of increased complexity, creating a tradeoff between performance and metabolic cost. We present a framework based on rate distortion theory, a branch of information theory, to quantify this tradeoff and design binary decision-making strategies that balance low cost and accuracy in optimal ways. With this framework, we show that several observed behaviors of binary decision-making systems, including random strategies, hysteresis, and irreversibility, are optimal in an information-theoretic sense for various situations. This framework can also be used to quantify the goals around which a decision-making system is optimized and to evaluate the optimality of cellular decision-making systems by a fundamental information-theoretic criterion. As proof of concept, we use the framework to quantify the goals of the externally triggered apoptosis pathway. PMID:22370552
Optimization of Visual Information Presentation for Visual Prosthesis.
Guo, Fei; Yang, Yuan; Gao, Yong
2018-01-01
Visual prosthesis applying electrical stimulation to restore visual function for the blind has promising prospects. However, due to the low resolution, limited visual field, and the low dynamic range of the visual perception, huge loss of information occurred when presenting daily scenes. The ability of object recognition in real-life scenarios is severely restricted for prosthetic users. To overcome the limitations, optimizing the visual information in the simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming technique, the two processing strategies significantly improve the recognition accuracy of objects. We can conclude that the visual prosthesis using our proposed strategy can assist the blind to improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prosthesis.
Optimization of Visual Information Presentation for Visual Prosthesis
Gao, Yong
2018-01-01
Visual prosthesis applying electrical stimulation to restore visual function for the blind has promising prospects. However, due to the low resolution, limited visual field, and the low dynamic range of the visual perception, huge loss of information occurred when presenting daily scenes. The ability of object recognition in real-life scenarios is severely restricted for prosthetic users. To overcome the limitations, optimizing the visual information in the simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming technique, the two processing strategies significantly improve the recognition accuracy of objects. We can conclude that the visual prosthesis using our proposed strategy can assist the blind to improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prosthesis. PMID:29731769
Design optimization of a prescribed vibration system using conjoint value analysis
NASA Astrophysics Data System (ADS)
Malinga, Bongani; Buckner, Gregory D.
2016-12-01
This article details a novel design optimization strategy for a prescribed vibration system (PVS) used to mechanically filter solids from fluids in oil and gas drilling operations. A dynamic model of the PVS is developed, and the effects of disturbance torques are detailed. This model is used to predict the effects of design parameters on system performance and efficiency, as quantified by system attributes. Conjoint value analysis, a statistical technique commonly used in marketing science, is utilized to incorporate designer preferences. This approach effectively quantifies and optimizes preference-based trade-offs in the design process. The effects of designer preferences on system performance and efficiency are simulated. This novel optimization strategy yields improvements in all system attributes across all simulated vibration profiles, and is applicable to other industrial electromechanical systems.
Kramers problem in evolutionary strategies
NASA Astrophysics Data System (ADS)
Dunkel, J.; Ebeling, W.; Schimansky-Geier, L.; Hänggi, P.
2003-06-01
We calculate the escape rates of different dynamical processes for the case of a one-dimensional symmetric double-well potential. In particular, we compare the escape rates of a Smoluchowski process, i.e., a corresponding overdamped Brownian motion dynamics in a metastable potential landscape, with the escape rates obtained for a biologically motivated model known as the Fisher-Eigen process. The main difference between the two models is that the dynamics of the Smoluchowski process is determined by local quantities, whereas the Fisher-Eigen process is based on a global coupling (nonlocal interaction). If considered in the context of numerical optimization algorithms, both processes can be interpreted as archetypes of physically or biologically inspired evolutionary strategies. In this sense, the results discussed in this work are utile in order to evaluate the efficiency of such strategies with regard to the problem of surmounting various barriers. We find that a combination of both scenarios, starting with the Fisher-Eigen strategy, provides a most effective evolutionary strategy.
Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion
Deng, Ning
2014-01-01
In this paper, we propose three discriminative feature selection strategies and weighted subregion matching method to improve the performance of iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described, which are orientation probability distribution function (OPDF) based strategy to delete some redundant feature keypoints, magnitude probability distribution function (MPDF) based strategy to reduce dimensionality of feature element, and compounded strategy combined OPDF and MPDF to further select optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted sub-region matching fusion. Particle swarm optimization is utilized to accelerate achieve different sub-region's weights and then weighted different subregions' matching scores to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, andMMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317
Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.
Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning
2014-01-01
In this paper, we propose three discriminative feature selection strategies and weighted subregion matching method to improve the performance of iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described, which are orientation probability distribution function (OPDF) based strategy to delete some redundant feature keypoints, magnitude probability distribution function (MPDF) based strategy to reduce dimensionality of feature element, and compounded strategy combined OPDF and MPDF to further select optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted sub-region matching fusion. Particle swarm optimization is utilized to accelerate achieve different sub-region's weights and then weighted different subregions' matching scores to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.
Optimal scan strategy for mega-pixel and kilo-gray-level OLED-on-silicon microdisplay.
Ji, Yuan; Ran, Feng; Ji, Weigui; Xu, Meihua; Chen, Zhangjing; Jiang, Yuxi; Shen, Weixin
2012-06-10
The digital pixel driving scheme makes the organic light-emitting diode (OLED) microdisplays more immune to the pixel luminance variations and simplifies the circuit architecture and design flow compared to the analog pixel driving scheme. Additionally, it is easily applied in full digital systems. However, the data bottleneck becomes a notable problem as the number of pixels and gray levels grow dramatically. This paper will discuss the digital driving ability to achieve kilogray-levels for megapixel displays. The optimal scan strategy is proposed for creating ultra high gray levels and increasing light efficiency and contrast ratio. Two correction schemes are discussed to improve the gray level linearity. A 1280×1024×3 OLED-on-silicon microdisplay, with 4096 gray levels, is designed based on the optimal scan strategy. The circuit driver is integrated in the silicon backplane chip in the 0.35 μm 3.3 V-6 V dual voltage one polysilicon layer, four metal layers (1P4M) complementary metal-oxide semiconductor (CMOS) process with custom top metal. The design aspects of the optimal scan controller are also discussed. The test results show the gray level linearity of the correction schemes for the optimal scan strategy is acceptable by the human eye.
Memory and Study Strategies for Optimal Learning.
ERIC Educational Resources Information Center
Hamachek, Alice L.
Study strategies are those specific reading skills that increase understanding, memory storage, and retrieval. Memory techniques are crucial to effective studying, and to subsequent performance in class and on written examinations. A major function of memory is to process information. Stimuli are picked up by sensory receptors and transferred to…
Marsot, Maud; Rautureau, Séverine; Dufour, Barbara; Durand, Benoit
2014-01-01
Comparison of control strategies against animal infectious diseases allows determining optimal strategies according to their epidemiological and/or economic impacts. However, in real life, the choice of a control strategy does not always obey a pure economic or epidemiological rationality. The objective of this study was to analyze the choice of a foot and mouth disease (FMD) control strategy as a decision-making process in which the decision-maker is influenced by several stakeholders (government, agro-food industries, public opinion). For each of these, an indicator of epizootic impact was quantified to compare seven control strategies. We then determined how, in France, the optimal control strategy varied according to the relative weights of stakeholders and to the perception of risk by the decision-maker (risk-neutral/risk-averse). When the scope of decision was national, whatever their perception of risk and the stakeholders' weights, decision-makers chose a strategy based on vaccination. This consensus concealed marked differences between regions, which were connected with the regional breeding characteristics. Vaccination-based strategies were predominant in regions with dense cattle and swine populations, and in regions with a dense population of small ruminants, combined with a medium density of cattle and swine. These differences between regions suggested that control strategies could be usefully adapted to local breeding conditions. We then analyzed the feasibility of adaptive decision-making processes depending on the date and place where the epizootic starts, or on the evolution of the epizootic over time. The initial conditions always explained at least half of the variance of impacts, the remaining variance being attributed to the variability of epizootics evolution. However, the first weeks of this evolution explained a large part of the impacts variability. Although the predictive value of the initial conditions for determining the optimal strategy was weak, adaptive strategies changing dynamically according to the evolution of the epizootic appeared feasible.
A quasi-dense matching approach and its calibration application with Internet photos.
Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei
2015-03-01
This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiviews with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.
Plasma Enhanced Growth of Carbon Nanotubes For Ultrasensitive Biosensors
NASA Technical Reports Server (NTRS)
Cassell, Alan M.; Meyyappan, M.
2004-01-01
The multitude of considerations facing nanostructure growth and integration lends itself to combinatorial optimization approaches. Rapid optimization becomes even more important with wafer-scale growth and integration processes. Here we discuss methodology for developing plasma enhanced CVD growth techniques for achieving individual, vertically aligned carbon nanostructures that show excellent properties as ultrasensitive electrodes for nucleic acid detection. We utilize high throughput strategies for optimizing the upstream and downstream processing and integration of carbon nanotube electrodes as functional elements in various device types. An overview of ultrasensitive carbon nanotube based sensor arrays for electrochemical bio-sensing applications and the high throughput methodology utilized to combine novel electrode technology with conventional MEMS processing will be presented.
Plasma Enhanced Growth of Carbon Nanotubes For Ultrasensitive Biosensors
NASA Technical Reports Server (NTRS)
Cassell, Alan M.; Li, J.; Ye, Q.; Koehne, J.; Chen, H.; Meyyappan, M.
2004-01-01
The multitude of considerations facing nanostructure growth and integration lends itself to combinatorial optimization approaches. Rapid optimization becomes even more important with wafer-scale growth and integration processes. Here we discuss methodology for developing plasma enhanced CVD growth techniques for achieving individual, vertically aligned carbon nanostructures that show excellent properties as ultrasensitive electrodes for nucleic acid detection. We utilize high throughput strategies for optimizing the upstream and downstream processing and integration of carbon nanotube electrodes as functional elements in various device types. An overview of ultrasensitive carbon nanotube based sensor arrays for electrochemical biosensing applications and the high throughput methodology utilized to combine novel electrode technology with conventional MEMS processing will be presented.
NASA Astrophysics Data System (ADS)
Vongsaysy, Uyxing; Bassani, Dario M.; Servant, Laurent; Pavageau, Bertrand; Wantz, Guillaume; Aziz, Hany
2014-01-01
Polymeric bulk heterojunction (BHJ) organic solar cells represent one of the most promising technologies for renewable energy with a low fabrication cost. Control over BHJ morphology is one of the key factors in obtaining high-efficiency devices. This review focuses on formulation strategies for optimizing the BHJ morphology. We address how solvent choice and the introduction of processing additives affect the morphology. We also review a number of recent studies concerning prediction methods that utilize the Hansen solubility parameters to develop efficient solvent systems.
Li, Mingjie; Zhou, Ping; Wang, Hong; ...
2017-09-19
As one of the most important unit in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. Here in this correspondence, an optimal operation of HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at set-point tracking objective of pulp quality, economic objective, and specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times are employed to construct the subprocess model of the state process model for the HC refiningmore » system, and then the Wiener-type model can be obtained through combining the mechanism model of Canadian Standard Freeness and the state process model that determines their structures based on Akaike information criterion. Second, the multiobjective optimization strategy that optimizes both the set-point tracking objective of pulp quality and SE consumption is proposed simultaneously, which uses NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting at the set-point tracking objective of pulp quality, economic objective, and SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. In conclusion, the simulation results demonstrate that the proposed methods can make the HC refining system provide a better performance of set-point tracking of pulp quality when these predictive controllers are employed. In addition, while the optimal predictive controllers orienting with comprehensive economic objective and SE consumption objective, it has been shown that they have significantly reduced the energy consumption.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Mingjie; Zhou, Ping; Wang, Hong
As one of the most important unit in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. Here in this correspondence, an optimal operation of HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at set-point tracking objective of pulp quality, economic objective, and specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times are employed to construct the subprocess model of the state process model for the HC refiningmore » system, and then the Wiener-type model can be obtained through combining the mechanism model of Canadian Standard Freeness and the state process model that determines their structures based on Akaike information criterion. Second, the multiobjective optimization strategy that optimizes both the set-point tracking objective of pulp quality and SE consumption is proposed simultaneously, which uses NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting at the set-point tracking objective of pulp quality, economic objective, and SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. In conclusion, the simulation results demonstrate that the proposed methods can make the HC refining system provide a better performance of set-point tracking of pulp quality when these predictive controllers are employed. In addition, while the optimal predictive controllers orienting with comprehensive economic objective and SE consumption objective, it has been shown that they have significantly reduced the energy consumption.« less
A general-purpose optimization program for engineering design
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Sugimoto, H.
1986-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available so that a total of nearly 100 possible combinations can be created. An example of available combinations is the Augmented Lagrange Multiplier method, using the BFGS variable metric unconstrained minimization together with polynomial interpolation for the one-dimensional search.
Raut, Sangeeta; Raut, Smita; Sharma, Manisha; Srivastav, Chaitanya; Adhikari, Basudam; Sen, Sudip Kumar
2015-09-01
In the present study, artificial neural network (ANN) modelling coupled with particle swarm optimization (PSO) algorithm was used to optimize the process variables for enhanced low density polyethylene (LDPE) degradation by Curvularia lunata SG1. In the non-linear ANN model, temperature, pH, contact time and agitation were used as input variables and polyethylene bio-degradation as the output variable. Further, on application of PSO to the ANN model, the optimum values of the process parameters were as follows: pH = 7.6, temperature = 37.97 °C, agitation rate = 190.48 rpm and incubation time = 261.95 days. A comparison between the model results and experimental data gave a high correlation coefficient ([Formula: see text]). Significant enhancement of LDPE bio-degradation using C. lunata SG1by about 48 % was achieved under optimum conditions. Thus, the novelty of the work lies in the application of combination of ANN-PSO as optimization strategy to enhance the bio-degradation of LDPE.
A systematic review on the composting of green waste: Feedstock quality and optimization strategies.
Reyes-Torres, M; Oviedo-Ocaña, E R; Dominguez, I; Komilis, D; Sánchez, A
2018-04-27
Green waste (GW) is an important fraction of municipal solid waste (MSW). The composting of lignocellulosic GW is challenging due to its low decomposition rate. Recently, an increasing number of studies that include strategies to optimize GW composting appeared in the literature. This literature review focuses on the physicochemical quality of GW and on the effect of strategies used to improve the process and product quality. A systematic search was carried out, using keywords, and 447 papers published between 2002 and 2018 were identified. After a screening process, 41 papers addressing feedstock quality and 32 papers on optimization strategies were selected to be reviewed and analyzed in detail. The GW composition is highly variable due to the diversity of the source materials, the type of vegetation, and climatic conditions. This variability limits a strict categorization of the GW physicochemical characteristics. However, this research established that the predominant features of GW are a C/N ratio higher than 25, a deficit in important nutrients, namely nitrogen (0.5-1.5% db), phosphorous (0.1-0.2% db) and potassium (0.4-0.8% db) and a high content of recalcitrant organic compounds (e.g. lignin). The promising strategies to improve composting of GW were: i) GW particle size reduction (e.g. shredding and separation of GW fractions); ii) addition of energy amendments (e.g. non-refined sugar, phosphate rock, food waste, volatile ashes), bulking materials (e.g. biocarbon, wood chips), or microbial inoculum (e.g. fungal consortia); and iii) variations in operating parameters (aeration, temperature, and two-phase composting). These alternatives have successfully led to the reduction of process length and have managed to transform recalcitrant substances to a high-quality end-product. Copyright © 2018 Elsevier Ltd. All rights reserved.
Optimization of Transmon Qubit Fabrication
NASA Astrophysics Data System (ADS)
Chang, Josephine; Rothwell, Mary; Keefe, George; IBM Quantum Computing Group Team
2013-03-01
Rapid advances in the field of superconducting transmon qubits have refined our understanding of the role that substrate and interfaces play in qubit decoherence. Here, we review strategies for enhancing coherence times in both 2D and 3D transmon qubits through substrate design, structural improvements, and process optimization. Results correlating processing techniques to decoherence times are presented, and some novel structures are proposed for further consideration. We acknowledge support from IARPA under contract W911NF-10-1-0324
Magnetic manipulation device for the optimization of cell processing conditions.
Ito, Hiroshi; Kato, Ryuji; Ino, Kosuke; Honda, Hiroyuki
2010-02-01
Variability in human cell phenotypes make it's advancements in optimized cell processing necessary for personalized cell therapy. Here we propose a strategy of palm-top sized device to assist physically manipulating cells for optimizing cell preparations. For the design of such a device, we combined two conventional approaches: multi-well plate formatting and magnetic cell handling using magnetite cationic liposomes (MCLs). From our previous works, we showed the labeling applications of MCL on adhesive cells for various tissue engineering approaches. To feasibly transfer cells in multi-well plate, we here evaluated the magnetic response of MCL-labeled suspension type cells. The cell handling performance of Jurkat cells proved to be faster and more robust compared to MACS (Magnetic Cell Sorting) bead methods. To further confirm our strategy, prototype palm-top sized device "magnetic manipulation device (MMD)" was designed. In the device, the actual cell transportation efficacy of Jurkat cells was satisfying. Moreover, as a model of the most distributed clinical cell processing, primary peripheral blood mononuclear cells (PBMCs) from different volunteers were evaluated. By MMD, individual PBMCs indicated to have optimum Interleukin-2 (IL-2) concentrations for the expansion. Such huge differences of individual cells indicated that MMD, our proposing efficient and self-contained support tool, could assist the feasible and cost-effective optimization of cell processing in clinical facilities. Copyright (c) 2009 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Optimal switching between geocentric and egocentric strategies in navigation
Mahadevan, L.
2016-01-01
Animals use a combination of egocentric navigation driven by the internal integration of environmental cues, interspersed with geocentric course correction and reorientation. These processes are accompanied by uncertainty in sensory acquisition of information, planning and execution. Inspired by observations of dung beetle navigational strategies that show switching between geocentric and egocentric strategies, we consider the question of optimal reorientation rates for the navigation of an agent moving along a preferred direction in the presence of multiple sources of noise. We address this using a model that takes the form of a correlated random walk at short time scales that is punctuated by reorientation events leading to a biased random walks at long time scales. This allows us to identify optimal alternation schemes and characterize their robustness in the context of noisy sensory acquisition as well as performance errors linked with variations in environmental conditions and agent–environment interactions. PMID:27493769
A strategy to optimize the thermoelectric performance in a spark plasma sintering process
Chiu, Wan-Ting; Chen, Cheng-Lung; Chen, Yang-Yuan
2016-01-01
Spark plasma sintering (SPS) is currently widely applied to existing alloys as a means of further enhancing the alloys’ figure of merit. However, the determination of the optimal sintering condition is challenging in the SPS process. This report demonstrates a systematic way to independently optimize the Seebeck coefficient S and the ratio of electrical to thermal conductivity (σ/κ) and thus achieve the maximum figure of merit zT = S2(σ/κ)T. Sb2−xInxTe3 (x = 0–0.2) were chosen as examples to validate the method. Although high sintering temperature and pressure are helpful in enhancing the compactness and electrical conductivity of pressed samples, the resultant deteriorated Seebeck coefficient and increasing thermal conductivity eventually offset the benefit. We found that the optimal sintering temperature coincides with temperatures at which the maximum Seebeck coefficient begins to degrade, whereas the optimal sintering pressure coincided with the pressure at which the σ/κ ratio reaches a maximum. Based on this principle, the optimized sintering conditions were determined, and the zT of Sb1.9In0.1Te3 is raised to 0.92 at 600 K, showing an approximately 84% enhancement. This work develops a facile strategy for selecting the optimal SPS sintering condition to further enhance the zT of bulk specimens. PMID:26975209
Optimization of robustness of interdependent network controllability by redundant design
2018-01-01
Controllability of complex networks has been a hot topic in recent years. Real networks regarded as interdependent networks are always coupled together by multiple networks. The cascading process of interdependent networks including interdependent failure and overload failure will destroy the robustness of controllability for the whole network. Therefore, the optimization of the robustness of interdependent network controllability is of great importance in the research area of complex networks. In this paper, based on the model of interdependent networks constructed first, we determine the cascading process under different proportions of node attacks. Then, the structural controllability of interdependent networks is measured by the minimum driver nodes. Furthermore, we propose a parameter which can be obtained by the structure and minimum driver set of interdependent networks under different proportions of node attacks and analyze the robustness for interdependent network controllability. Finally, we optimize the robustness of interdependent network controllability by redundant design including node backup and redundancy edge backup and improve the redundant design by proposing different strategies according to their cost. Comparative strategies of redundant design are conducted to find the best strategy. Results shows that node backup and redundancy edge backup can indeed decrease those nodes suffering from failure and improve the robustness of controllability. Considering the cost of redundant design, we should choose BBS (betweenness-based strategy) or DBS (degree based strategy) for node backup and HDF(high degree first) for redundancy edge backup. Above all, our proposed strategies are feasible and effective at improving the robustness of interdependent network controllability. PMID:29438426
Agile, a guiding principle for health care improvement?
Tolf, Sara; Nyström, Monica E; Tishelman, Carol; Brommels, Mats; Hansson, Johan
2015-01-01
The purpose of this paper is to contribute to increased understanding of the concept agile and its potential for hospital managers to optimize design of organizational structures and processes to combine internal efficiency and external effectiveness. An integrative review was conducted using the reSEARCH database. Articles met the following criteria: first, a definition of agility; second, descriptions of enablers of becoming an agile organization; and finally, discussions of agile on multiple organizational levels. In total, 60 articles qualified for the final analysis. Organizational agility rests on the assumption that the environment is uncertain, ranging from frequently changing to highly unpredictable. Proactive, reactive or embracive coping strategies were described as possible ways to handle such uncertain environments. Five organizational capacities were derived as necessary for hospitals to use the strategies optimally: transparent and transient inter-organizational links; market sensitivity and customer focus; management by support for self-organizing employees; organic structures that are elastic and responsive; flexible human and resource capacity for timely delivery. Agile is portrayed as either the "new paradigm" following lean, the needed development on top of a lean base, or as complementary to lean in distinct hybrid strategies. Environmental uncertainty needs to be matched with coping strategies and organizational capacities to design processes responsive to real needs of health care. This implies that lean and agile can be combined to optimize the design of hospitals, to meet different variations in demand and create good patient management. While considerable value has been paid to strategies to improve the internal efficiency within hospitals, this review raise the attention to the value of strategies of external effectiveness.
Strategie de commande optimale de la production electrique dans un site isole
NASA Astrophysics Data System (ADS)
Barris, Nicolas
Hydro-Quebec manages more than 20 isolated power grids all over the province. The grids are located in small villages where the electricity demand is rather small. Those villages being far away from each other and from the main electricity production facilities, energy is produced locally using diesel generators. Electricity production costs at the isolated power grids are very important due to elevated diesel prices and transportation costs. However, the price of electricity is the same for the entire province, with no regards to the production costs of the electricity consumed. These two factors combined result in yearly exploitation losses for Hydro-Quebec. For any given village, several diesel generators are required to satisfy the demand. When the load increases, it becomes necessary to increase the capacity either by adding a generator to the production or by switching to a more powerful generator. The same thing happens when the load decreases. Every decision regarding changes in the production is included in the control strategy, which is based on predetermined parameters. These parameters were specified according to empirical studies and the knowledge base of the engineers managing the isolated power grids, but without any optimisation approach. The objective of the presented work is to minimize the diesel consumption by optimizing the parameters included in the control strategy. Its impact would be to limit the exploitation losses generated by the isolated power grids and the CO2 equivalent emissions without adding new equipment or completely changing the nature of the strategy. To satisfy this objective, the isolated power grid simulator OPERA is used along with the optimization library NOMAD and the data of three villages in northern Quebec. The preliminary optimization instance for the first village showed that some modifications to the existing control strategy must be done to better achieve the minimization objective. The main optimization processes consist of three different optimization approaches: the optimization of one set of parameters for all the villages, the optimization of one set of parameters per village, and the optimization of one set of parameters per diesel generator configuration per village. In the first scenario, the optimization of one set of parameters for all the villages leads to compromises for all three villages without allowing a full potential reduction for any village. Therefore, it is proven that applying one set of parameters to all the villages is not suitable for finding an optimal solution. In the second scenario, the optimization of one set of parameters per village allows an improvement over the previous results. At this point, it is shown that it is crucial to remove from the production the less efficient configurations when they are next to more efficient configurations. In the third scenario, the optimization of one set of parameters per configuration per village requires a very large number of function evaluations but does not result in any satisfying solution. In order to improve the performance of the optimization, it has been decided that the problem structure would be used. Two different approaches are considered: optimizing one set of parameters at a time and optimizing different rules included in the control strategy one at a time. In both cases, results are similar but calculation costs differ, the second method being much more cost efficient. The optimal values of the ultimate rules parameters can be directly linked to the efficient transition points that favor an efficient operation of the isolated power grids. Indeed, these transition points are defined in such a way that the high efficiency zone of every configuration is used. Therefore, it seems possible to directly identify on the graphs these optimal transition points and define the parameters in the control strategy without even having to run any optimization process. The diesel consumption reduction for all three villages is about 1.9%. Considering elevated diesel costs and the existence of about 20 other isolated power grids, the use of the developed methods together with a calibration of OPERA would allow a substantial reduction of Hydro-Quebec's annual deficit. Also, since one of the developed methods is very cost effective and produces equivalent results, it could be possible to use it during other processes; for example, when buying new equipment for the grid it could be possible to assess its full potential, under an optimized control strategy, and improve the net present value.
Process development for the mass production of Ehrlichia ruminantium.
Marcelino, Isabel; Sousa, Marcos F Q; Veríssimo, Célia; Cunha, António E; Carrondo, Manuel J T; Alves, Paula M
2006-03-06
This work describes the optimization of a cost-effective process for the production of an inactivated bacterial vaccine against heartwater and the first attempt to produce the causative agent of this disease, the rickettsia Ehrlichia ruminantium (ER), using stirred tanks. In vitro, it is possible to produce ER using cultures of ruminant endothelial cells. Herein, mass production of these cells was optimized for stirring conditions. The effect of inoculum size, microcarrier type, concentration of serum at inoculation time and agitation rate upon maximum cell concentration were evaluated. Several strategies for the scale-up of cell inoculum were also tested. Afterwards, using the optimized parameters for cell growth, ER production in stirred tanks was validated for two ER strains (Gardel and Welgevonden). Critical parameters related with the infection strategy such as serum concentration at infection time, multiplicity and time of infection, and medium refeed strategy were analyzed. The results indicate that it is possible to produce ER in stirred tank bioreactors, under serum-free culture conditions, reaching a 6.5-fold increase in ER production yields. The suitability of this process was validated up to a 2-l scale and a preliminary cost estimation has shown that the stirred tanks are the least expensive culture method. Overall, these results are crucial to define a scaleable and fully controlled process for the production of a heartwater vaccine and open "new avenues" for the production of vaccines against other ehrlichial species, with emerging impact in human and animal health.
The topography of the environment alters the optimal search strategy for active particles
Volpe, Giovanni
2017-01-01
In environments with scarce resources, adopting the right search strategy can make the difference between succeeding and failing, even between life and death. At different scales, this applies to molecular encounters in the cell cytoplasm, to animals looking for food or mates in natural landscapes, to rescuers during search and rescue operations in disaster zones, and to genetic computer algorithms exploring parameter spaces. When looking for sparse targets in a homogeneous environment, a combination of ballistic and diffusive steps is considered optimal; in particular, more ballistic Lévy flights with exponent α≤1 are generally believed to optimize the search process. However, most search spaces present complex topographies. What is the best search strategy in these more realistic scenarios? Here, we show that the topography of the environment significantly alters the optimal search strategy toward less ballistic and more Brownian strategies. We consider an active particle performing a blind cruise search for nonregenerating sparse targets in a 2D space with steps drawn from a Lévy distribution with the exponent varying from α=1 to α=2 (Brownian). We show that, when boundaries, barriers, and obstacles are present, the optimal search strategy depends on the topography of the environment, with α assuming intermediate values in the whole range under consideration. We interpret these findings using simple scaling arguments and discuss their robustness to varying searcher’s size. Our results are relevant for search problems at different length scales from animal and human foraging to microswimmers’ taxis to biochemical rates of reaction. PMID:29073055
The topography of the environment alters the optimal search strategy for active particles
NASA Astrophysics Data System (ADS)
Volpe, Giorgio; Volpe, Giovanni
2017-10-01
In environments with scarce resources, adopting the right search strategy can make the difference between succeeding and failing, even between life and death. At different scales, this applies to molecular encounters in the cell cytoplasm, to animals looking for food or mates in natural landscapes, to rescuers during search and rescue operations in disaster zones, and to genetic computer algorithms exploring parameter spaces. When looking for sparse targets in a homogeneous environment, a combination of ballistic and diffusive steps is considered optimal; in particular, more ballistic Lévy flights with exponent α≤1 are generally believed to optimize the search process. However, most search spaces present complex topographies. What is the best search strategy in these more realistic scenarios? Here, we show that the topography of the environment significantly alters the optimal search strategy toward less ballistic and more Brownian strategies. We consider an active particle performing a blind cruise search for nonregenerating sparse targets in a 2D space with steps drawn from a Lévy distribution with the exponent varying from α=1 to α=2 (Brownian). We show that, when boundaries, barriers, and obstacles are present, the optimal search strategy depends on the topography of the environment, with α assuming intermediate values in the whole range under consideration. We interpret these findings using simple scaling arguments and discuss their robustness to varying searcher's size. Our results are relevant for search problems at different length scales from animal and human foraging to microswimmers' taxis to biochemical rates of reaction.
Chou, Ann F; Yano, Elizabeth M; McCoy, Kimberly D; Willis, Deanna R; Doebbeling, Bradley N
2008-01-01
To address increases in the incidence of infection with antimicrobial-resistant pathogens, the National Foundation for Infectious Diseases and Centers for Disease Control and Prevention proposed two sets of strategies to (a) optimize antibiotic use and (b) prevent the spread of antimicrobial resistance and control transmission. However, little is known about the implementation of these strategies. Our objective is to explore organizational structural and process factors that facilitate the implementation of National Foundation for Infectious Diseases/Centers for Disease Control and Prevention strategies in U.S. hospitals. We surveyed 448 infection control professionals from a national sample of hospitals. Clinically anchored in the Donabedian model that defines quality in terms of structural and process factors, with the structural domain further informed by a contingency approach, we modeled the degree to which National Foundation for Infectious Diseases and Centers for Disease Control and Prevention strategies were implemented as a function of formalization and standardization of protocols, centralization of decision-making hierarchy, information technology capabilities, culture, communication mechanisms, and interdepartmental coordination, controlling for hospital characteristics. Formalization, standardization, centralization, institutional culture, provider-management communication, and information technology use were associated with optimal antibiotic use and enhanced implementation of strategies that prevent and control antimicrobial resistance spread (all p < .001). However, interdepartmental coordination for patient care was inversely related with antibiotic use in contrast to antimicrobial resistance spread prevention and control (p < .0001). Formalization and standardization may eliminate staff role conflict, whereas centralized authority may minimize ambiguity. Culture and communication likely promote internal trust, whereas information technology use helps integrate and support these organizational processes. These findings suggest concrete strategies for evaluating current capabilities to implement effective practices and foster and sustain a culture of patient safety.
Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah
2016-01-01
The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of migration strategy of species to derive algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome the weakness of classical BBO algorithm to solve QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585
Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah
2016-01-01
The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of migration strategy of species to derive algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome the weakness of classical BBO algorithm to solve QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them.
Feng, Qiang; Chen, Yiran; Sun, Bo; Li, Songjie
2014-01-01
An optimization method for condition based maintenance (CBM) of aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of aircraft fleet is analyzed first, and the alternative strategy sets for single aircraft are given. Then, the optimization problem of fleet CBM with lower maintenance cost and dispatch risk is translated to the combinatorial optimization problem of single aircraft strategy. Remain useful life (RUL) distribution of the key line replaceable Module (LRM) has been transformed into the failure probability of the aircraft and the fleet health status matrix is established. And the calculation method of the costs and risks for mission based on health status matrix and maintenance matrix is given. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircrafts is studied to verify the proposed method. The results shows that it could realize optimization and control of the aircraft fleet oriented to mission success.
Chen, Yiran; Sun, Bo; Li, Songjie
2014-01-01
An optimization method for condition based maintenance (CBM) of aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of aircraft fleet is analyzed first, and the alternative strategy sets for single aircraft are given. Then, the optimization problem of fleet CBM with lower maintenance cost and dispatch risk is translated to the combinatorial optimization problem of single aircraft strategy. Remain useful life (RUL) distribution of the key line replaceable Module (LRM) has been transformed into the failure probability of the aircraft and the fleet health status matrix is established. And the calculation method of the costs and risks for mission based on health status matrix and maintenance matrix is given. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircrafts is studied to verify the proposed method. The results shows that it could realize optimization and control of the aircraft fleet oriented to mission success. PMID:24892046
Dickinson, Christopher A.; Zelinsky, Gregory J.
2013-01-01
Two experiments are reported that further explore the processes underlying dynamic search. In Experiment 1, observers’ oculomotor behavior was monitored while they searched for a randomly oriented T among oriented L distractors under static and dynamic viewing conditions. Despite similar search slopes, eye movements were less frequent and more spatially constrained under dynamic viewing relative to static, with misses also increasing more with target eccentricity in the dynamic condition. These patterns suggest that dynamic search involves a form of sit-and-wait strategy in which search is restricted to a small group of items surrounding fixation. To evaluate this interpretation, we developed a computational model of a sit-and-wait process hypothesized to underlie dynamic search. In Experiment 2 we tested this model by varying fixation position in the display and found that display positions optimized for a sit-and-wait strategy resulted in higher d′ values relative to a less optimal location. We conclude that different strategies, and therefore underlying processes, are used to search static and dynamic displays. PMID:23372555
NASA Astrophysics Data System (ADS)
Wibowo, Y. T.; Baskoro, S. Y.; Manurung, V. A. T.
2018-02-01
Plastic based products spread all over the world in many aspects of life. The ability to substitute other materials is getting stronger and wider. The use of plastic materials increases and become unavoidable. Plastic based mass production requires injection process as well Mold. The milling process of plastic mold steel material was done using HSS End Mill cutting tool that is widely used in a small and medium enterprise for the reason of its ability to be re sharpened and relatively inexpensive. Study on the effect of the geometry tool states that it has an important effect on the quality improvement. Cutting speed, feed rate, depth of cut and radii are input parameters beside to the tool path strategy. This paper aims to investigate input parameter and cutting tools behaviors within some different tool path strategy. For the reason of experiments efficiency Taguchi method and ANOVA were used. Response studied is surface roughness and cutting behaviors. By achieving the expected quality, no more additional process is required. Finally, the optimal combination of machining parameters will deliver the expected roughness and of course totally reduced cutting time. However actually, SMEs do not optimally use this data for cost reduction.
Comparison of optimization algorithms for the slow shot phase in HPDC
NASA Astrophysics Data System (ADS)
Frings, Markus; Berkels, Benjamin; Behr, Marek; Elgeti, Stefanie
2018-05-01
High-pressure die casting (HPDC) is a popular manufacturing process for aluminum processing. The slow shot phase in HPDC is the first phase of this process. During this phase, the molten metal is pushed towards the cavity under moderate plunger movement. The so-called shot curve describes this plunger movement. A good design of the shot curve is important to produce high-quality cast parts. Three partially competing process goals characterize the slow shot phase: (1) reducing air entrapment, (2) avoiding temperature loss, and (3) minimizing oxide caused by the air-aluminum contact. Due to the rough process conditions with high pressure and temperature, it is hard to design the shot curve experimentally. There exist a few design rules that are based on theoretical considerations. Nevertheless, the quality of the shot curve design still depends on the experience of the machine operator. To improve the shot curve it seems to be natural to use numerical optimization. This work compares different optimization strategies for the slow shot phase optimization. The aim is to find the best optimization approach on a simple test problem.
Su, Weixing; Chen, Hanning; Liu, Fang; Lin, Na; Jing, Shikai; Liang, Xiaodan; Liu, Wei
2017-03-01
There are many dynamic optimization problems in the real world, whose convergence and searching ability is cautiously desired, obviously different from static optimization cases. This requires an optimization algorithm adaptively seek the changing optima over dynamic environments, instead of only finding the global optimal solution in the static environment. This paper proposes a novel comprehensive learning artificial bee colony optimizer (CLABC) for optimization in dynamic environments problems, which employs a pool of optimal foraging strategies to balance the exploration and exploitation tradeoff. The main motive of CLABC is to enrich artificial bee foraging behaviors in the ABC model by combining Powell's pattern search method, life-cycle, and crossover-based social learning strategy. The proposed CLABC is a more bee-colony-realistic model that the bee can reproduce and die dynamically throughout the foraging process and population size varies as the algorithm runs. The experiments for evaluating CLABC are conducted on the dynamic moving peak benchmarks. Furthermore, the proposed algorithm is applied to a real-world application of dynamic RFID network optimization. Statistical analysis of all these cases highlights the significant performance improvement due to the beneficial combination and demonstrates the performance superiority of the proposed algorithm.
Finite grade pheromone ant colony optimization for image segmentation
NASA Astrophysics Data System (ADS)
Yuanjing, F.; Li, Y.; Liangjun, K.
2008-06-01
By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades and updating of the pheromone is achieved by changing the grades and the updated quantity of pheromone is independent from the objective function. The algorithm that provides a new approach to obtain precise contour is proved to converge to the global optimal solutions linearly by means of finite Markov chains. The segmentation experiments with ultrasound heart image show the effectiveness of the algorithm. Comparing the results for segmentation of left ventricle images shows that the ACO for image segmentation is more effective than the GA approach and the new pheromone updating strategy appears good time performance in optimization process.
Haq, Ikram Ul; Akram, Fatima
2017-09-01
Commonly, unintentional induction and inadvertently preparing medium for engineered Escherichia coli BL21 CodonPlus (DE3)-RIPL, give poor or variable yields of heterologous proteins. Therefore, to enhance the activity and production of an industrially relevant recombinant processive endo-1,4-β-glucanase (CenC) propagated in Escherichia coli BL21 CodonPlus(DE3)-RIPL through various cultivation and induction strategies. Investigation of various growth media and induction parameters revealed that high-cell-density and optimal CenC expression were obtained in ZYBM9 medium induced either with 0.5 mM IPTG/150 mM lactose, after 6 h induction at 37 °C; and before induction, bacterial cells were given heat shock (42 °C) for 1 h when culture density (OD 600nm ) reached at 0.6. Intracellular enzyme activity was enhanced by 6.67 and 3.20-fold in ZYBM9 and 3×ZYBM9 medium, respectively, under optimal conditions. Using YNG auto-induction medium, activity was 2.5-fold increased after 10 h incubation at 37 °C. Approximately similar results were obtained by transferring the optimized process at the bioreactor level. Results showed that the effective process strategy is essential to enhance recombinant bacterial cell mass and enzyme production from small to large-scale. To the best of our knowledge, this is the first ever report on enhanced production of thermostable processive endo-1,4-β-glucanase cloned from Ruminiclostridium thermocellum, which is a suitable candidate for industrial applications. Graphical Abstract Flow Chart Summary of Enhanced Production of a Recombinant Multidomain Thermostable GH9 Processive Endo-1,4-β-glucanase from Ruminiclostridium thermocellum.
Heuristic-based information acquisition and decision making among pilots.
Wiggins, Mark W; Bollwerk, Sandra
2006-01-01
This research was designed to examine the impact of heuristic-based approaches to the acquisition of task-related information on the selection of an optimal alternative during simulated in-flight decision making. The work integrated features of naturalistic and normative decision making and strategies of information acquisition within a computer-based, decision support framework. The study comprised two phases, the first of which involved familiarizing pilots with three different heuristic-based strategies of information acquisition: frequency, elimination by aspects, and majority of confirming decisions. The second stage enabled participants to choose one of the three strategies of information acquisition to resolve a fourth (choice) scenario. The results indicated that task-oriented experience, rather than the information acquisition strategies, predicted the selection of the optimal alternative. It was also evident that of the three strategies available, the elimination by aspects information acquisition strategy was preferred by most participants. It was concluded that task-oriented experience, rather than the process of information acquisition, predicted task accuracy during the decision-making task. It was also concluded that pilots have a preference for one particular approach to information acquisition. Applications of outcomes of this research include the development of decision support systems that adapt to the information-processing capabilities and preferences of users.
A new real-time guidance strategy for aerodynamic ascent flight
NASA Astrophysics Data System (ADS)
Yamamoto, Takayuki; Kawaguchi, Jun'ichiro
2007-12-01
Reusable launch vehicles are conceived to constitute the future space transportation system. If these vehicles use air-breathing propulsion and lift taking-off horizontally, the optimal steering for these vehicles exhibits completely different behavior from that in conventional rockets flight. In this paper, the new guidance strategy is proposed. This method derives from the optimality condition as for steering and an analysis concludes that the steering function takes the form comprised of Linear and Logarithmic terms, which include only four parameters. The parameter optimization of this method shows the acquired terminal horizontal velocity is almost same with that obtained by the direct numerical optimization. This supports the parameterized Liner Logarithmic steering law. And here is shown that there exists a simple linear relation between the terminal states and the parameters to be corrected. The relation easily makes the parameters determined to satisfy the terminal boundary conditions in real-time. The paper presents the guidance results for the practical application cases. The results show the guidance is well performed and satisfies the terminal boundary conditions specified. The strategy built and presented here does guarantee the robust solution in real-time excluding any optimization process, and it is found quite practical.
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing technology of high-resolution image airborne sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amountds of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The actual tendency in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well tested supervised parametric Bayesian estimator and the Fuzzy Clustering. The DSA is an optimization approach, which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms simple classifiers used for the combination and some combined strategies, including a scheme based on the fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
NASA Astrophysics Data System (ADS)
Gao, Kaizhou; Wang, Ling; Luo, Jianping; Jiang, Hua; Sadollah, Ali; Pan, Quanke
2018-06-01
In this article, scheduling and rescheduling problems with increasing processing time and new job insertion are studied for reprocessing problems in the remanufacturing process. To handle the unpredictability of reprocessing time, an experience-based strategy is used. Rescheduling strategies are applied for considering the effect of increasing reprocessing time and the new subassembly insertion. To optimize the scheduling and rescheduling objective, a discrete harmony search (DHS) algorithm is proposed. To speed up the convergence rate, a local search method is designed. The DHS is applied to two real-life cases for minimizing the maximum completion time and the mean of earliness and tardiness (E/T). These two objectives are also considered together as a bi-objective problem. Computational optimization results and comparisons show that the proposed DHS is able to solve the scheduling and rescheduling problems effectively and productively. Using the proposed approach, satisfactory optimization results can be achieved for scheduling and rescheduling on a real-life shop floor.
NASA Astrophysics Data System (ADS)
Chen, CHAI; Yiik Diew, WONG
2017-02-01
This study provides an integrated strategy, encompassing microscopic simulation, safety assessment, and multi-attribute decision-making, to optimize traffic performance at downstream merging area of signalized intersections. A Fuzzy Cellular Automata (FCA) model is developed to replicate microscopic movement and merging behavior. Based on simulation experiment, the proposed FCA approach is able to provide capacity and safety evaluation of different traffic scenarios. The results are then evaluated through data envelopment analysis (DEA) and analytic hierarchy process (AHP). Optimized geometric layout and control strategies are then suggested for various traffic conditions. An optimal lane-drop distance that is dependent on traffic volume and speed limit can thus be established at the downstream merging area.
Optimal trading strategies—a time series approach
NASA Astrophysics Data System (ADS)
Bebbington, Peter A.; Kühn, Reimer
2016-05-01
Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
Surrogate-based simulation-optimization technique is an effective approach for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which is used to replace the simulation model for the aim of reducing computation burden, is the key of corresponding researches. However, previous researches are generally based on a stand-alone surrogate model, and rarely make efforts to improve the approximation accuracy of the surrogate model to the simulation model sufficiently by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build ensemble surrogate (ES) model, and conducted a comparative research to select a better ES modeling pattern for the SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model is assembling RBFANN model, SVR model, and Kriging model using set pair weights according their performance, and the other is assembling several Kriging (the best surrogate modeling method of three) models built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed the residuals of the outputs between the best ES model and simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of simulation-optimization process and maintaining high computation accuracy simultaneously. Copyright © 2017 Elsevier B.V. All rights reserved.
Fuel consumption optimization for smart hybrid electric vehicle during a car-following process
NASA Astrophysics Data System (ADS)
Li, Liang; Wang, Xiangyu; Song, Jian
2017-03-01
Hybrid electric vehicles (HEVs) provide large potential to save energy and reduce emission, and smart vehicles bring out great convenience and safety for drivers. By combining these two technologies, vehicles may achieve excellent performances in terms of dynamic, economy, environmental friendliness, safety, and comfort. Hence, a smart hybrid electric vehicle (s-HEV) is selected as a platform in this paper to study a car-following process with optimizing the fuel consumption. The whole process is a multi-objective optimal problem, whose optimal solution is not just adding an energy management strategy (EMS) to an adaptive cruise control (ACC), but a deep fusion of these two methods. The problem has more restricted conditions, optimal objectives, and system states, which may result in larger computing burden. Therefore, a novel fuel consumption optimization algorithm based on model predictive control (MPC) is proposed and some search skills are adopted in receding horizon optimization to reduce computing burden. Simulations are carried out and the results indicate that the fuel consumption of proposed method is lower than that of the ACC+EMS method on the condition of ensuring car-following performances.
Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; Hilgart, Mark C.; Stepanov, Sergey; Sanishvili, Ruslan; Becker, Michael; Winter, Graeme; Sauter, Nicholas K.; Smith, Janet L.; Fischetti, Robert F.
2014-01-01
The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce. PMID:25484844
Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; ...
2014-11-18
The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates amore » collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce.« less
Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete
2008-08-20
Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
Smith, Aaron Douglas; Lockman, Nur Ain; Holtzapple, Mark T
2011-06-01
Nutrients are essential for microbial growth and metabolism in mixed-culture acid fermentations. Understanding the influence of nutrient feeding strategies on fermentation performance is necessary for optimization. For a four-bottle fermentation train, five nutrient contacting patterns (single-point nutrient addition to fermentors F1, F2, F3, and F4 and multi-point parallel addition) were investigated. Compared to the traditional nutrient contacting method (all nutrients fed to F1), the near-optimal feeding strategies improved exit yield, culture yield, process yield, exit acetate-equivalent yield, conversion, and total acid productivity by approximately 31%, 39%, 46%, 31%, 100%, and 19%, respectively. There was no statistical improvement in total acid concentration. The traditional nutrient feeding strategy had the highest selectivity and acetate-equivalent selectivity. Total acid productivity depends on carbon-nitrogen ratio.
A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa
2017-06-01
High-density polyethylene (HDPE) pipes find versatile applicability for transportation of water, sewage and slurry from one place to another. Hence, these pipes undergo tremendous pressure by the fluid carried. The present work entails the optimization of the withstanding pressure of the HDPE pipes using Taguchi technique. The traditional heuristic methodology stresses on a trial and error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters. This results in setting up of less-than-optimal values. Hence, there arouse a necessity to determine optimal process control parameters for the pipe extrusion process, which can ensure robust pipe quality and process reliability. In the proposed optimization strategy, the design of experiments (DoE) are conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of signal-to-noise ratio ( S/ N ratio) is applied and ultimately optimum values of process control parameters are obtained as: pushing zone temperature of 166 °C, Dimmer speed at 08 rpm, and Die head temperature to be 192 °C. Confirmation experimental run is also conducted to verify the analysis and research result and values proved to be in synchronization with the main experimental findings and the withstanding pressure showed a significant improvement from 0.60 to 1.004 Mpa.
Kim, Yong Bok; Lee, Hyeongjin; Kim, Geun Hyung
2016-11-30
Recently, a three-dimensional (3D) bioprinting process for obtaining a cell-laden structure has been widely applied because of its ability to fabricate biomimetic complex structures embedded with and without cells. To successfully obtain a cell-laden porous block, the cell-delivering vehicle, bioink, is one of the significant factors. Until now, various biocompatible hydrogels (synthetic and natural biopolymers) have been utilized in the cell-printing process, but a bioink satisfying both biocompatibility and print-ability requirements to achieve a porous structure with reasonable mechanical strength has not been issued. Here, we propose a printing strategy with optimal conditions including a safe cross-linking procedure for obtaining a 3D porous cell block composed of a biocompatible collagen-bioink and genipin, a cross-linking agent. To obtain the optimal processing conditions, we modified the 3D printing machine and selected an optimal cross-linking condition (∼1 mM and 1 h) of genipin solution. To show the feasibility of the process, 3D pore-interconnected cell-laden constructs were manufactured using osteoblast-like cells (MG63) and human adipose stem cells (hASCs). Under these processing conditions, a macroscale 3D collagen-based cell block of 21 × 21 × 12 mm 3 and over 95% cell viability was obtained. In vitro biological testing of the cell-laden 3D porous structure showed that the embedded cells were sufficiently viable, and their proliferation was significantly higher; the cells also exhibited increased osteogenic activities compared to the conventional alginate-based bioink (control). The results indicated the fabrication process using the collagen-bioink would be an innovative platform to design highly biocompatible and mechanically stable cell blocks.
Strategies and trajectories of coral reef fish larvae optimizing self-recruitment.
Irisson, Jean-Olivier; LeVan, Anselme; De Lara, Michel; Planes, Serge
2004-03-21
Like many marine organisms, most coral reef fishes have a dispersive larval phase. The fate of this phase is of great concern for their ecology as it may determine population demography and connectivity. As direct study of the larval phase is difficult, we tackle the question of dispersion from an opposite point of view and study self-recruitment. In this paper, we propose a mathematical model of the pelagic phase, parameterized by a limited number of factors (currents, predator and prey distributions, energy budgets) and which focuses on the behavioral response of the larvae to these factors. We evaluate optimal behavioral strategies of the larvae (i.e. strategies that maximize the probability of return to the natal reef) and examine the trajectories of dispersal that they induce. Mathematically, larval behavior is described by a controlled Markov process. A strategy induces a sequence, indexed by time steps, of "decisions" (e.g. looking for food, swimming in a given direction). Biological, physical and topographic constraints are captured through the transition probabilities and the sets of possible decisions. Optimal strategies are found by means of the so-called stochastic dynamic programming equation. A computer program is developed and optimal decisions and trajectories are numerically derived. We conclude that this technique can be considered as a good tool to represent plausible larval behaviors and that it has great potential in terms of theoretical investigations and also for field applications.
The relation between cognitive and metacognitive strategic processing during a science simulation.
Dinsmore, Daniel L; Zoellner, Brian P
2018-03-01
This investigation was designed to uncover the relations between students' cognitive and metacognitive strategies used during a complex climate simulation. While cognitive strategy use during science inquiry has been studied, the factors related to this strategy use, such as concurrent metacognition, prior knowledge, and prior interest, have not been investigated in a multidimensional fashion. This study addressed current issues in strategy research by examining not only how metacognitive, surface-level, and deep-level strategies influence performance, but also how these strategies related to each other during a contextually relevant science simulation. The sample for this study consisted of 70 undergraduates from a mid-sized Southeastern university in the United States. These participants were recruited from both physical and life science (e.g., biology) and education majors to obtain a sample with variance in terms of their prior knowledge, interest, and strategy use. Participants completed measures of prior knowledge and interest about global climate change. Then, they were asked to engage in an online climate simulator for up to 30 min while thinking aloud. Finally, participants were asked to answer three outcome questions about global climate change. Results indicated a poor fit for the statistical model of the frequency and level of processing predicting performance. However, a statistical model that independently examined the influence of metacognitive monitoring and control of cognitive strategies showed a very strong relation between the metacognitive and cognitive strategies. Finally, smallest space analysis results provided evidence that strategy use may be better captured in a multidimensional fashion, particularly with attention paid towards the combination of strategies employed. Conclusions drawn from the evidence point to the need for more dynamic, multidimensional models of strategic processing that account for the patterns of optimal and non-optimal strategy use. Additionally, analyses that can capture these complex patterns need to be further explored. © 2017 The British Psychological Society.
Mehrian, Mohammad; Guyot, Yann; Papantoniou, Ioannis; Olofsson, Simon; Sonnaert, Maarten; Misener, Ruth; Geris, Liesbet
2018-03-01
In regenerative medicine, computer models describing bioreactor processes can assist in designing optimal process conditions leading to robust and economically viable products. In this study, we started from a (3D) mechanistic model describing the growth of neotissue, comprised of cells, and extracellular matrix, in a perfusion bioreactor set-up influenced by the scaffold geometry, flow-induced shear stress, and a number of metabolic factors. Subsequently, we applied model reduction by reformulating the problem from a set of partial differential equations into a set of ordinary differential equations. Comparing the reduced model results to the mechanistic model results and to dedicated experimental results assesses the reduction step quality. The obtained homogenized model is 10 5 fold faster than the 3D version, allowing the application of rigorous optimization techniques. Bayesian optimization was applied to find the medium refreshment regime in terms of frequency and percentage of medium replaced that would maximize neotissue growth kinetics during 21 days of culture. The simulation results indicated that maximum neotissue growth will occur for a high frequency and medium replacement percentage, a finding that is corroborated by reports in the literature. This study demonstrates an in silico strategy for bioprocess optimization paying particular attention to the reduction of the associated computational cost. © 2017 Wiley Periodicals, Inc.
Speed and convergence properties of gradient algorithms for optimization of IMRT.
Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe
2004-05-01
Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate the strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and the steepest decent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of the use of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error, and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions which produced IMRT plans that satisfied the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented. For both dose-volume and EUD based objective functions, Newton's method far outperforms other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but for different objective function parameters, the solution frequently gets trapped in local minima. We found that the initial intensity distribution obtained from IMRT optimization utilizing objective function parameters, which favor a specific anatomic structure, would lead to a local minimum corresponding to that structure. Our results indicate that from among the gradient algorithms tested, Newton's method appears to be the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting with the results of previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
Cox, G; Beresford, N A; Alvarez-Farizo, B; Oughton, D; Kis, Z; Eged, K; Thørring, H; Hunt, J; Wright, S; Barnett, C L; Gil, J M; Howard, B J; Crout, N M J
2005-01-01
A spatially implemented model designed to assist the identification of optimal countermeasure strategies for radioactively contaminated regions is described. Collective and individual ingestion doses for people within the affected area are estimated together with collective exported ingestion dose. A range of countermeasures are incorporated within the model, and environmental restrictions have been included as appropriate. The model evaluates the effectiveness of a given combination of countermeasures through a cost function which balances the benefit obtained through the reduction in dose with the cost of implementation. The optimal countermeasure strategy is the combination of individual countermeasures (and when and where they are implemented) which gives the lowest value of the cost function. The model outputs should not be considered as definitive solutions, rather as interactive inputs to the decision making process. As a demonstration the model has been applied to a hypothetical scenario in Cumbria (UK). This scenario considered a published nuclear power plant accident scenario with a total deposition of 1.7x10(14), 1.2x10(13), 2.8x10(10) and 5.3x10(9)Bq for Cs-137, Sr-90, Pu-239/240 and Am-241, respectively. The model predicts that if no remediation measures were implemented the resulting collective dose would be approximately 36 000 person-Sv (predominantly from 137Cs) over a 10-year period post-deposition. The optimal countermeasure strategy is predicted to avert approximately 33 000 person-Sv at a cost of approximately 160 million pounds. The optimal strategy comprises a mixture of ploughing, AFCF (ammonium-ferric hexacyano-ferrate) administration, potassium fertiliser application, clean feeding of livestock and food restrictions. The model recommends specific areas within the contaminated area and time periods where these measures should be implemented.
ERIC Educational Resources Information Center
Lee, Courtland C.
This series of five interrelated modules is an update and revision of "Saving the Native Son: Empowerment Strategies for Young Black Males (1996)." It offers specific strategies for empowering young African American males to help them achieve optimal educational and social success. Empowerment is a developmental process by which people who are…
Reentry trajectory optimization based on a multistage pseudospectral method.
Zhao, Jiang; Zhou, Rui; Jin, Xuelian
2014-01-01
Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with the unexpected situations in reentry flight. The strategy typically includes two subproblems: the trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of multistage pseudospectral method in reentry trajectory optimization.
Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method
Zhou, Rui; Jin, Xuelian
2014-01-01
Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with the unexpected situations in reentry flight. The strategy typically includes two subproblems: the trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of multistage pseudospectral method in reentry trajectory optimization. PMID:24574929
Stephanie A. Snyder; Keith D. Stockmann; Gaylord E. Morris
2012-01-01
The US Forest Service used contracted helicopter services as part of its wildfire suppression strategy. An optimization decision-modeling system was developed to assist in the contract selection process. Three contract award selection criteria were considered: cost per pound of delivered water, total contract cost, and quality ratings of the aircraft and vendors....
Optimized Free Energies from Bidirectional Single-Molecule Force Spectroscopy
NASA Astrophysics Data System (ADS)
Minh, David D. L.; Adib, Artur B.
2008-05-01
An optimized method for estimating path-ensemble averages using data from processes driven in opposite directions is presented. Based on this estimator, bidirectional expressions for reconstructing free energies and potentials of mean force from single-molecule force spectroscopy—valid for biasing potentials of arbitrary stiffness—are developed. Numerical simulations on a model potential indicate that these methods perform better than unidirectional strategies.
Experimental analysis and modeling of melt growth processes
NASA Astrophysics Data System (ADS)
Müller, Georg
2002-04-01
Melt growth processes provide the basic crystalline materials for many applications. The research and development of crystal growth processes is therefore driven by the demands which arise from these specific applications; however, common goals include an increased uniformity of the relevant crystal properties at the micro- and macro-scale, a decrease of deleterious crystal defects, and an increase of crystal dimensions. As melt growth equipment and experimentation becomes more and more expensive, little room remains for improvements by trial and error procedures. A more successful strategy is to optimize the crystal growth process by a combined use of experimental process analysis and computer modeling. This will be demonstrated in this paper by several examples from the bulk growth of silicon, gallium arsenide, indium phosphide, and calcium fluoride. These examples also involve the most important melt growth techniques, crystal pulling (Czochralski methods) and vertical gradient freeze (Bridgman-type methods). The power and success of the above optimization strategy, however, is not limited only to the given examples but can be generalized and applied to many types of bulk crystal growth.
Succession of hide–seek and pursuit–evasion at heterogeneous locations
Gal, Shmuel; Casas, Jérôme
2014-01-01
Many interactions between searching agents and their elusive targets are composed of a succession of steps, whether in the context of immune systems, predation or counterterrorism. In the simplest case, a two-step process starts with a search-and-hide phase, also called a hide-and-seek phase, followed by a round of pursuit–escape. Our aim is to link these two processes, usually analysed separately and with different models, in a single game theory context. We define a matrix game in which a searcher looks at a fixed number of discrete locations only once each searching for a hider, which can escape with varying probabilities according to its location. The value of the game is the overall probability of capture after k looks. The optimal search and hide strategies are described. If a searcher looks only once into any of the locations, an optimal hider chooses it's hiding place so as to make all locations equally attractive. This optimal strategy remains true as long as the number of looks is below an easily calculated threshold; however, above this threshold, the optimal position for the hider is where it has the highest probability of escaping once spotted. PMID:24621817
USDA-ARS?s Scientific Manuscript database
Plants photosynthesis-related traits are often co-regulated to capture light and CO2 to optimize the rate of CO2 fixation (A) via photo-biochemical processes. However, potassium (K) limitations and adaptations strategies of photosynthetic processes across CO2 levels are not well understood. To evalu...
Accelerating sino-atrium computer simulations with graphic processing units.
Zhang, Hong; Xiao, Zheng; Lin, Shien-fong
2015-01-01
Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed in a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partition. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased 62% with respect to a serial program running on CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
Performance Optimization Control of ECH using Fuzzy Inference Application
NASA Astrophysics Data System (ADS)
Dubey, Abhay Kumar
Electro-chemical honing (ECH) is a hybrid electrolytic precision micro-finishing technology that, by combining physico-chemical actions of electro-chemical machining and conventional honing processes, provides the controlled functional surfaces-generation and fast material removal capabilities in a single operation. Process multi-performance optimization has become vital for utilizing full potential of manufacturing processes to meet the challenging requirements being placed on the surface quality, size, tolerances and production rate of engineering components in this globally competitive scenario. This paper presents an strategy that integrates the Taguchi matrix experimental design, analysis of variances and fuzzy inference system (FIS) to formulate a robust practical multi-performance optimization methodology for complex manufacturing processes like ECH, which involve several control variables. Two methodologies one using a genetic algorithm tuning of FIS (GA-tuned FIS) and another using an adaptive network based fuzzy inference system (ANFIS) have been evaluated for a multi-performance optimization case study of ECH. The actual experimental results confirm their potential for a wide range of machining conditions employed in ECH.
Jovanovic, Sasa; Savic, Slobodan; Jovicic, Nebojsa; Boskovic, Goran; Djordjevic, Zorica
2016-09-01
Multi-criteria decision making (MCDM) is a relatively new tool for decision makers who deal with numerous and often contradictory factors during their decision making process. This paper presents a procedure to choose the optimal municipal solid waste (MSW) management system for the area of the city of Kragujevac (Republic of Serbia) based on the MCDM method. Two methods of multiple attribute decision making, i.e. SAW (simple additive weighting method) and TOPSIS (technique for order preference by similarity to ideal solution), respectively, were used to compare the proposed waste management strategies (WMS). Each of the created strategies was simulated using the software package IWM2. Total values for eight chosen parameters were calculated for all the strategies. Contribution of each of the six waste treatment options was valorized. The SAW analysis was used to obtain the sum characteristics for all the waste management treatment strategies and they were ranked accordingly. The TOPSIS method was used to calculate the relative closeness factors to the ideal solution for all the alternatives. Then, the proposed strategies were ranked in form of tables and diagrams obtained based on both MCDM methods. As shown in this paper, the results were in good agreement, which additionally confirmed and facilitated the choice of the optimal MSW management strategy. © The Author(s) 2016.
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significant reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
The Effects of Pre-processing Strategies for Pediatric Cochlear Implant Recipients
Rakszawski, Bernadette; Wright, Rose; Cadieux, Jamie H.; Davidson, Lisa S.; Brenner, Christine
2016-01-01
Background Cochlear implants (CIs) have been shown to improve children’s speech recognition over traditional amplification when severe to profound sensorineural hearing loss is present. Despite improvements, understanding speech at low-level intensities or in the presence of background noise remains difficult. In an effort to improve speech understanding in challenging environments, Cochlear Ltd. offers pre-processing strategies that apply various algorithms prior to mapping the signal to the internal array. Two of these strategies include Autosensitivity Control™ (ASC) and Adaptive Dynamic Range Optimization (ADRO®). Based on previous research, the manufacturer’s default pre-processing strategy for pediatrics’ everyday programs combines ASC+ADRO®. Purpose The purpose of this study is to compare pediatric speech perception performance across various pre-processing strategies while applying a specific programming protocol utilizing increased threshold (T) levels to ensure access to very low-level sounds. Research Design This was a prospective, cross-sectional, observational study. Participants completed speech perception tasks in four pre-processing conditions: no pre-processing, ADRO®, ASC, ASC+ADRO®. Study Sample Eleven pediatric Cochlear Ltd. cochlear implant users were recruited: six bilateral, one unilateral, and four bimodal. Intervention Four programs, with the participants’ everyday map, were loaded into the processor with different pre-processing strategies applied in each of the four positions: no pre-processing, ADRO®, ASC, and ASC+ADRO®. Data Collection and Analysis Participants repeated CNC words presented at 50 and 70 dB SPL in quiet and HINT sentences presented adaptively with competing R-Space noise at 60 and 70 dB SPL. Each measure was completed as participants listened with each of the four pre-processing strategies listed above. Test order and condition were randomized. A repeated-measures analysis of variance (ANOVA) was used to compare each pre-processing strategy across group data. Critical differences were utilized to determine significant score differences between each pre-processing strategy for individual participants. Results For CNC words presented at 50 dB SPL, the group data revealed significantly better scores using ASC+ADRO® compared to all other pre-processing conditions while ASC resulted in poorer scores compared to ADRO® and ASC+ADRO®. Group data for HINT sentences presented in 70 dB SPL of R-Space noise revealed significantly improved scores using ASC and ASC+ADRO® compared to no pre-processing, with ASC+ADRO® scores being better than ADRO® alone scores. Group data for CNC words presented at 70 dB SPL and adaptive HINT sentences presented in 60 dB SPL of R-Space noise showed no significant difference among conditions. Individual data showed that the pre-processing strategy yielding the best scores varied across measures and participants. Conclusions Group data reveals an advantage with ASC+ADRO® for speech perception presented at lower levels and in higher levels of background noise. Individual data revealed that the optimal pre-processing strategy varied among participants; indicating that a variety of pre-processing strategies should be explored for each CI user considering his or her performance in challenging listening environments. PMID:26905529
Optimal cure cycle design of a resin-fiber composite laminate
NASA Technical Reports Server (NTRS)
Hou, Jean W.; Sheen, Jeenson
1987-01-01
A unified computed aided design method was studied for the cure cycle design that incorporates an optimal design technique with the analytical model of a composite cure process. The preliminary results of using this proposed method for optimal cure cycle design are reported and discussed. The cure process of interest is the compression molding of a polyester which is described by a diffusion reaction system. The finite element method is employed to convert the initial boundary value problem into a set of first order differential equations which are solved simultaneously by the DE program. The equations for thermal design sensitivities are derived by using the direct differentiation method and are solved by the DE program. A recursive quadratic programming algorithm with an active set strategy called a linearization method is used to optimally design the cure cycle, subjected to the given design performance requirements. The difficulty of casting the cure cycle design process into a proper mathematical form is recognized. Various optimal design problems are formulated to address theses aspects. The optimal solutions of these formulations are compared and discussed.
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk
2006-12-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
Models for interrupted monitoring of a stochastic process
NASA Technical Reports Server (NTRS)
Palmer, E.
1977-01-01
As computers are added to the cockpit, the pilot's job is changing from of manually flying the aircraft, to one of supervising computers which are doing navigation, guidance and energy management calculations as well as automatically flying the aircraft. In this supervisorial role the pilot must divide his attention between monitoring the aircraft's performance and giving commands to the computer. Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and the other tasks affect the optimal strategies.
Optimal design of geodesically stiffened composite cylindrical shells
NASA Technical Reports Server (NTRS)
Gendron, G.; Guerdal, Z.
1992-01-01
An optimization system based on the finite element code Computations Structural Mechanics (CSM) Testbed and the optimization program, Automated Design Synthesis (ADS), is described. The optimization system can be used to obtain minimum-weight designs of composite stiffened structures. Ply thickness, ply orientations, and stiffener heights can be used as design variables. Buckling, displacement, and material failure constraints can be imposed on the design. The system is used to conduct a design study of geodesically stiffened shells. For comparison purposes, optimal designs of unstiffened shells and shells stiffened by rings and stingers are also obtained. Trends in the design of geodesically stiffened shells are identified. An approach to include local stress concentrations during the design optimization process is then presented. The method is based on a global/local analysis technique. It employs spline interpolation functions to determine displacements and rotations from a global model which are used as 'boundary conditions' for the local model. The organization of the strategy in the context of an optimization process is described. The method is validated with an example.
Environmental statistics and optimal regulation
NASA Astrophysics Data System (ADS)
Sivak, David; Thomson, Matt
2015-03-01
The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
Reactive power optimization strategy considering analytical impedance ratio
NASA Astrophysics Data System (ADS)
Wu, Zhongchao; Shen, Weibing; Liu, Jinming; Guo, Maoran; Zhang, Shoulin; Xu, Keqiang; Wang, Wanjun; Sui, Jinlong
2017-05-01
In this paper, considering the traditional reactive power optimization cannot realize the continuous voltage adjustment and voltage stability, a dynamic reactive power optimization strategy is proposed in order to achieve both the minimization of network loss and high voltage stability with wind power. Due to the fact that wind power generation is fluctuant and uncertain, electrical equipments such as transformers and shunt capacitors may be operated frequently in order to achieve minimization of network loss, which affect the lives of these devices. In order to solve this problem, this paper introduces the derivation process of analytical impedance ratio based on Thevenin equivalent. Thus, the multiple objective function is proposed to minimize the network loss and analytical impedance ratio. Finally, taking the improved IEEE 33-bus distribution system as example, the result shows that the movement of voltage control equipment has been reduced and network loss increment is controlled at the same time, which proves the applicable value of this strategy.
Küppers, Tobias; Steffen, Victoria; Hellmuth, Hendrik; O'Connell, Timothy; Bongaerts, Johannes; Maurer, Karl-Heinz; Wiechert, Wolfgang
2014-03-24
Since volatile and rising cost factors such as energy, raw materials and market competitiveness have a significant impact on the economic efficiency of biotechnological bulk productions, industrial processes need to be steadily improved and optimized. Thereby the current production hosts can undergo various limitations. To overcome those limitations and in addition increase the diversity of available production hosts for future applications, we suggest a Production Strain Blueprinting (PSB) strategy to develop new production systems in a reduced time lapse in contrast to a development from scratch.To demonstrate this approach, Bacillus pumilus has been developed as an alternative expression platform for the production of alkaline enzymes in reference to the established industrial production host Bacillus licheniformis. To develop the selected B. pumilus as an alternative production host the suggested PSB strategy was applied proceeding in the following steps (dedicated product titers are scaled to the protease titer of Henkel's industrial production strain B. licheniformis at lab scale): Introduction of a protease production plasmid, adaptation of a protease production process (44%), process optimization (92%) and expression optimization (114%). To further evaluate the production capability of the developed B. pumilus platform, the target protease was substituted by an α-amylase. The expression performance was tested under the previously optimized protease process conditions and under subsequently adapted process conditions resulting in a maximum product titer of 65% in reference to B. licheniformis protease titer. In this contribution the applied PSB strategy performed very well for the development of B. pumilus as an alternative production strain. Thereby the engineered B. pumilus expression platform even exceeded the protease titer of the industrial production host B. licheniformis by 14%. This result exhibits a remarkable potential of B. pumilus to be the basis for a next generation production host, since the strain has still a large potential for further genetic engineering. The final amylase titer of 65% in reference to B. licheniformis protease titer suggests that the developed B. pumilus expression platform is also suitable for an efficient production of non-proteolytic enzymes reaching a final titer of several grams per liter without complex process modifications.
Stochastic Games for Continuous-Time Jump Processes Under Finite-Horizon Payoff Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Qingda, E-mail: weiqd@hqu.edu.cn; Chen, Xian, E-mail: chenxian@amss.ac.cn
In this paper we study two-person nonzero-sum games for continuous-time jump processes with the randomized history-dependent strategies under the finite-horizon payoff criterion. The state space is countable, and the transition rates and payoff functions are allowed to be unbounded from above and from below. Under the suitable conditions, we introduce a new topology for the set of all randomized Markov multi-strategies and establish its compactness and metrizability. Then by constructing the approximating sequences of the transition rates and payoff functions, we show that the optimal value function for each player is a unique solution to the corresponding optimality equation andmore » obtain the existence of a randomized Markov Nash equilibrium. Furthermore, we illustrate the applications of our main results with a controlled birth and death system.« less
Reservoir management strategy for East Randolph Field, Randolph Township, Portage County, Ohio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Safley, L.E.; Salamy, S.P.; Young, M.A.
1998-07-01
The primary objective of the Reservoir Management Field Demonstration Program is to demonstrate that multidisciplinary reservoir management teams using appropriate software and methodologies with efforts scaled to the size of the resource are a cost-effective method for: Increasing current profitability of field operations; Forestalling abandonment of the reservoir; and Improving long-term economic recovery for the company. The primary objective of the Reservoir Management Demonstration Project with Belden and Blake Corporation is to develop a comprehensive reservoir management strategy to improve the operational economics and optimize oil production from East Randolph field, Randolph Township, Portage County, Ohio. This strategy identifies themore » viable improved recovery process options and defines related operational and facility requirements. In addition, strategies are addressed for field operation problems, such as paraffin buildup, hydraulic fracture stimulation, pumping system optimization, and production treatment requirements, with the goal of reducing operating costs and improving oil recovery.« less
Non-rigid Reconstruction of Casting Process with Temperature Feature
NASA Astrophysics Data System (ADS)
Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Ying; Wang, Lu
2017-09-01
Off-line reconstruction of rigid scene has made a great progress in the past decade. However, the on-line reconstruction of non-rigid scene is still a very challenging task. The casting process is a non-rigid reconstruction problem, it is a high-dynamic molding process lacking of geometric features. In order to reconstruct the casting process robustly, an on-line fusion strategy is proposed for dynamic reconstruction of casting process. Firstly, the geometric and flowing feature of casting are parameterized in manner of TSDF (truncated signed distance field) which is a volumetric block, parameterized casting guarantees real-time tracking and optimal deformation of casting process. Secondly, data structure of the volume grid is extended to have temperature value, the temperature interpolation function is build to generate the temperature of each voxel. This data structure allows for dynamic tracking of temperature of casting during deformation stages. Then, the sparse RGB features is extracted from casting scene to search correspondence between geometric representation and depth constraint. The extracted color data guarantees robust tracking of flowing motion of casting. Finally, the optimal deformation of the target space is transformed into a nonlinear regular variational optimization problem. This optimization step achieves smooth and optimal deformation of casting process. The experimental results show that the proposed method can reconstruct the casting process robustly and reduce drift in the process of non-rigid reconstruction of casting.
An adaptive approach to the physical annealing strategy for simulated annealing
NASA Astrophysics Data System (ADS)
Hasegawa, M.
2013-02-01
A new and reasonable method for adaptive implementation of simulated annealing (SA) is studied on two types of random traveling salesman problems. The idea is based on the previous finding on the search characteristics of the threshold algorithms, that is, the primary role of the relaxation dynamics in their finite-time optimization process. It is shown that the effective temperature for optimization can be predicted from the system's behavior analogous to the stabilization phenomenon occurring in the heating process starting from a quenched solution. The subsequent slow cooling near the predicted point draws out the inherent optimizing ability of finite-time SA in more straightforward manner than the conventional adaptive approach.
Treatment strategies for acute metabolic disorders in neonates
2011-01-01
Acute metabolic emergencies in neonates represent a challenge to the medical and nursing staff. If not treated optimally, these disorders are associated with poor outcome. Early diagnosis, supportive therapy and specific measures addressing the derranged metabolic process are the gold standards for favorable results. This review highlights treatment strategies for Inborn Errors of Metabolism (IEM) presenting in the neonatal period. PMID:27493313
NASA Astrophysics Data System (ADS)
Cody, Brent M.; Baù, Domenico; González-Nicolás, Ana
2015-09-01
Geological carbon sequestration (GCS) has been identified as having the potential to reduce increasing atmospheric concentrations of carbon dioxide (CO2). However, a global impact will only be achieved if GCS is cost-effectively and safely implemented on a massive scale. This work presents a computationally efficient methodology for identifying optimal injection strategies at candidate GCS sites having uncertainty associated with caprock permeability, effective compressibility, and aquifer permeability. A multi-objective evolutionary optimization algorithm is used to heuristically determine non-dominated solutions between the following two competing objectives: (1) maximize mass of CO2 sequestered and (2) minimize project cost. A semi-analytical algorithm is used to estimate CO2 leakage mass rather than a numerical model, enabling the study of GCS sites having vastly different domain characteristics. The stochastic optimization framework presented herein is applied to a feasibility study of GCS in a brine aquifer in the Michigan Basin (MB), USA. Eight optimization test cases are performed to investigate the impact of decision-maker (DM) preferences on Pareto-optimal objective-function values and carbon-injection strategies. This analysis shows that the feasibility of GCS at the MB test site is highly dependent upon the DM's risk-adversity preference and degree of uncertainty associated with caprock integrity. Finally, large gains in computational efficiency achieved using parallel processing and archiving are discussed.
NASA Technical Reports Server (NTRS)
Huyse, Luc; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
Free-form shape optimization of airfoils poses unexpected difficulties. Practical experience has indicated that a deterministic optimization for discrete operating conditions can result in dramatically inferior performance when the actual operating conditions are different from the - somewhat arbitrary - design values used for the optimization. Extensions to multi-point optimization have proven unable to adequately remedy this problem of "localized optimization" near the sampled operating conditions. This paper presents an intrinsically statistical approach and demonstrates how the shortcomings of multi-point optimization with respect to "localized optimization" can be overcome. The practical examples also reveal how the relative likelihood of each of the operating conditions is automatically taken into consideration during the optimization process. This is a key advantage over the use of multipoint methods.
Fredriksson, Mattias J; Petersson, Patrik; Axelsson, Bengt-Olof; Bylund, Dan
2011-10-17
A strategy for rapid optimization of liquid chromatography column temperature and gradient shape is presented. The optimization as such is based on the well established retention and peak width models implemented in software like e.g. DryLab and LC simulator. The novel part of the strategy is a highly automated processing algorithm for detection and tracking of chromatographic peaks in noisy liquid chromatography-mass spectrometry (LC-MS) data. The strategy is presented and visualized by the optimization of the separation of two degradants present in ultraviolet (UV) exposed fluocinolone acetonide. It should be stressed, however, that it can be utilized for LC-MS analysis of any sample and application where several runs are conducted on the same sample. In the application presented, 30 components that were difficult or impossible to detect in the UV data could be automatically detected and tracked in the MS data by using the proposed strategy. The number of correctly tracked components was above 95%. Using the parameters from the reconstructed data sets to the model gave good agreement between predicted and observed retention times at optimal conditions. The area of the smallest tracked component was estimated to 0.08% compared to the main component, a level relevant for the characterization of impurities in the pharmaceutical industry. Copyright © 2011 Elsevier B.V. All rights reserved.
Trends in Process Analytical Technology: Present State in Bioprocessing.
Jenzsch, Marco; Bell, Christian; Buziol, Stefan; Kepert, Felix; Wegele, Harald; Hakemeyer, Christian
2017-08-04
Process analytical technology (PAT), the regulatory initiative for incorporating quality in pharmaceutical manufacturing, is an area of intense research and interest. If PAT is effectively applied to bioprocesses, this can increase process understanding and control, and mitigate the risk from substandard drug products to both manufacturer and patient. To optimize the benefits of PAT, the entire PAT framework must be considered and each elements of PAT must be carefully selected, including sensor and analytical technology, data analysis techniques, control strategies and algorithms, and process optimization routines. This chapter discusses the current state of PAT in the biopharmaceutical industry, including several case studies demonstrating the degree of maturity of various PAT tools. Graphical Abstract Hierarchy of QbD components.
An intelligent factory-wide optimal operation system for continuous production process
NASA Astrophysics Data System (ADS)
Ding, Jinliang; Chai, Tianyou; Wang, Hongfeng; Wang, Junwei; Zheng, Xiuping
2016-03-01
In this study, a novel intelligent factory-wide operation system for a continuous production process is designed to optimise the entire production process, which consists of multiple units; furthermore, this system is developed using process operational data to avoid the complexity of mathematical modelling of the continuous production process. The data-driven approach aims to specify the structure of the optimal operation system; in particular, the operational data of the process are used to formulate each part of the system. In this context, the domain knowledge of process engineers is utilised, and a closed-loop dynamic optimisation strategy, which combines feedback, performance prediction, feed-forward, and dynamic tuning schemes into a framework, is employed. The effectiveness of the proposed system has been verified using industrial experimental results.
Optimizing nursing care by integrating theory-driven evidence-based practice.
Pipe, Teri Britt
2007-01-01
An emerging challenge for nursing leadership is how to convey the importance of both evidence-based practice (EBP) and theory-driven care in ensuring patient safety and optimizing outcomes. This article describes a specific example of a leadership strategy based on Rosswurm and Larrabee's model for change to EBP, which was effective in aligning the processes of EBP and theory-driven care.
Integrated multidisciplinary optimization of rotorcraft: A plan for development
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Editor); Mantay, Wayne R. (Editor)
1989-01-01
This paper describes a joint NASA/Army initiative at the Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for important interactions among the disciplines. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. Additionally, some of the analysis aspects are discussed, validation strategies are described, and an initial attempt at defining the interdisciplinary couplings is summarized. At this writing, significant progress has been made, principally in the areas of single discipline optimization. Accomplishments are described in areas of rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, and rotor structural optimization for minimum weight.
Organizational Factors and the Cancer Screening Process
Zapka, Jane; Edwards, Heather; Taplin, Stephen H.
2010-01-01
Cancer screening is a process of care consisting of several steps and interfaces. This article reviews what is known about the association between organizational factors and cancer screening rates and examines how organizational strategies can address the steps and interfaces of cancer screening in the context of both intraorganizational and interorganizational processes. We reviewed 79 studies assessing the relationship between organizational factors and cancer screening. Screening rates are largely driven by strategies to 1) limit the number of interfaces across organizational boundaries; 2) recruit patients, promote referrals, and facilitate appointment scheduling; and 3) promote continuous patient care. Optimal screening rates can be achieved when health-care organizations tailor strategies to the steps and interfaces in the cancer screening process that are most critical for their organizations, the providers who work within them, and the patients they serve. PMID:20386053
Organizational factors and the cancer screening process.
Anhang Price, Rebecca; Zapka, Jane; Edwards, Heather; Taplin, Stephen H
2010-01-01
Cancer screening is a process of care consisting of several steps and interfaces. This article reviews what is known about the association between organizational factors and cancer screening rates and examines how organizational strategies can address the steps and interfaces of cancer screening in the context of both intraorganizational and interorganizational processes. We reviewed 79 studies assessing the relationship between organizational factors and cancer screening. Screening rates are largely driven by strategies to 1) limit the number of interfaces across organizational boundaries; 2) recruit patients, promote referrals, and facilitate appointment scheduling; and 3) promote continuous patient care. Optimal screening rates can be achieved when health-care organizations tailor strategies to the steps and interfaces in the cancer screening process that are most critical for their organizations, the providers who work within them, and the patients they serve.
NASA Astrophysics Data System (ADS)
Wei, J.; Wang, G.; Liu, R.
2008-12-01
The Tarim River Basin is the longest inland river in China. Due to water scarcity, ecologically-fragile is becoming a significant constraint to sustainable development in this region. To effectively manage the limited water resources for ecological purposes and for conventional water utilization purposes, a real-time water resources allocation Decision Support System (DSS) has been developed. Based on workflows of the water resources regulations and comprehensive analysis of the efficiency and feasibility of water management strategies, the DSS includes information systems that perform data acquisition, management and visualization, and model systems that perform hydrological forecast, water demand prediction, flow routing simulation and water resources optimization of the hydrological and water utilization process. An optimization and process control strategy is employed to dynamically allocate the water resources among the different stakeholders. The competitive targets and constraints are taken into considered by multi-objective optimization and with different priorities. The DSS of the Tarim River Basin has been developed and been successfully utilized to support the water resources management of the Tarim River Basin since 2005.
2014-01-01
Background In Pichia pastoris bioprocess engineering, classic approaches for clone selection and bioprocess optimization at small/micro scale using the promoter of the alcohol oxidase 1 gene (PAOX1), induced by methanol, present low reproducibility leading to high time and resource consumption. Results An automated microfermentation platform (RoboLector) was successfully tested to overcome the chronic problems of clone selection and optimization of fed-batch strategies. Different clones from Mut+P. pastoris phenotype strains expressing heterologous Rhizopus oryzae lipase (ROL), including a subset also overexpressing the transcription factor HAC1, were tested to select the most promising clones. The RoboLector showed high performance for the selection and optimization of cultivation media with minimal cost and time. Syn6 medium was better than conventional YNB medium in terms of production of heterologous protein. The RoboLector microbioreactor was also tested for different fed-batch strategies with three clones producing different lipase levels. Two mixed substrates fed-batch strategies were evaluated. The first strategy was the enzymatic release of glucose from a soluble glucose polymer by a glucosidase, and methanol addition every 24 hours. The second strategy used glycerol as co-substrate jointly with methanol at two different feeding rates. The implementation of these simple fed-batch strategies increased the levels of lipolytic activity 80-fold compared to classical batch strategies used in clone selection. Thus, these strategies minimize the risk of errors in the clone selection and increase the detection level of the desired product. Finally, the performance of two fed-batch strategies was compared for lipase production between the RoboLector microbioreactor and 5 liter stirred tank bioreactor for three selected clones. In both scales, the same clone ranking was achieved. Conclusion The RoboLector showed excellent performance in clone selection of P. pastoris Mut+ phenotype. The use of fed-batch strategies using mixed substrate feeds resulted in increased biomass and lipolytic activity. The automated processing of fed-batch strategies by the RoboLector considerably facilitates the operation of fermentation processes, while reducing error-prone clone selection by increasing product titers. The scale-up from microbioreactor to lab scale stirred tank bioreactor showed an excellent correlation, validating the use of microbioreactor as a powerful tool for evaluating fed-batch operational strategies. PMID:24606982
Assessment of combating-desertification strategies using the linear assignment method
NASA Astrophysics Data System (ADS)
Hassan Sadeghravesh, Mohammad; Khosravi, Hassan; Ghasemian, Soudeh
2016-04-01
Nowadays desertification, as a global problem, affects many countries in the world, especially developing countries like Iran. With respect to increasing importance of desertification and its complexity, the necessity of attention to the optimal combating-desertification alternatives is essential. Selecting appropriate strategies according to all effective criteria to combat the desertification process can be useful in rehabilitating degraded lands and avoiding degradation in vulnerable fields. This study provides systematic and optimal strategies of combating desertification by use of a group decision-making model. To this end, the preferences of indexes were obtained through using the Delphi model, within the framework of multi-attribute decision making (MADM). Then, priorities of strategies were evaluated by using linear assignment (LA) method. According to the results, the strategies to prevent improper change of land use (A18), development and reclamation of plant cover (A23), and control overcharging of groundwater resources (A31) were identified as the most important strategies for combating desertification in this study area. Therefore, it is suggested that the aforementioned ranking results be considered in projects which control and reduce the effects of desertification and rehabilitate degraded lands.
NASA Technical Reports Server (NTRS)
Spurlock, Paul; Spurlock, Jack M.; Evanich, Peggy L.
1991-01-01
An overview of recent developments in process-control technology which might have applications in future advanced life support systems for long-duration space operations is presented. Consideration is given to design criteria related to control system selection and optimization, and process-control interfacing methodology. Attention is also given to current life support system process control strategies, innovative sensors, instrumentation and control, and innovations in process supervision.
Making do with less: Must sparse data preclude informed harvest strategies for European waterbirds?
Johnson, Fred A.; Alhainen, Mikko; Fox, Anthony D.; Madsen, Jesper; Guillemain, Matthieu
2018-01-01
The demography of many European waterbirds is not well understood because most countries have conducted little monitoring and assessment, and coordination among countries on waterbird management has little precedent. Yet intergovernmental treaties now mandate the use of sustainable, adaptive harvest strategies, whose development is challenged by a paucity of demographic information. In this study, we explore how a combination of allometric relationships, fragmentary monitoring and research information, and expert judgment can be used to estimate the parameters of a theta-logistic population model, which in turn can be used in a Markov decision process to derive optimal harvesting strategies. We show how to account for considerable parametric uncertainty, as well as for different management objectives. We illustrate our methodology with a poorly understood population of taiga bean geese (Anser fabalis fabalis), which is a popular game bird in Fennoscandia. Our results for taiga bean geese suggest that they may have demographic rates similar to other, well-studied species of geese, and our model-based predictions of population size are consistent with the limited monitoring information available. Importantly, we found that by using a Markov decision process, a simple scalar population model may be sufficient to guide harvest management of this species, even if its demography is age-structured. Finally, we demonstrated how two different management objectives can lead to very different optimal harvesting strategies, and how conflicting objectives may be traded off with each other. This approach will have broad application for European waterbirds by providing preliminary estimates of key demographic parameters, by providing insights into the monitoring and research activities needed to corroborate those estimates, and by producing harvest management strategies that are optimal with respect to the managers’ objectives, options, and available demographic information.
Strategies for Fermentation Medium Optimization: An In-Depth Review
Singh, Vineeta; Haque, Shafiul; Niwas, Ram; Srivastava, Akansha; Pasupuleti, Mukesh; Tripathi, C. K. M.
2017-01-01
Optimization of production medium is required to maximize the metabolite yield. This can be achieved by using a wide range of techniques from classical “one-factor-at-a-time” to modern statistical and mathematical techniques, viz. artificial neural network (ANN), genetic algorithm (GA) etc. Every technique comes with its own advantages and disadvantages, and despite drawbacks some techniques are applied to obtain best results. Use of various optimization techniques in combination also provides the desirable results. In this article an attempt has been made to review the currently used media optimization techniques applied during fermentation process of metabolite production. Comparative analysis of the merits and demerits of various conventional as well as modern optimization techniques have been done and logical selection basis for the designing of fermentation medium has been given in the present review. Overall, this review will provide the rationale for the selection of suitable optimization technique for media designing employed during the fermentation process of metabolite production. PMID:28111566
Optimal design of solidification processes
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Tortorelli, Daniel A.
1991-01-01
An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
Adapting to rates versus amounts of climate change: a case of adaptation to sea-level rise
NASA Astrophysics Data System (ADS)
Shayegh, Soheil; Moreno-Cruz, Juan; Caldeira, Ken
2016-10-01
Adaptation is the process of adjusting to climate change in order to moderate harm or exploit beneficial opportunities associated with it. Most adaptation strategies are designed to adjust to a new climate state. However, despite our best efforts to curtail greenhouse gas emissions, climate is likely to continue changing far into the future. Here, we show how considering rates of change affects the projected optimal adaptation strategy. We ground our discussion with an example of optimal investment in the face of continued sea-level rise, presenting a quantitative model that illustrates the interplay among physical and economic factors governing coastal development decisions such as rate of sea-level rise, land slope, discount rate, and depreciation rate. This model shows that the determination of optimal investment strategies depends on taking into account future rates of sea-level rise, as well as social and political constraints. This general approach also applies to the development of improved strategies to adapt to ongoing trends in temperature, precipitation, and other climate variables. Adaptation to some amount of change instead of adaptation to ongoing rates of change may produce inaccurate estimates of damages to the social systems and their ability to respond to external pressures.
Optimal technology investment strategies for a reusable launch vehicle
NASA Technical Reports Server (NTRS)
Moore, A. A.; Braun, R. D.; Powell, R. W.
1995-01-01
Within the present budgetary environment, developing the technology that leads to an operationally efficient space transportation system with the required performance is a challenge. The present research focuses on a methodology to determine high payoff technology investment strategies. Research has been conducted at Langley Research Center in which design codes for the conceptual analysis of space transportation systems have been integrated in a multidisciplinary design optimization approach. The current study integrates trajectory, propulsion, weights and sizing, and cost disciplines where the effect of technology maturation on the development cost of a single stage to orbit reusable launch vehicle is examined. Results show that the technology investment prior to full-scale development has a significant economic payoff. The design optimization process is used to determine strategic allocations of limited technology funding to maximize the economic payoff.
Grilo, António L; Mateus, Marília; Aires-Barros, Maria R; Azevedo, Ana M
2017-12-01
Monoclonal antibodies currently dominate the biopharmaceutical market with growing sales having reached 80 billion USD in 2016. As most top-selling mAbs are approaching the end of their patent life, biopharmaceutical companies compete fiercely in the biosimilars market. These two factors present a strong motivation for alternative process strategies and process optimization. In this work a novel purification strategy for monoclonal antibodies comprising phenylboronic acid multimodal chromatography for capture followed by polishing by ion-exchange monolithic chromatography and packed bed hydrophobic interaction chromatography is presented and compared to the traditional protein-A-based process. Although the capital investment is similar for both processes, the operation cost is 20% lower for the novel strategy. This study shows that the new process is worthwhile investing in and could present a viable alternative to the platform process used by most industrial players. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Structural optimization under overhang constraints imposed by additive manufacturing technologies
NASA Astrophysics Data System (ADS)
Allaire, G.; Dapogny, C.; Estevez, R.; Faure, A.; Michailidis, G.
2017-12-01
This article addresses one of the major constraints imposed by additive manufacturing processes on shape optimization problems - that of overhangs, i.e. large regions hanging over void without sufficient support from the lower structure. After revisiting the 'classical' geometric criteria used in the literature, based on the angle between the structural boundary and the build direction, we propose a new mechanical constraint functional, which mimics the layer by layer construction process featured by additive manufacturing technologies, and thereby appeals to the physical origin of the difficulties caused by overhangs. This constraint, as well as some variants, is precisely defined; their shape derivatives are computed in the sense of Hadamard's method, and numerical strategies are extensively discussed, in two and three space dimensions, to efficiently deal with the appearance of overhang features in the course of shape optimization processes.
MOI to TEI : a Mars Sample Return strategy
NASA Technical Reports Server (NTRS)
Smith, Chad W.; Maddock, Robert W.
2006-01-01
This paper describes the issues and challenges related to the design of the rendezvous between the Earth Return Vehicle (ERV) and the Orbiting Sample (OS) for the Mars Sample Return (MSR) mission. In particular, attention will be focused on the strategy for 'optimizing' the intermediate segment of the rendezvous process, during which there are a great number of variables that must be considered and well understood.
Differences in Multitask Resource Reallocation After Change in Task Values.
Matton, Nadine; Paubel, Pierre; Cegarra, Julien; Raufaste, Eric
2016-12-01
The objective was to characterize multitask resource reallocation strategies when managing subtasks with various assigned values. When solving a resource conflict in multitasking, Salvucci and Taatgen predict a globally rational strategy will be followed that favors the most urgent subtask and optimizes global performance. However, Katidioti and Taatgen identified a locally rational strategy that optimizes only a subcomponent of the whole task, leading to detrimental consequences on global performance. Moreover, the question remains open whether expertise would have an impact on the choice of the strategy. We adopted a multitask environment used for pilot selection with a change in emphasis on two out of four subtasks while all subtasks had to be maintained over a minimum performance. A laboratory eye-tracking study contrasted 20 recently selected pilot students considered as experienced with this task and 15 university students considered as novices. When two subtasks were emphasized, novices focused their resources particularly on one high-value subtask and failed to prevent both low-value subtasks falling below minimum performance. On the contrary, experienced people delayed the processing of one low-value subtask but managed to optimize global performance. In a multitasking environment where some subtasks are emphasized, novices follow a locally rational strategy whereas experienced participants follow a globally rational strategy. During complex training, trainees are only able to adjust their resource allocation strategy to subtask emphasis changes once they are familiar with the multitasking environment. © 2016, Human Factors and Ergonomics Society.
Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang
2016-01-01
For ensemble learning, how to select and combine the candidate classifiers are two key issues which influence the performance of the ensemble system dramatically. Random vector functional link networks (RVFL) without direct input-to-output links is one of suitable base-classifiers for ensemble systems because of its fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. As for using ARPSO to select the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum norm least-square method and then further optimized by ARPSO. Finally, a few redundant RVFL is pruned, and thus the more compact ensemble of RVFL is obtained. Moreover, in this paper, theoretical analysis and justification on how to prune the base classifiers on classification problem is presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single optimization methods. Experiment results on function approximation and classification problems verify that the proposed method could improve its convergence accuracy as well as reduce the complexity of the ensemble system. PMID:27835638
Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang
2016-01-01
For ensemble learning, how to select and combine the candidate classifiers are two key issues which influence the performance of the ensemble system dramatically. Random vector functional link networks (RVFL) without direct input-to-output links is one of suitable base-classifiers for ensemble systems because of its fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. As for using ARPSO to select the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum norm least-square method and then further optimized by ARPSO. Finally, a few redundant RVFL is pruned, and thus the more compact ensemble of RVFL is obtained. Moreover, in this paper, theoretical analysis and justification on how to prune the base classifiers on classification problem is presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single optimization methods. Experiment results on function approximation and classification problems verify that the proposed method could improve its convergence accuracy as well as reduce the complexity of the ensemble system.
Patterning control strategies for minimum edge placement error in logic devices
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Hanna, Michael; Slachter, Bram; Tel, Wim; Kubis, Michael; Maslow, Mark; Spence, Chris; Timoshkov, Vadim
2017-03-01
In this paper we discuss the edge placement error (EPE) for multi-patterning semiconductor manufacturing. In a multi-patterning scheme the creation of the final pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources of the different process steps. We describe the fidelity of the final pattern in terms of EPE, which is defined as the relative displacement of the edges of two features from their intended target position. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As an experimental test vehicle we use the 7-nm logic device patterning process flow as developed by IMEC. This patterning process is based on Self-Aligned-Quadruple-Patterning (SAQP) using ArF lithography, combined with line cut exposures using EUV lithography. The computational metrology method to determine EPE is explained. It will be shown that ArF to EUV overlay, CDU from the individual process steps, and local CD and placement of the individual pattern features, are the important contributors. Based on the error budget, we developed an optimization strategy for each individual step and for the final pattern. Solutions include overlay and CD metrology based on angle resolved scatterometry, scanner actuator control to enable high order overlay corrections and computational lithography optimization to minimize imaging induced pattern placement errors of devices and metrology targets.
Casian, Tibor; Iurian, Sonia; Bogdan, Catalina; Rus, Lucia; Moldovan, Mirela; Tomuta, Ioan
2017-12-01
This study proposed the development of oral lyophilisates with respect to pediatric medicine development guidelines, by applying risk management strategies and DoE as an integrated QbD approach. Product critical quality attributes were overviewed by generating Ishikawa diagrams for risk assessment purposes, considering process, formulation and methodology related parameters. Failure Mode Effect Analysis was applied to highlight critical formulation and process parameters with an increased probability of occurrence and with a high impact on the product performance. To investigate the effect of qualitative and quantitative formulation variables D-optimal designs were used for screening and optimization purposes. Process parameters related to suspension preparation and lyophilization were classified as significant factors, and were controlled by implementing risk mitigation strategies. Both quantitative and qualitative formulation variables introduced in the experimental design influenced the product's disintegration time, mechanical resistance and dissolution properties selected as CQAs. The optimum formulation selected through Design Space presented ultra-fast disintegration time (5 seconds), a good dissolution rate (above 90%) combined with a high mechanical resistance (above 600 g load). Combining FMEA and DoE allowed the science based development of a product with respect to the defined quality target profile by providing better insights on the relevant parameters throughout development process. The utility of risk management tools in pharmaceutical development was demonstrated.
Lessons Learned During Solutions of Multidisciplinary Design Optimization Problems
NASA Technical Reports Server (NTRS)
Patnaik, Suna N.; Coroneos, Rula M.; Hopkins, Dale A.; Lavelle, Thomas M.
2000-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. During solution of the multidisciplinary problems several issues were encountered. This paper lists four issues and discusses the strategies adapted for their resolution: (1) The optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. (2) Optimum solutions obtained were infeasible for aircraft and air-breathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. (3) Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. (4) The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through six problems: (1) design of an engine component, (2) synthesis of a subsonic aircraft, (3) operation optimization of a supersonic engine, (4) design of a wave-rotor-topping device, (5) profile optimization of a cantilever beam, and (6) design of a cvlindrical shell. The combined effort of designers and researchers can bring the optimization method from academia to industry.
Optimism, coping and long-term recovery from coronary artery surgery in women.
King, K B; Rowe, M A; Kimble, L P; Zerwic, J J
1998-02-01
Optimism, coping strategies, and psychological and functional outcomes were measured in 55 women undergoing coronary artery surgery. Data were collected in-hospital and at 1, 6, and 12 months after surgery. Optimism was related to positive moods and life satisfaction, and inversely related to negative moods. Few relationships were found between optimism and functional ability. Cognitive coping strategies accounted for a mediating effect between optimism and negative mood. Optimists were more likely to accept their situation, and less likely to use escapism. In turn, these coping strategies were inversely related to negative mood and mediated the relationship between optimism and this outcome. Optimism was not related to problem-focused coping strategies; this, these coping strategies cannot explain the relationship between optimism and outcomes.
NASA Astrophysics Data System (ADS)
Gonzalez-Nicolas, A.; Cihan, A.; Birkholzer, J. T.; Petrusak, R.; Zhou, Q.; Riestenberg, D. E.; Trautz, R. C.; Godec, M.
2016-12-01
Industrial-scale injection of CO2 into the subsurface can cause reservoir pressure increases that must be properly controlled to prevent any potential environmental impact. Excessive pressure buildup in reservoir may result in ground water contamination stemming from leakage through conductive pathways, such as improperly plugged abandoned wells or distant faults, and the potential for fault reactivation and possibly seal breaching. Brine extraction is a viable approach for managing formation pressure, effective stress, and plume movement during industrial-scale CO2 injection projects. The main objectives of this study are to investigate suitable different pressure management strategies involving active brine extraction and passive pressure relief wells. Adaptive optimized management of CO2 storage projects utilizes the advanced automated optimization algorithms and suitable process models. The adaptive management integrates monitoring, forward modeling, inversion modeling and optimization through an iterative process. In this study, we employ an adaptive framework to understand primarily the effects of initial site characterization and frequency of the model update (calibration) and optimization calculations for controlling extraction rates based on the monitoring data on the accuracy and the success of the management without violating pressure buildup constraints in the subsurface reservoir system. We will present results of applying the adaptive framework to test appropriateness of different management strategies for a realistic field injection project.
Vicente, Tiago; Mota, José P B; Peixoto, Cristina; Alves, Paula M; Carrondo, Manuel J T
2011-01-01
The advent of advanced therapies in the pharmaceutical industry has moved the spotlight into virus-like particles and viral vectors produced in cell culture holding great promise in a myriad of clinical targets, including cancer prophylaxis and treatment. Even though a couple of cases have reached the clinic, these products have yet to overcome a number of biological and technological challenges before broad utilization. Concerning the manufacturing processes, there is significant research focusing on the optimization of current cell culture systems and, more recently, on developing scalable downstream processes to generate material for pre-clinical and clinical trials. We review the current options for downstream processing of these complex biopharmaceuticals and underline current advances on knowledge-based toolboxes proposed for rational optimization of their processing. Rational tools developed to increase the yet scarce knowledge on the purification processes of complex biologicals are discussed as alternative to empirical, "black-boxed" based strategies classically used for process development. Innovative methodologies based on surface plasmon resonance, dynamic light scattering, scale-down high-throughput screening and mathematical modeling for supporting ion-exchange chromatography show great potential for a more efficient and cost-effective process design, optimization and equipment prototyping. Copyright © 2011 Elsevier Inc. All rights reserved.
A market-based optimization approach to sensor and resource management
NASA Astrophysics Data System (ADS)
Schrage, Dan; Farnham, Christopher; Gonsalves, Paul G.
2006-05-01
Dynamic resource allocation for sensor management is a problem that demands solutions beyond traditional approaches to optimization. Market-based optimization applies solutions from economic theory, particularly game theory, to the resource allocation problem by creating an artificial market for sensor information and computational resources. Intelligent agents are the buyers and sellers in this market, and they represent all the elements of the sensor network, from sensors to sensor platforms to computational resources. These agents interact based on a negotiation mechanism that determines their bidding strategies. This negotiation mechanism and the agents' bidding strategies are based on game theory, and they are designed so that the aggregate result of the multi-agent negotiation process is a market in competitive equilibrium, which guarantees an optimal allocation of resources throughout the sensor network. This paper makes two contributions to the field of market-based optimization: First, we develop a market protocol to handle heterogeneous goods in a dynamic setting. Second, we develop arbitrage agents to improve the efficiency in the market in light of its dynamic nature.
Picheny, Victor; Trépos, Ronan; Casadebaig, Pierre
2017-01-01
Accounting for the interannual climatic variations is a well-known issue for simulation-based studies of environmental systems. It often requires intensive sampling (e.g., averaging the simulation outputs over many climatic series), which hinders many sequential processes, in particular optimization algorithms. We propose here an approach based on a subset selection in a large basis of climatic series, using an ad-hoc similarity function and clustering. A non-parametric reconstruction technique is introduced to estimate accurately the distribution of the output of interest using only the subset sampling. The proposed strategy is non-intrusive and generic (i.e. transposable to most models with climatic data inputs), and can be combined to most “off-the-shelf” optimization solvers. We apply our approach to sunflower ideotype design using the crop model SUNFLO. The underlying optimization problem is formulated as a multi-objective one to account for risk-aversion. Our approach achieves good performances even for limited computational budgets, outperforming significantly standard strategies. PMID:28542198
A computational approach to compare regression modelling strategies in prediction research.
Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H
2016-08-25
It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
Power plant maintenance scheduling using ant colony optimization: an improved formulation
NASA Astrophysics Data System (ADS)
Foong, Wai Kuan; Maier, Holger; Simpson, Angus
2008-04-01
It is common practice in the hydropower industry to either shorten the maintenance duration or to postpone maintenance tasks in a hydropower system when there is expected unserved energy based on current water storage levels and forecast storage inflows. It is therefore essential that a maintenance scheduling optimizer can incorporate the options of shortening the maintenance duration and/or deferring maintenance tasks in the search for practical maintenance schedules. In this article, an improved ant colony optimization-power plant maintenance scheduling optimization (ACO-PPMSO) formulation that considers such options in the optimization process is introduced. As a result, both the optimum commencement time and the optimum outage duration are determined for each of the maintenance tasks that need to be scheduled. In addition, a local search strategy is presented in this article to boost the robustness of the algorithm. When tested on a five-station hydropower system problem, the improved formulation is shown to be capable of allowing shortening of maintenance duration in the event of expected demand shortfalls. In addition, the new local search strategy is also shown to have significantly improved the optimization ability of the ACO-PPMSO algorithm.
Wilbaux, Mélanie; Fuchs, Aline; Samardzic, Janko; Rodieux, Frédérique; Csajka, Chantal; Allegaert, Karel; van den Anker, Johannes N; Pfister, Marc
2016-08-01
Sepsis remains a major cause of mortality and morbidity in neonates, and, as a consequence, antibiotics are the most frequently prescribed drugs in this vulnerable patient population. Growth and dynamic maturation processes during the first weeks of life result in large inter- and intrasubject variability in the pharmacokinetics (PK) and pharmacodynamics (PD) of antibiotics. In this review we (1) summarize the available population PK data and models for primarily renally eliminated antibiotics, (2) discuss quantitative approaches to account for effects of growth and maturation processes on drug exposure and response, (3) evaluate current dose recommendations, and (4) identify opportunities to further optimize and personalize dosing strategies of these antibiotics in preterm and term neonates. Although population PK models have been developed for several of these drugs, exposure-response relationships of primarily renally eliminated antibiotics in these fragile infants are not well understood, monitoring strategies remain inconsistent, and consensus on optimal, personalized dosing of these drugs in these patients is absent. Tailored PK/PD studies and models are useful to better understand relationships between drug exposures and microbiological or clinical outcomes. Pharmacometric modeling and simulation approaches facilitate quantitative evaluation and optimization of treatment strategies. National and international collaborations and platforms are essential to standardize and harmonize not only studies and models but also monitoring and dosing strategies. Simple bedside decision tools assist clinical pharmacologists and neonatologists in their efforts to fine-tune and personalize the use of primarily renally eliminated antibiotics in term and preterm neonates. © 2016, The American College of Clinical Pharmacology.
Putting the process of care into practice.
Houck, S; Baum, N
1997-01-01
"Putting the process of care into practice" provides an interactive, visual model of outpatient resources and processes. It illustrates an episode of care from a fee-for-service as well as managed care perspective. The Care Process Matrix can be used for planning and staffing, as well as retrospectively to assess appropriate resource use within a practice. It identifies effective strategies for reducing the cost per episode of care and optimizing quality while moving from managing costs to managing the care process. Because of an overbuilt health care system, including an oversupply of physicians, success in the future will require redesigning the process of care and a coherent customer service strategy. The growing complexities of practice will require physicians to focus on several key competencies while outsourcing other functions such as billing and contracting.
On-line identification of fermentation processes for ethanol production.
Câmara, M M; Soares, R M; Feital, T; Naomi, P; Oki, S; Thevelein, J M; Amaral, M; Pinto, J C
2017-07-01
A strategy for monitoring fermentation processes, specifically, simultaneous saccharification and fermentation (SSF) of corn mash, was developed. The strategy covered the development and use of first principles, semimechanistic and unstructured process model based on major kinetic phenomena, along with mass and energy balances. The model was then used as a reference model within an identification procedure capable of running on-line. The on-line identification procedure consists on updating the reference model through the estimation of corrective parameters for certain reaction rates using the most recent process measurements. The strategy makes use of standard laboratory measurements for sugars quantification and in situ temperature and liquid level data. The model, along with the on-line identification procedure, has been tested against real industrial data and have been able to accurately predict the main variables of operational interest, i.e., state variables and its dynamics, and key process indicators. The results demonstrate that the strategy is capable of monitoring, in real time, this complex industrial biomass fermentation. This new tool provides a great support for decision-making and opens a new range of opportunities for industrial optimization.
USDA-ARS?s Scientific Manuscript database
To expand the biomass to fuel ethanol industry, process strategies are needed to foster the production and utilization of microorganisms which can survive and ferment both hexose (C6) and pentose (C5) sugars while exposed to inhibitors (such as ethanol, furfural, and hydroxymethylfurfural, or HMF). ...
Out of control little-used clinical assets are draining healthcare budgets.
Horblyuk, Ruslan; Kaneta, Kristopher; McMillen, Gary L; Mullins, Christopher; O'Brien, Thomas M; Roy, Ankita
2012-07-01
To improve utilization and reduce the cost of maintaining mobile clinical equipment, healthcare organization leaders should do the following: Select an initial asset group to target. Conduct a physical inventory. Evaluate the organization's asset "ecosystem." Optimize workflow processes. Phase in new processes, and phase out inventory. Devote time to change management. Develop a replacement strategy.
Defense Science and Technology Strategy
1994-09-01
I 3 IV. The Science and Technology Program .................... 15 Advanced Concept Technology Demomstrations...product and process concepts that pcrmit us to tailor, modify, and optimize the manufactUriiig process; develop sensors a-t i~a Mcrials that will detect...It can be used during concept formulations to expand the range of technical, operational, and system alternatives evaluated. The technology can
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
NASA Astrophysics Data System (ADS)
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3 DOE), in context of aircraft wing optimization. M 3 DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) combination of a series of structural and aerodynamic analyses. The modularity of M3 DOE allows it to be a part of other inclusive optimization frameworks. The M 3 DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization, and cruise range maximization are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3 DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M 3 DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of fitness-driven retention. This strategy capitalizes on the advantages of evolutionary algorithm as well as POD-based reduced order modeling, while overcoming the shortcomings inherent with these techniques. When linked with M3 DOE, this strategy offers a computationally efficient methodology for problems with high level of complexity and a challenging design-space. This newly developed framework is demonstrated for its robustness on a nonconventional supersonic tailless air vehicle wing shape optimization problem.
Handlogten, Michael W; Lee-O'Brien, Allison; Roy, Gargi; Levitskaya, Sophia V; Venkat, Raghavan; Singh, Shailendra; Ahuja, Sanjeev
2018-01-01
A key goal in process development for antibodies is to increase productivity while maintaining or improving product quality. During process development of an antibody, titers were increased from 4 to 10 g/L while simultaneously decreasing aggregates. Process development involved optimization of media and feed formulations, feed strategy, and process parameters including pH and temperature. To better understand how CHO cells respond to process changes, the changes were implemented in a stepwise manner. The first change was an optimization of the feed formulation, the second was an optimization of the medium, and the third was an optimization of process parameters. Multiple process outputs were evaluated including cell growth, osmolality, lactate production, ammonium concentration, antibody production, and aggregate levels. Additionally, detailed assessment of oxygen uptake, nutrient and amino acid consumption, extracellular and intracellular redox environment, oxidative stress, activation of the unfolded protein response (UPR) pathway, protein disulfide isomerase (PDI) expression, and heavy and light chain mRNA expression provided an in-depth understanding of the cellular response to process changes. The results demonstrate that mRNA expression and UPR activation were unaffected by process changes, and that increased PDI expression and optimized nutrient supplementation are required for higher productivity processes. Furthermore, our findings demonstrate the role of extra- and intracellular redox environment on productivity and antibody aggregation. Processes using the optimized medium, with increased concentrations of redox modifying agents, had the highest overall specific productivity, reduced aggregate levels, and helped cells better withstand the high levels of oxidative stress associated with increased productivity. Specific productivities of different processes positively correlated to average intracellular values of total glutathione. Additionally, processes with the optimized media maintained an oxidizing intracellular environment, important for correct disulfide bond pairing, which likely contributed to reduced aggregate formation. These findings shed important understanding into how cells respond to process changes and can be useful to guide future development efforts to enhance productivity and improve product quality. © 2017 Wiley Periodicals, Inc.
Algorithms for optimizing the treatment of depression: making the right decision at the right time.
Adli, M; Rush, A J; Möller, H-J; Bauer, M
2003-11-01
Medication algorithms for the treatment of depression are designed to optimize both treatment implementation and the appropriateness of treatment strategies. Thus, they are essential tools for treating and avoiding refractory depression. Treatment algorithms are explicit treatment protocols that provide specific therapeutic pathways and decision-making tools at critical decision points throughout the treatment process. The present article provides an overview of major projects of algorithm research in the field of antidepressant therapy. The Berlin Algorithm Project and the Texas Medication Algorithm Project (TMAP) compare algorithm-guided treatments with treatment as usual. The Sequenced Treatment Alternatives to Relieve Depression Project (STAR*D) compares different treatment strategies in treatment-resistant patients.
[Lead compound optimization strategy(5) – reducing the hERG cardiac toxicity in drug development].
Zhou, Sheng-bin; Wang, Jiang; Liu, Hong
2016-10-01
The potassium channel encoded by the human ether-a-go-go related gene(hERG) plays a very important role in the physiological and pathological processes in human. hERG potassium channel determines the outward currents which facilitate the repolarization of the myocardial cells. Some drugs were withdrawn from the market for the serious side effect of long QT interval and arrhythmia due to blockade of hERG channel. The strategies for lead compound optimization are to reduce inhibitory activity of hERG potassium channel and decrease cardiac toxicity. These methods include reduction of lipophilicity and basicity of amines, introduction of hydroxyl and acidic groups, and restricting conformation.
Two-phase strategy of controlling motor coordination determined by task performance optimality.
Shimansky, Yury P; Rand, Miya K
2013-02-01
A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.
Wang, Hai-Xia; Suo, Tong-Chuan; Yu, He-Shui; Li, Zheng
2016-10-01
The manufacture of traditional Chinese medicine (TCM) products is always accompanied by processing complex raw materials and real-time monitoring of the manufacturing process. In this study, we investigated different modeling strategies for the extraction process of licorice. Near-infrared spectra associate with the extraction time was used to detemine the states of the extraction processes. Three modeling approaches, i.e., principal component analysis (PCA), partial least squares regression (PLSR) and parallel factor analysis-PLSR (PARAFAC-PLSR), were adopted for the prediction of the real-time status of the process. The overall results indicated that PCA, PLSR and PARAFAC-PLSR can effectively detect the errors in the extraction procedure and predict the process trajectories, which has important significance for the monitoring and controlling of the extraction processes. Copyright© by the Chinese Pharmaceutical Association.
Product modular design incorporating preventive maintenance issues
NASA Astrophysics Data System (ADS)
Gao, Yicong; Feng, Yixiong; Tan, Jianrong
2016-03-01
Traditional modular design methods lead to product maintenance problems, because the module form of a system is created according to either the function requirements or the manufacturing considerations. For solving these problems, a new modular design method is proposed with the considerations of not only the traditional function related attributes, but also the maintenance related ones. First, modularity parameters and modularity scenarios for product modularity are defined. Then the reliability and economic assessment models of product modularity strategies are formulated with the introduction of the effective working age of modules. A mathematical model used to evaluate the difference among the modules of the product so that the optimal module of the product can be established. After that, a multi-objective optimization problem based on metrics for preventive maintenance interval different degrees and preventive maintenance economics is formulated for modular optimization. Multi-objective GA is utilized to rapidly approximate the Pareto set of optimal modularity strategy trade-offs between preventive maintenance cost and preventive maintenance interval difference degree. Finally, a coordinate CNC boring machine is adopted to depict the process of product modularity. In addition, two factorial design experiments based on the modularity parameters are constructed and analyzed. These experiments investigate the impacts of these parameters on the optimal modularity strategies and the structure of module. The research proposes a new modular design method, which may help to improve the maintainability of product in modular design.
Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G
2016-01-01
This article presents an Evolution Strategy (ES)--based algorithm, designed to self-adapt its mutation operators, guiding the search into the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of the Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different [Formula: see text]-Hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability in combining the application of move operations from distinct neighborhood structures along the optimization. The results gathered and reported in this article represent a collective evidence of the performance of the method in challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability of adapting the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework for solving other combinatorial optimization problems.
Detection of MDR1 mRNA expression with optimized gold nanoparticle beacon
NASA Astrophysics Data System (ADS)
Zhou, Qiumei; Qian, Zhiyu; Gu, Yueqing
2016-03-01
MDR1 (multidrug resistance gene) mRNA expression is a promising biomarker for the prediction of doxorubicin resistance in clinic. However, the traditional technical process in clinic is complicated and cannot perform the real-time detection mRNA in living single cells. In this study, the expression of MDR1 mRNA was analyzed based on optimized gold nanoparticle beacon in tumor cells. Firstly, gold nanoparticle (AuNP) was modified by thiol-PEG, and the MDR1 beacon sequence was screened and optimized using a BLAST bioinformatics strategy. Then, optimized MDR1 molecular beacons were characterized by transmission electron microscope, UV-vis and fluorescence spectroscopies. The cytotoxicity of MDR1 molecular beacon on L-02, K562 and K562/Adr cells were investigated by MTT assay, suggesting that MDR1 molecular beacon was low inherent cytotoxicity. Dark field microscope was used to investigate the cellular uptake of hDAuNP beacon assisted with ultrasound. Finally, laser scanning confocal microscope images showed that there was a significant difference in MDR1 mRNA expression in K562 and K562/Adr cells, which was consistent with the results of q-PCR measurement. In summary, optimized MDR1 molecular beacon designed in this study is a reliable strategy for detection MDR1 mRNA expression in living tumor cells, and will be a promising strategy for in guiding patient treatment and management in individualized medication.
A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
Zhong, Yi; Zhu, Jieqiang; Yang, Zhenzhong; Shao, Qing; Fan, Xiaohui; Cheng, Yiyu
2018-01-31
To ensure pharmaceutical quality, chemistry, manufacturing and control (CMC) research is essential. However, due to the inherent complexity of Chinese medicine (CM), CMC study of CM remains a great challenge for academia, industry, and regulatory agencies. Recently, quality-marker (Q-marker) was proposed to establish quality standards or quality analysis approaches of Chinese medicine, which sheds a light on Chinese medicine's CMC study. Here manufacture processes of Panax Notoginseng Saponins (PNS) is taken as a case study and the present work is to establish a Q-marker based research strategy for CMC of Chinese medicine. The Q-markers of Panax Notoginseng Saponins (PNS) is selected and established by integrating chemical profile with pharmacological activities. Then, the key processes of PNS manufacturing are identified by material flow analysis. Furthermore, modeling algorithms are employed to explore the relationship between Q-markers and critical process parameters (CPPs) of the key processes. At last, CPPs of the key processes are optimized in order to improving the process efficiency. Among the 97 identified compounds, Notoginsenoside R 1 , ginsenoside Rg 1 , Re, Rb 1 and Rd are selected as the Q-markers of PNS. Our analysis on PNS manufacturing show the extraction process and column chromatography process are the key processes. With the CPPs of each process as the inputs and Q-markers' contents as the outputs, two process prediction models are built separately for the extraction process and column chromatography process of Panax notoginseng, which both possess good prediction ability. Based on the efficiency models of extraction process and column chromatography process we constructed, the optimal CPPs of both processes are calculated. Our results show that the Q-markers derived from CMC research strategy can be applied to analyze the manufacturing processes of Chinese medicine to assure product's quality and promote key processes' efficiency simultaneously. Copyright © 2018 Elsevier GmbH. All rights reserved.
Multi-model groundwater-management optimization: reconciling disparate conceptual models
NASA Astrophysics Data System (ADS)
Timani, Bassel; Peralta, Richard
2015-09-01
Disagreement among policymakers often involves policy issues and differences between the decision makers' implicit utility functions. Significant disagreement can also exist concerning conceptual models of the physical system. Disagreement on the validity of a single simulation model delays discussion on policy issues and prevents the adoption of consensus management strategies. For such a contentious situation, the proposed multi-conceptual model optimization (MCMO) can help stakeholders reach a compromise strategy. MCMO computes mathematically optimal strategies that simultaneously satisfy analogous constraints and bounds in multiple numerical models that differ in boundary conditions, hydrogeologic stratigraphy, and discretization. Shadow prices and trade-offs guide the process of refining the first MCMO-developed `multi-model strategy into a realistic compromise management strategy. By employing automated cycling, MCMO is practical for linear and nonlinear aquifer systems. In this reconnaissance study, MCMO application to the multilayer Cache Valley (Utah and Idaho, USA) river-aquifer system employs two simulation models with analogous background conditions but different vertical discretization and boundary conditions. The objective is to maximize additional safe pumping (beyond current pumping), subject to constraints on groundwater head and seepage from the aquifer to surface waters. MCMO application reveals that in order to protect the local ecosystem, increased groundwater pumping can satisfy only 40 % of projected water demand increase. To explore the possibility of increasing that pumping while protecting the ecosystem, MCMO clearly identifies localities requiring additional field data. MCMO is applicable to other areas and optimization problems than used here. Steps to prepare comparable sub-models for MCMO use are area-dependent.
Optimal waste-to-energy strategy assisted by GIS For sustainable solid waste management
NASA Astrophysics Data System (ADS)
Tan, S. T.; Hashim, H.
2014-02-01
Municipal solid waste (MSW) management has become more complex and costly with the rapid socio-economic development and increased volume of waste. Planning a sustainable regional waste management strategy is a critical step for the decision maker. There is a great potential for MSW to be used for the generation of renewable energy through waste incineration or landfilling with gas capture system. However, due to high processing cost and cost of resource transportation and distribution throughout the waste collection station and power plant, MSW is mostly disposed in the landfill. This paper presents an optimization model incorporated with GIS data inputs for MSW management. The model can design the multi-period waste-to-energy (WTE) strategy to illustrate the economic potential and tradeoffs for MSW management under different scenarios. The model is capable of predicting the optimal generation, capacity, type of WTE conversion technology and location for the operation and construction of new WTE power plants to satisfy the increased energy demand by 2025 in the most profitable way. Iskandar Malaysia region was chosen as the model city for this study.
NASA Astrophysics Data System (ADS)
Reyes, J. J.; Adam, J. C.; Tague, C.
2016-12-01
Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
Distributed query plan generation using multiobjective genetic algorithm.
Panicker, Shina; Kumar, T V Vijay
2014-01-01
A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.
Distributed Query Plan Generation Using Multiobjective Genetic Algorithm
Panicker, Shina; Vijay Kumar, T. V.
2014-01-01
A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513
When teams shift among processes: insights from simulation and optimization.
Kennedy, Deanna M; McComb, Sara A
2014-09-01
This article introduces process shifts to study the temporal interplay among transition and action processes espoused in the recurring phase model proposed by Marks, Mathieu, and Zacarro (2001). Process shifts are those points in time when teams complete a focal process and change to another process. By using team communication patterns to measure process shifts, this research explores (a) when teams shift among different transition processes and initiate action processes and (b) the potential of different interventions, such as communication directives, to manipulate process shift timing and order and, ultimately, team performance. Virtual experiments are employed to compare data from observed laboratory teams not receiving interventions, simulated teams receiving interventions, and optimal simulated teams generated using genetic algorithm procedures. Our results offer insights about the potential for different interventions to affect team performance. Moreover, certain interventions may promote discussions about key issues (e.g., tactical strategies) and facilitate shifting among transition processes in a manner that emulates optimal simulated teams' communication patterns. Thus, we contribute to theory regarding team processes in 2 important ways. First, we present process shifts as a way to explore the timing of when teams shift from transition to action processes. Second, we use virtual experimentation to identify those interventions with the greatest potential to affect performance by changing when teams shift among processes. Additionally, we employ computational methods including neural networks, simulation, and optimization, thereby demonstrating their applicability in conducting team research. PsycINFO Database Record (c) 2014 APA, all rights reserved.
He, Li; Xu, Zongda; Fan, Xing; Li, Jing; Lu, Hongwei
2017-05-01
This study develops a meta-modeling based mathematical programming approach with flexibility in environmental standards. It integrates numerical simulation, meta-modeling analysis, and fuzzy programming within a general framework. A set of models between remediation strategies and remediation performance can well guarantee the mitigation in computational efforts in the simulation and optimization process. In order to prevent the occurrence of over-optimistic and pessimistic optimization strategies, a high satisfaction level resulting from the implementation of a flexible standard can indicate the degree to which the environmental standard is satisfied. The proposed approach is applied to a naphthalene-contaminated site in China. Results show that a longer remediation period corresponds to a lower total pumping rate and a stringent risk standard implies a high total pumping rate. The wells located near or in the down-gradient direction to the contaminant sources have the most significant efficiency among all of remediation schemes.
Schneider, Adam D.; Jamali, Mohsen; Carriot, Jerome; Chacron, Maurice J.
2015-01-01
Efficient processing of incoming sensory input is essential for an organism's survival. A growing body of evidence suggests that sensory systems have developed coding strategies that are constrained by the statistics of the natural environment. Consequently, it is necessary to first characterize neural responses to natural stimuli to uncover the coding strategies used by a given sensory system. Here we report for the first time the statistics of vestibular rotational and translational stimuli experienced by rhesus monkeys during natural (e.g., walking, grooming) behaviors. We find that these stimuli can reach intensities as high as 1500 deg/s and 8 G. Recordings from afferents during naturalistic rotational and linear motion further revealed strongly nonlinear responses in the form of rectification and saturation, which could not be accurately predicted by traditional linear models of vestibular processing. Accordingly, we used linear–nonlinear cascade models and found that these could accurately predict responses to naturalistic stimuli. Finally, we tested whether the statistics of natural vestibular signals constrain the neural coding strategies used by peripheral afferents. We found that both irregular otolith and semicircular canal afferents, because of their higher sensitivities, were more optimized for processing natural vestibular stimuli as compared with their regular counterparts. Our results therefore provide the first evidence supporting the hypothesis that the neural coding strategies used by the vestibular system are matched to the statistics of natural stimuli. PMID:25855169
Optimal growth trajectories with finite carrying capacity.
Caravelli, F; Sindoni, L; Caccioli, F; Ududec, C
2016-08-01
We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these finding to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.
The optimization of total laboratory automation by simulation of a pull-strategy.
Yang, Taho; Wang, Teng-Kuan; Li, Vincent C; Su, Chia-Lo
2015-01-01
Laboratory results are essential for physicians to diagnose medical conditions. Because of the critical role of medical laboratories, an increasing number of hospitals use total laboratory automation (TLA) to improve laboratory performance. Although the benefits of TLA are well documented, systems occasionally become congested, particularly when hospitals face peak demand. This study optimizes TLA operations. Firstly, value stream mapping (VSM) is used to identify the non-value-added time. Subsequently, batch processing control and parallel scheduling rules are devised and a pull mechanism that comprises a constant work-in-process (CONWIP) is proposed. Simulation optimization is then used to optimize the design parameters and to ensure a small inventory and a shorter average cycle time (CT). For empirical illustration, this approach is applied to a real case. The proposed methodology significantly improves the efficiency of laboratory work and leads to a reduction in patient waiting times and increased service level.
Optimal growth trajectories with finite carrying capacity
NASA Astrophysics Data System (ADS)
Caravelli, F.; Sindoni, L.; Caccioli, F.; Ududec, C.
2016-08-01
We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these finding to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.
Johnson, Fred A.; Jensen, Gitte H.; Madsen, Jesper; Williams, Byron K.
2014-01-01
We explored the application of dynamic-optimization methods to the problem of pink-footed goose (Anser brachyrhynchus) management in western Europe. We were especially concerned with the extent to which uncertainty in population dynamics influenced an optimal management strategy, the gain in management performance that could be expected if uncertainty could be eliminated or reduced, and whether an adaptive or robust management strategy might be most appropriate in the face of uncertainty. We combined three alternative survival models with three alternative reproductive models to form a set of nine annual-cycle models for pink-footed geese. These models represent a wide range of possibilities concerning the extent to which demographic rates are density dependent or independent, and the extent to which they are influenced by spring temperatures. We calculated state-dependent harvest strategies for these models using stochastic dynamic programming and an objective function that maximized sustainable harvest, subject to a constraint on desired population size. As expected, attaining the largest mean objective value (i.e., the relative measure of management performance) depended on the ability to match a model-dependent optimal strategy with its generating model of population dynamics. The nine models suggested widely varying objective values regardless of the harvest strategy, with the density-independent models generally producing higher objective values than models with density-dependent survival. In the face of uncertainty as to which of the nine models is most appropriate, the optimal strategy assuming that both survival and reproduction were a function of goose abundance and spring temperatures maximized the expected minimum objective value (i.e., maxi–min). In contrast, the optimal strategy assuming equal model weights minimized the expected maximum loss in objective value. The expected value of eliminating model uncertainty was an increase in objective value of only 3.0%. This value represents the difference between the best that could be expected if the most appropriate model were known and the best that could be expected in the face of model uncertainty. The value of eliminating uncertainty about the survival process was substantially higher than that associated with the reproductive process, which is consistent with evidence that variation in survival is more important than variation in reproduction in relatively long-lived avian species. Comparing the expected objective value if the most appropriate model were known with that of the maxi–min robust strategy, we found the value of eliminating uncertainty to be an expected increase of 6.2% in objective value. This result underscores the conservatism of the maxi–min rule and suggests that risk-neutral managers would prefer the optimal strategy that maximizes expected value, which is also the strategy that is expected to minimize the maximum loss (i.e., a strategy based on equal model weights). The low value of information calculated for pink-footed geese suggests that a robust strategy (i.e., one in which no learning is anticipated) could be as nearly effective as an adaptive one (i.e., a strategy in which the relative credibility of models is assessed through time). Of course, an alternative explanation for the low value of information is that the set of population models we considered was too narrow to represent key uncertainties in population dynamics. Yet we know that questions about the presence of density dependence must be central to the development of a sustainable harvest strategy. And while there are potentially many environmental covariates that could help explain variation in survival or reproduction, our admission of models in which vital rates are drawn randomly from reasonable distributions represents a worst-case scenario for management. We suspect that much of the value of the various harvest strategies we calculated is derived from the fact that they are state dependent, such that appropriate harvest rates depend on population abundance and weather conditions, as well as our focus on an infinite time horizon for sustainability.
Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.
Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas
2011-06-24
This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows to adjust transient operating conditions stage-wise, and provides capability to improve transient performance and to fulfill product quality specifications simultaneously. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only allow to significantly reduce both duration of the transient phase and desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.
CUDA Optimization Strategies for Compute- and Memory-Bound Neuroimaging Algorithms
Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W.
2011-01-01
As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance are optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. PMID:21159404
CUDA optimization strategies for compute- and memory-bound neuroimaging algorithms.
Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W
2012-06-01
As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance are optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Evolution of optimal Lévy-flight strategies in human mental searches
NASA Astrophysics Data System (ADS)
Radicchi, Filippo; Baronchelli, Andrea
2012-06-01
Recent analysis of empirical data [Radicchi, Baronchelli, and Amaral, PloS ONE1932-620310.1371/journal.pone.0029910 7, e029910 (2012)] showed that humans adopt Lévy-flight strategies when exploring the bid space in online auctions. A game theoretical model proved that the observed Lévy exponents are nearly optimal, being close to the exponent value that guarantees the maximal economical return to players. Here, we rationalize these findings by adopting an evolutionary perspective. We show that a simple evolutionary process is able to account for the empirical measurements with the only assumption that the reproductive fitness of the players is proportional to their search ability. Contrary to previous modeling, our approach describes the emergence of the observed exponent without resorting to any strong assumptions on the initial searching strategies. Our results generalize earlier research, and open novel questions in cognitive, behavioral, and evolutionary sciences.
Spatial control of chemical processes on nanostructures through nano-localized water heating.
Jack, Calum; Karimullah, Affar S; Tullius, Ryan; Khorashad, Larousse Khosravi; Rodier, Marion; Fitzpatrick, Brian; Barron, Laurence D; Gadegaard, Nikolaj; Lapthorn, Adrian J; Rotello, Vincent M; Cooke, Graeme; Govorov, Alexander O; Kadodwala, Malcolm
2016-03-10
Optimal performance of nanophotonic devices, including sensors and solar cells, requires maximizing the interaction between light and matter. This efficiency is optimized when active moieties are localized in areas where electromagnetic (EM) fields are confined. Confinement of matter in these 'hotspots' has previously been accomplished through inefficient 'top-down' methods. Here we report a rapid 'bottom-up' approach to functionalize selective regions of plasmonic nanostructures that uses nano-localized heating of the surrounding water induced by pulsed laser irradiation. This localized heating is exploited in a chemical protection/deprotection strategy to allow selective regions of a nanostructure to be chemically modified. As an exemplar, we use the strategy to enhance the biosensing capabilities of a chiral plasmonic substrate. This novel spatially selective functionalization strategy provides new opportunities for efficient high-throughput control of chemistry on the nanoscale over macroscopic areas for device fabrication.
NASA Astrophysics Data System (ADS)
Mebrahitom, A.; Rizuan, D.; Azmir, M.; Nassif, M.
2016-02-01
High speed milling is one of the recent technologies used to produce mould inserts due to the need for high surface finish. It is a faster machining process where it uses a small side step and a small down step combined with very high spindle speed and feed rate. In order to effectively use the HSM capabilities, optimizing the tool path strategies and machining parameters is an important issue. In this paper, six different tool path strategies have been investigated on the surface finish and machining time of a rectangular cavities of ESR Stavax material. CAD/CAM application of CATIA V5 machining module for pocket milling of the cavities was used for process planning.
Surface processing: existing and potential applications of ultraviolet light.
Manzocco, Lara; Nicoli, Maria Cristina
2015-01-01
Solid foods represent optimal matrices for ultraviolet processing with effects well beyond nonthermal surface disinfection. UV radiation favors hormetic response in plant tissues and degradation of toxic compound on the product surface. Photoinduced reactions can also provide unexplored possibilities to steer structure and functionality of food biopolymers. The possibility to extensively exploit this technology will depend on availability of robust information about efficacious processing conditions and adequate strategies to completely and homogeneously process food surface.
Conceptual Comparison of Population Based Metaheuristics for Engineering Problems
Green, Paul
2015-01-01
Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes. PMID:25874265
Conceptual comparison of population based metaheuristics for engineering problems.
Adekanmbi, Oluwole; Green, Paul
2015-01-01
Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes.
Cross layer optimization for cloud-based radio over optical fiber networks
NASA Astrophysics Data System (ADS)
Shao, Sujie; Guo, Shaoyong; Qiu, Xuesong; Yang, Hui; Meng, Luoming
2016-07-01
To adapt the 5G communication, the cloud radio access network is a paradigm introduced by operators which aggregates all base stations computational resources into a cloud BBU pool. The interaction between RRH and BBU or resource schedule among BBUs in cloud have become more frequent and complex with the development of system scale and user requirement. It can promote the networking demand among RRHs and BBUs, and force to form elastic optical fiber switching and networking. In such network, multiple stratum resources of radio, optical and BBU processing unit have interweaved with each other. In this paper, we propose a novel multiple stratum optimization (MSO) architecture for cloud-based radio over optical fiber networks (C-RoFN) with software defined networking. Additionally, a global evaluation strategy (GES) is introduced in the proposed architecture. MSO can enhance the responsiveness to end-to-end user demands and globally optimize radio frequency, optical spectrum and BBU processing resources effectively to maximize radio coverage. The feasibility and efficiency of the proposed architecture with GES strategy are experimentally verified on OpenFlow-enabled testbed in terms of resource occupation and path provisioning latency.
Mammalian cell culture monitoring using in situ spectroscopy: Is your method really optimised?
André, Silvère; Lagresle, Sylvain; Hannas, Zahia; Calvosa, Éric; Duponchel, Ludovic
2017-03-01
In recent years, as a result of the process analytical technology initiative of the US Food and Drug Administration, many different works have been carried out on direct and in situ monitoring of critical parameters for mammalian cell cultures by Raman spectroscopy and multivariate regression techniques. However, despite interesting results, it cannot be said that the proposed monitoring strategies, which will reduce errors of the regression models and thus confidence limits of the predictions, are really optimized. Hence, the aim of this article is to optimize some critical steps of spectroscopic acquisition and data treatment in order to reach a higher level of accuracy and robustness of bioprocess monitoring. In this way, we propose first an original strategy to assess the most suited Raman acquisition time for the processes involved. In a second part, we demonstrate the importance of the interbatch variability on the accuracy of the predictive models with a particular focus on the optical probes adjustment. Finally, we propose a methodology for the optimization of the spectral variables selection in order to decrease prediction errors of multivariate regressions. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:308-316, 2017. © 2017 American Institute of Chemical Engineers.
Label-assisted mass spectrometry for the acceleration of reaction discovery and optimization
NASA Astrophysics Data System (ADS)
Cabrera-Pardo, Jaime R.; Chai, David I.; Liu, Song; Mrksich, Milan; Kozmin, Sergey A.
2013-05-01
The identification of new reactions expands our knowledge of chemical reactivity and enables new synthetic applications. Accelerating the pace of this discovery process remains challenging. We describe a highly effective and simple platform for screening a large number of potential chemical reactions in order to discover and optimize previously unknown catalytic transformations, thereby revealing new chemical reactivity. Our strategy is based on labelling one of the reactants with a polyaromatic chemical tag, which selectively undergoes a photoionization/desorption process upon laser irradiation, without the assistance of an external matrix, and enables rapid mass spectrometric detection of any products originating from such labelled reactants in complex reaction mixtures without any chromatographic separation. This method was successfully used for high-throughput discovery and subsequent optimization of two previously unknown benzannulation reactions.
Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations
NASA Astrophysics Data System (ADS)
Hause, Benjamin; Parker, Scott
2012-10-01
We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core I7 gaming PC with a NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C 2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three dimensional general geometry GEM code.
Are strategies in physics discrete? A remote controlled investigation
NASA Astrophysics Data System (ADS)
Heck, Robert; Sherson, Jacob F.; www. scienceathome. org Team; players Team
2017-04-01
In science, strategies are formulated based on observations, calculations, or physical insight. For any given physical process, often several distinct strategies are identified. Are these truly distinct or simply low dimensional representations of a high dimensional continuum of solutions? Our online citizen science platform www.scienceathome.org used by more than 150,000 people recently enabled finding solutions to fast, 1D single atom transport [Nature2016]. Surprisingly, player trajectories bunched into discrete solution strategies (clans) yielding clear, distinct physical insight. Introducing the multi-dimensional vector in the direction of other local maxima we locate narrow, high-yield ``bridges'' connecting the clans. This demonstrates for this problem that a continuum of solutions with no clear physical interpretation does in fact exist. Next, four distinct strategies for creating Bose-Einstein condensates were investigated experimentally: hybrid and crossed dipole trap configurations in combination with either large volume or dimple loading from a magnetic trap. We find that although each conventional strategy appears locally optimal, ``bridges'' can be identified. In a novel approach, the problem was gamified allowing 750 citizen scientists to contribute to the experimental optimization yielding nearly a factor two improvement in atom number.
Jadcherla, Sudarshan R; Dail, James; Malkar, Manish B; McClead, Richard; Kelleher, Kelly; Nelin, Leif
2016-07-01
We hypothesized that the implementation of a feeding quality improvement (QI) program among premature neonates accelerates feeding milestones, safely lowering hospital length of stay (LOS) compared with the baseline period. Baseline data were collected for 15 months (N = 92) prior to initiating the program, which involved development and implementation of a standardized feeding strategy in eligible premature neonates. Process optimization, implementation of feeding strategy, monitoring compliance, multidisciplinary feeding rounds, and continuous education strategies were employed. The main outcomes included the ability and duration to reach enteral feeds-120 (mL/kg/d), oral feeds-120 (mL/kg/d), and ad lib oral feeding. Balancing measures included growth velocities, comorbidities, and LOS. Comparing baseline versus feeding program (N = 92) groups, respectively, the feeding program improved the number of infants receiving trophic feeds (34% vs 80%, P < .002), trophic feeding duration (14.8 ± 10.3 days vs 7.6 ± 8.1 days, P < .0001), time to enteral feeds-120 (16.3 ± 15.4 days vs 11.4 ± 10.4 days, P < .04), time from oral feeding onset to oral feeds-120 (13.2 ± 16.7 days vs 19.5 ± 15.3 days, P < .0001), time from oral feeds-120 to ad lib feeds at discharge (22.4 ± 27.2 days vs 18.6 ± 21.3 days, P < .01), weight velocity (24 ± 6 g/d vs 27 ± 11 g/d, P < .03), and LOS (104.2 ± 51.8 vs 89.3 ± 46.0, P = .02). Mortality, readmissions within 30 days, and comorbidities were similar. Process optimization and the implementation of a standardized feeding strategy minimize practice variability, accelerating the attainment of enteral and oral feeding milestones and decreasing LOS without increasing adverse morbidities. © 2015 American Society for Parenteral and Enteral Nutrition.
Customization of ¹³C-MFA strategy according to cell culture system.
Quek, Lake-Ee; Nielsen, Lars K
2014-01-01
(13)C-MFA is far from being a simple assay for quantifying metabolic activity. It requires considerable up-front experimental planning and familiarity with the cell culture system in question, as well as optimized analytics and adequate computation frameworks. The success of a (13)C-MFA experiment is ultimately rated by the ability to accurately quantify the flux of one or more reactions of interest. In this chapter, we describe the different (13)C-MFA strategies that have been developed for the various fermentation or cell culture systems, as well as the limitations of the respective strategies. The strategies are affected by many factors and the (13)C-MFA modeling and experimental strategy must be tailored to conditions. The prevailing philosophy in the computation process is that any metabolic processes that produce significant systematic bias in the labeling pattern of the metabolites being measured must be described in the model. It is equally important to plan a labeling strategy by analytical screening or by heuristics.
Prabhu, Ashish A; Boro, Bibari; Bharali, Biju; Chakraborty, Shuchishloka; Dasu, Veeranki V
2017-01-01
Process development involving system metabolic engineering and bioprocess engineering has become one of the major thrust for the development of therapeutic proteins or enzymes. Pichia pastoris has emerged as a prominent host for the production of therapeutic protein or enzymes. Regardless of producing high protein titers, various cellular and process level bottlenecks restrict the expression of recombinant proteins in P. pastoris. In the present review, we have summarized the recent developments in the expression of foreign proteins in P. pastoris. Further, we have discussed various cellular engineering strategies which include codon optimization, pathway engineering, signal peptide processing, development of protease deficient strain and glyco-engineered strains for the high yield protein secretion of recombinant protein. Bioprocess development of recombinant proteins in large-scale bioreactor including medium optimization, optimum feeding strategy and co-substrate feeding in fed-batch as well as continuous cultivation have been described. The recent advances in system and synthetic biology studies including metabolic flux analysis in understanding the phenotypic characteristics of recombinant Pichia and genome editing with CRISPR-CAS system have also been summarized. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Rodríguez-Yáñez, Alicia Berenice; Méndez-Vázquez, Yaileen
2014-01-01
Process windows in injection molding are habitually built with only one performance measure in mind. In reality, a more realistic picture can be obtained when considering multiple performance measures at a time, especially in the presence of conflict. In this work, the construction of process windows for injection molding (IM) is undertaken considering two and three performance measures in conflict simultaneously. The best compromises between the criteria involved are identified through the direct application of the concept of Pareto-dominance in multiple criteria optimization. The aim is to provide a formal and realistic strategy to set processing conditions in IM operations. The resulting optimization approach is easily implementable in MS Excel. The solutions are presented graphically to facilitate their use in manufacturing plants. PMID:25530927
Rodríguez-Yáñez, Alicia Berenice; Méndez-Vázquez, Yaileen; Cabrera-Ríos, Mauricio
2014-01-01
Process windows in injection molding are habitually built with only one performance measure in mind. In reality, a more realistic picture can be obtained when considering multiple performance measures at a time, especially in the presence of conflict. In this work, the construction of process windows for injection molding (IM) is undertaken considering two and three performance measures in conflict simultaneously. The best compromises between the criteria involved are identified through the direct application of the concept of Pareto-dominance in multiple criteria optimization. The aim is to provide a formal and realistic strategy to set processing conditions in IM operations. The resulting optimization approach is easily implementable in MS Excel. The solutions are presented graphically to facilitate their use in manufacturing plants.
Prediction of Microstructure in High-Strength Ductile Forging Parts
NASA Astrophysics Data System (ADS)
Urban, M.; Keul, C.; Back, A.; Bleck, W.; Hirt, G.
2010-06-01
Governmental, environmental and economic demands call for lighter, stiffer and at the same time cheaper products in the vehicle industry. Especially safety relevant parts have to be stiff and at the same time ductile. The strategy of this project was to improve the mechanical properties of forging steel alloys by employing a high-strength and ductile bainitic microstructure in the parts while maintaining cost effective process chains to reach these goals for high stressed forged parts. Therefore, a new steel alloy combined with an optimized process chain has been developed. To optimize the process chain with a minimum of expensive experiments, a numerical approach was developed to predict the microstructure of the steel alloy after the process chain based on FEM simulations of the forging and cooling combined with deformation-time-temperature-transformation-diagrams.
Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias
2016-11-01
Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim to fill this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS 2 experiments is suggested for nontarget screening. MZmine 2 detected 75-100 % of the peaks compared to manual peak detection at an intensity level of 10 5 in a validation dataset on both spiked and real water samples under optimal parameter settings. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable for environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters. Graphical Abstract ᅟ.
Mery, Carlos M; Lopez, Keila N; Molossi, Silvana; Sexson-Tejtel, S Kristen; Krishnamurthy, Rajesh; McKenzie, E Dean; Fraser, Charles D; Cantor, Scott B
2016-11-01
The goal of this study was to use decision analysis to evaluate the impact of varying uncertainties on the outcomes of patients with anomalous aortic origin of a coronary artery. Two separate decision analysis models were created: one for anomalous left coronary artery (ALCA) and one for anomalous right coronary artery (ARCA). Three strategies were compared: observation, exercise restriction, and surgery. Probabilities and health utilities were estimated on the basis of existing literature. Deterministic and probabilistic sensitivity analyses were performed. Surgery was the optimal management strategy for patients <30 years of age with ALCA. As age increased, observation became an equivalent strategy and eventually surpassed surgery as the treatment of choice. The advantage on life expectancy for surgery over observation ranged from 2.6 ± 1.7 years for a 10-year-old patient to -0.03 ± 0.1 for a 65-year old patient. In patients with ARCA, observation was the optimal strategy for most patients with a life expectancy advantage over surgery of 0.1 ± 0.1 years to 0.2 ± 0.4 years, depending on age. Surgery was the preferred strategy only for patients <25 years of age when the perceived risk of sudden cardiac death was high and the perioperative mortality was low. Exercise restriction was a suboptimal strategy for both ALCA and ARCA in all scenarios. The optimal management in anomalous aortic origin of a coronary artery depends on multiple factors, including individual patient characteristics. Decision analysis provides a tool to understand how these characteristics affect the outcomes with each management strategy and thus may aid in the decision making process for a particular patient. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
48 CFR 1034.004 - Acquisition strategy.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Order, Task Order, or Interagency Agreement) to the overall investment requirements and management... investment; (3) A description of the effort, by acquisition, and the plans to include required clauses in the... requirements to manage the acquisition processes through the investment lifecycle; (7) Consideration of optimal...
Development of Chemical Process Design and Control for ...
This contribution describes a novel process systems engineering framework that couples advanced control with sustainability evaluation and decision making for the optimization of process operations to minimize environmental impacts associated with products, materials, and energy. The implemented control strategy combines a biologically inspired method with optimal control concepts for finding more sustainable operating trajectories. The sustainability assessment of process operating points is carried out by using the U.S. E.P.A.’s Gauging Reaction Effectiveness for the ENvironmental Sustainability of Chemistries with a multi-Objective Process Evaluator (GREENSCOPE) tool that provides scores for the selected indicators in the economic, material efficiency, environmental and energy areas. The indicator scores describe process performance on a sustainability measurement scale, effectively determining which operating point is more sustainable if there are more than several steady states for one specific product manufacturing. Through comparisons between a representative benchmark and the optimal steady-states obtained through implementation of the proposed controller, a systematic decision can be made in terms of whether the implementation of the controller is moving the process towards a more sustainable operation. The effectiveness of the proposed framework is illustrated through a case study of a continuous fermentation process for fuel production, whose materi
Wang, Yue; Chiu, Sheng-Yi; Ho, Shih-Hsin; Liu, Zhuo; Hasunuma, Tomohisa; Chang, Ting-Ting; Chang, Kuan-Fu; Chang, Jo-Shu; Ren, Nan-Qi; Kondo, Akihiko
2016-08-01
Biofuels from microalgae is now a hot issue of great potential. However, achieving high starch productivity with photoautotrophic microalgae is still challenging. A feasible approach to enhance the growth and target product of microalgae is to conduct mixotrophic cultivation. The appropriate acetate addition combined with CO2 supply as dual carbon sources (i.e., mixotrophic cultivation) could enhance the cell growth of some microalgae species, but the effect of acetate-mediated mixotrophic culture mode on carbohydrate accumulation in microalgae remains unclear. Moreover, there is still lack of the information concerning how to increase the productivity of carbohydrates from microalgae under acetate-amended mixotrophic cultivation and how to optimize the engineering strategies to achieve the goal. This study was undertaken to develop an optimal acetate-contained mixotrophic cultivation system coupled with effective operation strategies to markedly improve the carbohydrate productivity of Chlorella sorokiniana NIES-2168. The optimal carbohydrate productivity of 695 mg/L/d was obtained, which is the highest value ever reported. The monosaccharide in the accumulated carbohydrates is mainly glucose (i.e., 85-90%), which is very suitable for bio-alcohols fermentation. Hence, by applying the optimal process developed in this study, C. sorokiniana NIES-2168 has a high potential to serve as a feedstock for subsequent biofuels conversion. Copyright © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Optimal design of leak-proof SRAM cell using MCDM method
NASA Astrophysics Data System (ADS)
Wang, Qi; Kang, Sung-Mo
2003-04-01
As deep-submicron CMOS technology advances, on-chip cache has become a bottleneck on microprocessor's performance. Meanwhile, it also occupies a big percentage of processor area and consumes large power. Speed, power and area of SRAM are mutually contradicting, and not easy to be met simultaneously. Many existent leakage suppression techniques have been proposed, but they limit the circuit's performance. We apply a Multi-Criteria Decision Making strategy to perform a minimum delay-power-area optimization on SRAM circuit under some certain constraints. Based on an integrated device and circuit-level approach, we search for a process that yields a targeted composite performance. In consideration of the huge amount of simulation workload involved in the optimal design-seeking process, most of this process is automated to facilitate our goal-pursuant. With varying emphasis put on delay, power or area, different optimal SRAM designs are derived and a gate-oxide thickness scaling limit is projected. The result seems to indicate that a better composite performance could be achieved under a thinner oxide thickness. Under the derived optimal oxide thickness, the static leakage power consumption contributes less than 1% in the total power dissipation.
Contribution to the Optimization of Strategy of Maintenance by Lean Six Sigma
NASA Astrophysics Data System (ADS)
Youssouf, Ayadi; Rachid, Chaib; Ion, Verzea
The efficiency of the maintenance of the industrial systems is a major economic stake for their business concern. The main difficulties and the sources of ineffectiveness live in the choice of the actions of maintenance especially when the machine plays a vital role in the process of production. But as Algeria has embarked on major infrastructure projects in transport, housing, automobile, manufacturing industry and construction (factories, housing, highway, subway, tram, etc.) requiring new implications on maintenance strategies that meet industry requirements imposed by the exploitation. From then on and seen the importance of the maintenance on the economic market and sound impacts on the performances of the installations, methods of optimization were developed. For this purpose, to ensure the survival of businesses, be credible, contributing and competitive in the market, maintenance services must continually adapt to the progress of technical areas, technological and organizational even help maintenance managers to construct or to modify maintenance strategies, objective of this work. Our contribution in this work focuses on the optimization of maintenance for industrial systems by the use of Lean six Sigma bases. Lean Six Sigma is a method of improving the quality and profitability based on mastering statically of process and it is also a management style that based on a highly regulated organization dedicated to managing project. The method is based on five main steps summarized in the acronym (DMAIC): Define Measure, Analyze, Improve and Control. Application of the method on the maintenance processes with using maintenance methods during the five phases of the method will help to reduce costs and losses in order to strive for optimum results in terms of profit and quality.
Lu, Hongwei; Li, Jing; Ren, Lixia; Chen, Yizhong
2018-05-01
Groundwater remediation is a complicated system with time-consuming and costly challenges, which should be carefully controlled by appropriate groundwater management. This study develops an integrated optimization method for groundwater remediation management regarding cost, contamination distribution and health risk under multiple uncertainties. The integration of health risk into groundwater remediation optimization management is capable of not only adequately considering the influence of health risk on optimal remediation strategies, but also simultaneously completing remediation optimization design and risk assessment. A fuzzy chance-constrained programming approach is presented to handle multiple uncertain properties in the process of health risk assessment. The capabilities and effectiveness of the developed method are illustrated through an application of a naphthalene contaminated case in Anhui, China. Results indicate that (a) the pump-and-treat remediation system leads to a low naphthalene contamination but high remediation cost for a short-time remediation, and natural attenuation significantly affects naphthalene removal from groundwater for a long-time remediation; (b) the weighting coefficients have significant influences on the remediation cost and the performances both for naphthalene concentrations and health risks; (c) an increased level of slope factor (sf) for naphthalene corresponds to more optimal strategies characterized by higher environmental benefits and lower economic sacrifice. The developed method could be simultaneously beneficial for public health and environmental protection. Decision makers could obtain the most appropriate remediation strategies according to their specific requirements with high flexibility of economic, environmental, and risk concerns. Copyright © 2018 Elsevier Ltd. All rights reserved.
Optimization of 15 parameters influencing the long-term survival of bacteria in aquatic systems
NASA Technical Reports Server (NTRS)
Obenhuber, D. C.
1993-01-01
NASA is presently engaged in the design and development of a water reclamation system for the future space station. A major concern in processing water is the control of microbial contamination. As a means of developing an optimal microbial control strategy, studies were undertaken to determine the type and amount of contamination which could be expected in these systems under a variety of changing environmental conditions. A laboratory-based Taguchi optimization experiment was conducted to determine the ideal settings for 15 parameters which influence the survival of six bacterial species in aquatic systems. The experiment demonstrated that the bacterial survival period could be decreased significantly by optimizing environmental conditions.
An automated model-based aim point distribution system for solar towers
NASA Astrophysics Data System (ADS)
Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen
2016-05-01
Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.
NASA Astrophysics Data System (ADS)
Porporato, A. M.
2013-05-01
We discuss the key processes by which hydrologic variability affects the probabilistic structure of soil moisture dynamics in water-controlled ecosystems. These in turn impact biogeochemical cycling and ecosystem structure through plant productivity and biodiversity as well as nitrogen availability and soil conditions. Once the long-term probabilistic structure of these processes is quantified, the results become useful to understand the impact of climatic changes and human activities on ecosystem services, and can be used to find optimal strategies of water and soil resources management under unpredictable hydro-climatic fluctuations. Particular applications regard soil salinization, phytoremediation and optimal stochastic irrigation.
A Robust Design Methodology for Optimal Microscale Secondary Flow Control in Compact Inlet Diffusers
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Keller, Dennis J.
2001-01-01
It is the purpose of this study to develop an economical Robust design methodology for microscale secondary flow control in compact inlet diffusers. To illustrate the potential of economical Robust Design methodology, two different mission strategies were considered for the subject inlet, namely Maximum Performance and Maximum HCF Life Expectancy. The Maximum Performance mission maximized total pressure recovery while the Maximum HCF Life Expectancy mission minimized the mean of the first five Fourier harmonic amplitudes, i.e., 'collectively' reduced all the harmonic 1/2 amplitudes of engine face distortion. Each of the mission strategies was subject to a low engine face distortion constraint, i.e., DC60<0.10, which is a level acceptable for commercial engines. For each of these missions strategies, an 'Optimal Robust' (open loop control) and an 'Optimal Adaptive' (closed loop control) installation was designed over a twenty degree angle-of-incidence range. The Optimal Robust installation used economical Robust Design methodology to arrive at a single design which operated over the entire angle-of-incident range (open loop control). The Optimal Adaptive installation optimized all the design parameters at each angle-of-incidence. Thus, the Optimal Adaptive installation would require a closed loop control system to sense a proper signal for each effector and modify that effector device, whether mechanical or fluidic, for optimal inlet performance. In general, the performance differences between the Optimal Adaptive and Optimal Robust installation designs were found to be marginal. This suggests, however, that Optimal Robust open loop installation designs can be very competitive with Optimal Adaptive close loop designs. Secondary flow control in inlets is inherently robust, provided it is optimally designed. Therefore, the new methodology presented in this paper, combined array 'Lower Order' approach to Robust DOE, offers the aerodynamicist a very viable and economical way of exploring the concept of Robust inlet design, where the mission variables are brought directly into the inlet design process and insensitivity or robustness to the mission variables becomes a design objective.
Li, Nailu; Mu, Anle; Yang, Xiyun; Magar, Kaman T; Liu, Chao
2018-05-01
The optimal tuning of adaptive flap controller can improve adaptive flap control performance on uncertain operating environments, but the optimization process is usually time-consuming and it is difficult to design proper optimal tuning strategy for the flap control system (FCS). To solve this problem, a novel adaptive flap controller is designed based on a high-efficient differential evolution (DE) identification technique and composite adaptive internal model control (CAIMC) strategy. The optimal tuning can be easily obtained by DE identified inverse of the FCS via CAIMC structure. To achieve fast tuning, a high-efficient modified adaptive DE algorithm is proposed with new mutant operator and varying range adaptive mechanism for the FCS identification. A tradeoff between optimized adaptive flap control and low computation cost is successfully achieved by proposed controller. Simulation results show the robustness of proposed method and its superiority to conventional adaptive IMC (AIMC) flap controller and the CAIMC flap controllers using other DE algorithms on various uncertain operating conditions. The high computation efficiency of proposed controller is also verified based on the computation time on those operating cases. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Simulation-optimization of large agro-hydrosystems using a decomposition approach
NASA Astrophysics Data System (ADS)
Schuetze, Niels; Grundmann, Jens
2014-05-01
In this contribution a stochastic simulation-optimization framework for decision support for optimal planning and operation of water supply of large agro-hydrosystems is presented. It is based on a decomposition solution strategy which allows for (i) the usage of numerical process models together with efficient Monte Carlo simulations for a reliable estimation of higher quantiles of the minimum agricultural water demand for full and deficit irrigation strategies at small scale (farm level), and (ii) the utilization of the optimization results at small scale for solving water resources management problems at regional scale. As a secondary result of several simulation-optimization runs at the smaller scale stochastic crop-water production functions (SCWPF) for different crops are derived which can be used as a basic tool for assessing the impact of climate variability on risk for potential yield. In addition, microeconomic impacts of climate change and the vulnerability of the agro-ecological systems are evaluated. The developed methodology is demonstrated through its application on a real-world case study for the South Al-Batinah region in the Sultanate of Oman where a coastal aquifer is affected by saltwater intrusion due to excessive groundwater withdrawal for irrigated agriculture.
Minimization of energy and surface roughness of the products machined by milling
NASA Astrophysics Data System (ADS)
Belloufi, A.; Abdelkrim, M.; Bouakba, M.; Rezgui, I.
2017-08-01
Metal cutting represents a large portion in the manufacturing industries, which makes this process the largest consumer of energy. Energy consumption is an indirect source of carbon footprint, we know that CO2 emissions come from the production of energy. Therefore high energy consumption requires a large production, which leads to high cost and a large amount of CO2 emissions. At this day, a lot of researches done on the Metal cutting, but the environmental problems of the processes are rarely discussed. The right selection of cutting parameters is an effective method to reduce energy consumption because of the direct relationship between energy consumption and cutting parameters in machining processes. Therefore, one of the objectives of this research is to propose an optimization strategy suitable for machining processes (milling) to achieve the optimum cutting conditions based on the criterion of the energy consumed during the milling. In this paper the problem of energy consumed in milling is solved by an optimization method chosen. The optimization is done according to the different requirements in the process of roughing and finishing under various technological constraints.
Sim, Kwang Mong; Guo, Yuanyuan; Shi, Benyun
2009-02-01
Automated negotiation provides a means for resolving differences among interacting agents. For negotiation with complete information, this paper provides mathematical proofs to show that an agent's optimal strategy can be computed using its opponent's reserve price (RP) and deadline. The impetus of this work is using the synergy of Bayesian learning (BL) and genetic algorithm (GA) to determine an agent's optimal strategy in negotiation (N) with incomplete information. BLGAN adopts: 1) BL and a deadline-estimation process for estimating an opponent's RP and deadline and 2) GA for generating a proposal at each negotiation round. Learning the RP and deadline of an opponent enables the GA in BLGAN to reduce the size of its search space (SP) by adaptively focusing its search on a specific region in the space of all possible proposals. SP is dynamically defined as a region around an agent's proposal P at each negotiation round. P is generated using the agent's optimal strategy determined using its estimations of its opponent's RP and deadline. Hence, the GA in BLGAN is more likely to generate proposals that are closer to the proposal generated by the optimal strategy. Using GA to search around a proposal generated by its current strategy, an agent in BLGAN compensates for possible errors in estimating its opponent's RP and deadline. Empirical results show that agents adopting BLGAN reached agreements successfully, and achieved: 1) higher utilities and better combined negotiation outcomes (CNOs) than agents that only adopt GA to generate their proposals, 2) higher utilities than agents that adopt BL to learn only RP, and 3) higher utilities and better CNOs than agents that do not learn their opponents' RPs and deadlines.
2012-01-01
Background Cell disruption strategies by high pressure homogenizer for the release of recombinant Hepatitis B surface antigen (HBsAg) from Pichia pastoris expression cells were optimized using response surface methodology (RSM) based on the central composite design (CCD). The factors studied include number of passes, biomass concentration and pulse pressure. Polynomial models were used to correlate the above mentioned factors to project the cell disruption capability and specific protein release of HBsAg from P. pastoris cells. Results The proposed cell disruption strategy consisted of a number of passes set at 20 times, biomass concentration of 7.70 g/L of dry cell weight (DCW) and pulse pressure at 1,029 bar. The optimized cell disruption strategy was shown to increase cell disruption efficiency by 2-fold and 4-fold for specific protein release of HBsAg when compared to glass bead method yielding 75.68% cell disruption rate (CDR) and HBsAg concentration of 29.20 mg/L respectively. Conclusions The model equation generated from RSM on cell disruption of P. pastoris was found adequate to determine the significant factors and its interactions among the process variables and the optimum conditions in releasing HBsAg when validated against a glass bead cell disruption method. The findings from the study can open up a promising strategy for better recovery of HBsAg recombinant protein during downstream processing. PMID:23039947
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Sepehrnoori, K.
1995-12-31
The objective of this research is to develop cost-effective surfactant flooding technology by using simulation studies to evaluate and optimize alternative design strategies taking into account reservoir characteristics process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. In this quarter, we have continued working on Task 2 to optimizemore » surfactant flooding design and have included economic analysis to the optimization process. An economic model was developed using a spreadsheet and the discounted cash flow (DCF) method of economic analysis. The model was designed specifically for a domestic onshore surfactant flood and has been used to economically evaluate previous work that used a technical approach to optimization. The DCF model outputs common economic decision making criteria, such as net present value (NPV), internal rate of return (IRR), and payback period.« less
Rodríguez-Guerrero, Liliam; Santos-Sánchez, Omar-Jacobo; Cervantes-Escorcia, Nicolás; Romero, Hugo
2017-11-01
This article presents a suboptimal control strategy with finite horizon for affine nonlinear discrete systems with both state and input delays. The Dynamic Programming Approach is used to obtain the suboptimal control sequence, but in order to avoid the computation of the Bellman functional, a numerical approximation of this function is proposed in every step. The feasibility of our proposal is demonstrated via an experimental test on a dehydration process and the obtained results show a good performance and behavior of this process. Then in order to demonstrate the benefits of using this kind of control strategy, the results are compared with a non optimal control strategy, particularly with respect to results produced by an industrial Proportional Integral Derivative (PID) Honeywell controller, which is tuned using the Ziegler-Nichols method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, B.; Harp, D. R.; Lin, Y.; Keating, E. H.; Pawar, R.
2017-12-01
Monitoring is a crucial aspect of geologic carbon sequestration (GCS) risk management. It has gained importance as a means to ensure CO2 is safely and permanently stored underground throughout the lifecycle of a GCS project. Three issues are often involved in a monitoring project: (i) where is the optimal location to place the monitoring well(s), (ii) what type of data (pressure, rate and/or CO2 concentration) should be measured, and (iii) What is the optimal frequency to collect the data. In order to address these important issues, a filtering-based data assimilation procedure is developed to perform the monitoring optimization. The optimal monitoring strategy is selected based on the uncertainty reduction of the objective of interest (e.g., cumulative CO2 leak) for all potential monitoring strategies. To reduce the computational cost of the filtering-based data assimilation process, two machine-learning algorithms: Support Vector Regression (SVR) and Multivariate Adaptive Regression Splines (MARS) are used to develop the computationally efficient reduced-order-models (ROMs) from full numerical simulations of CO2 and brine flow. The proposed framework for GCS monitoring optimization is demonstrated with two examples: a simple 3D synthetic case and a real field case named Rock Spring Uplift carbon storage site in Southwestern Wyoming.
Optimizing antibody expression: The nuts and bolts.
Ayyar, B Vijayalakshmi; Arora, Sushrut; Ravi, Shiva Shankar
2017-03-01
Antibodies are extensively utilized entities in biomedical research, and in the development of diagnostics and therapeutics. Many of these applications require high amounts of antibodies. However, meeting this ever-increasing demand of antibodies in the global market is one of the outstanding challenges. The need to maintain a balance between demand and supply of antibodies has led the researchers to discover better means and methods for optimizing their expression. These strategies aim to increase the volumetric productivity of the antibodies along with the reduction of associated manufacturing costs. Recent years have witnessed major advances in recombinant protein technology, owing to the introduction of novel cloning strategies, gene manipulation techniques, and an array of cell and vector engineering techniques, together with the progress in fermentation technologies. These innovations were also highly beneficial for antibody expression. Antibody expression depends upon the complex interplay of multiple factors that may require fine tuning at diverse levels to achieve maximum yields. However, each antibody is unique and requires individual consideration and customization for optimizing the associated expression parameters. This review provides a comprehensive overview of several state-of-the-art approaches, such as host selection, strain engineering, codon optimization, gene optimization, vector modification and process optimization that are deemed suitable for enhancing antibody expression. Copyright © 2017 Elsevier Inc. All rights reserved.
''Illusion of control'' in Time-Horizon Minority and Parrondo Games
NASA Astrophysics Data System (ADS)
Satinover, J. B.; Sornette, D.
2007-12-01
Human beings like to believe they are in control of their destiny. This ubiquitous trait seems to increase motivation and persistence, and is probably evolutionarily adaptive [J.D. Taylor, S.E. Brown, Psych. Bull. 103, 193 (1988); A. Bandura, Self-efficacy: the exercise of control (WH Freeman, New York, 1997)]. But how good really is our ability to control? How successful is our track record in these areas? There is little understanding of when and under what circumstances we may over-estimate [E. Langer, J. Pers. Soc. Psych. 7, 185 (1975)] or even lose our ability to control and optimize outcomes, especially when they are the result of aggregations of individual optimization processes. Here, we demonstrate analytically using the theory of Markov Chains and by numerical simulations in two classes of games, the Time-Horizon Minority Game [M.L. Hart, P. Jefferies, N.F. Johnson, Phys. A 311, 275 (2002)] and the Parrondo Game [J.M.R. Parrondo, G.P. Harmer, D. Abbott, Phys. Rev. Lett. 85, 5226 (2000); J.M.R. Parrondo, How to cheat a bad mathematician (ISI, Italy, 1996)], that agents who optimize their strategy based on past information may actually perform worse than non-optimizing agents. In other words, low-entropy (more informative) strategies under-perform high-entropy (or random) strategies. This provides a precise definition of the “illusion of control” in certain set-ups a priori defined to emphasize the importance of optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgor, R.J.; Feehery, W.F.; Tolsma, J.E.
The batch process development problem serves as good candidate to guide the development of process modeling environments. It demonstrates that very robust numerical techniques are required within an environment that can collect, organize, and maintain the data and models required to address the batch process development problem. This paper focuses on improving the robustness and efficiency of the numerical algorithms required in such a modeling environment through the development of hybrid numerical and symbolic strategies.
Promoting Executive Function in the Classroom
ERIC Educational Resources Information Center
Meltzer, Lynn
2010-01-01
Accessible and practical, this book helps teachers incorporate executive function processes--such as planning, organizing, prioritizing, and self-checking--into the classroom curriculum. Chapters provide effective strategies for optimizing what K-12 students learn by improving how they learn. Noted authority Lynn Meltzer and her research…
A design space exploration for control of Critical Quality Attributes of mAb.
Bhatia, Hemlata; Read, Erik; Agarabi, Cyrus; Brorson, Kurt; Lute, Scott; Yoon, Seongkyu
2016-10-15
A unique "design space (DSp) exploration strategy," defined as a function of four key scenarios, was successfully integrated and validated to enhance the DSp building exercise, by increasing the accuracy of analyses and interpretation of processed data. The four key scenarios, defining the strategy, were based on cumulative analyses of individual models developed for the Critical Quality Attributes (23 Glycan Profiles) considered for the study. The analyses of the CQA estimates and model performances were interpreted as (1) Inside Specification/Significant Model (2) Inside Specification/Non-significant Model (3) Outside Specification/Significant Model (4) Outside Specification/Non-significant Model. Each scenario was defined and illustrated through individual models of CQA aligning the description. The R(2), Q(2), Model Validity and Model Reproducibility estimates of G2, G2FaGbGN, G0 and G2FaG2, respectively, signified the four scenarios stated above. Through further optimizations, including the estimation of Edge of Failure and Set Point Analysis, wider and accurate DSps were created for each scenario, establishing critical functional relationship between Critical Process Parameters (CPPs) and Critical Quality Attributes (CQAs). A DSp provides the optimal region for systematic evaluation, mechanistic understanding and refining of a QbD approach. DSp exploration strategy will aid the critical process of consistently and reproducibly achieving predefined quality of a product throughout its lifecycle. Copyright © 2016 Elsevier B.V. All rights reserved.
You Look Familiar: How Malaysian Chinese Recognize Faces
Tan, Chrystalle B. Y.; Stephen, Ian D.; Whitehead, Ross; Sheppard, Elizabeth
2012-01-01
East Asian and white Western observers employ different eye movement strategies for a variety of visual processing tasks, including face processing. Recent eye tracking studies on face recognition found that East Asians tend to integrate information holistically by focusing on the nose while white Westerners perceive faces featurally by moving between the eyes and mouth. The current study examines the eye movement strategy that Malaysian Chinese participants employ when recognizing East Asian, white Western, and African faces. Rather than adopting the Eastern or Western fixation pattern, Malaysian Chinese participants use a mixed strategy by focusing on the eyes and nose more than the mouth. The combination of Eastern and Western strategies proved advantageous in participants' ability to recognize East Asian and white Western faces, suggesting that individuals learn to use fixation patterns that are optimized for recognizing the faces with which they are more familiar. PMID:22253762
Anderson, D.R.
1975-01-01
Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of general theory. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under the rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If we assume that exploitation is largely an additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slight concave function of the environmental conditions. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population. Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, or harvest rate, or designed to maintain a constant breeding population size is inefficient.
Design and Control of Integrated Systems for Hydrogen Production and Power Generation
NASA Astrophysics Data System (ADS)
Georgis, Dimitrios
Growing concerns on CO2 emissions have led to the development of highly efficient power plants. Options for increased energy efficiencies include alternative energy conversion pathways, energy integration and process intensification. Solid oxide fuel cells (SOFC) constitute a promising alternative for power generation since they convert the chemical energy electrochemically directly to electricity. Their high operating temperature shows potential for energy integration with energy intensive units (e.g. steam reforming reactors). Although energy integration is an essential tool for increased efficiencies, it leads to highly complex process schemes with rich dynamic behavior, which are challenging to control. Furthermore, the use of process intensification for increased energy efficiency imposes an additional control challenge. This dissertation identifies and proposes solutions on design, operational and control challenges of integrated systems for hydrogen production and power generation. Initially, a study on energy integrated SOFC systems is presented. Design alternatives are identified, control strategies are proposed for each alternative and their validity is evaluated under different operational scenarios. The operational range of the proposed control strategies is also analyzed. Next, thermal management of water gas shift membrane reactors, which are a typical application of process intensification, is considered. Design and operational objectives are identified and a control strategy is proposed employing advanced control algorithms. The performance of the proposed control strategy is evaluated and compared with classical control strategies. Finally SOFC systems for combined heat and power applications are considered. Multiple recycle loops are placed to increase design flexibility. Different operational objectives are identified and a nonlinear optimization problem is formulated. Optimal designs are obtained and their features are discussed and compared. The results of the dissertation provide a deeper understanding on the design, operational and control challenges of the above systems and can potentially guide further commercialization efforts. In addition to this, the results can be generalized and used for applications from the transportation and residential sector to large--scale power plants.
OPTIMIZATION OF INTEGRATED URBAN WET-WEATHER CONTROL STRATEGIES
An optimization method for urban wet weather control (WWC) strategies is presented. The developed optimization model can be used to determine the most cost-effective strategies for the combination of centralized storage-release systems and distributed on-site WWC alternatives. T...
Sun, Shichang; Bao, Zhiyuan; Sun, Dezhi
2015-03-01
Given the inexorable increase in global wastewater treatment, increasing amounts of nitrous oxide are expected to be emitted from wastewater treatment plants and released to the atmosphere. It has become imperative to study the emission and control of nitrous oxide in the various wastewater treatment processes currently in use. In the present investigation, the emission characteristics and the factors affecting the release of nitrous oxide were studied via full- and pilot-scale experiments in anoxic-oxic, sequencing batch reactor and oxidation ditch processes. We propose an optimal treatment process and relative strategy for nitrous oxide reduction. Our results show that both the bio-nitrifying and bio-denitrifying treatment units in wastewater treatment plants are the predominant sites for nitrous oxide production in each process, while the aerated treatment units are the critical sources for nitrous oxide emission. Compared with the emission of nitrous oxide from the anoxic-oxic (1.37% of N-influent) and sequencing batch reactor (2.69% of N-influent) processes, much less nitrous oxide (0.25% of N-influent) is emitted from the oxidation ditch process, which we determined as the optimal wastewater treatment process for nitrous oxide reduction, given the current technologies. Nitrous oxide emissions differed with various operating parameters. Controlling the dissolved oxygen concentration at a proper level during nitrification and denitrification and enhancing the utilization rate of organic carbon in the influent for denitrification are the two critical methods for nitrous oxide reduction in the various processes considered.
The optimal dynamic immunization under a controlled heterogeneous node-based SIRS model
NASA Astrophysics Data System (ADS)
Yang, Lu-Xing; Draief, Moez; Yang, Xiaofan
2016-05-01
Dynamic immunizations, under which the state of the propagation network of electronic viruses can be changed by adjusting the control measures, are regarded as an alternative to static immunizations. This paper addresses the optimal dynamical immunization under the widely accepted SIRS assumption. First, based on a controlled heterogeneous node-based SIRS model, an optimal control problem capturing the optimal dynamical immunization is formulated. Second, the existence of an optimal dynamical immunization scheme is shown, and the corresponding optimality system is derived. Next, some numerical examples are given to show that an optimal immunization strategy can be worked out by numerically solving the optimality system, from which it is found that the network topology has a complex impact on the optimal immunization strategy. Finally, the difference between a payoff and the minimum payoff is estimated in terms of the deviation of the corresponding immunization strategy from the optimal immunization strategy. The proposed optimal immunization scheme is justified, because it can achieve a low level of infections at a low cost.
Optimizing separate phase light hydrocarbon recovery from contaminated unconfined aquifers
NASA Astrophysics Data System (ADS)
Cooper, Grant S.; Peralta, Richard C.; Kaluarachchi, Jagath J.
A modeling approach is presented that optimizes separate phase recovery of light non-aqueous phase liquids (LNAPL) for a single dual-extraction well in a homogeneous, isotropic unconfined aquifer. A simulation/regression/optimization (S/R/O) model is developed to predict, analyze, and optimize the oil recovery process. The approach combines detailed simulation, nonlinear regression, and optimization. The S/R/O model utilizes nonlinear regression equations describing system response to time-varying water pumping and oil skimming. Regression equations are developed for residual oil volume and free oil volume. The S/R/O model determines optimized time-varying (stepwise) pumping rates which minimize residual oil volume and maximize free oil recovery while causing free oil volume to decrease a specified amount. This S/R/O modeling approach implicitly immobilizes the free product plume by reversing the water table gradient while achieving containment. Application to a simple representative problem illustrates the S/R/O model utility for problem analysis and remediation design. When compared with the best steady pumping strategies, the optimal stepwise pumping strategy improves free oil recovery by 11.5% and reduces the amount of residual oil left in the system due to pumping by 15%. The S/R/O model approach offers promise for enhancing the design of free phase LNAPL recovery systems and to help in making cost-effective operation and management decisions for hydrogeologists, engineers, and regulators.
2012-09-30
influences on TC structure evolve up to landfall or extratropical transition. In particular, winds derived from geostationary satellites have been shown... extratropical transition, it is clear that a dedicated research effort is needed to optimize the satellite data processing strategies, assimilation...and applications to better understand the behavior of the near- storm environmental flow fields during these evolutionary TC stages. To our knowledge
2011-09-30
influences on TC structure evolve up to landfall or extratropical transition. In particular, winds derived from geostationary satellites have been...and extratropical transition, it is clear that a dedicated research effort is needed to optimize the satellite data processing strategies...assimilation, and applications to better understand the behavior of the near- storm environmental flow fields during these evolutionary TC stages. To our
Systems metabolic engineering strategies for the production of amino acids.
Ma, Qian; Zhang, Quanwei; Xu, Qingyang; Zhang, Chenglin; Li, Yanjun; Fan, Xiaoguang; Xie, Xixian; Chen, Ning
2017-06-01
Systems metabolic engineering is a multidisciplinary area that integrates systems biology, synthetic biology and evolutionary engineering. It is an efficient approach for strain improvement and process optimization, and has been successfully applied in the microbial production of various chemicals including amino acids. In this review, systems metabolic engineering strategies including pathway-focused approaches, systems biology-based approaches, evolutionary approaches and their applications in two major amino acid producing microorganisms: Corynebacterium glutamicum and Escherichia coli, are summarized.
Informing the Uninformed: Optimizing the Consent Message Using a Fractional Factorial Design
Tait, Alan R.; Voepel-Lewis, Terri; Nair, Vijayan N.; Narisetty, Naveen N.; Fagerlin, Angela
2013-01-01
Objective Research information should be presented in a manner that promotes understanding. However, many parents and research subjects have difficulty understanding and making informed decisions. This study was designed to examine the effect of different communication strategies on parental understanding of research information. Participants 640 parents of children scheduled for elective surgery Design Observational study using a fractional factorial design Setting Large tertiary care children's hospital Interventions Parents were randomized to receive information about a hypothetical pain trial presented in one of 16 consent documents containing different combinations of 5 selected communication strategies (i.e., length, readability, processability [formatting], graphical display, and supplemental verbal disclosure). Main outcome measures Parents were interviewed to determine their understanding of the study elements (e.g., protocol, alternatives etc.) and their gist (main point) and verbatim (actual) understanding of the risks and benefits. Results Main effects for understanding were found for processability, readability, message length, use of graphics, and verbal discussion. Consent documents with high processability, 8th grade reading level, and graphics resulted in significantly greater gist and verbatim understanding compared with forms without these attributes (mean difference, 95% CI = 0.57, 0.26–0.88, correct responses out of 7 and 0.54, 0.20–0.88 correct responses out of 4 for gist and verbatim, respectively). Conclusions Results identified several communication strategy combinations that improved parents' understanding of research information. Adoption of these active strategies by investigators, clinicians, IRBs, and study sponsors represents a simple, practical, and inexpensive means to optimize the consent message and enhance parental, participant, and patient understanding. PMID:23700028
Cai, Y L; Zhang, S X; Yang, P C; Lin, Y
2016-06-01
Through cost-benefit analysis (CBA), cost-effectiveness analysis (CEA) and quantitative optimization analysis to understand the economic benefit and outcomes of strategy regarding preventing mother-to-child transmission (PMTCT) on hepatitis B virus. Based on the principle of Hepatitis B immunization decision analytic-Markov model, strategies on PMTCT and universal vaccination were compared. Related parameters of Shenzhen were introduced to the model, a birth cohort was set up as the study population in 2013. The net present value (NPV), benefit-cost ratio (BCR), incremental cost-effectiveness ratio (ICER) were calculated and the differences between CBA and CEA were compared. A decision tree was built as the decision analysis model for hepatitis B immunization. Three kinds of Markov models were used to simulate the outcomes after the implementation of vaccination program. The PMTCT strategy of Shenzhen showed a net-gain as 38 097.51 Yuan/per person in 2013, with BCR as 14.37. The universal vaccination strategy showed a net-gain as 37 083.03 Yuan/per person, with BCR as 12.07. Data showed that the PMTCT strategy was better than the universal vaccination one and would end with gaining more economic benefit. When comparing with the universal vaccination program, the PMTCT strategy would save 85 100.00 Yuan more on QALY gains for every person. The PMTCT strategy seemed more cost-effective compared with the one under universal vaccination program. In the CBA and CEA hepatitis B immunization programs, the immunization coverage rate and costs of hepatitis B related diseases were the most important influencing factors. Outcomes of joint-changes of all the parameters in CEA showed that PMTCT strategy was a more cost-effective. The PMTCT strategy gained more economic benefit and effects on health. However, the cost of PMTCT strategy was more than the universal vaccination program, thus it is important to pay attention to the process of PMTCT strategy and the universal vaccination program. CBA seemed suitable for strategy optimization while CEA was better for strategy evaluation. Hopefully, programs as combination of the above said two methods would facilitate the process of economic evaluation.
A model of interaction between anticorruption authority and corruption groups
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neverova, Elena G.; Malafeyef, Oleg A.
The paper provides a model of interaction between anticorruption unit and corruption groups. The main policy functions of the anticorruption unit involve reducing corrupt practices in some entities through an optimal approach to resource allocation and effective anticorruption policy. We develop a model based on Markov decision-making process and use Howard’s policy-improvement algorithm for solving an optimal decision strategy. We examine the assumption that corruption groups retaliate against the anticorruption authority to protect themselves. This model was implemented through stochastic game.
Rejuvenating Strategies for Stem Cell-based Therapies in Aging
Neves, Joana; Sousa-Victor, Pedro; Jasper, Heinrich
2017-01-01
SUMMARY Recent advances in our understanding of tissue regeneration and the development of efficient approaches to induce and differentiate pluripotent stem cells for cell replacement therapies promise exciting avenues for treating degenerative age-related diseases. However, clinical studies and insights from model organisms have identified major roadblocks that normal aging processes impose on tissue regeneration. These new insights suggest that specific targeting of environmental niche components, including growth factors, ECM and immune cells, and intrinsic stem cell properties that are affected by aging will be critical for development of new strategies to improve stem cell function and optimize tissue repair processes. PMID:28157498
Optimal Hotspots of Dynamic Surfaced-Enhanced Raman Spectroscopy for Drugs Quantitative Detection.
Yan, Xiunan; Li, Pan; Zhou, Binbin; Tang, Xianghu; Li, Xiaoyun; Weng, Shizhuang; Yang, Liangbao; Liu, Jinhuai
2017-05-02
Surface-enhanced Raman spectroscopy (SERS) as a powerful qualitative analysis method has been widely applied in many fields. However, SERS for quantitative analysis still suffers from several challenges partially because of the absence of stable and credible analytical strategy. Here, we demonstrate that the optimal hotspots created from dynamic surfaced-enhanced Raman spectroscopy (D-SERS) can be used for quantitative SERS measurements. In situ small-angle X-ray scattering was carried out to in situ real-time monitor the formation of the optimal hotspots, where the optimal hotspots with the most efficient hotspots were generated during the monodisperse Au-sol evaporating process. Importantly, the natural evaporation of Au-sol avoids the nanoparticles instability of salt-induced, and formation of ordered three-dimensional hotspots allows SERS detection with excellent reproducibility. Considering SERS signal variability in the D-SERS process, 4-mercaptopyridine (4-mpy) acted as internal standard to validly correct and improve stability as well as reduce fluctuation of signals. The strongest SERS spectra at the optimal hotspots of D-SERS have been extracted to statistics analysis. By using the SERS signal of 4-mpy as a stable internal calibration standard, the relative SERS intensity of target molecules demonstrated a linear response versus the negative logarithm of concentrations at the point of strongest SERS signals, which illustrates the great potential for quantitative analysis. The public drugs 3,4-methylenedioxymethamphetamine and α-methyltryptamine hydrochloride obtained precise analysis with internal standard D-SERS strategy. As a consequence, one has reason to believe our approach is promising to challenge quantitative problems in conventional SERS analysis.
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
Topology-dependent density optima for efficient simultaneous network exploration
NASA Astrophysics Data System (ADS)
Wilson, Daniel B.; Baker, Ruth E.; Woodhouse, Francis G.
2018-06-01
A random search process in a networked environment is governed by the time it takes to visit every node, termed the cover time. Often, a networked process does not proceed in isolation but competes with many instances of itself within the same environment. A key unanswered question is how to optimize this process: How many concurrent searchers can a topology support before the benefits of parallelism are outweighed by competition for space? Here, we introduce the searcher-averaged parallel cover time (APCT) to quantify these economies of scale. We show that the APCT of the networked symmetric exclusion process is optimized at a searcher density that is well predicted by the spectral gap. Furthermore, we find that nonequilibrium processes, realized through the addition of bias, can support significantly increased density optima. Our results suggest alternative hybrid strategies of serial and parallel search for efficient information gathering in social interaction and biological transport networks.
Efficient extraction strategies of tea (Camellia sinensis) biomolecules.
Banerjee, Satarupa; Chatterjee, Jyotirmoy
2015-06-01
Tea is a popular daily beverage worldwide. Modulation and modifications of its basic components like catechins, alkaloids, proteins and carbohydrate during fermentation or extraction process changes organoleptic, gustatory and medicinal properties of tea. Through these processes increase or decrease in yield of desired components are evident. Considering the varied impacts of parameters in tea production, storage and processes that affect the yield, extraction of tea biomolecules at optimized condition is thought to be challenging. Implementation of technological advancements in green chemistry approaches can minimize the deviation retaining maximum qualitative properties in environment friendly way. Existed extraction processes with optimization parameters of tea have been discussed in this paper including its prospects and limitations. This exhaustive review of various extraction parameters, decaffeination process of tea and large scale cost effective isolation of tea components with aid of modern technology can assist people to choose extraction condition of tea according to necessity.
Study on loading coefficient in steam explosion process of corn stalk.
Sui, Wenjie; Chen, Hongzhang
2015-03-01
The object of this work was to evaluate the effect of loading coefficient on steam explosion process and efficacy of corn stalk. Loading coefficient's relation with loading pattern and material property was first revealed, then its effect on transfer process and pretreatment efficacy of steam explosion was assessed by established models and enzymatic hydrolysis tests, respectively, in order to propose its optimization strategy for improving the process economy. Results showed that loading coefficient was mainly determined by loading pattern, moisture content and chip size. Both compact loading pattern and low moisture content improved the energy efficiency of steam explosion pretreatment and overall sugar yield of pretreated materials, indicating that they are desirable to improve the process economy. Pretreatment of small chip size showed opposite effects in pretreatment energy efficiency and enzymatic hydrolysis performance, thus its optimization should be balanced in investigated aspects according to further techno-economical evaluation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Implications of holistic face processing in autism and schizophrenia
Watson, Tamara L.
2013-01-01
People with autism and schizophrenia have been shown to have a local bias in sensory processing and face recognition difficulties. A global or holistic processing strategy is known to be important when recognizing faces. Studies investigating face recognition in these populations are reviewed and show that holistic processing is employed despite lower overall performance in the tasks used. This implies that holistic processing is necessary but not sufficient for optimal face recognition and new avenues for research into face recognition based on network models of autism and schizophrenia are proposed. PMID:23847581
Efficient receiver tuning using differential evolution strategies
NASA Astrophysics Data System (ADS)
Wheeler, Caleb H.; Toland, Trevor G.
2016-08-01
Differential evolution (DE) is a powerful and computationally inexpensive optimization strategy that can be used to search an entire parameter space or to converge quickly on a solution. The Kilopixel Array Pathfinder Project (KAPPa) is a heterodyne receiver system delivering 5 GHz of instantaneous bandwidth in the tuning range of 645-695 GHz. The fully automated KAPPa receiver test system finds optimal receiver tuning using performance feedback and DE. We present an adaptation of DE for use in rapid receiver characterization. The KAPPa DE algorithm is written in Python 2.7 and is fully integrated with the KAPPa instrument control, data processing, and visualization code. KAPPa develops the technologies needed to realize heterodyne focal plane arrays containing 1000 pixels. Finding optimal receiver tuning by investigating large parameter spaces is one of many challenges facing the characterization phase of KAPPa. This is a difficult task via by-hand techniques. Characterizing or tuning in an automated fashion without need for human intervention is desirable for future large scale arrays. While many optimization strategies exist, DE is ideal for time and performance constraints because it can be set to converge to a solution rapidly with minimal computational overhead. We discuss how DE is utilized in the KAPPa system and discuss its performance and look toward the future of 1000 pixel array receivers and consider how the KAPPa DE system might be applied.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Qifang; Wang, Fei; Hodge, Bri-Mathias
A real-time price (RTP)-based automatic demand response (ADR) strategy for PV-assisted electric vehicle (EV) Charging Station (PVCS) without vehicle to grid is proposed. The charging process is modeled as a dynamic linear program instead of the normal day-ahead and real-time regulation strategy, to capture the advantages of both global and real-time optimization. Different from conventional price forecasting algorithms, a dynamic price vector formation model is proposed based on a clustering algorithm to form an RTP vector for a particular day. A dynamic feasible energy demand region (DFEDR) model considering grid voltage profiles is designed to calculate the lower and uppermore » bounds. A deduction method is proposed to deal with the unknown information of future intervals, such as the actual stochastic arrival and departure times of EVs, which make the DFEDR model suitable for global optimization. Finally, both the comparative cases articulate the advantages of the developed methods and the validity in reducing electricity costs, mitigating peak charging demand, and improving PV self-consumption of the proposed strategy are verified through simulation scenarios.« less
Design and Printing Strategies in 3D Bioprinting of Cell-Hydrogels: A Review.
Lee, Jia Min; Yeong, Wai Yee
2016-11-01
Bioprinting is an emerging technology that allows the assembling of both living and non-living biological materials into an ideal complex layout for further tissue maturation. Bioprinting aims to produce engineered tissue or organ in a mechanized, organized, and optimized manner. Various biomaterials and techniques have been utilized to bioprint biological constructs in different shapes, sizes and resolutions. There is a need to systematically discuss and analyze the reported strategies employed to fabricate these constructs. We identified and discussed important design factors in bioprinting, namely shape and resolution, material heterogeneity, and cellular-material remodeling dynamism. Each design factors are represented by the corresponding process capabilities and printing parameters. The process-design map will inspire future biomaterials research in these aspects. Design considerations such as data processing, bio-ink formulation and process selection are discussed. Various printing and crosslinking strategies, with relevant applications, are also systematically reviewed. We categorized them into 5 general bioprinting strategies, including direct bioprinting, in-process crosslinking, post-process crosslinking, indirect bioprinting and hybrid bioprinting. The opportunities and outlook in 3D bioprinting are highlighted. This review article will serve as a framework to advance computer-aided design in bioprinting technologies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Bai, Danyu; Zhang, Zhihai
2014-08-01
This article investigates the open-shop scheduling problem with the optimal criterion of minimising the sum of quadratic completion times. For this NP-hard problem, the asymptotic optimality of the shortest processing time block (SPTB) heuristic is proven in the sense of limit. Moreover, three different improvements, namely, the job-insert scheme, tabu search and genetic algorithm, are introduced to enhance the quality of the original solution generated by the SPTB heuristic. At the end of the article, a series of numerical experiments demonstrate the convergence of the heuristic, the performance of the improvements and the effectiveness of the quadratic objective.
Peng, Jiansheng; Meng, Fanmei; Ai, Yuncan
2013-06-01
The artificial neural network (ANN) and genetic algorithm (GA) were combined to optimize the fermentation process for enhancing production of marine bacteriocin 1701 in a 5-L-stirred-tank. Fermentation time, pH value, dissolved oxygen level, temperature and turbidity were used to construct a "5-10-1" ANN topology to identify the nonlinear relationship between fermentation parameters and the antibiotic effects (shown as in inhibition diameters) of bacteriocin 1701. The predicted values by the trained ANN model were coincided with the observed ones (the coefficient of R(2) was greater than 0.95). As the fermentation time was brought in as one of the ANN input nodes, fermentation parameters could be optimized by stages through GA, and an optimal fermentation process control trajectory was created. The production of marine bacteriocin 1701 was significantly improved by 26% under the guidance of fermentation control trajectory that was optimized by using of combined ANN-GA method. Copyright © 2013 Elsevier Ltd. All rights reserved.
Chemistry challenges in lead optimization: silicon isosteres in drug discovery.
Showell, Graham A; Mills, John S
2003-06-15
During the lead optimization phase of drug discovery projects, the factors contributing to subsequent failure might include poor portfolio decision-making and a sub-optimal intellectual property (IP) position. The pharmaceutical industry has an ongoing need for new, safe medicines with a genuine biomedical benefit, a clean IP position and commercial viability. Inherent drug-like properties and chemical tractability are also essential for the smooth development of such agents. The introduction of bioisosteres, to improve the properties of a molecule and obtain new classes of compounds without prior art in the patent literature, is a key strategy used by medicinal chemists during the lead optimization process. Sila-substitution (C/Si exchange) of existing drugs is an approach to search for new drug-like candidates that have beneficial biological properties and a clear IP position. Some of the fundamental differences between carbon and silicon can lead to marked alterations in the physicochemical and biological properties of the silicon-containing analogues and the resulting benefits can be exploited in the drug design process.
Optimal phase estimation with arbitrary a priori knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demkowicz-Dobrzanski, Rafal
2011-06-15
The optimal-phase estimation strategy is derived when partial a priori knowledge on the estimated phase is available. The solution is found with the help of the most famous result from the entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attentionmore » is paid to a natural a priori probability distribution arising from a diffusion process.« less
Optimization and real-time control for laser treatment of heterogeneous soft tissues.
Feng, Yusheng; Fuentes, David; Hawkins, Andrea; Bass, Jon M; Rylander, Marissa Nichole
2009-01-01
Predicting the outcome of thermotherapies in cancer treatment requires an accurate characterization of the bioheat transfer processes in soft tissues. Due to the biological and structural complexity of tumor (soft tissue) composition and vasculature, it is often very difficult to obtain reliable tissue properties that is one of the key factors for the accurate treatment outcome prediction. Efficient algorithms employing in vivo thermal measurements to determine heterogeneous thermal tissues properties in conjunction with a detailed sensitivity analysis can produce essential information for model development and optimal control. The goals of this paper are to present a general formulation of the bioheat transfer equation for heterogeneous soft tissues, review models and algorithms developed for cell damage, heat shock proteins, and soft tissues with nanoparticle inclusion, and demonstrate an overall computational strategy for developing a laser treatment framework with the ability to perform real-time robust calibrations and optimal control. This computational strategy can be applied to other thermotherapies using the heat source such as radio frequency or high intensity focused ultrasound.
Optimal trading from minimizing the period of bankruptcy risk
NASA Astrophysics Data System (ADS)
Liehr, S.; Pawelzik, K.
2001-04-01
Assuming that financial markets behave similar to random walk processes we derive a trading strategy with variable investment which is based on the equivalence of the period of bankruptcy risk and the risk to profit ratio. We define a state dependent predictability measure which can be attributed to the deterministic and stochastic components of the price dynamics. The influence of predictability variations and especially of short term inefficiency structures on the optimal amount of investment is analyzed in the given context and a method for adaptation of a trading system to the proposed objective function is presented. Finally we show the performance of our trading strategy on the DAX and S&P 500 as examples for real world data using different types of prediction models in comparison.
NASA Astrophysics Data System (ADS)
Li, W.; Shao, H.
2017-12-01
For geospatial cyberinfrastructure enabled web services, the ability of rapidly transmitting and sharing spatial data over the Internet plays a critical role to meet the demands of real-time change detection, response and decision-making. Especially for the vector datasets which serve as irreplaceable and concrete material in data-driven geospatial applications, their rich geometry and property information facilitates the development of interactive, efficient and intelligent data analysis and visualization applications. However, the big-data issues of vector datasets have hindered their wide adoption in web services. In this research, we propose a comprehensive optimization strategy to enhance the performance of vector data transmitting and processing. This strategy combines: 1) pre- and on-the-fly generalization, which automatically determines proper simplification level through the introduction of appropriate distance tolerance (ADT) to meet various visualization requirements, and at the same time speed up simplification efficiency; 2) a progressive attribute transmission method to reduce data size and therefore the service response time; 3) compressed data transmission and dynamic adoption of a compression method to maximize the service efficiency under different computing and network environments. A cyberinfrastructure web portal was developed for implementing the proposed technologies. After applying our optimization strategies, substantial performance enhancement is achieved. We expect this work to widen the use of web service providing vector data to support real-time spatial feature sharing, visual analytics and decision-making.
Optimal strategy analysis based on robust predictive control for inventory system with random demand
NASA Astrophysics Data System (ADS)
Saputra, Aditya; Widowati, Sutrisno
2017-12-01
In this paper, the optimal strategy for a single product single supplier inventory system with random demand is analyzed by using robust predictive control with additive random parameter. We formulate the dynamical system of this system as a linear state space with additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use robust predictive control approach which gives the optimal strategy i.e. the optimal product volume that should be purchased from the supplier for each time period so that the expected cost is minimal. A numerical simulation is performed with some generated random inventory data. We simulate in MATLAB software where the inventory level must be controlled as close as possible to a set point decided by us. From the results, robust predictive control model provides the optimal strategy i.e. the optimal product volume that should be purchased and the inventory level was followed the given set point.
Fragment-based design of kinase inhibitors: a practical guide.
Erickson, Jon A
2015-01-01
Fragment-based drug design has become an important strategy for drug design and development over the last decade. It has been used with particular success in the development of kinase inhibitors, which are one of the most widely explored classes of drug targets today. The application of fragment-based methods to discovering and optimizing kinase inhibitors can be a complicated and daunting task; however, a general process has emerged that has been highly fruitful. Here a practical outline of the fragment process used in kinase inhibitor design and development is laid out with specific examples. A guide to the overall process from initial discovery through fragment screening, including the difficulties in detection, to the computational methods available for use in optimization of the discovered fragments is reported.
Vatankhah, Hamed; Zamindar, Nafiseh; Shahedi Baghekhandan, Mohammad
2015-10-01
A mixed computational strategy was used to simulate and optimize the thermal processing of Haleem, an ancient eastern food, in semi-rigid aluminum containers. Average temperature values of the experiments showed no significant difference (α = 0.05) in contrast to the predicted temperatures at the same positions. According to the model, the slowest heating zone was located in geometrical center of the container. The container geometrical center F0 was estimated to be 23.8 min. A 19 min processing time interval decrease in holding time of the treatment was estimated to optimize the heating operation since the preferred F0 of some starch or meat based fluid foods is about 4.8-7.5 min.
Fricke, Jens; Pohlmann, Kristof; Jonescheit, Nils A; Ellert, Andree; Joksch, Burkhard; Luttmann, Reiner
2013-06-01
The identification of optimal expression conditions for state-of-the-art production of pharmaceutical proteins is a very time-consuming and expensive process. In this report a method for rapid and reproducible optimization of protein expression in an in-house designed small-scale BIOSTAT® multi-bioreactor plant is described. A newly developed BioPAT® MFCS/win Design of Experiments (DoE) module (Sartorius Stedim Systems, Germany) connects the process control system MFCS/win and the DoE software MODDE® (Umetrics AB, Sweden) and enables therefore the implementation of fully automated optimization procedures. As a proof of concept, a commercial Pichia pastoris strain KM71H has been transformed for the expression of potential malaria vaccines. This approach has allowed a doubling of intact protein secretion productivity due to the DoE optimization procedure compared to initial cultivation results. In a next step, robustness regarding the sensitivity to process parameter variability has been proven around the determined optimum. Thereby, a pharmaceutical production process that is significantly improved within seven 24-hour cultivation cycles was established. Specifically, regarding the regulatory demands pointed out in the process analytical technology (PAT) initiative of the United States Food and Drug Administration (FDA), the combination of a highly instrumented, fully automated multi-bioreactor platform with proper cultivation strategies and extended DoE software solutions opens up promising benefits and opportunities for pharmaceutical protein production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Dey, Pinaki; Rangarajan, Vivek
2017-10-01
Experimental investigations were carried out for Cupriavidus necator (MTCC 1472)-based improved production of poly-3 hydroxy butyrate (PHB) through induced nitrogen limiting fed-batch cultivation strategies. Initially Plackett-Burman design and response surface methodology were implemented to optimize most influencing process parameters. With optimized process parameter values, continuous feeding strategies ware applied in a 5-l fermenter with table sugar concentration of 100 g/l, nitrogen concentration of 0.12 g/l for fed-batch fermentation with varying dilution rates of 0.02 and 0.046 1/h. To get enriched production of PHB, concentration of the sugar was further increased to 150 and 200 g/l in feeding. Maximum concentrations of PHB achieved were 22.35 and 23.07 g/l at those dilution rates when sugar concentration maintains at 200 g/l in feeding. At maximum concentration of PHB (23.07 g/l), productivity of 0.58 g/l h was achieved with maximum PHB accumulation efficiency up to 64% of the dry weight of biomass. High purity of PHB, close to medical grade was achieved after surfactant hypochlorite extraction method, and it was further confirmed by SEM, EDX, and XRD studies.
Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.
Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan
2016-08-01
In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software have been released, notably the GelApp mobile app. However, the band detection accuracy is limited due to a band detection algorithm that cannot adapt to the variations in input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy by 55.9 ± 2.0% for protein polyacrylamide gels, and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof-of-concept in demonstrating MCTS-UCB as a strategy to optimize general image segmentation. The improved version of GelApp-GelApp 2.0-is freely available on both Google Play Store (for Android platform), and Apple App Store (for iOS platform). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Technology-design-manufacturing co-optimization for advanced mobile SoCs
NASA Astrophysics Data System (ADS)
Yang, Da; Gan, Chock; Chidambaram, P. R.; Nallapadi, Giri; Zhu, John; Song, S. C.; Xu, Jeff; Yeap, Geoffrey
2014-03-01
How to maintain the Moore's Law scaling beyond the 193 immersion resolution limit is the key question semiconductor industry needs to answer in the near future. Process complexity will undoubtfully increase for 14nm node and beyond, which brings both challenges and opportunities for technology development. A vertically integrated design-technologymanufacturing co-optimization flow is desired to better address the complicated issues new process changes bring. In recent years smart mobile wireless devices have been the fastest growing consumer electronics market. Advanced mobile devices such as smartphones are complex systems with the overriding objective of providing the best userexperience value by harnessing all the technology innovations. Most critical system drivers are better system performance/power efficiency, cost effectiveness, and smaller form factors, which, in turns, drive the need of system design and solution with More-than-Moore innovations. Mobile system-on-chips (SoCs) has become the leading driver for semiconductor technology definition and manufacturing. Here we highlight how the co-optimization strategy influenced architecture, device/circuit, process technology and package, in the face of growing process cost/complexity and variability as well as design rule restrictions.
USDA-ARS?s Scientific Manuscript database
Starch negatively affects the quantity and quality of raw sugar produced. Starch reduces crystallization and centrifugation rates, occludes into sucrose crystals, and impedes refinery decolorization processes. The problem of starch in sugarcane juice has been exacerbated by the widespread adoption...
Activating Generative Learning in Organizations through Optimizing Relational Strategies
ERIC Educational Resources Information Center
Park, Mary Kay
2010-01-01
Using a grounded theory method, this dissertation seeks to discover how relationships impact organizational generative learning. An organization is a socially constructed reality and organizational learning is situated in the process of co-participation. To discover the link between relationships and generative learning this study considers the…
International Management: Creating a More Realistic Global Planning Environment.
ERIC Educational Resources Information Center
Waldron, Darryl G.
2000-01-01
Discusses the need for realistic global planning environments in international business education, introducing a strategic planning model that has teams interacting with teams to strategically analyze a selected multinational company. This dynamic process must result in a single integrated written analysis that specifies an optimal strategy for…
A novel methodology to characterize interfacility transfer strategies in a trauma transfer network.
Gomez, David; Haas, Barbara; Larsen, Kristian; Alali, Aziz S; MacDonald, Russell D; Singh, Jeffrey M; Tien, Homer; Iwashyna, Theodore J; Rubenfeld, Gordon; Nathens, Avery B
2016-10-01
More than half of severely injured patients are initially transported from the scene of injury to nontrauma centers (NTCs), with many requiring subsequent transfer to trauma center (TC) care. Definitive care in the setting of severe injury is time sensitive. However, transferring severely injured patients from an NTC is a complex process often fraught with delays. Selection of the receiving TC and the mode of interfacility transport both strongly influence total transfer time and are highly amenable to quality improvement initiatives. We analyzed transfer strategies, defined as the pairing of a destination and mode of transport (land vs. rotary wing vs. fixed wing), for severely injured adult patients. Existing transfer strategies at each NTC were derived from trauma registry data. Geographic Information Systems network analysis was used to identify the strategy that minimized transfer times the most as well as alternate strategies (+15 or +30 minutes) for each NTC. Transfer network efficiency was characterized based on optimality and stability. We identified 7,702 severely injured adult patients transferred from 146 NTCs to 9 TCs. Nontrauma centers transferred severely injured patients to a median of 3 (interquartile range, 1-4) different TCs and utilized a median of 4 (interquartile range, 2-6) different transfer strategies. After allowing for the use of alternate transfer strategies, 73.1% of severely injured patients were transported using optimal/alternate strategies, and only 40.4% of NTCs transferred more than 90% of patients using an optimal/alternate transfer strategy. Three quarters (75.5%) of transfers occurred between NTCs and their most common receiving TC. More than a quarter of patients with severe traumatic injuries undergoing interfacility transport to a TC in Ontario are consistently transported using a nonoptimal combination of destination and mode of transport. Our novel analytic approach can be easily adapted to different system configurations and provides actionable data that can be provided to NTCs and other stakeholders. Therapeutic study, level IV.
Numerical grid generation in computational field simulations. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soni, B.K.; Thompson, J.F.; Haeuser, J.
1996-12-31
To enhance the CFS technology to its next level of applicability (i.e., to create acceptance of CFS in an integrated product and process development involving multidisciplinary optimization) the basic requirements are: rapid turn-around time, reliable and accurate simulation, affordability and appropriate linkage to other engineering disciplines. In response to this demand, there has been a considerable growth in the grid generation related research activities involving automization, parallel processing, linkage with the CAD-CAM systems, CFS with dynamic motion and moving boundaries, strategies and algorithms associated with multi-block structured, unstructured, hybrid, hexahedral, and Cartesian grids, along with its applicability to various disciplinesmore » including biomedical, semiconductor, geophysical, ocean modeling, and multidisciplinary optimization.« less
Mixed-Strategy Chance Constrained Optimal Control
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.
2013-01-01
This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.
2013-09-30
TC structure evolve up to landfall or extratropical transition. In particular, winds derived from geostationary satellites have been shown to be an... extratropical transition, it is clear that a dedicated research effort is needed to optimize the satellite data processing strategies, assimilation, and...applications to better understand the behavior of the near- storm environmental flow fields during these evolutionary TC stages. To our knowledge, this
Dither and drizzle strategies for Wide Field Camera 3
NASA Astrophysics Data System (ADS)
Mutchler, Max
2010-07-01
Hubble's 20th anniversary observation of Herbig-Haro object HH 901 in the Carina Nebula is used to illustrate observing strategies and corresponding data reduction methods for the new Wide Field Camera 3 (WFC3), which was installed during Servicing Mission 4 in May 2009. The key issues for obtaining optimal results with offline Multidrizzle processing of WFC3 data sets are presented. These pragmatic instructions in "cookbook" format are designed to help new WFC3 users quickly obtain good results with similar data sets.
Self-organization, collective decision making and resource exploitation strategies in social insects
NASA Astrophysics Data System (ADS)
Nicolis, S. C.; Dussutour, A.
2008-10-01
Amplifying communications are a ubiquitous characteristic of group-living animals. This work is concerned with their role in the processes of food recruitment and resource exploitation by social insects. The collective choices made by ants faced with different food sources are analyzed using both a mean field description and a stochastic approach. Emphasis is placed on the possibility of optimizing the recruitment and exploitation strategies through an appropriate balance between individual variability, cooperative interactions and environmental constraints.
Palmieri, Roberta; Bonifazi, Giuseppe; Serranti, Silvia
2014-11-01
This study characterizes the composition of plastic frames and printed circuit boards from end-of-life mobile phones. This knowledge may help define an optimal processing strategy for using these items as potential raw materials. Correct handling of such a waste is essential for its further "sustainable" recovery, especially to maximize the extraction of base, rare and precious metals, minimizing the environmental impact of the entire process chain. A combination of electronic and chemical imaging techniques was thus examined, applied and critically evaluated in order to optimize the processing, through the identification and the topological assessment of the materials of interest and their quantitative distribution. To reach this goal, end-of-life mobile phone derived wastes have been systematically characterized adopting both "traditional" (e.g. scanning electronic microscopy combined with microanalysis and Raman spectroscopy) and innovative (e.g. hyperspectral imaging in short wave infrared field) techniques, with reference to frames and printed circuit boards. Results showed as the combination of both the approaches (i.e. traditional and classical) could dramatically improve recycling strategies set up, as well as final products recovery. Copyright © 2014 Elsevier Ltd. All rights reserved.
Games With Estimation of Non-Damage Objectives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canavan, G.H.
1998-09-14
Games against nature illustrate the role of non-damage objectives in producing conflict with uncertain rewards and the role of probing and estimation in reducing that uncertainty and restoring optimal strategies. This note discusses two essential elements of the analysis of crisis stability omitted from current treatments based on first strike stability: the role of an objective that motivates conflicts sufficiently serious to lead to conflicts, and the process of sequential interactions that could cause those conflicts to deepen. Games against nature illustrate role of objectives and uncertainty that are at the core of detailed treatments of crisis stability. These modelsmore » can also illustrate how these games processes can generate and deepen crises and the optimal strategies that might be used to end them. This note discusses two essential elements of the analysis of crisis stability that are omitted from current treatments based on first strike stability: anon-damage objective that motivates conflicts sufficiently serious to lead to conflicts, and the process of sequential tests that could cause those conflicts to deepen. The model used is a game against nature, simplified sufficiently to make the role of each of those elements obvious.« less
Optimal vaccination strategies and rational behaviour in seasonal epidemics.
Doutor, Paulo; Rodrigues, Paula; Soares, Maria do Céu; Chalub, Fabio A C C
2016-12-01
We consider a SIRS model with time dependent transmission rate. We assume time dependent vaccination which confers the same immunity as natural infection. We study two types of vaccination strategies: (i) optimal vaccination, in the sense that it minimizes the effort of vaccination in the set of vaccination strategies for which, for any sufficiently small perturbation of the disease free state, the number of infectious individuals is monotonically decreasing; (ii) Nash-equilibria strategies where all individuals simultaneously minimize the joint risk of vaccination versus the risk of the disease. The former case corresponds to an optimal solution for mandatory vaccinations, while the second corresponds to the equilibrium to be expected if vaccination is fully voluntary. We are able to show the existence of both optimal and Nash strategies in a general setting. In general, these strategies will not be functions but Radon measures. For specific forms of the transmission rate, we provide explicit formulas for the optimal and the Nash vaccination strategies.
Optimal Keno Strategies and the Central Limit Theorem
ERIC Educational Resources Information Center
Johnson, Roger W.
2006-01-01
For the casino game Keno we determine optimal playing strategies. To decide such optimal strategies, both exact (hypergeometric) and approximate probability calculations are used. The approximate calculations are obtained via the Central Limit Theorem and simulation, and an important lesson about the application of the Central Limit Theorem is…
Fast Optimization of LiMgMnOx/La2O3 Catalysts for the Oxidative Coupling of Methane.
Li, Zhinian; He, Lei; Wang, Shenliang; Yi, Wuzhong; Zou, Shihui; Xiao, Liping; Fan, Jie
2017-01-09
The development of efficient catalyst for oxidative coupling of methane (OCM) reaction represents a grand challenge in direct conversion of methane into other useful products. Here, we reported that a newly developed combinatorial approach can be used for ultrafast optimization of La 2 O 3 -based multicomponent metal oxide catalysts in OCM reaction. This new approach integrated inkjet printing assisted synthesis (IJP-A) with multidimensional group testing strategy (m-GT) tactfully takes the place of conventionally high-throughput synthesis-and-screen experiment. Just within a week, 2048 formulated LiMgMnO x -La 2 O 3 catalysts in a 64·8·8·8·8 = 262 144 compositional space were fabricated by IJP-A in a four-round synthesis-and-screen process, and an optimized formulation has been successfully identified through only 4·8 = 32 times of tests via m-GT screening strategy. The screening process identifies the most promising ternary composition region is Li 0-0.48 Mg 0-6.54 Mn 0-0.62 -La 100 O x with an external C 2 yield of 10.87% at 700 °C. The yield of C 2 is two times as high as the pure nano-La 2 O 3 . The good performance of the optimized catalyst formulation has been validated by the manual preparation, which further prove the effectiveness of the new combinatorial methodology in fast discovery of heterogeneous catalyst.
Sin, Wai Jack; Nai, Mui Ling Sharon; Wei, Jun
2017-01-01
As one of the powder bed fusion additive manufacturing technologies, electron beam melting (EBM) is gaining more and more attention due to its near-net-shape production capacity with low residual stress and good mechanical properties. These characteristics also allow EBM built parts to be used as produced without post-processing. However, the as-built rough surface introduces a detrimental influence on the mechanical properties of metallic alloys. Thereafter, understanding the effects of processing parameters on the part’s surface roughness, in turn, becomes critical. This paper has focused on varying the processing parameters of two types of contouring scanning strategies namely, multispot and non-multispot, in EBM. The results suggest that the beam current and speed function are the most significant processing parameters for non-multispot contouring scanning strategy. While for multispot contouring scanning strategy, the number of spots, spot time, and spot overlap have greater effects than focus offset and beam current. The improved surface roughness has been obtained in both contouring scanning strategies. Furthermore, non-multispot contouring scanning strategy gives a lower surface roughness value and poorer geometrical accuracy than the multispot counterpart under the optimized conditions. These findings could be used as a guideline for selecting the contouring type used for specific industrial parts that are built using EBM. PMID:28937638
Bio-mimic optimization strategies in wireless sensor networks: a survey.
Adnan, Md Akhtaruzzaman; Abdur Razzaque, Mohammd; Ahmed, Ishtiaque; Isnin, Ismail Fauzi
2013-12-24
For the past 20 years, many authors have focused their investigations on wireless sensor networks. Various issues related to wireless sensor networks such as energy minimization (optimization), compression schemes, self-organizing network algorithms, routing protocols, quality of service management, security, energy harvesting, etc., have been extensively explored. The three most important issues among these are energy efficiency, quality of service and security management. To get the best possible results in one or more of these issues in wireless sensor networks optimization is necessary. Furthermore, in number of applications (e.g., body area sensor networks, vehicular ad hoc networks) these issues might conflict and require a trade-off amongst them. Due to the high energy consumption and data processing requirements, the use of classical algorithms has historically been disregarded. In this context contemporary researchers started using bio-mimetic strategy-based optimization techniques in the field of wireless sensor networks. These techniques are diverse and involve many different optimization algorithms. As far as we know, most existing works tend to focus only on optimization of one specific issue of the three mentioned above. It is high time that these individual efforts are put into perspective and a more holistic view is taken. In this paper we take a step in that direction by presenting a survey of the literature in the area of wireless sensor network optimization concentrating especially on the three most widely used bio-mimetic algorithms, namely, particle swarm optimization, ant colony optimization and genetic algorithm. In addition, to stimulate new research and development interests in this field, open research issues, challenges and future research directions are highlighted.
Point-based warping with optimized weighting factors of displacement vectors
NASA Astrophysics Data System (ADS)
Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas
2000-06-01
The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of Gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the Gerbil thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark-generator computes corresponding reference points simultaneously within a given number of datasets by Monte-Carlo-techniques. The warping function is a distance weighted exponential function with a landmark- specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap-index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique, optimizing the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
Strategies to optimize the use of marginal donors in liver transplantation
Pezzati, Daniele; Ghinolfi, Davide; De Simone, Paolo; Balzano, Emanuele; Filipponi, Franco
2015-01-01
Liver transplantation is the treatment of choice for end stage liver disease, but availability of liver grafts is still the main limitation to its wider use. Extended criteria donors (ECD) are considered not ideal for several reasons but their use has dramatically grown in the last decades in order to augment the donor liver pool. Due to improvement in surgical and medical strategies, results using grafts from these donors have become acceptable in terms of survival and complications; nevertheless a big debate still exists regarding their selection, discharge criteria and allocation policies. Many studies analyzed the use of these grafts from many points of view producing different or contradictory results so that accepted guidelines do not exist and the use of these grafts is still related to non-standardized policies changing from center to center. The aim of this review is to analyze every step of the donation-transplantation process emphasizing all those strategies, both clinical and experimental, that can optimize results using ECD. PMID:26609341
Shukla, Chinmay A
2017-01-01
The implementation of automation in the multistep flow synthesis is essential for transforming laboratory-scale chemistry into a reliable industrial process. In this review, we briefly introduce the role of automation based on its application in synthesis viz. auto sampling and inline monitoring, optimization and process control. Subsequently, we have critically reviewed a few multistep flow synthesis and suggested a possible control strategy to be implemented so that it helps to reliably transfer the laboratory-scale synthesis strategy to a pilot scale at its optimum conditions. Due to the vast literature in multistep synthesis, we have classified the literature and have identified the case studies based on few criteria viz. type of reaction, heating methods, processes involving in-line separation units, telescopic synthesis, processes involving in-line quenching and process with the smallest time scale of operation. This classification will cover the broader range in the multistep synthesis literature. PMID:28684977
Active learning methods for interactive image retrieval.
Gosselin, Philippe Henri; Cord, Matthieu
2008-07-01
Active learning methods have been considered with increased interest in the statistical learning community. Initially developed within a classification framework, a lot of extensions are now being proposed to handle multimedia applications. This paper provides algorithms within a statistical framework to extend active learning for online content-based image retrieval (CBIR). The classification framework is presented with experiments to compare several powerful classification techniques in this information retrieval context. Focusing on interactive methods, active learning strategy is then described. The limitations of this approach for CBIR are emphasized before presenting our new active selection process RETIN. First, as any active method is sensitive to the boundary estimation between classes, the RETIN strategy carries out a boundary correction to make the retrieval process more robust. Second, the criterion of generalization error to optimize the active learning selection is modified to better represent the CBIR objective of database ranking. Third, a batch processing of images is proposed. Our strategy leads to a fast and efficient active learning scheme to retrieve sets of online images (query concept). Experiments on large databases show that the RETIN method performs well in comparison to several other active strategies.
Strategy of restraining ripple error on surface for optical fabrication.
Wang, Tan; Cheng, Haobo; Feng, Yunpeng; Tam, Honyuen
2014-09-10
The influence from the ripple error to the high imaging quality is effectively reduced by restraining the ripple height. A method based on the process parameters and the surface error distribution is designed to suppress the ripple height in this paper. The generating mechanism of the ripple error is analyzed by polishing theory with uniform removal character. The relation between the processing parameters (removal functions, pitch of path, and dwell time) and the ripple error is discussed through simulations. With these, the strategy for diminishing the error is presented. A final process is designed and demonstrated on K9 work-pieces using the optimizing strategy with magnetorheological jet polishing. The form error on the surface is decreased from 0.216λ PV (λ=632.8 nm) and 0.039λ RMS to 0.03λ PV and 0.004λ RMS. And the ripple error is restrained well at the same time, because the ripple height is less than 6 nm on the final surface. Results indicate that these strategies are suitable for high-precision optical manufacturing.
NASA Astrophysics Data System (ADS)
Remund, Stefan M.; Jaeggi, Beat; Kramer, Thorsten; Neuenschwander, Beat
2017-03-01
The resulting surface roughness and waviness after processing with ultra-short pulsed laser radiation depend on the laser parameters as well as on the machining strategy and the scanning system. However the results depend on the material and its initial surface quality and finishing as well. The improvement of surface finishing represents effort and produces additional costs. For industrial applications it is important to reduce the preparation of a workpiece for laser micro-machining to optimize quality and reduce costs. The effects of the ablation process and the influence of the machining strategy and scanning system onto the surface roughness and waviness can be differenced due to their separate manner. By using the optimal laser parameters on an initially perfect surface, the ablation process mainly increases the roughness to a certain value for most metallic materials. However, imperfections in the scanning system causing a slight variation in the scanning speed lead to a raise of the waviness on the sample surface. For a basic understanding of the influence of grinding marks, the sample surfaces were initially furnished with regular grooves of different depths and spatial frequencies to gain a homogenous and well-defined original surface. On these surfaces the effect of different beam waists and machining strategy are investigated and the results are compared with a simulation of the process. Furthermore the behaviors of common surface finishes used in industrial applications for laser micro-machining are studied and the relation onto the resulting surface roughness and waviness is presented.
Wang, Ruifei; Unrean, Pornkamol; Franzén, Carl Johan
2016-01-01
High content of water-insoluble solids (WIS) is required for simultaneous saccharification and co-fermentation (SSCF) operations to reach the high ethanol concentrations that meet the techno-economic requirements of industrial-scale production. The fundamental challenges of such processes are related to the high viscosity and inhibitor contents of the medium. Poor mass transfer and inhibition of the yeast lead to decreased ethanol yield, titre and productivity. In the present work, high-solid SSCF of pre-treated wheat straw was carried out by multi-feed SSCF which is a fed-batch process with additions of substrate, enzymes and cells, integrated with yeast propagation and adaptation on the pre-treatment liquor. The combined feeding strategies were systematically compared and optimized using experiments and simulations. For high-solid SSCF process of SO2-catalyzed steam pre-treated wheat straw, the boosted solubilisation of WIS achieved by having all enzyme loaded at the beginning of the process is crucial for increased rates of both enzymatic hydrolysis and SSCF. A kinetic model was adapted to simulate the release of sugars during separate hydrolysis as well as during SSCF. Feeding of solid substrate to reach the instantaneous WIS content of 13 % (w/w) was carried out when 60 % of the cellulose was hydrolysed, according to simulation results. With this approach, accumulated WIS additions reached more than 20 % (w/w) without encountering mixing problems in a standard bioreactor. Feeding fresh cells to the SSCF reactor maintained the fermentation activity, which otherwise ceased when the ethanol concentration reached 40-45 g L(-1). In lab scale, the optimized multi-feed SSCF produced 57 g L(-1) ethanol in 72 h. The process was reproducible and resulted in 52 g L(-1) ethanol in 10 m(3) scale at the SP Biorefinery Demo Plant. SSCF of WIS content up to 22 % (w/w) is reproducible and scalable with the multi-feed SSCF configuration and model-aided process design. For simultaneous saccharification and fermentation, the overall efficiency relies on balanced rates of substrate feeding and conversion. Multi-feed SSCF provides the possibilities to balance interdependent rates by systematic optimization of the feeding strategies. The optimization routine presented in this work can easily be adapted for optimization of other lignocellulose-based fermentation systems.
An Elitist Multiobjective Tabu Search for Optimal Design of Groundwater Remediation Systems.
Yang, Yun; Wu, Jianfeng; Wang, Jinguo; Zhou, Zhifang
2017-11-01
This study presents a new multiobjective evolutionary algorithm (MOEA), the elitist multiobjective tabu search (EMOTS), and incorporates it with MODFLOW/MT3DMS to develop a groundwater simulation-optimization (SO) framework based on modular design for optimal design of groundwater remediation systems using pump-and-treat (PAT) technique. The most notable improvement of EMOTS over the original multiple objective tabu search (MOTS) lies in the elitist strategy, selection strategy, and neighborhood move rule. The elitist strategy is to maintain all nondominated solutions within later search process for better converging to the true Pareto front. The elitism-based selection operator is modified to choose two most remote solutions from current candidate list as seed solutions to increase the diversity of searching space. Moreover, neighborhood solutions are uniformly generated using the Latin hypercube sampling (LHS) in the bounded neighborhood space around each seed solution. To demonstrate the performance of the EMOTS, we consider a synthetic groundwater remediation example. Problem formulations consist of two objective functions with continuous decision variables of pumping rates while meeting water quality requirements. Especially, sensitivity analysis is evaluated through the synthetic case for determination of optimal combination of the heuristic parameters. Furthermore, the EMOTS is successfully applied to evaluate remediation options at the field site of the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. With both the hypothetical and the large-scale field remediation sites, the EMOTS-based SO framework is demonstrated to outperform the original MOTS in achieving the performance metrics of optimality and diversity of nondominated frontiers with desirable stability and robustness. © 2017, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Fox, Matthew D.
Advanced automotive technology assessment and powertrain design are increasingly performed through modeling, simulation, and optimization. But technology assessments usually target many competing criteria making any individual optimization challenging and arbitrary. Further, independent design simulations and optimizations take considerable time to execute, and design constraints and objectives change throughout the design process. Changes in design considerations usually require re-processing of simulations and more time. In this thesis, these challenges are confronted through CSU's participation in the EcoCAR2 hybrid vehicle design competition. The complexity of the competition's design objectives leveraged development of a decision support system tool to aid in multi-criteria decision making across technologies and to perform powertrain optimization. To make the decision support system interactive, and bypass the problem of long simulation times, a new approach was taken. The result of this research is CSU's architecture selection and component sizing, which optimizes a composite objective function representing the competition score. The selected architecture is an electric vehicle with an onboard range extending hydrogen fuel cell system. The vehicle has a 145kW traction motor, 18.9kWh of lithium ion battery, a 15kW fuel cell system, and 5kg of hydrogen storage capacity. Finally, a control strategy was developed that improves the vehicles performance throughout the driving range under variable driving conditions. In conclusion, the design process used in this research is reviewed and evaluated against other common design methodologies. I conclude, through the highlighted case studies, that the approach is more comprehensive than other popular design methodologies and is likely to lead to a higher quality product. The upfront modeling work and decision support system formulation will pay off in superior and timely knowledge transfer and more informed design decisions. The hypothesis is supported by the three case studies examined in this thesis.
Altomare, Cristina; Guglielmann, Raffaella; Riboldi, Marco; Bellazzi, Riccardo; Baroni, Guido
2015-02-01
In high precision photon radiotherapy and in hadrontherapy, it is crucial to minimize the occurrence of geometrical deviations with respect to the treatment plan in each treatment session. To this end, point-based infrared (IR) optical tracking for patient set-up quality assessment is performed. Such tracking depends on external fiducial points placement. The main purpose of our work is to propose a new algorithm based on simulated annealing and augmented Lagrangian pattern search (SAPS), which is able to take into account prior knowledge, such as spatial constraints, during the optimization process. The SAPS algorithm was tested on data related to head and neck and pelvic cancer patients, and that were fitted with external surface markers for IR optical tracking applied for patient set-up preliminary correction. The integrated algorithm was tested considering optimality measures obtained with Computed Tomography (CT) images (i.e. the ratio between the so-called target registration error and fiducial registration error, TRE/FRE) and assessing the marker spatial distribution. Comparison has been performed with randomly selected marker configuration and with the GETS algorithm (Genetic Evolutionary Taboo Search), also taking into account the presence of organs at risk. The results obtained with SAPS highlight improvements with respect to the other approaches: (i) TRE/FRE ratio decreases; (ii) marker distribution satisfies both marker visibility and spatial constraints. We have also investigated how the TRE/FRE ratio is influenced by the number of markers, obtaining significant TRE/FRE reduction with respect to the random configurations, when a high number of markers is used. The SAPS algorithm is a valuable strategy for fiducial configuration optimization in IR optical tracking applied for patient set-up error detection and correction in radiation therapy, showing that taking into account prior knowledge is valuable in this optimization process. Further work will be focused on the computational optimization of the SAPS algorithm toward fast point-of-care applications. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre
2010-06-01
The use of material processing numerical simulation allows a strategy of trial and error to improve virtual processes without incurring material costs or interrupting production and therefore save a lot of money, but it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific automatic optimization technique knowledge. Ten industrial partners have been selected to cover the different area of the mechanical forging industry and provide different examples of the forming simulation tools. It aims to demonstrate that it is possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific automatic optimization technique knowledge. The large computational time is handled by a metamodel approach. It allows interpolating the objective function on the entire parameter space by only knowing the exact function values at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel and a genetic algorithm combined with a Meshless Finite Difference Method. The later approach is extended to multi-objective optimization. The set of solutions, which corresponds to the best possible compromises between the different objectives, is then computed in the same way. The population based approach allows using the parallel capabilities of the utilized computer with a high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time reduces the needed time dramatically. The presented examples demonstrate the method versatility. They include billet shape optimization of a common rail, the cogging of a bar and a wire drawing problem.
Design of optimal groundwater remediation systems under flexible environmental-standard constraints.
Fan, Xing; He, Li; Lu, Hong-Wei; Li, Jing
2015-01-01
In developing optimal groundwater remediation strategies, limited effort has been exerted to solve the uncertainty in environmental quality standards. When such uncertainty is not considered, either over optimistic or over pessimistic optimization strategies may be developed, probably leading to the formulation of rigid remediation strategies. This study advances a mathematical programming modeling approach for optimizing groundwater remediation design. This approach not only prevents the formulation of over optimistic and over pessimistic optimization strategies but also provides a satisfaction level that indicates the degree to which the environmental quality standard is satisfied. Therefore the approach may be expected to be significantly more acknowledged by the decision maker than those who do not consider standard uncertainty. The proposed approach is applied to a petroleum-contaminated site in western Canada. Results from the case study show that (1) the peak benzene concentrations can always satisfy the environmental standard under the optimal strategy, (2) the pumping rates of all wells decrease under a relaxed standard or long-term remediation approach, (3) the pumping rates are less affected by environmental quality constraints under short-term remediation, and (4) increased flexible environmental standards have a reduced effect on the optimal remediation strategy.
Modeling and Advanced Control for Sustainable Process ...
This book chapter introduces a novel process systems engineering framework that integrates process control with sustainability assessment tools for the simultaneous evaluation and optimization of process operations. The implemented control strategy consists of a biologically-inspired, multi-agent-based method. The sustainability and performance assessment of process operating points is carried out using the U.S. E.P.A.’s GREENSCOPE assessment tool that provides scores for the selected economic, material management, environmental and energy indicators. The indicator results supply information on whether the implementation of the controller is moving the process towards a more sustainable operation. The effectiveness of the proposed framework is illustrated through a case study of a continuous bioethanol fermentation process whose dynamics are characterized by steady-state multiplicity and oscillatory behavior. This book chapter contribution demonstrates the application of novel process control strategies for sustainability by increasing material management, energy efficiency, and pollution prevention, as needed for SHC Sustainable Uses of Wastes and Materials Management.
Stillwater Hybrid Geo-Solar Power Plant Optimization Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendt, Daniel S.; Mines, Gregory L.; Turchi, Craig S.
2015-09-02
The Stillwater Power Plant is the first hybrid plant in the world able to bring together a medium-enthalpy geothermal unit with solar thermal and solar photovoltaic systems. Solar field and power plant models have been developed to predict the performance of the Stillwater geothermal / solar-thermal hybrid power plant. The models have been validated using operational data from the Stillwater plant. A preliminary effort to optimize performance of the Stillwater hybrid plant using optical characterization of the solar field has been completed. The Stillwater solar field optical characterization involved measurement of mirror reflectance, mirror slope error, and receiver position error.more » The measurements indicate that the solar field may generate 9% less energy than the design value if an appropriate tracking offset is not employed. A perfect tracking offset algorithm may be able to boost the solar field performance by about 15%. The validated Stillwater hybrid plant models were used to evaluate hybrid plant operating strategies including turbine IGV position optimization, ACC fan speed and turbine IGV position optimization, turbine inlet entropy control using optimization of multiple process variables, and mixed working fluid substitution. The hybrid plant models predict that each of these operating strategies could increase net power generation relative to the baseline Stillwater hybrid plant operations.« less
[Quality by design approaches for pharmaceutical development and manufacturing of Chinese medicine].
Xu, Bing; Shi, Xin-Yuan; Wu, Zhi-Sheng; Zhang, Yan-Ling; Wang, Yun; Qiao, Yan-Jiang
2017-03-01
The pharmaceutical quality was built by design, formed in the manufacturing process and improved during the product's lifecycle. Based on the comprehensive literature review of pharmaceutical quality by design (QbD), the essential ideas and implementation strategies of pharmaceutical QbD were interpreted. Considering the complex nature of Chinese medicine, the "4H" model was innovated and proposed for implementing QbD in pharmaceutical development and industrial manufacture of Chinese medicine product. "4H" corresponds to the acronym of holistic design, holistic information analysis, holistic quality control, and holistic process optimization, which is consistent with the holistic concept of Chinese medicine theory. The holistic design aims at constructing both the quality problem space from the patient requirement and the quality solution space from multidisciplinary knowledge. Holistic information analysis emphasizes understanding the quality pattern of Chinese medicine by integrating and mining multisource data and information at a relatively high level. The batch-to-batch quality consistence and manufacturing system reliability can be realized by comprehensive application of inspective quality control, statistical quality control, predictive quality control and intelligent quality control strategies. Holistic process optimization is to improve the product quality and process capability during the product lifecycle management. The implementation of QbD is useful to eliminate the ecosystem contradictions lying in the pharmaceutical development and manufacturing process of Chinese medicine product, and helps guarantee the cost effectiveness. Copyright© by the Chinese Pharmaceutical Association.
Springback effects during single point incremental forming: Optimization of the tool path
NASA Astrophysics Data System (ADS)
Giraud-Moreau, Laurence; Belchior, Jérémy; Lafon, Pascal; Lotoing, Lionel; Cherouat, Abel; Courtielle, Eric; Guines, Dominique; Maurine, Patrick
2018-05-01
Incremental sheet forming is an emerging process to manufacture sheet metal parts. This process is more flexible than conventional one and well suited for small batch production or prototyping. During the process, the sheet metal blank is clamped by a blank-holder and a small-size smooth-end hemispherical tool moves along a user-specified path to deform the sheet incrementally. Classical three-axis CNC milling machines, dedicated structure or serial robots can be used to perform the forming operation. Whatever the considered machine, large deviations between the theoretical shape and the real shape can be observed after the part unclamping. These deviations are due to both the lack of stiffness of the machine and residual stresses in the part at the end of the forming stage. In this paper, an optimization strategy of the tool path is proposed in order to minimize the elastic springback induced by residual stresses after unclamping. A finite element model of the SPIF process allowing the shape prediction of the formed part with a good accuracy is defined. This model, based on appropriated assumptions, leads to calculation times which remain compatible with an optimization procedure. The proposed optimization method is based on an iterative correction of the tool path. The efficiency of the method is shown by an improvement of the final shape.
Design optimization for active twist rotor blades
NASA Astrophysics Data System (ADS)
Mok, Ji Won
This dissertation introduces the process of optimizing active twist rotor blades in the presence of embedded anisotropic piezo-composite actuators. Optimum design of active twist blades is a complex task, since it involves a rich design space with tightly coupled design variables. The study presents the development of an optimization framework for active helicopter rotor blade cross-sectional design. This optimization framework allows for exploring a rich and highly nonlinear design space in order to optimize the active twist rotor blades. Different analytical components are combined in the framework: cross-sectional analysis (UM/VABS), an automated mesh generator, a beam solver (DYMORE), a three-dimensional local strain recovery module, and a gradient based optimizer within MATLAB. Through the mathematical optimization problem, the static twist actuation performance of a blade is maximized while satisfying a series of blade constraints. These constraints are associated with locations of the center of gravity and elastic axis, blade mass per unit span, fundamental rotating blade frequencies, and the blade strength based on local three-dimensional strain fields under worst loading conditions. Through pre-processing, limitations of the proposed process have been studied. When limitations were detected, resolution strategies were proposed. These include mesh overlapping, element distortion, trailing edge tab modeling, electrode modeling and foam implementation of the mesh generator, and the initial point sensibility of the current optimization scheme. Examples demonstrate the effectiveness of this process. Optimization studies were performed on the NASA/Army/MIT ATR blade case. Even though that design was built and shown significant impact in vibration reduction, the proposed optimization process showed that the design could be improved significantly. The second example, based on a model scale of the AH-64D Apache blade, emphasized the capability of this framework to explore the nonlinear design space of complex planform. Especially for this case, detailed design is carried out to make the actual blade manufacturable. The proposed optimization framework is shown to be an effective tool to design high authority active twist blades to reduce vibration in future helicopter rotor blades.
Online adaptation and over-trial learning in macaque visuomotor control.
Braun, Daniel A; Aertsen, Ad; Paz, Rony; Vaadia, Eilon; Rotter, Stefan; Mehring, Carsten
2011-01-01
When faced with unpredictable environments, the human motor system has been shown to develop optimized adaptation strategies that allow for online adaptation during the control process. Such online adaptation is to be contrasted to slower over-trial learning that corresponds to a trial-by-trial update of the movement plan. Here we investigate the interplay of both processes, i.e., online adaptation and over-trial learning, in a visuomotor experiment performed by macaques. We show that simple non-adaptive control schemes fail to perform in this task, but that a previously suggested adaptive optimal feedback control model can explain the observed behavior. We also show that over-trial learning as seen in learning and aftereffect curves can be explained by learning in a radial basis function network. Our results suggest that both the process of over-trial learning and the process of online adaptation are crucial to understand visuomotor learning.
Online Adaptation and Over-Trial Learning in Macaque Visuomotor Control
Braun, Daniel A.; Aertsen, Ad; Paz, Rony; Vaadia, Eilon; Rotter, Stefan; Mehring, Carsten
2011-01-01
When faced with unpredictable environments, the human motor system has been shown to develop optimized adaptation strategies that allow for online adaptation during the control process. Such online adaptation is to be contrasted to slower over-trial learning that corresponds to a trial-by-trial update of the movement plan. Here we investigate the interplay of both processes, i.e., online adaptation and over-trial learning, in a visuomotor experiment performed by macaques. We show that simple non-adaptive control schemes fail to perform in this task, but that a previously suggested adaptive optimal feedback control model can explain the observed behavior. We also show that over-trial learning as seen in learning and aftereffect curves can be explained by learning in a radial basis function network. Our results suggest that both the process of over-trial learning and the process of online adaptation are crucial to understand visuomotor learning. PMID:21720526
Optimization of chlorine fluxing process for magnesium removal from molten aluminum
NASA Astrophysics Data System (ADS)
Fu, Qian
High-throughput and low operational cost are the keys to a successful industrial process. Much aluminum is now recycled in the form of used beverage cans and this aluminum is of alloys that contain high levels of magnesium. It is common practice to "demag" the metal by injecting chlorine that preferentially reacts with the magnesium. In the conventional chlorine fluxing processes, low reaction efficiency results in excessive reactive gas emissions. In this study, through an experimental investigation of the reaction kinetics involved in this process, a mathematical model is set up for the purpose of process optimization. A feedback controlled chlorine reduction process strategy is suggested for demagging the molten aluminum to the desired magnesium level without significant gas emissions. This strategy also needs the least modification of the existing process facility. The suggested process time will only be slightly longer than conventional methods and chlorine usage and emissions will be reduced. In order to achieve process optimization through novel designs in any fluxing process, a system is necessary for measuring the bubble distribution in liquid metals. An electro-resistivity probe described in the literature has low accuracy and its capability to measure bubble distribution has not yet been fully demonstrated. A capacitance bubble probe was designed for bubble measurements in molten metals. The probe signal was collected and processed digitally. Higher accuracy was obtained by higher discrimination against corrupted signals. A single-size bubble experiment in Belmont metal was designed to reveal the characteristic response of the capacitance probe. This characteristic response fits well with a theoretical model. It is suggested that using a properly designed deconvolution process, the actual bubble size distribution can be calculated. The capacitance probe was used to study some practical bubble generation devices. Preliminary results on bubble distribution generated by a porous plug in Belmont metal showed bubbles much bigger than those in a water model. Preliminary results in molten aluminum showed that the probe was applicable in this harsh environment. An interesting bubble coalescence phenomenon was also observed in both Belmont metal and molten aluminum.
Henshall, Chris; Schuller, Tara; Mardhani-Bayne, Logan
2012-07-01
Health systems face rising patient expectations and economic pressures; decision makers seek to enhance efficiency to improve access to appropriate care. There is international interest in the role of HTA to support decisions to optimize use of established technologies, particularly in "disinvesting" from low-benefit uses. This study summarizes main points from an HTAi Policy Forum meeting on this topic, drawing on presentations, discussions among attendees, and an advance background paper. Optimization involves assessment or re-assessment of a technology, a decision on optimal use, and decision implementation. This may occur within a routine process to improve safety and quality and create "headroom" for new technologies, or ad hoc in response to financial constraints. The term "disinvestment" is not always helpful in describing these processes. HTA contributes to optimization, but there is scope to increase its role in many systems. Stakeholders may have strong views on access to technology, and stakeholder involvement is essential. Optimization faces challenges including loss aversion and entitlement, stakeholder inertia and entrenchment, heterogeneity in patient outcomes, and the need to demonstrate convincingly absence of benefit. While basic HTA principles remain applicable, methodological developments are needed better to support optimization. These include mechanisms for candidate technology identification and prioritization, enhanced collection and analysis of routine data, and clinician engagement. To maximize value to decision makers, HTA should consider implementation strategies and barriers. Improving optimization processes calls for a coordinated approach, and actions are identified for system leaders, HTA and other health organizations, and industry.
Logistical constraints lead to an intermediate optimum in outbreak response vaccination
Shea, Katriona; Ferrari, Matthew
2018-01-01
Dynamic models in disease ecology have historically evaluated vaccination strategies under the assumption that they are implemented homogeneously in space and time. However, this approach fails to formally account for operational and logistical constraints inherent in the distribution of vaccination to the population at risk. Thus, feedback between the dynamic processes of vaccine distribution and transmission might be overlooked. Here, we present a spatially explicit, stochastic Susceptible-Infected-Recovered-Vaccinated model that highlights the density-dependence and spatial constraints of various diffusive strategies of vaccination during an outbreak. The model integrates an agent-based process of disease spread with a partial differential process of vaccination deployment. We characterize the vaccination response in terms of a diffusion rate that describes the distribution of vaccination to the population at risk from a central location. This generates an explicit trade-off between slow diffusion, which concentrates effort near the central location, and fast diffusion, which spreads a fixed vaccination effort thinly over a large area. We use stochastic simulation to identify the optimum vaccination diffusion rate as a function of population density, interaction scale, transmissibility, and vaccine intensity. Our results show that, conditional on a timely response, the optimal strategy for minimizing outbreak size is to distribute vaccination resource at an intermediate rate: fast enough to outpace the epidemic, but slow enough to achieve local herd immunity. If the response is delayed, however, the optimal strategy for minimizing outbreak size changes to a rapidly diffusive distribution of vaccination effort. The latter may also result in significantly larger outbreaks, thus suggesting a benefit of allocating resources to timely outbreak detection and response. PMID:29791432
Optimal PGU operation strategy in CHP systems
NASA Astrophysics Data System (ADS)
Yun, Kyungtae
Traditional power plants only utilize about 30 percent of the primary energy that they consume, and the rest of the energy is usually wasted in the process of generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups to improve the efficiency and the reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emission achieved by recovering waste thermal energy produced during power generation. A CHP operational algorithm designed to optimize operational costs must be relatively simple to implement in practice such as to minimize the computational requirements from the hardware to be installed. This dissertation focuses on the following aspects pertaining the design of a practical CHP operational algorithm designed to minimize the operational costs: (a) real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit operation in CHP Systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; (d) an easy to implement, effective, and reliable hourly building load prediction algorithm.
Zhan, Tiannan; Ali, Ayman; Choi, Jin G; Lee, Minyi; Leung, John; Dellon, Evan S; Garber, John J; Hur, Chin
2018-05-03
Elimination diets are effective treatments for eosinophilic esophagitis (EoE), but foods that activate esophagitis are identified empirically, via a process that involves multiple esophagogastroduodenoscopies (EGDs). No optimized approach has been developed to identify foods that activate EoE. We aimed to compare clinical strategies to provide data to guide treatment. We developed a computer-based simulation model to determine the optimal empiric elimination strategy based on reported prevalence values for foods that activate EoE. These were identified in a systematic review, searching PubMed through October 1, 2017 for prospective and retrospective studies of EoE and diet. Each patient in our virtual cohort was assigned profile comprising as many as 12 foods known to induce EoE, including dairy, wheat, eggs, soy, nuts, seafood, beef, corn, chicken, potato, pork, and/or rice. To balance the strategy success rate with the number of EGDs required for food identification, we applied an efficiency frontier approach. Strategies on the frontier were the most efficient, requiring fewer EGDs for higher or equivalent success rates relative to their comparable, neighboring strategies. In all simulations, we found the 1,4,8-food and 1,3-food strategies to be the most efficient in identifying foods that induce EoE, resulting in the highest rate of the correct identification of food triggers balanced by the number of EGDs required to complete the food elimination strategy. Both strategies begin with elimination of dairy; if EoE remission is not achieved, the 1,3 diet proceeds to eliminate wheat and eggs in addition to dairy, and the 1,4,8 strategy removes wheat, eggs, dairy, and soy. In the case of persistent EoE after the second round of food elimination, the 1,3-food strategy terminates, whereas the 1,4,8-food diet eliminates corn, chicken, beef, and pork. The 1,4,8-food resulted in correct identification of foods that activated esophagitis in 76.68% of patients, with a mean 4.13 EGDs and a median 6 EGDs. The 1,3-food strategy identified foods that activated esophagitis in 42.76% of patients, with a mean of 3.36 EGDs and a median 2 EGDs required. In a modeling analysis, we found the 1,4,8-food and 1,3-food elimination strategies to be the most efficient in detection of foods that induce EoE in patients, the 1,4,8-food strategy was optimal, requiring a mean of only 4.13 EGDs for food identification. However, the ideal elimination strategy will vary based on clinical priorities. Additional research on specific foods that induce EoE are needed to confirm the predictions of this model. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.
Predicting Short-Term Remembering as Boundedly Optimal Strategy Choice.
Howes, Andrew; Duggan, Geoffrey B; Kalidindi, Kiran; Tseng, Yuan-Chi; Lewis, Richard L
2016-07-01
It is known that, on average, people adapt their choice of memory strategy to the subjective utility of interaction. What is not known is whether an individual's choices are boundedly optimal. Two experiments are reported that test the hypothesis that an individual's decisions about the distribution of remembering between internal and external resources are boundedly optimal where optimality is defined relative to experience, cognitive constraints, and reward. The theory makes predictions that are tested against data, not fitted to it. The experiments use a no-choice/choice utility learning paradigm where the no-choice phase is used to elicit a profile of each participant's performance across the strategy space and the choice phase is used to test predicted choices within this space. They show that the majority of individuals select strategies that are boundedly optimal. Further, individual differences in what people choose to do are successfully predicted by the analysis. Two issues are discussed: (a) the performance of the minority of participants who did not find boundedly optimal adaptations, and (b) the possibility that individuals anticipate what, with practice, will become a bounded optimal strategy, rather than what is boundedly optimal during training. Copyright © 2015 Cognitive Science Society, Inc.
2012-01-01
Background Elementary mode (EM) analysis is ideally suited for metabolic engineering as it allows for an unbiased decomposition of metabolic networks in biologically meaningful pathways. Recently, constrained minimal cut sets (cMCS) have been introduced to derive optimal design strategies for strain improvement by using the full potential of EM analysis. However, this approach does not allow for the inclusion of regulatory information. Results Here we present an alternative, novel and simple method for the prediction of cMCS, which allows to account for boolean transcriptional regulation. We use binary linear programming and show that the design of a regulated, optimal metabolic network of minimal functionality can be formulated as a standard optimization problem, where EM and regulation show up as constraints. We validated our tool by optimizing ethanol production in E. coli. Our study showed that up to 70% of the predicted cMCS contained non-enzymatic, non-annotated reactions, which are difficult to engineer. These cMCS are automatically excluded by our approach utilizing simple weight functions. Finally, due to efficient preprocessing, the binary program remains computationally feasible. Conclusions We used integer programming to predict efficient deletion strategies to metabolically engineer a production organism. Our formulation utilizes the full potential of cMCS but adds additional flexibility to the design process. In particular our method allows to integrate regulatory information into the metabolic design process and explicitly favors experimentally feasible deletions. Our method remains manageable even if millions or potentially billions of EM enter the analysis. We demonstrated that our approach is able to correctly predict the most efficient designs for ethanol production in E. coli. PMID:22898474
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This study is hoped to provide a new insight in developing more accurate and reliable biological models based on limited and low quality experimental data.
Pusic, Martin V.; LeBlanc, Vicki; Patel, Vimla L.
2001-01-01
Traditional task analysis for instructional design has emphasized the importance of precisely defining behavioral educational objectives and working back to select objective-appropriate instructional strategies. However, this approach may miss effective strategies. Cognitive task analysis, on the other hand, breaks a process down into its component knowledge representations. Selection of instructional strategies based on all such representations in a domain is likely to lead to optimal instructional design. In this demonstration, using the interpretation of cervical spine x-rays as an educational example, we show how a detailed cognitive task analysis can guide the development of computer-aided instruction.
Properties of heuristic search strategies
NASA Technical Reports Server (NTRS)
Vanderbrug, G. J.
1973-01-01
A directed graph is used to model the search space of a state space representation with single input operators, an AND/OR is used for problem reduction representations, and a theorem proving graph is used for state space representations with multiple input operators. These three graph models and heuristic strategies for searching them are surveyed. The completeness, admissibility, and optimality properties of search strategies which use the evaluation function f = (1 - omega)g = omega(h) are presented and interpreted using a representation of the search process in the plane. The use of multiple output operators to imply dependent successors, and thus obtain a formalism which includes all three types of representations, is discussed.
Bandwidth auction for SVC streaming in dynamic multi-overlay
NASA Astrophysics Data System (ADS)
Xiong, Yanting; Zou, Junni; Xiong, Hongkai
2010-07-01
In this paper, we study the optimal bandwidth allocation for scalable video coding (SVC) streaming in multiple overlays. We model the whole bandwidth request and distribution process as a set of decentralized auction games between the competing peers. For the upstream peer, a bandwidth allocation mechanism is introduced to maximize the aggregate revenue. For the downstream peer, a dynamic bidding strategy is proposed. It achieves maximum utility and efficient resource usage by collaborating with a content-aware layer dropping/adding strategy. Also, the convergence of the proposed auction games is theoretically proved. Experimental results show that the auction strategies can adapt to dynamic join of competing peers and video layers.
Vivekanandan, T; Sriman Narayana Iyengar, N Ch
2017-11-01
Enormous data growth in multiple domains has posed a great challenge for data processing and analysis techniques. In particular, the traditional record maintenance strategy has been replaced in the healthcare system. It is vital to develop a model that is able to handle the huge amount of e-healthcare data efficiently. In this paper, the challenging tasks of selecting critical features from the enormous set of available features and diagnosing heart disease are carried out. Feature selection is one of the most widely used pre-processing steps in classification problems. A modified differential evolution (DE) algorithm is used to perform feature selection for cardiovascular disease and optimization of selected features. Of the 10 available strategies for the traditional DE algorithm, the seventh strategy, which is represented by DE/rand/2/exp, is considered for comparative study. The performance analysis of the developed modified DE strategy is given in this paper. With the selected critical features, prediction of heart disease is carried out using fuzzy AHP and a feed-forward neural network. Various performance measures of integrating the modified differential evolution algorithm with fuzzy AHP and a feed-forward neural network in the prediction of heart disease are evaluated in this paper. The accuracy of the proposed hybrid model is 83%, which is higher than that of some other existing models. In addition, the prediction time of the proposed hybrid model is also evaluated and has shown promising results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Amer, Ali A; Sewisy, Adel A; Elgendy, Taha M A
2017-12-01
With the substantial ever-upgrading advancement in data and information management field, Distributed Database System (DDBS) is still proven to be the most growingly-demanded tool to handle the accompanied constantly-piled volumes of data. However, the efficiency and adequacy of DDBS is profoundly correlated with the reliability and precision of the process in which DDBS is set to be designed. As for DDBS design, thus, several strategies have been developed, in literature, to be used in purpose of promoting DDBS performance. Off these strategies, data fragmentation, data allocation and replication, and sites clustering are the most immensely-used efficacious techniques that otherwise DDBS design and rendering would be prohibitively expensive. On one hand, an accurate well-architected data fragmentation and allocation is bound to incredibly increase data locality and promote the overall DDBS throughputs. On the other hand, finding a practical sites clustering process is set to contribute remarkably in reducing the overall Transmission Costs (TC). Consequently, consolidating all these strategies into one single work is going to undoubtedly satisfy a massive growth in DDBS influence. In this paper, therefore, an optimized heuristic horizontal fragmentation and allocation approach is meticulously developed. All the drawn-above strategies are elegantly combined into a single effective approach so as to an influential solution for DDBS productivity promotion is set to be markedly fulfilled. Most importantly, an internal and external evaluations are extensively illustrated. Obviously, findings of conducted experiments have maximally been recorded to be in favor of DDBS performance betterment.
Wang, Kaidong; Huang, Ke; Jiang, Guoqiang
2018-03-01
Acetaminophen is one kind of pharmaceutical contaminant that has been detected in municipal water and is hard to digest. A laccase-catalyzed oxidative coupling reaction is a potential method of removing acetaminophen from water. In the present study, the kinetics of radical polymerization combined with precipitation was studied, and the dual-pH optimization strategy (the enzyme solution at pH7.4 being added to the substrate solution at pH4.2) was proposed to enhance the removal efficiency of acetaminophen. The reaction kinetics that consisted of the laccase-catalyzed oxidation, radical polymerization and precipitation were studied by UV in situ, LC-MS and DLS (dynamic light scattering) in situ. The results showed that the laccase-catalyzed oxidation is the rate-limiting step in the whole process. The higher rate of enzyme-catalyzed oxidation under a dual-pH optimization strategy led to much faster formation of the dimer, trimer and tetramer. Similarly, the formation of polymerized products that could precipitate naturally from water was faster. Under the dual-pH optimization strategy, the initial laccase activity was increased approximately 2.9-fold, and the activity remained higher for >250s, during which approximately 63.7% of the total acetaminophen was transformed into biologically inactive polymerized products, and part of these polymerized products precipitated from the water. Laccase belongs to the family of multi-copper oxidases, and the present study provides a universal method to improve the activity of multi-copper oxidases for the high-performance removal of phenol and its derivatives. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimising reef-scale CO2 removal by seaweed to buffer ocean acidification
NASA Astrophysics Data System (ADS)
Mongin, Mathieu; Baird, Mark E.; Hadley, Scott; Lenton, Andrew
2016-03-01
The equilibration of rising atmospheric {{CO}}2 with the ocean is lowering {pH} in tropical waters by about 0.01 every decade. Coral reefs and the ecosystems they support are regarded as one of the most vulnerable ecosystems to ocean acidification, threatening their long-term viability. In response to this threat, different strategies for buffering the impact of ocean acidification have been proposed. As the {pH} experienced by individual corals on a natural reef system depends on many processes over different time scales, the efficacy of these buffering strategies remains largely unknown. Here we assess the feasibility and potential efficacy of a reef-scale (a few kilometers) carbon removal strategy, through the addition of seaweed (fleshy multicellular algae) farms within the Great Barrier Reef at the Heron Island reef. First, using diagnostic time-dependent age tracers in a hydrodynamic model, we determine the optimal location and size of the seaweed farm. Secondly, we analytically calculate the optimal density of the seaweed and harvesting strategy, finding, for the seaweed growth parameters used, a biomass of 42 g N m-2 with a harvesting rate of up 3.2 g N m-2 d-1 maximises the carbon sequestration and removal. Numerical experiments show that an optimally located 1.9 km2 farm and optimally harvested seaweed (removing biomass above 42 g N m-2 every 7 d) increased aragonite saturation by 0.1 over 24 km2 of the Heron Island reef. Thus, the most effective seaweed farm can only delay the impacts of global ocean acidification at the reef scale by 7-21 years, depending on future global carbon emissions. Our results highlight that only a kilometer-scale farm can partially mitigate global ocean acidification for a particular reef.
Robust and fast nonlinear optimization of diffusion MRI microstructure models.
Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A
2017-07-15
Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision is hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with a different acquisition protocol. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade initializing or fixing parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Optimal Design of Grid-Stiffened Composite Panels Using Global and Local Buckling Analysis
NASA Technical Reports Server (NTRS)
Ambur, Damodar R.; Jaunky, Navin; Knight, Norman F., Jr.
1996-01-01
A design strategy for optimal design of composite grid-stiffened panels subjected to global and local buckling constraints is developed using a discrete optimizer. An improved smeared stiffener theory is used for the global buckling analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy and transverse shear flexibility. The local buckling of stiffener segments is also assessed. Design variables are the axial and transverse stiffener spacing, stiffener height and thickness, skin laminate, and stiffening configuration. The design optimization process is adapted to identify the lightest-weight stiffening configuration and pattern for grid stiffened composite panels given the overall panel dimensions, design in-plane loads, material properties, and boundary conditions of the grid-stiffened panel.
Instructional Strategy: Didactic Media Presentation to Optimize Student Learning
ERIC Educational Resources Information Center
Schilling, Jim
2017-01-01
Context: Subject matter is presented to athletic training students in the classroom using various modes of media. The specific type of mode and when to use it should be considered to maximize learning effectiveness. Other factors to consider in this process include a student's knowledge base and the complexity of material. Objective: To introduce…
Automated Analysis of CT Images for the Inspection of Hardwood Logs
Harbin Li; A. Lynn Abbott; Daniel L. Schmoldt
1996-01-01
This paper investigates several classifiers for labeling internal features of hardwood logs using computed tomography (CT) images. A primary motivation is to locate and classify internal defects so that an optimal cutting strategy can be chosen. Previous work has relied on combinations of low-level processing, image segmentation, autoregressive texture modeling, and...
The Invitational Imagination for Theory, Research, and Practice.
ERIC Educational Resources Information Center
Novak, John M.
This paper argues that just as imagination has been important for the inception and promotion of invitational education, it is also necessary for the development of inviting research strategies. Applying the educative process to the study of inviting, recommendations are made for relating the constituent parts of the inviting stance (optimism,…
ERIC Educational Resources Information Center
Zwaigenbaum, Lonnie; Nicholas, David B.; Muskat, Barbara; Kilmer, Christopher; Newton, Amanda S.; Craig, William R.; Ratnapalan, Savithiri; Cohen-Silver, Justine; Greenblatt, Andrea; Roberts, Wendy; Sharon, Raphael
2016-01-01
This study aimed to characterize the perspectives of health professionals who care for children with autism spectrum disorder (ASD) in the emergency department (ED) and to determine what strategies could optimize care. Ten physicians and twelve nurses were interviewed individually. Questions related to experiences, processes, clinical…
An Optimal Parameter Discretization Strategy for Multiple Model Adaptive Estimation and Control
1989-12-01
Zicker . MMAE-Based Control with Space- Time Point Process Observations. IEEE Transactions on Aerospace and Elec- tronic Systems, AES-21 (3):292-300, 1985...Transactions of the Conference of Army Math- ematicians, Bethesda MD, 1982. (AD-POO1 033). 65. William L. Zicker . Pointing and Tracking of Particle
Educational Decentralization, Public Spending, and Social Justice in Nigeria
ERIC Educational Resources Information Center
Geo-Jaja, Macleans A.
2006-01-01
This study situates the process of educational decentralization in the narrower context of social justice. Its main object, however, is to analyze the implications of decentralization for strategies of equity and social justice in Nigeria. It starts from the premise that the early optimism that supported decentralization as an efficient and…
ERIC Educational Resources Information Center
Curtis, Russell C.
2000-01-01
Goal setting can be an effective way to help beginning counselors focus on important developmental issues. This article argues that counselors and supervisors must consider issues related to goal-setting theory and understand the process by which goals are set so that optimal learning experiences are created. (Author/MKA)
Identifying protein complexes based on brainstorming strategy.
Shen, Xianjun; Zhou, Jin; Yi, Li; Hu, Xiaohua; He, Tingting; Yang, Jincai
2016-11-01
Protein complexes comprising of interacting proteins in protein-protein interaction network (PPI network) play a central role in driving biological processes within cells. Recently, more and more swarm intelligence based algorithms to detect protein complexes have been emerging, which have become the research hotspot in proteomics field. In this paper, we propose a novel algorithm for identifying protein complexes based on brainstorming strategy (IPC-BSS), which is integrated into the main idea of swarm intelligence optimization and the improved K-means algorithm. Distance between the nodes in PPI network is defined by combining the network topology and gene ontology (GO) information. Inspired by human brainstorming process, IPC-BSS algorithm firstly selects the clustering center nodes, and then they are separately consolidated with the other nodes with short distance to form initial clusters. Finally, we put forward two ways of updating the initial clusters to search optimal results. Experimental results show that our IPC-BSS algorithm outperforms the other classic algorithms on yeast and human PPI networks, and it obtains many predicted protein complexes with biological significance. Copyright © 2016 Elsevier Inc. All rights reserved.
Optimized Production of Xylitol from Xylose Using a Hyper-Acidophilic Candida tropicalis.
Tamburini, Elena; Costa, Stefania; Marchetti, Maria Gabriella; Pedrini, Paola
2015-08-19
The yeast Candida tropicalis DSM 7524 produces xylitol, a natural, low-calorie sweetener, by fermentation of xylose. In order to increase xylitol production rate during the submerged fermentation process, some parameters-substrate (xylose) concentration, pH, aeration rate, temperature and fermentation strategy-have been optimized. The maximum xylitol yield reached at 60-80 g/L initial xylose concentration, pH 5.5 at 37 °C was 83.66% (w/w) on consumed xylose in microaerophilic conditions (kLa = 2·h(-1)). Scaling up on 3 L fermenter, with a fed-batch strategy, the best xylitol yield was 86.84% (w/w), against a 90% of theoretical yield. The hyper-acidophilic behaviour of C. tropicalis makes this strain particularly promising for industrial application, due to the possibility to work in non-sterile conditions.
Optimized Production of Xylitol from Xylose Using a Hyper-Acidophilic Candida tropicalis
Tamburini, Elena; Costa, Stefania; Marchetti, Maria Gabriella; Pedrini, Paola
2015-01-01
The yeast Candida tropicalis DSM 7524 produces xylitol, a natural, low-calorie sweetener, by fermentation of xylose. In order to increase xylitol production rate during the submerged fermentation process, some parameters-substrate (xylose) concentration, pH, aeration rate, temperature and fermentation strategy-have been optimized. The maximum xylitol yield reached at 60–80 g/L initial xylose concentration, pH 5.5 at 37 °C was 83.66% (w/w) on consumed xylose in microaerophilic conditions (kLa = 2·h−1). Scaling up on 3 L fermenter, with a fed-batch strategy, the best xylitol yield was 86.84% (w/w), against a 90% of theoretical yield. The hyper-acidophilic behaviour of C. tropicalis makes this strain particularly promising for industrial application, due to the possibility to work in non-sterile conditions. PMID:26295411
Xie, Zicong; Pang, Daxin; Wang, Kankan; Li, Mengjing; Guo, Nannan; Yuan, Hongming; Li, Jianing; Zou, Xiaodong; Jiao, Huping; Ouyang, Hongsheng; Li, Zhanjun; Tang, Xiaochun
2017-06-08
Genetically modified pigs have important roles in agriculture and biomedicine. However, genome-specific knock-in techniques in pigs are still in their infancy and optimal strategies have not been extensively investigated. In this study, we performed electroporation to introduce a targeting donor vector (a non-linearized vector that did not contain a promoter or selectable marker) into Porcine Foetal Fibroblasts (PFFs) along with a CRISPR/Cas9 vector. After optimization, the efficiency of the EGFP site-specific knock-in could reach up to 29.6% at the pRosa26 locus in PFFs. Next, we used the EGFP reporter PFFs to address two key conditions in the process of achieving transgenic pigs, the limiting dilution method and the strategy to evaluate the safety and feasibility of the knock-in locus. This study demonstrates that we establish an efficient procedures for the exogenous gene knock-in technique and creates a platform to efficiently generate promoter-less and selectable marker-free transgenic PFFs through the CRISPR/Cas9 system. This study should contribute to the generation of promoter-less and selectable marker-free transgenic pigs and it may provide insights into sophisticated site-specific genome engineering techniques for additional species.
Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.
Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio
2015-01-27
Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories.
NASA Astrophysics Data System (ADS)
Cekli, Hakki Ergun; Nije, Jelle; Ypma, Alexander; Bastani, Vahid; Sonntag, Dag; Niesing, Henk; Zhang, Linmiao; Ullah, Zakir; Subramony, Venky; Somasundaram, Ravin; Susanto, William; Matsunobu, Masazumi; Johnson, Jeff; Tabery, Cyrus; Lin, Chenxi; Zou, Yi
2018-03-01
In addition to lithography process and equipment induced variations, processes like etching, annealing, film deposition and planarization exhibit variations, each having their own intrinsic characteristics and leaving an effect, a `fingerprint', on the wafers. With ever tighter requirements for CD and overlay, controlling these process induced variations is both increasingly important and increasingly challenging in advanced integrated circuit (IC) manufacturing. For example, the on-product overlay (OPO) requirement for future nodes is approaching <3nm, requiring the allowable budget for process induced variance to become extremely small. Process variance control is seen as an bottleneck to further shrink which drives the need for more sophisticated process control strategies. In this context we developed a novel `computational process control strategy' which provides the capability of proactive control of each individual wafer with aim to maximize the yield, without introducing a significant impact on metrology requirements, cycle time or productivity. The complexity of the wafer process is approached by characterizing the full wafer stack building a fingerprint library containing key patterning performance parameters like Overlay, Focus, etc. Historical wafer metrology is decomposed into dominant fingerprints using Principal Component Analysis. By associating observed fingerprints with their origin e.g. process steps, tools and variables, we can give an inline assessment of the strength and origin of the fingerprints on every wafer. Once the fingerprint library is established, a wafer specific fingerprint correction recipes can be determined based on its processing history. Data science techniques are used in real-time to ensure that the library is adaptive. To realize this concept, ASML TWINSCAN scanners play a vital role with their on-board full wafer detection and exposure correction capabilities. High density metrology data is created by the scanner for each wafer and on every layer during the lithography steps. This metrology data will be used to obtain the process fingerprints. Also, the per exposure and per wafer correction potential of the scanners will be utilized for improved patterning control. Additionally, the fingerprint library will provide early detection of excursions for inline root cause analysis and process optimization guidance.
Parallel processing optimization strategy based on MapReduce model in cloud storage environment
NASA Astrophysics Data System (ADS)
Cui, Jianming; Liu, Jiayi; Li, Qiuyan
2017-05-01
Currently, a large number of documents in the cloud storage process employed the way of packaging after receiving all the packets. From the local transmitter this stored procedure to the server, packing and unpacking will consume a lot of time, and the transmission efficiency is low as well. A new parallel processing algorithm is proposed to optimize the transmission mode. According to the operation machine graphs model work, using MPI technology parallel execution Mapper and Reducer mechanism. It is good to use MPI technology to implement Mapper and Reducer parallel mechanism. After the simulation experiment of Hadoop cloud computing platform, this algorithm can not only accelerate the file transfer rate, but also shorten the waiting time of the Reducer mechanism. It will break through traditional sequential transmission constraints and reduce the storage coupling to improve the transmission efficiency.
Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health
NASA Technical Reports Server (NTRS)
2004-01-01
Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault-isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information; including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information; including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
Barlow, P.M.; Wagner, B.J.; Belitz, K.
1996-01-01
The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.
NASA Astrophysics Data System (ADS)
Braun, Robert Joseph
The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales; thereby, simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below that of near-future regulatory standards. However, before realization of the advantages of fuel cells can take place, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life cycle costs. The models are used to evaluate optimal cell-stack power output, the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found using minimization of the life cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in process design in the stack periphery are found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis), 85% cogenerative, and simple economic paybacks of 5--8 years are feasible for 1--2 kW SOFC systems in residential-scale applications. Design guidelines that offer additional suggestions related to fuel cell-stack sizing and operating strategy (base-load or load-following and cogeneration or electric-only) are also presented.
Blood management issues using blood management strategies.
Stulberg, Bernard N; Zadzilka, Jayson D
2007-06-01
Blood management strategies is a term used to address a coordinated approach to the management of blood loss in the perioperative period for total joint arthroplasty. The premise of any blood management strategy is that each patient, surgeon, and operative intervention experiences different risks of requiring transfusion, that those risks can be identified, and that a plan can be implemented to address them. A surgeon's decision to transfuse should be based on physiologic assessment of the patient's response to anemia and not on an arbitrary number ("transfusion trigger"). Intervention strategies can be applied preoperatively, intraoperatively, and postoperatively. Patient-specific planning allows for the appropriate use of patient, hospital, and system resources, ensuring that the consequences of anemia are minimized and that the patient's recovery process is optimized.
NASA Astrophysics Data System (ADS)
Vo, Thanh Tu; Chen, Xiaopeng; Shen, Weixiang; Kapoor, Ajay
2015-01-01
In this paper, a new charging strategy of lithium-polymer batteries (LiPBs) has been proposed based on the integration of Taguchi method (TM) and state of charge estimation. The TM is applied to search an optimal charging current pattern. An adaptive switching gain sliding mode observer (ASGSMO) is adopted to estimate the SOC which controls and terminates the charging process. The experimental results demonstrate that the proposed charging strategy can successfully charge the same types of LiPBs with different capacities and cycle life. The proposed charging strategy also provides much shorter charging time, narrower temperature variation and slightly higher energy efficiency than the equivalent constant current constant voltage charging method.
Dispositional optimism and coping strategies in patients with a kidney transplant.
Costa-Requena, Gemma; Cantarell-Aixendri, M Carmen; Parramon-Puig, Gemma; Serón-Micas, Daniel
2014-01-01
Dispositional optimism is a personal resource that determines the coping style and adaptive response to chronic diseases. The aim of this study was to assess the correlations between dispositional optimism and coping strategies in patients with recent kidney transplantation and evaluate the differences in the use of coping strategies in accordance with the level of dispositional optimism. Patients who were hospitalised in the nephrology department were selected consecutively after kidney transplantation was performed. The evaluation instruments were the Life Orientation Test-Revised, and the Coping Strategies Inventory. The data were analysed with central tendency measures, correlation analyses and means were compared using Student’s t-test. 66 patients with a kidney transplant participated in the study. The coping styles that characterised patients with a recent kidney transplantation were Social withdrawal and Problem avoidance. Correlations between dispositional optimism and coping strategies were significant in a positive direction in Problem-solving (p<.05) and Cognitive restructuring (p<.01), and inversely with Self-criticism (p<.05). Differences in dispositional optimism created significant differences in the Self-Criticism dimension (t=2.58; p<.01). Dispositional optimism scores provide differences in coping responses after kidney transplantation. Moreover, coping strategies may influence the patient’s perception of emotional wellbeing after kidney transplantation.
Vector-borne disease intelligence: strategies to deal with disease burden and threats.
Braks, Marieta; Medlock, Jolyon M; Hubalek, Zdenek; Hjertqvist, Marika; Perrin, Yvon; Lancelot, Renaud; Duchyene, Els; Hendrickx, Guy; Stroo, Arjan; Heyman, Paul; Sprong, Hein
2014-01-01
Owing to the complex nature of vector-borne diseases (VBDs), whereby monitoring of human case patients does not suffice, public health authorities experience challenges in surveillance and control of VBDs. Knowledge on the presence and distribution of vectors and the pathogens that they transmit is vital to the risk assessment process to permit effective early warning, surveillance, and control of VBDs. Upon accepting this reality, public health authorities face an ever-increasing range of possible surveillance targets and an associated prioritization process. Here, we propose a comprehensive approach that integrates three surveillance strategies: population-based surveillance, disease-based surveillance, and context-based surveillance for EU member states to tailor the best surveillance strategy for control of VBDs in their geographic region. By classifying the surveillance structure into five different contexts, we hope to provide guidance in optimizing surveillance efforts. Contextual surveillance strategies for VBDs entail combining organization and data collection approaches that result in disease intelligence rather than a preset static structure.
Vector-Borne Disease Intelligence: Strategies to Deal with Disease Burden and Threats
Braks, Marieta; Medlock, Jolyon M.; Hubalek, Zdenek; Hjertqvist, Marika; Perrin, Yvon; Lancelot, Renaud; Duchyene, Els; Hendrickx, Guy; Stroo, Arjan; Heyman, Paul; Sprong, Hein
2014-01-01
Owing to the complex nature of vector-borne diseases (VBDs), whereby monitoring of human case patients does not suffice, public health authorities experience challenges in surveillance and control of VBDs. Knowledge on the presence and distribution of vectors and the pathogens that they transmit is vital to the risk assessment process to permit effective early warning, surveillance, and control of VBDs. Upon accepting this reality, public health authorities face an ever-increasing range of possible surveillance targets and an associated prioritization process. Here, we propose a comprehensive approach that integrates three surveillance strategies: population-based surveillance, disease-based surveillance, and context-based surveillance for EU member states to tailor the best surveillance strategy for control of VBDs in their geographic region. By classifying the surveillance structure into five different contexts, we hope to provide guidance in optimizing surveillance efforts. Contextual surveillance strategies for VBDs entail combining organization and data collection approaches that result in disease intelligence rather than a preset static structure. PMID:25566522
Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered
2011-01-01
Background Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.
Mathiassen, Svend Erik; Bolin, Kristian
2011-05-21
Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods.For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.
The importance of functional form in optimal control solutions of problems in population dynamics
Runge, M.C.; Johnson, F.A.
2002-01-01
Optimal control theory is finding increased application in both theoretical and applied ecology, and it is a central element of adaptive resource management. One of the steps in an adaptive management process is to develop alternative models of system dynamics, models that are all reasonable in light of available data, but that differ substantially in their implications for optimal control of the resource. We explored how the form of the recruitment and survival functions in a general population model for ducks affected the patterns in the optimal harvest strategy, using a combination of analytical, numerical, and simulation techniques. We compared three relationships between recruitment and population density (linear, exponential, and hyperbolic) and three relationships between survival during the nonharvest season and population density (constant, logistic, and one related to the compensatory harvest mortality hypothesis). We found that the form of the component functions had a dramatic influence on the optimal harvest strategy and the ultimate equilibrium state of the system. For instance, while it is commonly assumed that a compensatory hypothesis leads to higher optimal harvest rates than an additive hypothesis, we found this to depend on the form of the recruitment function, in part because of differences in the optimal steady-state population density. This work has strong direct consequences for those developing alternative models to describe harvested systems, but it is relevant to a larger class of problems applying optimal control at the population level. Often, different functional forms will not be statistically distinguishable in the range of the data. Nevertheless, differences between the functions outside the range of the data can have an important impact on the optimal harvest strategy. Thus, development of alternative models by identifying a single functional form, then choosing different parameter combinations from extremes on the likelihood profile may end up producing alternatives that do not differ as importantly as if different functional forms had been used. We recommend that biological knowledge be used to bracket a range of possible functional forms, and robustness of conclusions be checked over this range.
Simple Example of Backtest Overfitting (SEBO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
In the field of mathematical finance, a "backtest" is the usage of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions or even billions of variations of a proposed strategy, and pick the best performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e. on another dataset), because the parameters of the invest strategy have been oversit to the in-sample data, a situation known as "backtestmore » overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given a input random walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. Then the tool tests the resulting "optimal" strategy on a second random walk time series. In most runs using our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.« less
NASA Technical Reports Server (NTRS)
Rarig, P. L.
1980-01-01
A program to calculate upwelling infrared radiation was modified to operate efficiently on the STAR-100. The modified software processes specific test cases significantly faster than the initial STAR-100 code. For example, a midlatitude summer atmospheric model is executed in less than 2% of the time originally required on the STAR-100. Furthermore, the optimized program performs extra operations to save the calculated absorption coefficients. Some of the advantages and pitfalls of virtual memory and vector processing are discussed along with strategies used to avoid loss of accuracy and computing power. Results from the vectorized code, in terms of speed, cost, and relative error with respect to serial code solutions are encouraging.
Chen, Yantian; Bloemen, Veerle; Impens, Saartje; Moesen, Maarten; Luyten, Frank P; Schrooten, Jan
2011-12-01
Cell seeding into scaffolds plays a crucial role in the development of efficient bone tissue engineering constructs. Hence, it becomes imperative to identify the key factors that quantitatively predict reproducible and efficient seeding protocols. In this study, the optimization of a cell seeding process was investigated using design of experiments (DOE) statistical methods. Five seeding factors (cell type, scaffold type, seeding volume, seeding density, and seeding time) were selected and investigated by means of two response parameters, critically related to the cell seeding process: cell seeding efficiency (CSE) and cell-specific viability (CSV). In addition, cell spatial distribution (CSD) was analyzed by Live/Dead staining assays. Analysis identified a number of statistically significant main factor effects and interactions. Among the five seeding factors, only seeding volume and seeding time significantly affected CSE and CSV. Also, cell and scaffold type were involved in the interactions with other seeding factors. Within the investigated ranges, optimal conditions in terms of CSV and CSD were obtained when seeding cells in a regular scaffold with an excess of medium. The results of this case study contribute to a better understanding and definition of optimal process parameters for cell seeding. A DOE strategy can identify and optimize critical process variables to reduce the variability and assists in determining which variables should be carefully controlled during good manufacturing practice production to enable a clinically relevant implant.
An adaptive sharing elitist evolution strategy for multiobjective optimization.
Costa, Lino; Oliveira, Pedro
2003-01-01
Almost all approaches to multiobjective optimization are based on Genetic Algorithms (GAs), and implementations based on Evolution Strategies (ESs) are very rare. Thus, it is crucial to investigate how ESs can be extended to multiobjective optimization, since they have, in the past, proven to be powerful single objective optimizers. In this paper, we present a new approach to multiobjective optimization, based on ESs. We call this approach the Multiobjective Elitist Evolution Strategy (MEES) as it incorporates several mechanisms, like elitism, that improve its performance. When compared with other algorithms, MEES shows very promising results in terms of performance.
Optimization and resilience in natural resources management
Williams, Byron K.; Johnson, Fred A.
2015-01-01
We consider the putative tradeoff between optimization and resilience in the management of natural resources, using a framework that incorporates different sources of uncertainty that are common in natural resources management. We address one-time decisions, and then expand the decision context to the more complex problem of iterative decision making. For both cases we focus on two key sources of uncertainty: partial observability of system state and uncertainty as to system dynamics. Optimal management strategies will vary considerably depending on the timeframe being considered and the amount and quality of information that is available to characterize system features and project the consequences of potential decisions. But in all cases an optimal decision making framework, if properly identified and focused, can be useful in recognizing sound decisions. We argue that under the conditions of deep uncertainty that characterize many resource systems, an optimal decision process that focuses on robustness does not automatically induce a loss of resilience.
Topology synthesis and size optimization of morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku
This research demonstrates a novel topology and size optimization methodology for synthesis of distributed actuation systems with specific applications to morphing air vehicle structures. The main emphasis is placed on the topology and size optimization problem formulations and the development of computational modeling concepts. The analysis model is developed to meet several important criteria: It must allow a rigid-body displacement, as well as a variation in planform area, with minimum strain on structural members while retaining acceptable numerical stability for finite element analysis. Topology optimization is performed on a semi-ground structure with design variables that control the system configuration. In effect, the optimization process assigns morphing members as "soft" elements, non-morphing load-bearing members as "stiff' elements, and non-existent members as "voids." The optimization process also determines the optimum actuator placement, where each actuator is represented computationally by equal and opposite nodal forces with soft axial stiffness. In addition, the configuration of attachments that connect the morphing structure to a non-morphing structure is determined simultaneously. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as meaningfulness of the formulations. Extensions and enhancements to the initial concept and problem formulations are made to accommodate multiple-configuration definitions. In addition, the principal issues on the external-load dependency and the reversibility of a design, as well as the appropriate selection of a reference configuration, are addressed in the research. The methodology to control actuator distributions and concentrations is also discussed. Finally, the strategy to transfer the topology solution to the sizing optimization is developed and cross-sectional areas of existent structural members are optimized under applied aerodynamic loads. That is, the optimization process is implemented in sequential order: The actuation system layout is first determined through multi-disciplinary topology optimization process, and then the thickness or cross-sectional area of each existent member is optimized under given constraints and boundary conditions. Sample problems are solved to demonstrate the potential capabilities of the presented methodology. The research demonstrates an innovative structural design procedure from a computational perspective and opens new insights into the potential design requirements and characteristics of morphing structures.
Optimization model of vaccination strategy for dengue transmission
NASA Astrophysics Data System (ADS)
Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.
2014-02-01
Dengue fever is emerging tropical and subtropical disease caused by dengue virus infection. The vaccination should be done as a prevention of epidemic in population. The host-vector model are modified with consider a vaccination factor to prevent the occurrence of epidemic dengue in a population. An optimal vaccination strategy using non-linear objective function was proposed. The genetic algorithm programming techniques are combined with fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, the appropriate vaccination strategy by using the optimal minimum cost function which can reduce the number of epidemic was analyzed. The numerical simulation for some specific cases of vaccination strategy is shown.
Optimal resource allocation strategy for two-layer complex networks
NASA Astrophysics Data System (ADS)
Ma, Jinlong; Wang, Lixin; Li, Sufeng; Duan, Congwen; Liu, Yu
2018-02-01
We study the traffic dynamics on two-layer complex networks, and focus on its delivery capacity allocation strategy to enhance traffic capacity measured by the critical value Rc. With the limited packet-delivering capacity, we propose a delivery capacity allocation strategy which can balance the capacities of non-hub nodes and hub nodes to optimize the data flow. With the optimal value of parameter αc, the maximal network capacity is reached because most of the nodes have shared the appropriate delivery capacity by the proposed delivery capacity allocation strategy. Our work will be beneficial to network service providers to design optimal networked traffic dynamics.
Bio-Mimic Optimization Strategies in Wireless Sensor Networks: A Survey
Adnan, Md. Akhtaruzzaman; Razzaque, Mohammd Abdur; Ahmed, Ishtiaque; Isnin, Ismail Fauzi
2014-01-01
For the past 20 years, many authors have focused their investigations on wireless sensor networks. Various issues related to wireless sensor networks such as energy minimization (optimization), compression schemes, self-organizing network algorithms, routing protocols, quality of service management, security, energy harvesting, etc., have been extensively explored. The three most important issues among these are energy efficiency, quality of service and security management. To get the best possible results in one or more of these issues in wireless sensor networks optimization is necessary. Furthermore, in number of applications (e.g., body area sensor networks, vehicular ad hoc networks) these issues might conflict and require a trade-off amongst them. Due to the high energy consumption and data processing requirements, the use of classical algorithms has historically been disregarded. In this context contemporary researchers started using bio-mimetic strategy-based optimization techniques in the field of wireless sensor networks. These techniques are diverse and involve many different optimization algorithms. As far as we know, most existing works tend to focus only on optimization of one specific issue of the three mentioned above. It is high time that these individual efforts are put into perspective and a more holistic view is taken. In this paper we take a step in that direction by presenting a survey of the literature in the area of wireless sensor network optimization concentrating especially on the three most widely used bio-mimetic algorithms, namely, particle swarm optimization, ant colony optimization and genetic algorithm. In addition, to stimulate new research and development interests in this field, open research issues, challenges and future research directions are highlighted. PMID:24368702
Energy-efficient hierarchical processing in the network of wireless intelligent sensors (WISE)
NASA Astrophysics Data System (ADS)
Raskovic, Dejan
Sensor network nodes have benefited from technological advances in the field of wireless communication, processing, and power sources. However, the processing power of microcontrollers is often not sufficient to perform sophisticated processing, while the power requirements of digital signal processing boards or handheld computers are usually too demanding for prolonged system use. We are matching the intrinsic hierarchical nature of many digital signal-processing applications with the natural hierarchy in distributed wireless networks, and building the hierarchical system of wireless intelligent sensors. Our goal is to build a system that will exploit the hierarchical organization to optimize the power consumption and extend battery life for the given time and memory constraints, while providing real-time processing of sensor signals. In addition, we are designing our system to be able to adapt to the current state of the environment, by dynamically changing the algorithm through procedure replacement. This dissertation presents the analysis of hierarchical environment and methods for energy profiling used to evaluate different system design strategies, and to optimize time-effective and energy-efficient processing.
Williams, Perry J.; Kendall, William L.
2017-01-01
Choices in ecological research and management are the result of balancing multiple, often competing, objectives. Multi-objective optimization (MOO) is a formal decision-theoretic framework for solving multiple objective problems. MOO is used extensively in other fields including engineering, economics, and operations research. However, its application for solving ecological problems has been sparse, perhaps due to a lack of widespread understanding. Thus, our objective was to provide an accessible primer on MOO, including a review of methods common in other fields, a review of their application in ecology, and a demonstration to an applied resource management problem.A large class of methods for solving MOO problems can be separated into two strategies: modelling preferences pre-optimization (the a priori strategy), or modelling preferences post-optimization (the a posteriori strategy). The a priori strategy requires describing preferences among objectives without knowledge of how preferences affect the resulting decision. In the a posteriori strategy, the decision maker simultaneously considers a set of solutions (the Pareto optimal set) and makes a choice based on the trade-offs observed in the set. We describe several methods for modelling preferences pre-optimization, including: the bounded objective function method, the lexicographic method, and the weighted-sum method. We discuss modelling preferences post-optimization through examination of the Pareto optimal set. We applied each MOO strategy to the natural resource management problem of selecting a population target for cackling goose (Branta hutchinsii minima) abundance. Cackling geese provide food security to Native Alaskan subsistence hunters in the goose's nesting area, but depredate crops on private agricultural fields in wintering areas. We developed objective functions to represent the competing objectives related to the cackling goose population target and identified an optimal solution first using the a priori strategy, and then by examining trade-offs in the Pareto set using the a posteriori strategy. We used four approaches for selecting a final solution within the a posteriori strategy; the most common optimal solution, the most robust optimal solution, and two solutions based on maximizing a restricted portion of the Pareto set. We discuss MOO with respect to natural resource management, but MOO is sufficiently general to cover any ecological problem that contains multiple competing objectives that can be quantified using objective functions.
Zhang, Xiang; Dunlow, Ryan; Blackman, Burchelle N; Swenson, Rolf E
2018-05-15
Traditional radiosynthetic optimization faces the challenges of high radiation exposure, cost, and inability to perform serial reactions due to tracer decay. To accelerate tracer development, we have developed a strategy to simulate radioactive 18 F-syntheses by using tracer-level (nanomolar) non-radioactive 19 F-reagents and LC-MS/MS analysis. The methodology was validated with fallypride synthesis under tracer-level 19 F-conditions, which showed reproducible and comparable results with radiosynthesis, and proved the feasibility of this process. Using this approach, the synthesis of [ 18 F]MDL100907 was optimized under 19 F-conditions with greatly improved yield. The best conditions were successfully transferred to radiosynthesis. A radiochemical yield of 19% to 22% was achieved with the radiochemical purity >99% and the molar activity 38.8 to 53.6 GBq/ μmol (n = 3). The tracer-level 19 F-approach provides a high-throughput and cost-effective process to optimize radiosynthesis with reduced radiation exposure. This new method allows medicinal and synthetic chemists to optimize radiolabeling conditions without the need to use radioactivity. Copyright © 2018 John Wiley & Sons, Ltd.
Efficient community-based control strategies in adaptive networks
NASA Astrophysics Data System (ADS)
Yang, Hui; Tang, Ming; Zhang, Hai-Feng
2012-12-01
Most studies on adaptive networks concentrate on the properties of steady state, but neglect transient dynamics. In this study, we pay attention to the emergence of community structure in the transient process and the effects of community-based control strategies on epidemic spreading. First, by normalizing the modularity, we investigate the evolution of community structure during the transient process, and find that a strong community structure is induced by the rewiring mechanism in the early stage of epidemic dynamics, which, remarkably, delays the outbreak of disease. We then study the effects of control strategies started at different stages on the prevalence. Both immunization and quarantine strategies indicate that it is not ‘the earlier, the better’ for the implementation of control measures. And the optimal control effect is obtained if control measures can be efficiently implemented in the period of a strong community structure. For the immunization strategy, immunizing the susceptible nodes on susceptible-infected links and immunizing susceptible nodes randomly have similar control effects. However, for the quarantine strategy, quarantining the infected nodes on susceptible-infected links can yield a far better result than quarantining infected nodes randomly. More significantly, the community-based quarantine strategy performs better than the community-based immunization strategy. This study may shed new light on the forecast and the prevention of epidemics among humans.
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Intelligent fault recognition strategy based on adaptive optimized multiple centers
NASA Astrophysics Data System (ADS)
Zheng, Bo; Li, Yan-Feng; Huang, Hong-Zhong
2018-06-01
For the recognition principle based optimized single center, one important issue is that the data with nonlinear separatrix cannot be recognized accurately. In order to solve this problem, a novel recognition strategy based on adaptive optimized multiple centers is proposed in this paper. This strategy recognizes the data sets with nonlinear separatrix by the multiple centers. Meanwhile, the priority levels are introduced into the multi-objective optimization, including recognition accuracy, the quantity of optimized centers, and distance relationship. According to the characteristics of various data, the priority levels are adjusted to ensure the quantity of optimized centers adaptively and to keep the original accuracy. The proposed method is compared with other methods, including support vector machine (SVM), neural network, and Bayesian classifier. The results demonstrate that the proposed strategy has the same or even better recognition ability on different distribution characteristics of data.
Strategies for Primary Care Stakeholders to Improve Electronic Health Records (EHRs).
Olayiwola, J Nwando; Rubin, Ashley; Slomoff, Theo; Woldeyesus, Tem; Willard-Grace, Rachel
2016-01-01
The use of electronic health records (EHRs) and the vendors that develop them have increased exponentially in recent years. While there continues to emerge literature on the challenges EHRs have created related to primary care provider satisfaction and workflow, there is sparse literature on the perspective of the EHR vendors themselves. We examined the role of EHR vendors in optimizing primary care practice through a qualitative study of vendor leadership and developers representing 8 companies. We found that EHR vendors apply a range of strategies to elicit feedback from their clinical users and to engage selected users in their development and design process, but priorities are heavily influenced by the macroenvironment and government regulations. To improve the "marriage" between primary care and the EHR vendor community, we propose 6 strategies that may be most impactful for primary care stakeholders seeking to influence EHR development processes. © Copyright 2016 by the American Board of Family Medicine.
A Class of Solvable Stopping Games
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alvarez, Luis H. R.
We consider a class of Dynkin games in the case where the underlying process evolves according to a one-dimensional but otherwise general diffusion. We establish general conditions under which both the value and the saddle point equilibrium exist and under which the exercise boundaries characterizing the saddle point strategy can be explicitly characterized in terms of a pair of standard first order necessary conditions for optimality. We also analyze those cases where an extremal pair of boundaries exists and investigate the overall impact of increased volatility on the equilibrium stopping strategies and their values.
Telange, Darshan R; Patil, Arun T; Pethe, Anil M; Fegade, Harshal; Anand, Sridhar; Dave, Vivek S
2017-10-15
The apigenin-phospholipid phytosome (APLC) was developed to improve the aqueous solubility, dissolution, in vivo bioavailability, and antioxidant activity of apigenin. The APLC synthesis was guided by a full factorial design strategy, incorporating specific formulation and process variables to deliver an optimized product. The design-optimized formulation was assayed for aqueous solubility, in vitro dissolution, pharmacokinetics, and antioxidant activity. The pharmacological evaluation was carried out by assessing its effects on carbon tetrachloride-induced elevation of liver function marker enzymes in a rat model. The antioxidant activity was assessed by studying its effects on the liver antioxidant marker enzymes. The developed model was validated using the design-optimized levels of formulation and process variables. The physical-chemical characterization confirmed the formation of phytosomes. The optimized formulation demonstrated over 36-fold higher aqueous solubility of apigenin, compared to that of pure apigenin. The formulation also exhibited a significantly higher rate and extent of apigenin release in dissolution studies. The pharmacokinetic analysis revealed a significant enhancement in the oral bioavailability of apigenin from the prepared formulation, compared to pure apigenin. The liver function tests indicated that the prepared phytosome showed a significantly improved restoration of all carbon tetrachloride-elevated rat liver function marker enzymes. The prepared formulation also exhibited antioxidant potential by significantly increasing the levels of glutathione, superoxide dismutase, catalase, and decreasing the levels of lipid peroxidase. The study shows that phospholipid-based phytosome is a promising and viable strategy for improving the delivery of apigenin and similar phytoconstituents with low aqueous solubility. Copyright © 2016 Elsevier B.V. All rights reserved.
A particle swarm optimization variant with an inner variable learning strategy.
Wu, Guohua; Pedrycz, Witold; Ma, Manhao; Qiu, Dishan; Li, Haifeng; Liu, Jin
2014-01-01
Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems with high dimensionality and complex landscape. In this paper, we integrate some problem-oriented knowledge into the design of a certain PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle to inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles whereas the trap jumping out strategy is adaptive in its nature. Experimental simulations completed for some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of the PSO-IVL stresses a usefulness of augmenting evolutionary algorithms by problem-oriented domain knowledge.
NASA Astrophysics Data System (ADS)
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criterions indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi objective optimization in the parameter estimation process which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3 where an interactive visual and metric based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit measure / metric based interactive framework for identification of a small subset (typically less than 10) of meaningful and diverse set of calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of flow parameters of a SWAT model, (Soil and Water Assessment Tool) designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
Library design practices for success in lead generation with small molecule libraries.
Goodnow, R A; Guba, W; Haap, W
2003-11-01
The generation of novel structures amenable to rapid and efficient lead optimization comprises an emerging strategy for success in modern drug discovery. Small molecule libraries of sufficient size and diversity to increase the chances of discovery of novel structures make the high throughput synthesis approach the method of choice for lead generation. Despite an industry trend for smaller, more focused libraries, the need to generate novel lead structures makes larger libraries a necessary strategy. For libraries of a several thousand or more members, solid phase synthesis approaches are the most suitable. While the technology and chemistry necessary for small molecule library synthesis continue to advance, success in lead generation requires rigorous consideration in the library design process to ensure the synthesis of molecules possessing the proper characteristics for subsequent lead optimization. Without proper selection of library templates and building blocks, solid phase synthesis methods often generate molecules which are too heavy, too lipophilic and too complex to be useful for lead optimization. The appropriate filtering of virtual library designs with multiple computational tools allows the generation of information-rich libraries within a drug-like molecular property space. An understanding of the hit-to-lead process provides a practical guide to molecular design characteristics. Examples of leads generated from library approaches also provide a benchmarking of successes as well as aspects for continued development of library design practices.
Multidisciplinary Design Optimization of a Full Vehicle with High Performance Computing
NASA Technical Reports Server (NTRS)
Yang, R. J.; Gu, L.; Tho, C. H.; Sobieszczanski-Sobieski, Jaroslaw
2001-01-01
Multidisciplinary design optimization (MDO) of a full vehicle under the constraints of crashworthiness, NVH (Noise, Vibration and Harshness), durability, and other performance attributes is one of the imperative goals for automotive industry. However, it is often infeasible due to the lack of computational resources, robust simulation capabilities, and efficient optimization methodologies. This paper intends to move closer towards that goal by using parallel computers for the intensive computation and combining different approximations for dissimilar analyses in the MDO process. The MDO process presented in this paper is an extension of the previous work reported by Sobieski et al. In addition to the roof crush, two full vehicle crash modes are added: full frontal impact and 50% frontal offset crash. Instead of using an adaptive polynomial response surface method, this paper employs a DOE/RSM method for exploring the design space and constructing highly nonlinear crash functions. Two NMO strategies are used and results are compared. This paper demonstrates that with high performance computing, a conventionally intractable real world full vehicle multidisciplinary optimization problem considering all performance attributes with large number of design variables become feasible.
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA.
A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA. PMID:24892059
Towards inverse modeling of turbidity currents: The inverse lock-exchange problem
NASA Astrophysics Data System (ADS)
Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison
2011-04-01
A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for the purpose of turbidite modeling so far is hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.
Optimized adipose tissue engineering strategy based on a neo-mechanical processing method.
He, Yunfan; Lin, Maohui; Wang, Xuecen; Guan, Jingyan; Dong, Ziqing; Feng, Lu; Xing, Malcolm; Feng, Chuanbo; Li, Xiaojian
2018-05-26
Decellularized adipose tissue (DAT) represents a promising scaffold for adipose tissue engineering. However, the unique and prolonged lipid removal process required for adipose tissue can damage extracellular matrix (ECM) constituents. Moreover, inadequate vascularization limits the recellularization of DAT in vivo. We proposed a neo-mechanical protocol for rapidly breaking adipocytes and removing lipid content from adipose tissue. The lipid-depleted adipose tissue was then subjected to a fast and mild decellularization to fabricate high-quality DAT (M-DAT). Adipose liquid extract (ALE) derived from this mechanical process was collected and incorporated into M-DAT to further optimize in vivo recellularization. Ordinary DAT was fabricated and served as a control. This developed strategy was evaluated based on decellularization efficiency, ECM quality, and recellularization efficiency. Angiogenic factor components and angiogenic potential of ALE were evaluated in vivo and in vitro. M-DAT achieved the same decellularization efficiency, but exhibited better retention of ECM components and recellularization, compared to those with ordinary DAT. Protein quantification revealed considerable levels of angiogenic factors (basic fibroblast growth factor, epidermal growth factor, transforming growth factor-β1, and vascular endothelial growth factor) in ALE. ALE promoted tube formation in vitro and induced intense angiogenesis in M-DAT in vivo; furthermore, higher expression of the adipogenic factor PPARγ and greater numbers of adipocytes were evident following ALE treatment, compared to those in the M-DAT group. Mechanical processing of adipose tissue led to the production of high-quality M-DAT and angiogenic factor-enriched ALE. The combination of ALE and M-DAT could be a promising strategy for engineered adipose tissue construction. This article is protected by copyright. All rights reserved. © 2018 by the Wound Healing Society.
Demystifying process mapping: a key step in neurosurgical quality improvement initiatives.
McLaughlin, Nancy; Rodstein, Jennifer; Burke, Michael A; Martin, Neil A
2014-08-01
Reliable delivery of optimal care can be challenging for care providers. Health care leaders have integrated various business tools to assist them and their teams in ensuring consistent delivery of safe and top-quality care. The cornerstone to all quality improvement strategies is the detailed understanding of the current state of a process, captured by process mapping. Process mapping empowers caregivers to audit how they are currently delivering care to subsequently strategically plan improvement initiatives. As a community, neurosurgery has clearly shown dedication to enhancing patient safety and delivering quality care. A care redesign strategy named NERVS (Neurosurgery Enhanced Recovery after surgery, Value, and Safety) is currently being developed and piloted within our department. Through this initiative, a multidisciplinary team led by a clinician neurosurgeon has process mapped the way care is currently being delivered throughout the entire episode of care. Neurosurgeons are becoming leaders in quality programs, and their education on the quality improvement strategies and tools is essential. The authors present a comprehensive review of process mapping, demystifying its planning, its building, and its analysis. The particularities of using process maps, initially a business tool, in the health care arena are discussed, and their specific use in an academic neurosurgical department is presented.
Quintero, Catherine; Kariv, Ilona
2009-06-01
To meet the needs of the increasingly rapid and parallelized lead optimization process, a fully integrated local compound storage and liquid handling system was designed and implemented to automate the generation of assay-ready plates directly from newly submitted and cherry-picked compounds. A key feature of the system is the ability to create project- or assay-specific compound-handling methods, which provide flexibility for any combination of plate types, layouts, and plate bar-codes. Project-specific workflows can be created by linking methods for processing new and cherry-picked compounds and control additions to produce a complete compound set for both biological testing and local storage in one uninterrupted workflow. A flexible cherry-pick approach allows for multiple, user-defined strategies to select the most appropriate replicate of a compound for retesting. Examples of custom selection parameters include available volume, compound batch, and number of freeze/thaw cycles. This adaptable and integrated combination of software and hardware provides a basis for reducing cycle time, fully automating compound processing, and ultimately increasing the rate at which accurate, biologically relevant results can be produced for compounds of interest in the lead optimization process.
NASA Astrophysics Data System (ADS)
Langan, John
1996-10-01
The predominance of multi-level metalization schemes in advanced integrated circuit manufacturing has greatly increased the importance of plasma enhanced chemical vapor deposition (PECVD) and in turn in-situ plasma chamber cleaning. In order to maintain the highest throughput for these processes the clean step must be as short as possible. In addition, there is an increasing desire to minimize the fluorinated gas usage during the clean, while maximizing its efficiency, not only to achieve lower costs, but also because many of the gases used in this process are global warming compounds. We have studied the fundamental properties of discharges of NF_3, CF_4, and C_2F6 under conditions relevant to chamber cleaning in the GEC rf reference cell. Using electrical impedance analysis and optical emission spectroscopy we have determined that the electronegative nature of these discharges defines the optimal processing conditions by controlling the power coupling efficiency and mechanisms of power dissipation in the discharge. Examples will be presented where strategies identified by these studies have been used to optimize actual manufacturing chamber clean processes. (This work was performed in collaboration with Mark Sobolewski, National Institute of Standards and Technology, and Brian Felker, Air Products and Chemicals, Inc.)
Calibration and simulation of two large wastewater treatment plants operated for nutrient removal.
Ferrer, J; Morenilla, J J; Bouzas, A; García-Usach, F
2004-01-01
Control and optimisation of plant processes has become a priority for WWTP managers. The calibration and verification of a mathematical model provides an important tool for the investigation of advanced control strategies that may assist in the design or optimization of WWTPs. This paper describes the calibration of the ASM2d model for two full scale biological nitrogen and phosphorus removal plants in order to characterize the biological process and to upgrade the plants' performance. Results from simulation showed a good correspondence with experimental data demonstrating that the model and the calibrated parameters were able to predict the behaviour of both WWTPs. Once the calibration and simulation process was finished, a study for each WWTP was done with the aim of improving its performance. Modifications focused on reactor configuration and operation strategies were proposed.
Symbiosis-Based Alternative Learning Multi-Swarm Particle Swarm Optimization.
Niu, Ben; Huang, Huali; Tan, Lijing; Duan, Qiqi
2017-01-01
Inspired by the ideas from the mutual cooperation of symbiosis in natural ecosystem, this paper proposes a new variant of PSO, named Symbiosis-based Alternative Learning Multi-swarm Particle Swarm Optimization (SALMPSO). A learning probability to select one exemplar out of the center positions, the local best position, and the historical best position including the experience of internal and external multiple swarms, is used to keep the diversity of the population. Two different levels of social interaction within and between multiple swarms are proposed. In the search process, particles not only exchange social experience with others that are from their own sub-swarms, but also are influenced by the experience of particles from other fellow sub-swarms. According to the different exemplars and learning strategy, this model is instantiated as four variants of SALMPSO and a set of 15 test functions are conducted to compare with some variants of PSO including 10, 30 and 50 dimensions, respectively. Experimental results demonstrate that the alternative learning strategy in each SALMPSO version can exhibit better performance in terms of the convergence speed and optimal values on most multimodal functions in our simulation.
Resolving structural uncertainty in natural resources management using POMDP approaches
Williams, B.K.
2011-01-01
In recent years there has been a growing focus on the uncertainties of natural resources management, and the importance of accounting for uncertainty in assessing management effectiveness. This paper focuses on uncertainty in resource management in terms of discrete-state Markov decision processes (MDP) under structural uncertainty and partial observability. It describes the treatment of structural uncertainty with approaches developed for partially observable resource systems. In particular, I show how value iteration for partially observable MDPs (POMDP) can be extended to structurally uncertain MDPs. A key difference between these process classes is that structurally uncertain MDPs require the tracking of system state as well as a probability structure for the structure uncertainty, whereas with POMDPs require only a probability structure for the observation uncertainty. The added complexity of the optimization problem under structural uncertainty is compensated by reduced dimensionality in the search for optimal strategy. A solution algorithm for structurally uncertain processes is outlined for a simple example in conservation biology. By building on the conceptual framework developed for POMDPs, natural resource analysts and decision makers who confront structural uncertainties in natural resources can take advantage of the rapid growth in POMDP methods and approaches, and thereby produce better conservation strategies over a larger class of resource problems. ?? 2011.
Biological enhancement of graft-tunnel healing in anterior cruciate ligament reconstruction
SACCOMANNO, MARISTELLA F.; CAPASSO, LUIGI; FRESTA, LUCA; MILANO, GIUSEPPE
2016-01-01
The sites where graft healing occurs within the bone tunnel and where the intra-articular ligamentization process takes place are the two most important sites of biological incorporation after anterior cruciate ligament (ACL) reconstruction, since they help to determine the mechanical behavior of the femur-ACL graft-tibia complex. Graft-tunnel healing is a complex process influenced by several factors, such as type of graft, preservation of remnants, bone quality, tunnel length and placement, fixation techniques and mechanical stress. In recent years, numerous experimental and clinical studies have been carried out to evaluate potential strategies designed to enhance and optimize the biological environment of the graft-tunnel interface. Modulation of inflammation, tissue engineering and gene transfer techniques have been applied in order to obtain a direct-type fibrocartilaginous insertion of the ACL graft, similar to that of native ligament, and to accelerate the healing process of tendon grafts within the bone tunnel. Although animal studies have given encouraging results, clinical studies are lacking and their results do not really support the use of the various strategies in clinical practice. Further investigations are therefore needed to optimize delivery techniques, therapeutic concentrations, maintenance of therapeutic effects over time, and to reduce the risk of undesirable effects in clinical practice. PMID:27900311
Reynolds, Penny S; Tamariz, Francisco J; Barbee, Robert Wayne
2010-04-01
Exploratory pilot studies are crucial to best practice in research but are frequently conducted without a systematic method for maximizing the amount and quality of information obtained. We describe the use of response surface regression models and simultaneous optimization methods to develop a rat model of hemorrhagic shock in the context of chronic hypertension, a clinically relevant comorbidity. Response surface regression model was applied to determine optimal levels of two inputs--dietary NaCl concentration (0.49%, 4%, and 8%) and time on the diet (4, 6, 8 weeks)--to achieve clinically realistic and stable target measures of systolic blood pressure while simultaneously maximizing critical oxygen delivery (a measure of vulnerability to hemorrhagic shock) and body mass M. Simultaneous optimization of the three response variables was performed though a dimensionality reduction strategy involving calculation of a single aggregate measure, the "desirability" function. Optimal conditions for inducing systolic blood pressure of 208 mmHg, critical oxygen delivery of 4.03 mL/min, and M of 290 g were determined to be 4% [NaCl] for 5 weeks. Rats on the 8% diet did not survive past 7 weeks. Response surface regression model and simultaneous optimization method techniques are commonly used in process engineering but have found little application to date in animal pilot studies. These methods will ensure both the scientific and ethical integrity of experimental trials involving animals and provide powerful tools for the development of novel models of clinically interacting comorbidities with shock.
Large deviations and portfolio optimization
NASA Astrophysics Data System (ADS)
Sornette, Didier
Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends by a general functional integral formulation. A major item is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return and the role of large deviations in multiplicative processes, and the different optimal strategies for the investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
Data analytics and optimization of an ice-based energy storage system for commercial buildings
Luo, Na; Hong, Tianzhen; Li, Hui; ...
2017-07-25
Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice–based TES system in a shopping mall, calculating the system’s performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential whenmore » the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by an optimization algorithm of Sequential Quadratic Programming, was developed to minimize the TES system’s operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy costs of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimization of the control strategy in a real TES system operation.« less
Data analytics and optimization of an ice-based energy storage system for commercial buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Na; Hong, Tianzhen; Li, Hui
Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice–based TES system in a shopping mall, calculating the system’s performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential whenmore » the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by an optimization algorithm of Sequential Quadratic Programming, was developed to minimize the TES system’s operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy costs of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimization of the control strategy in a real TES system operation.« less
Xu, Gang; Liang, Xifeng; Yao, Shuanbao; Chen, Dawei; Li, Zhiwei
2017-01-01
Minimizing the aerodynamic drag and the lift of the train coach remains a key issue for high-speed trains. With the development of computing technology and computational fluid dynamics (CFD) in the engineering field, CFD has been successfully applied to the design process of high-speed trains. However, developing a new streamlined shape for high-speed trains with excellent aerodynamic performance requires huge computational costs. Furthermore, relationships between multiple design variables and the aerodynamic loads are seldom obtained. In the present study, the Kriging surrogate model is used to perform a multi-objective optimization of the streamlined shape of high-speed trains, where the drag and the lift of the train coach are the optimization objectives. To improve the prediction accuracy of the Kriging model, the cross-validation method is used to construct the optimal Kriging model. The optimization results show that the two objectives are efficiently optimized, indicating that the optimization strategy used in the present study can greatly improve the optimization efficiency and meet the engineering requirements.
Ji, Julie L; Holmes, Emily A; Blackwell, Simon E
2017-01-01
Optimism is associated with positive outcomes across many health domains, from cardiovascular disease to depression. However, we know little about cognitive processes underlying optimism in psychopathology. The present study tested whether the ability to vividly imagine positive events in one's future was associated with dispositional optimism in a sample of depressed adults. Cross-sectional and longitudinal analyses were conducted, using baseline (all participants, N=150) and follow-up data (participants in the control condition only, N=63) from a clinical trial (Blackwell et al., 2015). Vividness of positive prospective imagery, assessed on a laboratory-administered task at baseline, was significantly associated with both current optimism levels at baseline and future (seven months later) optimism levels, including when controlling for potential confounds. Even when depressed, those individuals able to envision a brighter future were more optimistic, and regained optimism more quickly over time, than those less able to do so at baseline. Strategies to increase the vividness of positive prospective imagery may aid development of mental health interventions to boost optimism. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
Cerebellar Deep Nuclei Involvement in Cognitive Adaptation and Automaticity
ERIC Educational Resources Information Center
Callu, Delphine; Lopez, Joelle; El Massioui, Nicole
2013-01-01
To determine the role of the interpositus nuclei of cerebellum in rule-based learning and optimization processes, we studied (1) successive transfers of an initially acquired response rule in a cross maze and (2) behavioral strategies in learning a simple response rule in a T maze in interpositus lesioned rats (neurotoxic or electrolytic lesions).…
NASA Astrophysics Data System (ADS)
Yang, Peizhi; Tang, Qunwei; Ji, Chenming; Wang, Haobo
2015-12-01
Pursuit of an efficient strategy for quantum dot-sensitized photoanode has been a persistent objective for enhancing photovoltaic performances of quantum dot-sensitized solar cell (QDSC). We present here the fabrication of the indium sulfide (In2S3) quantum dot-sensitized titanium dioxide (TiO2) photoanode by combining successive ionic layer adsorption and reaction (SILAR) with solvothermal processes. The resultant QDSC consists of an In2S3 sensitized TiO2 photoanode, a liquid polysulfide electrolyte, and a Co0.85Se counter electrode. The optimized QDSC with photoanode prepared with the help of a SILAR method at 20 deposition cycles and solvothermal method yields a maximum power conversion efficiency of 1.39%.
Designing cost effective water demand management programs in Australia.
White, S B; Fane, S A
2002-01-01
This paper describes recent experience with integrated resource planning (IRP) and the application of least cost planning (LCP) for the evaluation of demand management strategies in urban water. Two Australian case studies, Sydney and Northern New South Wales (NSW) are used in illustration. LCP can determine the most cost effective means of providing water services or alternatively the cheapest forms of water conservation. LCP contrasts to a traditional approach of evaluation which looks only at means of increasing supply. Detailed investigation of water usage, known as end-use analysis, is required for LCP. End-use analysis allows both rigorous demand forecasting, and the development and evaluation of conservation strategies. Strategies include education campaigns, increasing water use efficiency and promoting wastewater reuse or rainwater tanks. The optimal mix of conservation strategies and conventional capacity expansion is identified based on levelised unit cost. IRP uses LCP in the iterative process, evaluating and assessing options, investing in selected options, measuring the results, and then re-evaluating options. Key to this process is the design of cost effective demand management programs. IRP however includes a range of parameters beyond least economic cost in the planning process and program designs, including uncertainty, benefit partitioning and implementation considerations.
NASA Astrophysics Data System (ADS)
Asoodeh, Mojtaba; Bagheripour, Parisa; Gholami, Amin
2015-06-01
Free fluid porosity and rock permeability, undoubtedly the most critical parameters of hydrocarbon reservoir, could be obtained by processing of nuclear magnetic resonance (NMR) log. Despite conventional well logs (CWLs), NMR logging is very expensive and time-consuming. Therefore, idea of synthesizing NMR log from CWLs would be of a great appeal among reservoir engineers. For this purpose, three optimization strategies are followed. Firstly, artificial neural network (ANN) is optimized by virtue of hybrid genetic algorithm-pattern search (GA-PS) technique, then fuzzy logic (FL) is optimized by means of GA-PS, and eventually an alternative condition expectation (ACE) model is constructed using the concept of committee machine to combine outputs of optimized and non-optimized FL and ANN models. Results indicated that optimization of traditional ANN and FL model using GA-PS technique significantly enhances their performances. Furthermore, the ACE committee of aforementioned models produces more accurate and reliable results compared with a singular model performing alone.
Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing
2015-01-01
An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from the likely local optima and guide the swarm to perform more efficient search. During the iterative optimization process of EB-QPSO, when criteria met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The new generated individuals with better fitness are selected to be the new personal best particles and global best particle to guide the swarm for further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively in all of the benchmark functions in terms of better global search capability and faster convergence rate.
Intelligent Space Tube Optimization for speeding ground water remedial design.
Kalwij, Ineke M; Peralta, Richard C
2008-01-01
An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space tube data, and optimizes using an advanced genetic algorithm (AGA) with NSS. Stage 1 speeds evaluating assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include space tube radius and number of strategies used to train NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the NSS-AGA stage 1 optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable for many heuristic optimization settings in which the numerical simulator is computationally intensive, and one would like to reduce that burden.
Optimal strategy for controlling the spread of Plasmodium Knowlesi malaria: Treatment and culling
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Hasan, Yahya Abu; Abdullah, Farah Aini
2015-05-01
Plasmodium Knowlesi malaria is a parasitic mosquito-borne disease caused by a eukaryotic protist of genus Plasmodium Knowlesi transmitted by mosquito, Anopheles leucosphyrus to human and macaques. We developed and analyzed a deterministic Mathematical model for the transmission of Plasmodium Knowlesi malaria in human and macaques. The optimal control theory is applied to investigate optimal strategies for controlling the spread of Plasmodium Knowlesi malaria using treatment and culling as control strategies. The conditions for optimal control of the Plasmodium Knowlesi malaria are derived using Pontryagin's Maximum Principle. Finally, numerical simulations suggested that the combination of the control strategies is the best way to control the disease in any community.
Zhang, Shuo; Zhang, Chengning; Han, Guangwei; Wang, Qinghui
2014-01-01
A dual-motor coupling-propulsion electric bus (DMCPEB) is modeled, and its optimal control strategy is studied in this paper. The necessary dynamic features of energy loss for subsystems is modeled. Dynamic programming (DP) technique is applied to find the optimal control strategy including upshift threshold, downshift threshold, and power split ratio between the main motor and auxiliary motor. Improved control rules are extracted from the DP-based control solution, forming near-optimal control strategies. Simulation results demonstrate that a significant improvement in reducing energy loss due to the dual-motor coupling-propulsion system (DMCPS) running is realized without increasing the frequency of the mode switch. PMID:25540814
Zhang, Shuo; Zhang, Chengning; Han, Guangwei; Wang, Qinghui
2014-01-01
A dual-motor coupling-propulsion electric bus (DMCPEB) is modeled, and its optimal control strategy is studied in this paper. The necessary dynamic features of energy loss for subsystems is modeled. Dynamic programming (DP) technique is applied to find the optimal control strategy including upshift threshold, downshift threshold, and power split ratio between the main motor and auxiliary motor. Improved control rules are extracted from the DP-based control solution, forming near-optimal control strategies. Simulation results demonstrate that a significant improvement in reducing energy loss due to the dual-motor coupling-propulsion system (DMCPS) running is realized without increasing the frequency of the mode switch.
Reduced order model based on principal component analysis for process simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, Y.; Malacina, A.; Biegler, L.
2009-01-01
It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models,more » this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within well-sampled input domain and the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.« less
Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan
2017-03-16
Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable for the fabrication of metal parts into electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized by a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system and the system included paraffin wax, low density polyethylene, and stearic acid (PW-LDPE-SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence factors in regard to the ultimate tensile strength of the green samples can be described as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor on hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper material. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has some advantages over traditional approaches for cost, efficiency, and simplicity.
Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan
2017-01-01
Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable for the fabrication of metal parts into electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized by a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system and the system included paraffin wax, low density polyethylene, and stearic acid (PW–LDPE–SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence factors in regard to the ultimate tensile strength of the green samples can be described as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor on hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper material. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has some advantages over traditional approaches for cost, efficiency, and simplicity. PMID:28772665
Enzymatic process optimization for the in vitro production of isoprene from mevalonate.
Cheng, Tao; Liu, Hui; Zou, Huibin; Chen, Ningning; Shi, Mengxun; Xie, Congxia; Zhao, Guang; Xian, Mo
2017-01-09
As an important bulk chemical for synthetic rubber, isoprene can be biosynthesized by robust microbes. But rational engineering and optimization are often demanded to make the in vivo process feasible due to the complexities of cellular metabolism. Alternative synthetic biochemistry strategies are in fast development to produce isoprene or isoprenoids in vitro. This study set up an in vitro enzyme synthetic chemistry process using 5 enzymes in the lower mevalonate pathway to produce isoprene from mevalonate. We found the level and ratio of individual enzymes would significantly affect the efficiency of the whole system. The optimized process using 10 balanced enzyme unites (5.0 µM of MVK, PMK, MVD; 10.0 µM of IDI, 80.0 µM of ISPS) could produce 6323.5 µmol/L/h (430 mg/L/h) isoprene in a 2 ml in vitro system. In a scale up process (50 ml) only using 1 balanced enzyme unit (0.5 µM of MVK, PMK, MVD; 1.0 µM of IDI, 8.0 µM of ISPS), the system could produce 302 mg/L isoprene in 40 h, which showed higher production rate and longer reaction phase with comparison of the in vivo control. By optimizing the enzyme levels of lower MVA pathway, synthetic biochemistry methods could be set up for the enzymatic production of isoprene or isoprenoids from mevalonate.
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo
2005-05-01
Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contributions were previously derived in collaboration with E. Aurell.
Ishihara, Tsukasa; Koga, Yuji; Iwatsuki, Yoshiyuki; Hirayama, Fukushi
2015-01-15
Anticoagulant agents have emerged as a promising class of therapeutic drugs for the treatment and prevention of arterial and venous thrombosis. We investigated a series of novel orally active factor Xa inhibitors designed using our previously reported conjugation strategy to boost oral anticoagulant effect. Structural optimization of anthranilamide derivative 3 as a lead compound with installation of phenolic hydroxyl group and extensive exploration of the P1 binding element led to the identification of 5-chloro-N-(5-chloro-2-pyridyl)-3-hydroxy-2-{[4-(4-methyl-1,4-diazepan-1-yl)benzoyl]amino}benzamide (33, AS1468240) as a potent factor Xa inhibitor with significant oral anticoagulant activity. We also reported a newly developed Free-Wilson-like fragment recommender system based on the integration of R-group decomposition with collaborative filtering for the structural optimization process. Copyright © 2014 Elsevier Ltd. All rights reserved.
In Silico Constraint-Based Strain Optimization Methods: the Quest for Optimal Cell Factories
Maia, Paulo; Rocha, Miguel
2015-01-01
SUMMARY Shifting from chemical to biotechnological processes is one of the cornerstones of 21st century industry. The production of a great range of chemicals via biotechnological means is a key challenge on the way toward a bio-based economy. However, this shift is occurring at a pace slower than initially expected. The development of efficient cell factories that allow for competitive production yields is of paramount importance for this leap to happen. Constraint-based models of metabolism, together with in silico strain design algorithms, promise to reveal insights into the best genetic design strategies, a step further toward achieving that goal. In this work, a thorough analysis of the main in silico constraint-based strain design strategies and algorithms is presented, their application in real-world case studies is analyzed, and a path for the future is discussed. PMID:26609052
High-throughput strategies for the discovery and engineering of enzymes for biocatalysis.
Jacques, Philippe; Béchet, Max; Bigan, Muriel; Caly, Delphine; Chataigné, Gabrielle; Coutte, François; Flahaut, Christophe; Heuson, Egon; Leclère, Valérie; Lecouturier, Didier; Phalip, Vincent; Ravallec, Rozenn; Dhulster, Pascal; Froidevaux, Rénato
2017-02-01
Innovations in novel enzyme discoveries impact upon a wide range of industries for which biocatalysis and biotransformations represent a great challenge, i.e., food industry, polymers and chemical industry. Key tools and technologies, such as bioinformatics tools to guide mutant library design, molecular biology tools to create mutants library, microfluidics/microplates, parallel miniscale bioreactors and mass spectrometry technologies to create high-throughput screening methods and experimental design tools for screening and optimization, allow to evolve the discovery, development and implementation of enzymes and whole cells in (bio)processes. These technological innovations are also accompanied by the development and implementation of clean and sustainable integrated processes to meet the growing needs of chemical, pharmaceutical, environmental and biorefinery industries. This review gives an overview of the benefits of high-throughput screening approach from the discovery and engineering of biocatalysts to cell culture for optimizing their production in integrated processes and their extraction/purification.
Online games: a novel approach to explore how partial information influences human random searches
NASA Astrophysics Data System (ADS)
Martínez-García, Ricardo; Calabrese, Justin M.; López, Cristóbal
2017-01-01
Many natural processes rely on optimizing the success ratio of a search process. We use an experimental setup consisting of a simple online game in which players have to find a target hidden on a board, to investigate how the rounds are influenced by the detection of cues. We focus on the search duration and the statistics of the trajectories traced on the board. The experimental data are explained by a family of random-walk-based models and probabilistic analytical approximations. If no initial information is given to the players, the search is optimized for cues that cover an intermediate spatial scale. In addition, initial information about the extension of the cues results, in general, in faster searches. Finally, strategies used by informed players turn into non-stationary processes in which the length of e ach displacement evolves to show a well-defined characteristic scale that is not found in non-informed searches.
Online games: a novel approach to explore how partial information influences human random searches.
Martínez-García, Ricardo; Calabrese, Justin M; López, Cristóbal
2017-01-06
Many natural processes rely on optimizing the success ratio of a search process. We use an experimental setup consisting of a simple online game in which players have to find a target hidden on a board, to investigate how the rounds are influenced by the detection of cues. We focus on the search duration and the statistics of the trajectories traced on the board. The experimental data are explained by a family of random-walk-based models and probabilistic analytical approximations. If no initial information is given to the players, the search is optimized for cues that cover an intermediate spatial scale. In addition, initial information about the extension of the cues results, in general, in faster searches. Finally, strategies used by informed players turn into non-stationary processes in which the length of e ach displacement evolves to show a well-defined characteristic scale that is not found in non-informed searches.
Navarrete-Bolaños, J L; Téllez-Martínez, M G; Miranda-López, R; Jiménez-Islas, H
2017-07-03
For any fermentation process, the production cost depends on several factors, such as the genetics of the microorganism, the process condition, and the culture medium composition. In this work, a guideline for the design of cost-efficient culture media using a sequential approach based on response surface methodology is described. The procedure was applied to analyze and optimize a culture medium of registered trademark and a base culture medium obtained as a result of the screening analysis from different culture media used to grow the same strain according to the literature. During the experiments, the procedure quantitatively identified an appropriate array of micronutrients to obtain a significant yield and find a minimum number of culture medium ingredients without limiting the process efficiency. The resultant culture medium showed an efficiency that compares favorably with the registered trademark medium at a 95% lower cost as well as reduced the number of ingredients in the base culture medium by 60% without limiting the process efficiency. These results demonstrated that, aside from satisfying the qualitative requirements, an optimum quantity of each constituent is needed to obtain a cost-effective culture medium. Study process variables for optimized culture medium and scaling-up production for the optimal values are desirable.
Welker, A; Wolcke, B; Schleppers, A; Schmeck, S B; Focke, U; Gervais, H W; Schmeck, J
2010-10-01
The introduction of the diagnosis-related groups reimbursement system has increased cost pressures. Due to the interaction of many different professional groups, analysis and optimization of internal coordination and scheduling in the operating room (OR) is mandatory. The aim of this study was to analyze the processes at a university hospital in order to optimize strategies by identifying potential weak points. Over a period 6 weeks before and 4 weeks after intervention processes time intervals in the OR of a tertiary care hospital (university hospital) were documented in a structured data collection sheet. The main reason for lack of efficiency of labor was underused OR utilization. Multifactorial reasons, particularly in the management of perioperative interfaces, led to vacant ORs. A significant deficit was in the use of OR capacity at the end of the daily OR schedule. After harmonization of working hours of different staff groups and implementation of several other changes an increase in efficiency could be verified. These results indicate that optimization of perioperative processes considerably contribute to the success of OR organization. Additionally, the implementation of standard operating procedures and a generally accepted OR statute are mandatory. In this way an efficient OR management can contribute to the economic success of a hospital.
Legrand, Yves-Marie; van der Lee, Arie; Barboiu, Mihail
2007-11-12
In this paper we report an extended series of 2,6-(iminoarene)pyridine-type ZnII complexes [(Lii)2Zn]II, which were surveyed for their ability to self-exchange both their ligands and their aromatic arms and to form different homoduplex and heteroduplex complexes in solution. The self-sorting of heteroduplex complexes is likely to be the result of geometric constraints. Whereas the imine-exchange process occurs quantitatively in 1:1 mixtures of [(Lii)2Zn]II complexes, the octahedral coordination process around the metal ion defines spatial-frustrated exchanges that involve the selective formation of heterocomplexes of two, by two different substituents; the bulkiest ones (pyrene in principle) specifically interact with the pseudoterpyridine core, sterically hindering the least bulky ones, which are intermolecularly stacked with similar ligands of neighboring molecules. Such a self-sorting process defined by the specific self-constitution of the ligands exchanging their aromatic substituents is self-optimized by a specific control over their spatial orientation around a metal center within the complex. They ultimately show an improved charge-transfer energy function by virtue of the dynamic amplification of self-optimized heteroduplex architectures. These systems therefore illustrate the convergence of the combinatorial self-sorting of the dynamic combinatorial libraries (DCLs) strategy and the constitutional self-optimized function.
2011-01-01
Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task. PMID:21867520
Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós
2014-01-01
Localization-based super-resolution microscopy image quality depends on several factors such as dye choice and labeling strategy, microscope quality and user-defined parameters such as frame rate and number as well as the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments. PMID:24688813
Health benefit modelling and optimization of vehicular pollution control strategies
NASA Astrophysics Data System (ADS)
Sonawane, Nayan V.; Patil, Rashmi S.; Sethi, Virendra
2012-12-01
This study asserts that the evaluation of pollution reduction strategies should be approached on the basis of health benefits. The framework presented could be used for decision making on the basis of cost effectiveness when the strategies are applied concurrently. Several vehicular pollution control strategies have been proposed in literature for effective management of urban air pollution. The effectiveness of these strategies has been mostly studied as a one at a time approach on the basis of change in pollution concentration. The adequacy and practicality of such an approach is studied in the present work. Also, the assessment of respective benefits of these strategies has been carried out when they are implemented simultaneously. An integrated model has been developed which can be used as a tool for optimal prioritization of various pollution management strategies. The model estimates health benefits associated with specific control strategies. ISC-AERMOD View has been used to provide the cause-effect relation between control options and change in ambient air quality. BenMAP, developed by U.S. EPA, has been applied for estimation of health and economic benefits associated with various management strategies. Valuation of health benefits has been done for impact indicators of premature mortality, hospital admissions and respiratory syndrome. An optimization model has been developed to maximize overall social benefits with determination of optimized percentage implementations for multiple strategies. The model has been applied for sub-urban region of Mumbai city for vehicular sector. Several control scenarios have been considered like revised emission standards, electric, CNG, LPG and hybrid vehicles. Reduction in concentration and resultant health benefits for the pollutants CO, NOx and particulate matter are estimated for different control scenarios. Finally, an optimization model has been applied to determine optimized percentage implementation of specific control strategies with maximization of social benefits, when these strategies are applied simultaneously.
NASA Astrophysics Data System (ADS)
Wang, Yan; Huang, Song; Ji, Zhicheng
2017-07-01
This paper presents a hybrid particle swarm optimization and gravitational search algorithm based on hybrid mutation strategy (HGSAPSO-M) to optimize economic dispatch (ED) including distributed generations (DGs) considering market-based energy pricing. A daily ED model was formulated and a hybrid mutation strategy was adopted in HGSAPSO-M. The hybrid mutation strategy includes two mutation operators, chaotic mutation, Gaussian mutation. The proposed algorithm was tested on IEEE-33 bus and results show that the approach is effective for this problem.
Optimizing Spacecraft Placement for Liaison Constellations
NASA Technical Reports Server (NTRS)
Chow, C. Channing; Villac, Benjamin F.; Lo, Martin W.
2011-01-01
A navigation and communications network is proposed to support an anticipated need for infrastructure in the Earth-Moon system. Periodic orbits will host the constellations while a novel, autonomous navigation strategy will guide the spacecraft along their path strictly based on satellite-to-satellite telemetry. In particular, this paper investigates the second stage of a larger constellation optimization scheme for multi-spacecraft systems. That is, following an initial orbit down-selection process, this analysis provides insights into the ancillary problem of spacecraft placement. Two case studies are presented that consider configurations of up to four spacecraft for a halo orbit and a cycler trajectory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees ofmore » freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.« less
Multi-level adaptive finite element methods. 1: Variation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1979-01-01
A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. It is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed. Including the local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.
NASA Astrophysics Data System (ADS)
Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter
2017-01-01
The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.
A methodology to assess the economic impact of power storage technologies.
El-Ghandour, Laila; Johnson, Timothy C
2017-08-13
We present a methodology for assessing the economic impact of power storage technologies. The methodology is founded on classical approaches to the optimal stopping of stochastic processes but involves an innovation that circumvents the need to, ex ante , identify the form of a driving process and works directly on observed data, avoiding model risks. Power storage is regarded as a complement to the intermittent output of renewable energy generators and is therefore important in contributing to the reduction of carbon-intensive power generation. Our aim is to present a methodology suitable for use by policy makers that is simple to maintain, adaptable to different technologies and easy to interpret. The methodology has benefits over current techniques and is able to value, by identifying a viable optimal operational strategy, a conceived storage facility based on compressed air technology operating in the UK.This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
Matias-Guiu, Pau; Rodríguez-Bencomo, Juan José; Pérez-Correa, José R; López, Francisco
2018-04-15
Developing new distillation strategies can help the spirits industry to improve quality, safety and process efficiency. Batch stills equipped with a packed column and an internal partial condenser are an innovative experimental system, allowing a fast and flexible management of the rectification. In this study, the impact of four factors (heart-cut volume, head-cut volume, pH and cooling flow rate of the internal partial condenser during the head-cut fraction) on 18 major volatile compounds of Muscat spirits was optimized using response surface methodology and desirability function approaches. Results have shown that high rectification at the beginning of the heart-cut enhances the overall positive aroma compounds of the product, reducing off-flavor compounds. In contrast, optimum levels of heart-cut volume, head-cut volume and pH factors varied depending on the process goal. Finally, three optimal operational conditions (head off-flavors reduction, flowery terpenic enhancement and fruity ester enhancement) were evaluated by chemical and sensory analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.
Vaisali, C; Belur, Prasanna D; Regupathi, Iyyaswami
2017-10-01
Lipophilization of antioxidants is recognized as an effective strategy to enhance solubility and thus effectiveness in lipid based food. In this study, an effort was made to optimize rutin fatty ester synthesis in two different solvent systems to understand the influence of reaction system hydrophobicity on the optimum conditions using immobilised Candida antartica lipase. Under unoptimized conditions, 52.14% and 13.02% conversion was achieved in acetone and tert-butanol solvent systems, respectively. Among all the process parameters, water activity of the system was found to show highest influence on the conversion in each reaction system. In the presence of molecular sieves, the ester production increased to 62.9% in tert-butanol system, unlike acetone system. Under optimal conditions, conversion increased to 60.74% and 65.73% in acetone and tert-butanol system, respectively. This study shows, maintaining optimal water activity is crucial in reaction systems having polar solvents compared to more non-polar solvents. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimal Dynamic Advertising Strategy Under Age-Specific Market Segmentation
NASA Astrophysics Data System (ADS)
Krastev, Vladimir
2011-12-01
We consider the model proposed by Faggian and Grosset for determining the advertising efforts and goodwill in the long run of a company under age segmentation of consumers. Reducing this model to optimal control sub problems we find the optimal advertising strategy and goodwill.
Wang, Huan; Dong, Peng; Liu, Hongcheng; Xing, Lei
2017-02-01
Current treatment planning remains a costly and labor intensive procedure and requires multiple trial-and-error adjustments of system parameters such as the weighting factors and prescriptions. The purpose of this work is to develop an autonomous treatment planning strategy with effective use of prior knowledge and in a clinically realistic treatment planning platform to facilitate radiation therapy workflow. Our technique consists of three major components: (i) a clinical treatment planning system (TPS); (ii) a formulation of decision-function constructed using an assemble of prior treatment plans; (iii) a plan evaluator or decision-function and an outer-loop optimization independent of the clinical TPS to assess the TPS-generated plan and to drive the search toward a solution optimizing the decision-function. Microsoft (MS) Visual Studio Coded UI is applied to record some common planner-TPS interactions as subroutines for querying and interacting with the TPS. These subroutines are called back in the outer-loop optimization program to navigate the plan selection process through the solution space iteratively. The utility of the approach is demonstrated by using clinical prostate and head-and-neck cases. An autonomous treatment planning technique with effective use of an assemble of prior treatment plans is developed to automatically maneuver the clinical treatment planning process in the platform of a commercial TPS. The process mimics the decision-making process of a human planner and provides a clinically sensible treatment plan automatically, thus reducing/eliminating the tedious manual trial-and-errors of treatment planning. It is found that the prostate and head-and-neck treatment plans generated using the approach compare favorably with that used for the patients' actual treatments. Clinical inverse treatment planning process can be automated effectively with the guidance of an assemble of prior treatment plans. The approach has the potential to significantly improve the radiation therapy workflow. © 2016 American Association of Physicists in Medicine.
A multiobjective optimization framework for multicontaminant industrial water network design.
Boix, Marianne; Montastruc, Ludovic; Pibouleau, Luc; Azzaro-Pantel, Catherine; Domenech, Serge
2011-07-01
The optimal design of multicontaminant industrial water networks according to several objectives is carried out in this paper. The general formulation of the water allocation problem (WAP) is given as a set of nonlinear equations with binary variables representing the presence of interconnections in the network. For optimization purposes, three antagonist objectives are considered: F(1), the freshwater flow-rate at the network entrance, F(2), the water flow-rate at inlet of regeneration units, and F(3), the number of interconnections in the network. The multiobjective problem is solved via a lexicographic strategy, where a mixed-integer nonlinear programming (MINLP) procedure is used at each step. The approach is illustrated by a numerical example taken from the literature involving five processes, one regeneration unit and three contaminants. The set of potential network solutions is provided in the form of a Pareto front. Finally, the strategy for choosing the best network solution among those given by Pareto fronts is presented. This Multiple Criteria Decision Making (MCDM) problem is tackled by means of two approaches: a classical TOPSIS analysis is first implemented and then an innovative strategy based on the global equivalent cost (GEC) in freshwater that turns out to be more efficient for choosing a good network according to a practical point of view. Copyright © 2011 Elsevier Ltd. All rights reserved.