Sample records for optimal testing strategy

  1. GMOtrack: generator of cost-effective GMO testing strategies.

    PubMed

Novak, Petra Kralj; Gruden, Kristina; Morisset, Dany; Lavrac, Nada; Stebih, Dejan; Rotter, Ana; Zel, Jana

    2009-01-01

    Commercialization of numerous genetically modified organisms (GMOs) has already been approved worldwide, and several additional GMOs are in the approval process. Many countries have adopted legislation to deal with GMO-related issues such as food safety, environmental concerns, and consumers' right of choice, making GMO traceability a necessity. The growing extent of GMO testing makes it important to study optimal GMO detection and identification strategies. This paper formally defines the problem of routine laboratory-level GMO tracking as a cost optimization problem, thus proposing a shift from "the same strategy for all samples" to "sample-centered GMO testing strategies." An algorithm (GMOtrack) for finding optimal two-phase (screening-identification) testing strategies is proposed. The advantages of cost optimization with increasing GMO presence on the market are demonstrated, showing that optimization approaches to analytic GMO traceability can result in major cost reductions. The optimal testing strategies are laboratory-dependent, as the costs depend on prior probabilities of local GMO presence, which are exemplified on food and feed samples. The proposed GMOtrack approach, publicly available under the terms of the General Public License, can be extended to other domains where complex testing is involved, such as safety and quality assurance in the food supply chain.

  2. A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo

    1996-01-01

    A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.

  3. Predicting Short-Term Remembering as Boundedly Optimal Strategy Choice.

    PubMed

    Howes, Andrew; Duggan, Geoffrey B; Kalidindi, Kiran; Tseng, Yuan-Chi; Lewis, Richard L

    2016-07-01

    It is known that, on average, people adapt their choice of memory strategy to the subjective utility of interaction. What is not known is whether an individual's choices are boundedly optimal. Two experiments are reported that test the hypothesis that an individual's decisions about the distribution of remembering between internal and external resources are boundedly optimal where optimality is defined relative to experience, cognitive constraints, and reward. The theory makes predictions that are tested against data, not fitted to it. The experiments use a no-choice/choice utility learning paradigm where the no-choice phase is used to elicit a profile of each participant's performance across the strategy space and the choice phase is used to test predicted choices within this space. They show that the majority of individuals select strategies that are boundedly optimal. Further, individual differences in what people choose to do are successfully predicted by the analysis. Two issues are discussed: (a) the performance of the minority of participants who did not find boundedly optimal adaptations, and (b) the possibility that individuals anticipate what, with practice, will become a bounded optimal strategy, rather than what is boundedly optimal during training. Copyright © 2015 Cognitive Science Society, Inc.

  4. Integrated testing strategies can be optimal for chemical risk classification.

    PubMed

    Raseta, Marko; Pitchford, Jon; Cussens, James; Doe, John

    2017-08-01

There is an urgent need to refine strategies for testing the safety of chemical compounds. This need arises both from the financial and ethical costs of animal tests and from the opportunities presented by new in-vitro and in-silico alternatives. Here we explore the mathematical theory underpinning the formulation of optimal testing strategies in toxicology. We show how the costs and imprecisions of the various tests, and the variability in exposures and responses of individuals, can be assembled rationally to form a Markov Decision Problem. We compute the corresponding optimal policies using well-developed theory based on Dynamic Programming, thereby identifying and overcoming some methodological and logical inconsistencies that may exist in current toxicological testing. By illustrating our methods for two simple but readily generalisable examples, we show how so-called integrated testing strategies, where information of different precisions from different sources is combined and where different initial test outcomes lead to different sets of future tests, can arise naturally as optimal policies. Copyright © 2017 Elsevier Inc. All rights reserved.
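The abstract's framing can be illustrated with a toy sketch, not the paper's model: a belief about toxicity is updated by Bayes' rule after each test, and a finite-horizon dynamic program chooses between a cheap noisy test, a precise expensive test, or classifying now. All costs, sensitivities, and specificities below are invented for the example.

```python
# Toy testing-strategy MDP solved by finite-horizon dynamic programming.
# States: current belief p_toxic. Actions: classify now, or run one of
# two hypothetical tests and recurse on the Bayesian posterior.

def optimal_policy(p_toxic, horizon):
    """Return (expected_cost, action) for the current belief p_toxic."""
    # Terminal decision: classify now and pay an expected misclassification cost.
    cost_classify_safe = 100.0 * p_toxic          # penalty if actually toxic
    cost_classify_toxic = 10.0 * (1 - p_toxic)    # penalty if actually safe
    best = min((cost_classify_safe, "classify-safe"),
               (cost_classify_toxic, "classify-toxic"))
    if horizon == 0:
        return best
    # (cost, sensitivity, specificity, name) for two hypothetical tests.
    for cost, sens, spec, name in [(1.0, 0.8, 0.8, "cheap-test"),
                                   (5.0, 0.95, 0.95, "precise-test")]:
        p_pos = sens * p_toxic + (1 - spec) * (1 - p_toxic)
        p_post_pos = sens * p_toxic / p_pos if p_pos > 0 else 0.0
        p_neg = 1 - p_pos
        p_post_neg = ((1 - sens) * p_toxic / p_neg) if p_neg > 0 else 0.0
        expected = (cost
                    + p_pos * optimal_policy(p_post_pos, horizon - 1)[0]
                    + p_neg * optimal_policy(p_post_neg, horizon - 1)[0])
        best = min(best, (expected, name))
    return best

cost, action = optimal_policy(p_toxic=0.3, horizon=2)
```

With these made-up numbers, an ambiguous prior makes running the cheap test first cheaper in expectation than classifying immediately, which is the kind of information-dependent policy the paper derives rigorously.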

  5. Optimal use of colonoscopy and fecal immunochemical test for population-based colorectal cancer screening: a cost-effectiveness analysis using Japanese data.

    PubMed

    Sekiguchi, Masau; Igarashi, Ataru; Matsuda, Takahisa; Matsumoto, Minori; Sakamoto, Taku; Nakajima, Takeshi; Kakugawa, Yasuo; Yamamoto, Seiichiro; Saito, Hiroshi; Saito, Yutaka

    2016-02-01

There have been few cost-effectiveness analyses of population-based colorectal cancer screening in Japan, and there is no consensus on the optimal use of total colonoscopy and the fecal immunochemical test for colorectal cancer screening with regard to cost-effectiveness and total colonoscopy workload. The present study aimed to examine the cost-effectiveness of colorectal cancer screening using Japanese data to identify the optimal use of total colonoscopy and the fecal immunochemical test. We developed a Markov model to assess the cost-effectiveness of colorectal cancer screening offered to an average-risk population aged 40 years or over. The cost, quality-adjusted life-years and number of total colonoscopy procedures required were evaluated for three screening strategies: (i) a fecal immunochemical test-based strategy; (ii) a total colonoscopy-based strategy; and (iii) a strategy of adding population-wide total colonoscopy at 50 years to a fecal immunochemical test-based strategy. All three strategies dominated no screening. Among the three, Strategy 1 was dominated by Strategy 3, and the incremental costs per quality-adjusted life-year gained for Strategy 2 against Strategies 1 and 3 were JPY 293,616 and JPY 781,342, respectively. Within the Japanese threshold (JPY 5-6 million per QALY gained), Strategy 2 was the most cost-effective, followed by Strategy 3; however, Strategy 2 required more than double the number of total colonoscopy procedures required by the other strategies. The total colonoscopy-based strategy could be the most cost-effective for population-based colorectal cancer screening in Japan. However, it requires more total colonoscopy procedures than the other strategies. Depending on total colonoscopy capacity, the strategy of adding total colonoscopy for individuals at a specified age to fecal immunochemical test-based screening may be an optimal solution. © The Author 2015. Published by Oxford University Press. All rights reserved.

  6. ODECS -- A computer code for the optimal design of S.I. engine control strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arsie, I.; Pianese, C.; Rizzo, G.

    1996-09-01

The computer code ODECS (Optimal Design of Engine Control Strategies) for the design of spark-ignition engine control strategies is presented. The code was developed from the authors' activity in this field, drawing on original contributions on engine stochastic optimization and dynamical models. It has a modular structure and comprises a user interface for the definition, execution and analysis of different computations performed with four independent modules. These modules allow the following calculations: (1) definition of the engine mathematical model from steady-state experimental data; (2) engine cycle test trajectory corresponding to a vehicle transient simulation test such as the ECE15 or FTP drive test schedule; (3) evaluation of the optimal engine control maps with a steady-state approach; (4) engine dynamic cycle simulation and optimization of static control maps and/or dynamic compensation strategies, taking into account dynamical effects due to the unsteady fluxes of air and fuel and the influence of combustion chamber wall thermal inertia on fuel consumption and emissions. Moreover, the last two modules can account for errors generated by non-deterministic behavior of sensors and actuators, and their influence on global engine performance, and compute robust strategies that are less sensitive to stochastic effects. In the paper the four modules are described together with significant results from the simulation and calculation of optimal control strategies for dynamic transient tests.

  7. Test scheduling optimization for 3D network-on-chip based on cloud evolutionary algorithm of Pareto multi-objective

    NASA Astrophysics Data System (ADS)

    Xu, Chuanpei; Niu, Junhao; Ling, Jing; Wang, Suyan

    2018-03-01

    In this paper, we present a parallel test strategy for bandwidth division multiplexing under the test access mechanism bandwidth constraint. The Pareto solution set is combined with a cloud evolutionary algorithm to optimize the test time and power consumption of a three-dimensional network-on-chip (3D NoC). In the proposed method, all individuals in the population are sorted in non-dominated order and allocated to the corresponding level. Individuals with extreme and similar characteristics are then removed. To increase the diversity of the population and prevent the algorithm from becoming stuck around local optima, a competition strategy is designed for the individuals. Finally, we adopt an elite reservation strategy and update the individuals according to the cloud model. Experimental results show that the proposed algorithm converges to the optimal Pareto solution set rapidly and accurately. This not only obtains the shortest test time, but also optimizes the power consumption of the 3D NoC.
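The core Pareto notion used above, keeping only solutions not dominated in both test time and power, can be sketched as follows. This is a minimal non-dominated filter, not the paper's cloud evolutionary algorithm, and the candidate (time, power) pairs are made up.

```python
# Extracting the first non-dominated (Pareto) front when jointly
# minimizing two objectives, e.g. test time and power consumption.

def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    and strictly better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Keep every solution that no other solution dominates."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical (test_time, power) candidates for illustration.
candidates = [(10, 5), (8, 7), (12, 4), (9, 9), (8, 6)]
front = pareto_front(candidates)
```

Here (8, 7) and (9, 9) drop out because (8, 6) and (8, 7) respectively beat them in one objective without being worse in the other; the survivors form the trade-off curve the evolutionary search approximates.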

  8. Dispositional optimism and coping strategies in patients with a kidney transplant.

    PubMed

    Costa-Requena, Gemma; Cantarell-Aixendri, M Carmen; Parramon-Puig, Gemma; Serón-Micas, Daniel

    2014-01-01

Dispositional optimism is a personal resource that determines coping style and the adaptive response to chronic diseases. The aim of this study was to assess the correlations between dispositional optimism and coping strategies in patients with a recent kidney transplant, and to evaluate differences in the use of coping strategies according to the level of dispositional optimism. Patients hospitalised in the nephrology department were selected consecutively after kidney transplantation was performed. The evaluation instruments were the Life Orientation Test-Revised and the Coping Strategies Inventory. The data were analysed with central tendency measures and correlation analyses, and means were compared using Student's t-test. 66 patients with a kidney transplant participated in the study. The coping styles that characterised patients with a recent kidney transplant were Social withdrawal and Problem avoidance. Correlations between dispositional optimism and coping strategies were significantly positive for Problem-solving (p<.05) and Cognitive restructuring (p<.01), and significantly negative for Self-criticism (p<.05). Differences in dispositional optimism produced significant differences in the Self-criticism dimension (t=2.58; p<.01). Dispositional optimism scores give rise to differences in coping responses after kidney transplantation. Moreover, coping strategies may influence the patient's perception of emotional wellbeing after kidney transplantation.

  9. Design of Quiet Rotorcraft Approach Trajectories: Verification Phase

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    2010-01-01

    Flight testing that is planned for October 2010 will provide an opportunity to evaluate rotorcraft trajectory optimization techniques. The flight test will involve a fully instrumented MD-902 helicopter, which will be flown over an array of microphones. In this work, the helicopter approach trajectory is optimized via a multiobjective genetic algorithm to improve community noise, passenger comfort, and pilot acceptance. Previously developed optimization strategies are modified to accommodate new helicopter data and to increase pilot acceptance. This paper describes the MD-902 trajectory optimization plus general optimization strategies and modifications that are needed to reduce the uncertainty in noise predictions. The constraints that are imposed by the flight test conditions and characteristics of the MD-902 helicopter limit the testing possibilities. However, the insights that will be gained through this research will prove highly valuable.

  10. The extension of the thermal-vacuum test optimization program to multiple flights

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Byrd, J.

    1981-01-01

    The thermal vacuum test optimization model developed to provide an approach to the optimization of a test program based on prediction of flight performance with a single flight option in mind is extended to consider reflight as in space shuttle missions. The concept of 'utility', developed under the name of 'availability', is used to follow performance through the various options encountered when the capabilities of reflight and retrievability of space shuttle are available. Also, a 'lost value' model is modified to produce a measure of the probability of a mission's success, achieving a desired utility using a minimal cost test strategy. The resulting matrix of probabilities and their associated costs provides a means for project management to evaluate various test and reflight strategies.

  11. Operation management of daily economic dispatch using novel hybrid particle swarm optimization and gravitational search algorithm with hybrid mutation strategy

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Huang, Song; Ji, Zhicheng

    2017-07-01

This paper presents a hybrid particle swarm optimization and gravitational search algorithm based on a hybrid mutation strategy (HGSAPSO-M) to optimize economic dispatch (ED) including distributed generations (DGs) considering market-based energy pricing. A daily ED model was formulated and a hybrid mutation strategy was adopted in HGSAPSO-M. The hybrid mutation strategy includes two mutation operators: chaotic mutation and Gaussian mutation. The proposed algorithm was tested on the IEEE 33-bus system, and the results show that the approach is effective for this problem.
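The two mutation operators named above can be sketched in isolation. This is an illustrative guess at their usual form, a logistic-map chaotic mutation and a bound-clamped Gaussian mutation, not the paper's exact operators, and the bounds and parameters are invented.

```python
import random

def chaotic_mutation(x, lo, hi, z):
    """Replace x using one step of the logistic map z <- 4z(1-z),
    a common source of chaotic sequences in (0, 1)."""
    z = 4.0 * z * (1.0 - z)
    return lo + z * (hi - lo), z          # remap into the search bounds

def gaussian_mutation(x, lo, hi, sigma=0.1):
    """Perturb x with zero-mean Gaussian noise scaled to the bounds."""
    x = x + random.gauss(0.0, sigma * (hi - lo))
    return min(max(x, lo), hi)            # clamp back into [lo, hi]

random.seed(0)
z = 0.7                                   # chaotic state, carried between calls
x = 2.5
x_chaos, z = chaotic_mutation(x, 0.0, 5.0, z)
x_gauss = gaussian_mutation(x, 0.0, 5.0)
```

A hybrid strategy would pick one of the two operators per individual, trading the broad, ergodic jumps of the chaotic map against the local refinement of the Gaussian step.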

  12. Multi-strategy coevolving aging particle optimization.

    PubMed

    Iacca, Giovanni; Caraffini, Fabio; Neri, Ferrante

    2014-02-01

    We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithm logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation 2010 and 2013. To demonstrate the applicability of the approach, we also test MS-CAP to train a Feedforward Neural Network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform the state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer.

  13. Optimal Verification of Entangled States with Local Measurements

    NASA Astrophysics Data System (ADS)

    Pallister, Sam; Linden, Noah; Montanaro, Ashley

    2018-04-01

    Consider the task of verifying that a given quantum device, designed to produce a particular entangled state, does indeed produce that state. One natural approach would be to characterize the output state by quantum state tomography, or alternatively, to perform some kind of Bell test, tailored to the state of interest. We show here that neither approach is optimal among local verification strategies for 2-qubit states. We find the optimal strategy in this case and show that quadratically fewer total measurements are needed to verify to within a given fidelity than in published results for quantum state tomography, Bell test, or fidelity estimation protocols. We also give efficient verification protocols for any stabilizer state. Additionally, we show that requiring that the strategy be constructed from local, nonadaptive, and noncollective measurements only incurs a constant-factor penalty over a strategy without these restrictions.

  14. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

Ant colony optimization (ACO) for software test case generation is a very popular domain in software testing engineering. However, traditional ACO has flaws: early-search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and precocity. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all three of the above methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain precocity, promote case coverage, and reduce the number of iterations. PMID:24883391
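The pheromone mechanics that the improved strategies (IPVACO, IGPACO) modify can be sketched in their standard form: evaporation controlled by a volatilization coefficient, followed by a deposit along the best path. This is the textbook baseline with illustrative parameter values, not the paper's improved rules.

```python
# Standard global ACO pheromone update: evaporate everywhere, then
# reinforce the edges of the best path found so far.

def update_pheromone(tau, best_path, best_quality, rho=0.5, q=1.0):
    """tau: dict mapping edge (u, v) -> pheromone level.
    rho: volatilization (evaporation) coefficient in (0, 1).
    q:   deposit scale; best_quality rates the best path."""
    for edge in tau:
        tau[edge] *= (1.0 - rho)                 # evaporation step
    for edge in zip(best_path, best_path[1:]):   # consecutive node pairs
        tau[edge] += q * best_quality            # reinforcement step
    return tau

tau = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
tau = update_pheromone(tau, best_path=[0, 1, 2], best_quality=2.0)
```

Tuning rho is exactly the lever IPVACO adjusts: too high and early pheromone stays scarce, too low and positive feedback locks the colony into premature convergence.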

  15. Simulation Modeling to Compare High-Throughput, Low-Iteration Optimization Strategies for Metabolic Engineering

    PubMed Central

    Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.

    2018-01-01

    Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690

  16. The optimal imaging strategy for patients with stable chest pain: a cost-effectiveness analysis.

    PubMed

    Genders, Tessa S S; Petersen, Steffen E; Pugliese, Francesca; Dastidar, Amardeep G; Fleischmann, Kirsten E; Nieman, Koen; Hunink, M G Myriam

    2015-04-07

Background: The optimal imaging strategy for patients with stable chest pain is uncertain. Objective: To determine the cost-effectiveness of different imaging strategies for patients with stable chest pain. Design: Microsimulation state-transition model. Data Sources: Published literature. Target Population: 60-year-old patients with a low to intermediate probability of coronary artery disease (CAD). Time Horizon: Lifetime. Setting: The United States, the United Kingdom, and the Netherlands. Interventions: Coronary computed tomography (CT) angiography, cardiac stress magnetic resonance imaging, stress single-photon emission CT, and stress echocardiography. Outcome Measures: Lifetime costs, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios. Results: The strategy that maximized QALYs and was cost-effective in the United States and the Netherlands began with coronary CT angiography, continued with cardiac stress imaging if angiography found at least 50% stenosis in at least 1 coronary artery, and ended with catheter-based coronary angiography if stress imaging induced ischemia of any severity. For U.K. men, the preferred strategy was optimal medical therapy without catheter-based coronary angiography if coronary CT angiography found only moderate CAD or stress imaging induced only mild ischemia. In these strategies, stress echocardiography was consistently more effective and less expensive than other stress imaging tests. For U.K. women, the optimal strategy was stress echocardiography followed by catheter-based coronary angiography if echocardiography induced mild or moderate ischemia. Results were sensitive to changes in the probability of CAD and assumptions about false-positive results. Limitations: All cardiac stress imaging tests were assumed to be available. Exercise electrocardiography was included only in a sensitivity analysis. Differences in QALYs among strategies were small. Conclusion: Coronary CT angiography is a cost-effective triage test for 60-year-old patients who have nonacute chest pain and a low to intermediate probability of CAD. Primary Funding Source: Erasmus University Medical Center.

  17. Predicting Short-Term Remembering as Boundedly Optimal Strategy Choice

    ERIC Educational Resources Information Center

    Howes, Andrew; Duggan, Geoffrey B.; Kalidindi, Kiran; Tseng, Yuan-Chi; Lewis, Richard L.

    2016-01-01

    It is known that, on average, people adapt their choice of memory strategy to the subjective utility of interaction. What is not known is whether an individual's choices are "boundedly optimal." Two experiments are reported that test the hypothesis that an individual's decisions about the distribution of remembering between internal and…

  18. A new inertia weight control strategy for particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Zhu, Xianming; Wang, Hongbo

    2018-04-01

Particle swarm optimization (PSO) is a member of the swarm intelligence family of algorithms, inspired by the behavior of bird flocks. The inertia weight, one of the most important parameters of PSO, balances the algorithm's exploration and exploitation. This paper proposes a new inertia weight control strategy, and PSO with this new strategy is tested on four benchmark functions. The results show that the new strategy gives PSO better performance.
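The role the inertia weight plays is easiest to see in the velocity update itself. The sketch below uses the classic linearly decreasing schedule as a stand-in, not the control strategy proposed in the paper, and runs a deliberately degenerate single particle with fixed attractors just to show where w enters.

```python
import random

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0):
    """One PSO velocity/position update for a 1-D particle.
    w scales the old velocity: large w favors exploration,
    small w favors exploitation around pbest/gbest."""
    r1, r2 = random.random(), random.random()
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

def inertia(iteration, max_iter, w_max=0.9, w_min=0.4):
    """Common baseline schedule: decay w linearly from w_max to w_min."""
    return w_max - (w_max - w_min) * iteration / max_iter

random.seed(1)
x, v = 5.0, 0.0
for t in range(100):              # toy run; both attractors fixed at 0.0
    w = inertia(t, 100)
    x, v = pso_step(x, v, pbest=0.0, gbest=0.0, w=w)
```

Any inertia control strategy, including the paper's, slots in by replacing `inertia()`; the rest of the update is unchanged.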

  19. Development of a codon optimization strategy using the efor RED reporter gene as a test case

    NASA Astrophysics Data System (ADS)

    Yip, Chee-Hoo; Yarkoni, Orr; Ajioka, James; Wan, Kiew-Lian; Nathan, Sheila

    2018-04-01

    Synthetic biology is a platform that enables high-level synthesis of useful products such as pharmaceutically related drugs, bioplastics and green fuels from synthetic DNA constructs. Large-scale expression of these products can be achieved in an industrial compliant host such as Escherichia coli. To maximise the production of recombinant proteins in a heterologous host, the genes of interest are usually codon optimized based on the codon usage of the host. However, the bioinformatics freeware available for standard codon optimization might not be ideal in determining the best sequence for the synthesis of synthetic DNA. Synthesis of incorrect sequences can prove to be a costly error and to avoid this, a codon optimization strategy was developed based on the E. coli codon usage using the efor RED reporter gene as a test case. This strategy replaces codons encoding for serine, leucine, proline and threonine with the most frequently used codons in E. coli. Furthermore, codons encoding for valine and glycine are substituted with the second highly used codons in E. coli. Both the optimized and original efor RED genes were ligated to the pJS209 plasmid backbone using Gibson Assembly and the recombinant DNAs were transformed into E. coli E. cloni 10G strain. The fluorescence intensity per cell density of the optimized sequence was improved by 20% compared to the original sequence. Hence, the developed codon optimization strategy is proposed when designing an optimal sequence for heterologous protein production in E. coli.
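The substitution rule described, swapping codons for certain amino acids with the codon most frequently used in E. coli, can be sketched as a simple table lookup. The table below covers only the codons needed for the example; the "preferred" choices follow common E. coli usage but are illustrative, not the paper's full table.

```python
# Toy codon optimizer: replace selected synonymous codons with a
# preferred E. coli codon. Only a handful of entries for illustration.

PREFERRED = {
    # serine, leucine, proline, threonine -> frequently used E. coli codon
    "TCA": "AGC", "TCG": "AGC", "TCC": "AGC",   # Ser -> AGC
    "CTA": "CTG", "TTA": "CTG",                 # Leu -> CTG
    "CCC": "CCG",                               # Pro -> CCG
    "ACA": "ACC",                               # Thr -> ACC
}

def optimize_codons(seq):
    """Rewrite a coding sequence codon by codon; codons absent from
    the table (including start/stop) are left unchanged."""
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return "".join(PREFERRED.get(c, c) for c in codons)

original = "ATGTCACTACCCACATAA"    # Met-Ser-Leu-Pro-Thr-Stop
optimized = optimize_codons(original)
```

The protein sequence is untouched because every substitution is synonymous; only the codon usage shifts toward the host's preference.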

  20. Modelling and operation strategies of DLR's large scale thermocline test facility (TESIS)

    NASA Astrophysics Data System (ADS)

    Odenthal, Christian; Breidenbach, Nils; Bauer, Thomas

    2017-06-01

In this work an overview of the TESIS:store thermocline test facility and its current construction status is given. On this basis, the TESIS:store facility using sensible solid filler material is modelled with a fully transient model implemented in MATLAB®. Results on the impact of filler size and operation strategies are presented. While low porosity and small particle diameters for the filler material are beneficial, the operation strategy is one key element with potential for optimization. It is shown that plant operators must weigh utilization against exergetic efficiency. Different durations of the charging and discharging periods enable further potential for optimization.

  1. Influence of Fallible Item Parameters on Test Information During Adaptive Testing.

    ERIC Educational Resources Information Center

    Wetzel, C. Douglas; McBride, James R.

    Computer simulation was used to assess the effects of item parameter estimation errors on different item selection strategies used in adaptive and conventional testing. To determine whether these effects reduced the advantages of certain optimal item selection strategies, simulations were repeated in the presence and absence of item parameter…

  2. Auto-DR and Pre-cooling of Buildings at Tri-City Corporate Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yin, Rongxin; Xu, Peng; Kiliccote, Sila

    2008-11-01

Over the past several years, Lawrence Berkeley National Laboratory (LBNL) has conducted field tests of different pre-cooling strategies in commercial buildings within California. The test results indicated that pre-cooling strategies were effective in reducing electric demand in these buildings during peak periods. This project studied how to optimize pre-cooling strategies for eleven buildings in the Tri-City Corporate Center, San Bernardino, California, with the assistance of a building energy simulation tool -- the Demand Response Quick Assessment Tool (DRQAT) developed by LBNL's Demand Response Research Center and funded by the California Energy Commission's Public Interest Energy Research (PIER) Program. From the simulation results of these eleven buildings, optimal pre-cooling and temperature-reset strategies were developed. The study shows that after refining and calibrating initial models with measured data, the accuracy of the models can be greatly improved and the models can be used to predict load reductions for automated demand response (Auto-DR) events. This study summarizes the optimization experience and the procedure used to develop and calibrate building models in DRQAT. To confirm the actual effect of demand response strategies, the simulation results were compared with the field test data. The results indicated that the optimal demand response strategies worked well for all buildings in the Tri-City Corporate Center. This study also compares DRQAT with other building energy simulation tools (eQUEST and BEST). The comparison indicates that eQUEST and BEST underestimate the actual demand shed of the pre-cooling strategies due to a flaw in DOE-2's simulation engine when treating wall thermal mass. DRQAT is a more accurate tool for predicting the thermal mass effects of DR events.

  3. Pareto fronts for multiobjective optimization design on materials data

    NASA Astrophysics Data System (ADS)

    Gopakumar, Abhijith; Balachandran, Prasanna; Gubernatis, James E.; Lookman, Turab

Optimizing multiple properties simultaneously is vital in materials design. Here we apply information-driven, statistical optimization strategies blended with machine learning methods to address multi-objective optimization tasks on materials data. These strategies aim to find the Pareto front consisting of non-dominated data points from a set of candidate compounds with known characteristics. The objective is to find the Pareto front in as few additional measurements or calculations as possible. We show how exploration of the data space to find the front is achieved by using uncertainties in predictions from regression models. We test our proposed design strategies on multiple, independent data sets including those from computations as well as experiments. These include data sets for MAX phases, piezoelectrics and multicomponent alloys.

  4. A reliability as an independent variable (RAIV) methodology for optimizing test planning for liquid rocket engines

    NASA Astrophysics Data System (ADS)

    Strunz, Richard; Herrmann, Jeffrey W.

    2011-12-01

    The hot fire test strategy for liquid rocket engines has always been a concern of space industry and agency alike because no recognized standard exists. Previous hot fire test plans focused on the verification of performance requirements but did not explicitly include reliability as a dimensioning variable. The stakeholders are, however, concerned about a hot fire test strategy that balances reliability, schedule, and affordability. A multiple criteria test planning model is presented that provides a framework to optimize the hot fire test strategy with respect to stakeholder concerns. The Staged Combustion Rocket Engine Demonstrator, a program of the European Space Agency, is used as example to provide the quantitative answer to the claim that a reduced thrust scale demonstrator is cost beneficial for a subsequent flight engine development. Scalability aspects of major subsystems are considered in the prior information definition inside the Bayesian framework. The model is also applied to assess the impact of an increase of the demonstrated reliability level on schedule and affordability.

  5. Integrating non-animal test information into an adaptive testing strategy - skin sensitization proof of concept case.

    PubMed

    Jaworska, Joanna; Harol, Artsiom; Kern, Petra S; Gerberick, G Frank

    2011-01-01

    There is an urgent need to develop data integration and testing strategy frameworks allowing interpretation of results from animal-alternative test batteries. To this end, we developed a Bayesian Network Integrated Testing Strategy (BN ITS) with the goal of estimating skin sensitization hazard as a test case of previously developed concepts (Jaworska et al., 2010). The BN ITS combines in silico, in chemico, and in vitro data related to skin penetration, peptide reactivity, and dendritic cell activation, and guides testing strategy by Value of Information (VoI). The approach offers novel insights into testing strategies: there is no single best testing strategy; rather, the optimal sequence of tests depends on the information at hand and is chemical-specific. Thus, a single generic set of tests as a replacement strategy is unlikely to be most effective. BN ITS offers the possibility of evaluating the impact of generating additional data on reducing uncertainty in the target information before testing is commenced.

  6. Simple Example of Backtest Overfitting (SEBO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    In the field of mathematical finance, a "backtest" is the usage of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions or even billions of variations of a proposed strategy, and pick the best performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e., on another dataset), because the parameters of the investment strategy have been overfit to the in-sample data, a situation known as "backtest overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given an input random-walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. Then the tool tests the resulting "optimal" strategy on a second random-walk time series. In most runs using our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.
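The overfitting mechanism is easy to reproduce in a few lines: exhaustively tune integer parameters of a toy rule on one random walk, then score the tuned rule on a second, independent walk. The momentum rule and Sharpe-like score below are hypothetical stand-ins, not SEBO's actual strategy or metric:

```python
import random

def random_walk(n, seed):
    """Unpredictable price series: cumulative sum of Gaussian steps."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x += rng.gauss(0, 1)
        path.append(x)
    return path

def strategy_returns(prices, lookback, threshold):
    """Toy momentum rule (hypothetical): go long when the move over the last
    `lookback` steps exceeds `threshold`, otherwise go short."""
    out = []
    for t in range(lookback, len(prices) - 1):
        signal = 1 if prices[t] - prices[t - lookback] > threshold else -1
        out.append(signal * (prices[t + 1] - prices[t]))
    return out

def score(returns):
    """Crude Sharpe-like score: mean return over standard deviation."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    return mean / (var ** 0.5 + 1e-12)

in_sample = random_walk(500, seed=1)
out_sample = random_walk(500, seed=2)

# Exhaustively "optimize" the integer parameters in sample...
best = max(((lb, th) for lb in range(1, 20) for th in range(0, 5)),
           key=lambda p: score(strategy_returns(in_sample, *p)))

# ...then evaluate out of sample: the fitted edge typically evaporates,
# because a random walk contains no structure to exploit.
print(score(strategy_returns(in_sample, *best)),
      score(strategy_returns(out_sample, *best)))
```

With 95 parameter combinations searched, the in-sample score of the winner is inflated by selection; the out-of-sample score is an unbiased draw and is usually much worse.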

  7. Active model-based balancing strategy for self-reconfigurable batteries

    NASA Astrophysics Data System (ADS)

    Bouchhima, Nejmeddine; Schnierle, Marc; Schulte, Sascha; Birke, Kai Peter

    2016-08-01

    This paper describes a novel balancing strategy for self-reconfigurable batteries in which the discharge and charge rates of each cell can be controlled. While much effort has been focused on improving the hardware architecture of self-reconfigurable batteries, energy equalization algorithms have not been systematically optimized to maximize the efficiency of the balancing system. Our approach addresses this optimization problem. We develop a balancing strategy for optimal control of the discharge rate of battery cells. We first formulate cell balancing as a nonlinear optimal control problem, which is then modeled as a network program. Using dynamic programming techniques and MATLAB's vectorization feature, we solve the optimal control problem by generating the optimal battery operation policy for a given drive cycle. The simulation results show that the proposed strategy efficiently balances the cells over the life of the battery, an obvious advantage that is absent in conventional approaches. Our algorithm is shown to be robust when tested against different influencing parameters varying over a wide spectrum on different drive cycles. Furthermore, owing to its short computation time and demonstrated low sensitivity to inaccurate power predictions, our strategy can be integrated into a real-time system.

  8. Optimization of the MINERVA Exoplanet Search Strategy via Simulations

    NASA Astrophysics Data System (ADS)

    Nava, Chantell; Johnson, Samson; McCrady, Nate; Minerva

    2015-01-01

    Detection of low-mass exoplanets requires high spectroscopic precision and high observational cadence. MINERVA is a dedicated observatory capable of sub-meter-per-second radial velocity precision. As a dedicated observatory, MINERVA can observe with the every-clear-night cadence that is essential for low-mass exoplanet detection. However, this cadence complicates the determination of an optimal observing strategy. We simulate MINERVA observations to optimize our observing strategy and maximize exoplanet detections. A dispatch scheduling algorithm provides observations of MINERVA targets every day over a three-year observing campaign. An exoplanet population with a distribution informed by Kepler statistics is assigned to the targets, and radial velocity curves induced by the planets are constructed. We apply a correlated noise model that realistically simulates stellar astrophysical noise sources. The simulated radial velocity data are fed to the MINERVA planet detection code and the expected exoplanet yield is calculated. The full simulation provides a tool to test different strategies for scheduling observations of our targets and optimizing the MINERVA exoplanet search strategy.

  9. Truss topology optimization with simultaneous analysis and design

    NASA Technical Reports Server (NTRS)

    Sankaranarayanan, S.; Haftka, Raphael T.; Kapania, Rakesh K.

    1992-01-01

    Strategies for topology optimization of trusses for minimum weight subject to stress and displacement constraints by Simultaneous Analysis and Design (SAND) are considered. The ground structure approach is used. A penalty function formulation of SAND is compared with an augmented Lagrangian formulation. The efficiency of SAND in handling combinations of general constraints is tested. A strategy for obtaining an optimal topology by minimizing the compliance of the truss is compared with a direct weight minimization solution to satisfy stress and displacement constraints. It is shown that for some problems, starting from the ground structure and using SAND is better than starting from a minimum compliance topology design and optimizing only the cross sections for minimum weight under stress and displacement constraints. A member elimination strategy to save CPU time is discussed.

  10. Incentive-compatible demand-side management for smart grids based on review strategies

    NASA Astrophysics Data System (ADS)

    Xu, Jie; van der Schaar, Mihaela

    2015-12-01

    Demand-side load management is able to significantly improve the energy efficiency of smart grids. Since the electricity production cost depends on the aggregate energy usage of multiple consumers, an important incentive problem emerges: self-interested consumers want to increase their own utilities by consuming more than the socially optimal amount of energy during peak hours since the increased cost is shared among the entire set of consumers. To incentivize self-interested consumers to take the socially optimal scheduling actions, we design a new class of protocols based on review strategies. These strategies work as follows: first, a review stage takes place in which a statistical test is performed based on the daily prices of the previous billing cycle to determine whether or not the other consumers schedule their electricity loads in a socially optimal way. If the test fails, the consumers trigger a punishment phase in which, for a certain time, they adjust their energy scheduling in such a way that everybody in the consumer set is punished due to an increased price. Using a carefully designed protocol based on such review strategies, consumers then have incentives to take the socially optimal load scheduling to avoid entering this punishment phase. We rigorously characterize the impact of deploying protocols based on review strategies on the system's as well as the users' performance and determine the optimal design (optimal billing cycle, punishment length, etc.) for various smart grid deployment scenarios. Even though this paper considers a simplified smart grid model, our analysis provides important and useful insights for designing incentive-compatible demand-side management schemes based on aggregate energy usage information in a variety of practical scenarios.
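The review/punishment cycle described above can be sketched as a small state machine. The test below (average cycle price against a tolerance band) and all parameter names are illustrative stand-ins for the paper's statistical test and protocol design:

```python
def review_protocol(billing_cycles, target_price, tolerance, punish_len):
    """Toy review strategy. `billing_cycles` is a list of billing cycles, each
    a list of daily prices. After each cycle a review test runs; if it fails,
    consumers enter a punishment phase lasting `punish_len` cycles. The test
    and names are illustrative, not the paper's exact protocol."""
    state, remaining, history = "cooperate", 0, []
    for cycle in billing_cycles:
        if state == "punish":
            remaining -= 1
            if remaining == 0:
                state = "cooperate"
        elif sum(cycle) / len(cycle) > target_price * (1 + tolerance):
            # Review test failed: high prices suggest someone over-consumed.
            state, remaining = "punish", punish_len
        history.append(state)
    return history

cycles = [[1.0, 1.1, 0.9], [1.5, 1.6, 1.4], [1.0] * 3, [1.0] * 3]
print(review_protocol(cycles, target_price=1.0, tolerance=0.2, punish_len=2))
# -> ['cooperate', 'punish', 'punish', 'cooperate']
```

The deterrent works because the punishment phase is triggered by aggregate prices alone, so no consumer's individual usage needs to be observable.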

  11. Sequential Test Strategies for Multiple Fault Isolation

    NASA Technical Reports Server (NTRS)

    Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.

    1997-01-01

    In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
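The information-theoretic core of such test sequencing can be illustrated with a greedy one-step lookahead that picks the test with the highest expected entropy reduction over the fault hypotheses. This is a deliberate simplification (deterministic single-fault tests, no Lagrangian relaxation) of the algorithms in the paper:

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def best_next_test(prior, tests):
    """Greedy one-step lookahead: choose the test whose outcome is expected to
    reduce the entropy over the fault hypotheses the most. `tests[name][i]` is
    1 if the test passes under hypothesis i (deterministic pass/fail outcomes;
    an illustrative simplification of the paper's information heuristics)."""
    h0 = entropy(prior)
    best_name, best_gain = None, -1.0
    for name, outcome in tests.items():
        gain = h0
        p_pass = sum(p for p, o in zip(prior, outcome) if o == 1)
        for val, p_branch in ((1, p_pass), (0, 1.0 - p_pass)):
            if p_branch <= 0:
                continue
            posterior = [p / p_branch for p, o in zip(prior, outcome) if o == val]
            gain -= p_branch * entropy(posterior)
        if gain > best_gain:
            best_name, best_gain = name, gain
    return best_name, best_gain

# Four equally likely fault hypotheses and two candidate tests.
prior = [0.25, 0.25, 0.25, 0.25]
tests = {"t1": [1, 1, 0, 0], "t2": [1, 1, 1, 0]}
print(best_next_test(prior, tests))  # -> ('t1', 1.0)
```

The even split ("t1") yields a full bit of information, whereas the 3-vs-1 split ("t2") yields about 0.81 bits; repeating the selection after each observed outcome produces a diagnostic tree, and merging identical belief states is what turns that tree into the more compact digraph the paper describes.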

  12. Cost-effectiveness of cervical cancer screening with primary human papillomavirus testing in Norway.

    PubMed

    Burger, E A; Ortendahl, J D; Sy, S; Kristiansen, I S; Kim, J J

    2012-04-24

    New screening technologies and vaccination against human papillomavirus (HPV), the necessary cause of cervical cancer, may impact optimal approaches to prevent cervical cancer. We evaluated the cost-effectiveness of alternative screening strategies to inform cervical cancer prevention guidelines in Norway. We leveraged primary epidemiologic and economic data from Norway to contextualise a simulation model of HPV-induced cervical cancer. The current cytology-only screening was compared with strategies involving cytology at younger ages and primary HPV-based screening at older ages (31/34+ years), an option being actively deliberated by the Norwegian government. We varied the switch age, screening interval, and triage strategies for women with HPV-positive results. Uncertainty was evaluated in sensitivity analysis. Current cytology-only screening was less effective and more costly than strategies that involve switching to primary HPV testing at older ages. For unvaccinated women, switching at age 34 years to primary HPV testing every 4 years was optimal given the Norwegian cost-effectiveness threshold ($83,000 per year of life saved). For vaccinated women, a 6-year screening interval was cost-effective. When we considered a wider range of strategies, we found that an earlier switch to HPV testing (at age 31 years) may be preferred. Strategies involving a switch to HPV testing for primary screening in older women are expected to be cost-effective compared with current recommendations in Norway.

  13. Discriminative motif optimization based on perceptron training

    PubMed Central

    Patel, Ronak Y.; Stormo, Gary D.

    2014-01-01

    Motivation: Generating accurate transcription factor (TF) binding site motifs from data generated using next-generation sequencing, especially ChIP-seq, is challenging. The challenge arises because a typical experiment reports a large number of sequences bound by a TF, and the length of each sequence is relatively long. Most traditional motif finders are slow in handling such enormous amounts of data. To overcome this limitation, tools have been developed that trade accuracy for speed by using heuristic discrete search strategies or limited optimization of identified seed motifs. However, such strategies may not fully use the information in input sequences to generate motifs. Such motifs often form good seeds and can be further improved with appropriate scoring functions and rapid optimization. Results: We report a tool named discriminative motif optimizer (DiMO). DiMO takes a seed motif along with a positive and a negative database and improves the motif based on a discriminative strategy. We use the area under the receiver-operating characteristic curve (AUC) as a measure of the discriminating power of motifs and a strategy based on perceptron training that maximizes AUC rapidly in a discriminative manner. Using DiMO, on a large test set of 87 TFs from human, drosophila and yeast, we show that it is possible to significantly improve motifs identified by nine motif finders. The motifs are generated/optimized using training sets and evaluated on test sets. The AUC is improved for almost 90% of the TFs on test sets and the magnitude of increase is up to 39%. Availability and implementation: DiMO is available at http://stormo.wustl.edu/DiMO Contact: rpatel@genetics.wustl.edu, ronakypatel@gmail.com PMID:24369152

  14. Optimization Under Uncertainty for Wake Steering Strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quick, Julian; Annoni, Jennifer; King, Ryan N.

    Here, wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering,' in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.
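The gap between the deterministic and OUU solutions can be reproduced with a toy model: when the realized yaw is the commanded yaw plus Gaussian error, the OUU objective is the expected power, estimated here by Monte Carlo. The power curve below is purely illustrative (not a validated wake model), chosen only so that aggressive yaw commands carry downside risk:

```python
import random

def farm_power(yaw):
    """Toy farm power versus upstream yaw command (degrees): the wake-deflection
    benefit saturates, upstream losses grow with yaw, and extreme yaw incurs a
    penalty. Illustrative only, not a validated wake model."""
    gain = 0.3 * min(yaw, 15.0) / 15.0     # deflection benefit, saturating
    loss = 0.0005 * yaw ** 2               # upstream cosine-like loss
    cliff = 0.5 if yaw > 20.0 else 0.0     # load/limit penalty for extreme yaw
    return 1.6 + gain - loss - cliff

def expected_power(yaw_cmd, sigma, n=2000, seed=1):
    """Monte Carlo estimate of mean power when realized yaw = command + noise."""
    rng = random.Random(seed)
    return sum(farm_power(yaw_cmd + rng.gauss(0, sigma)) for _ in range(n)) / n

yaws = [0.5 * k for k in range(0, 41)]        # candidate commands, 0-20 degrees
det = max(yaws, key=farm_power)               # deterministic optimum
ouu = max(yaws, key=lambda y: expected_power(y, sigma=4.0))
# Under yaw uncertainty the optimizer backs away from the aggressive command,
# mirroring the paper's finding that OUU avoids extreme yaw situations.
print(det, ouu)
```

Using the same random seed for every candidate command (common random numbers) keeps the Monte Carlo comparison between commands stable.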

  15. Optimization Under Uncertainty for Wake Steering Strategies: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quick, Julian; Annoni, Jennifer; King, Ryan N

    Wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering,' in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.

  16. Optimization Under Uncertainty for Wake Steering Strategies

    NASA Astrophysics Data System (ADS)

    Quick, Julian; Annoni, Jennifer; King, Ryan; Dykes, Katherine; Fleming, Paul; Ning, Andrew

    2017-05-01

    Wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as “wake steering,” in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information, however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.

  17. Optimization Under Uncertainty for Wake Steering Strategies

    DOE PAGES

    Quick, Julian; Annoni, Jennifer; King, Ryan N.; ...

    2017-06-13

    Here, wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering,' in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.

  18. Stochastic injection-strategy optimization for the preliminary assessment of candidate geological storage sites

    NASA Astrophysics Data System (ADS)

    Cody, Brent M.; Baù, Domenico; González-Nicolás, Ana

    2015-09-01

    Geological carbon sequestration (GCS) has been identified as having the potential to reduce increasing atmospheric concentrations of carbon dioxide (CO2). However, a global impact will only be achieved if GCS is cost-effectively and safely implemented on a massive scale. This work presents a computationally efficient methodology for identifying optimal injection strategies at candidate GCS sites having uncertainty associated with caprock permeability, effective compressibility, and aquifer permeability. A multi-objective evolutionary optimization algorithm is used to heuristically determine non-dominated solutions between the following two competing objectives: (1) maximize mass of CO2 sequestered and (2) minimize project cost. A semi-analytical algorithm is used to estimate CO2 leakage mass rather than a numerical model, enabling the study of GCS sites having vastly different domain characteristics. The stochastic optimization framework presented herein is applied to a feasibility study of GCS in a brine aquifer in the Michigan Basin (MB), USA. Eight optimization test cases are performed to investigate the impact of decision-maker (DM) preferences on Pareto-optimal objective-function values and carbon-injection strategies. This analysis shows that the feasibility of GCS at the MB test site is highly dependent upon the DM's risk-aversion preference and degree of uncertainty associated with caprock integrity. Finally, large gains in computational efficiency achieved using parallel processing and archiving are discussed.

  19. Experimental Test Rig for Optimal Control of Flexible Space Robotic Arms

    DTIC Science & Technology

    2016-12-01

    ...was used to refine the test bed design and the experimental workflow. Three concepts incorporated various strategies to design a robust flexible link to perform the experimentation. The first and second concepts use traditional elastic springs in varying configurations while a third uses a...

  20. Ensemble of surrogates-based optimization for identifying an optimal surfactant-enhanced aquifer remediation strategy at heterogeneous DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin

    2015-11-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL), based on an ensemble-of-surrogates optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as the case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was compared against the four stand-alone surrogate models. The average relative error in the average nitrobenzene removal rates between the ES model and the simulation model over 20 test samples was 0.8%, a high approximation accuracy, indicating that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for minimum cost, with the developed ES model embedded as a constraint, and GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation-strategy optimization of DNAPL-contaminated aquifers.
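One common way to combine the selected surrogates into an ensemble is to weight each model by its inverse validation error, so the more accurate fit dominates the prediction. The study does not state its exact combination rule, so the weighting below (and the stand-in surrogate functions) are illustrative assumptions:

```python
import math

def rmse(model, xs, ys):
    """Root-mean-square error of a surrogate on held-out validation data."""
    return math.sqrt(sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

def make_ensemble(models, xs_val, ys_val):
    """Combine surrogates by weighting each with its inverse validation RMSE.
    One simple ensembling rule; the study's exact scheme may differ."""
    inv = [1.0 / (rmse(m, xs_val, ys_val) + 1e-12) for m in models]
    total = sum(inv)
    weights = [w / total for w in inv]
    return lambda x: sum(w * m(x) for w, m in zip(weights, models))

# Two stand-in surrogates of a removal-rate response (e.g. KRG and KELM fits).
krg = lambda x: 2.0 * x           # accurate on the validation set
kelm = lambda x: 2.0 * x + 0.5    # biased by a constant offset
ensemble = make_ensemble([krg, kelm],
                         xs_val=[1.0, 2.0, 3.0], ys_val=[2.0, 4.0, 6.0])
print(round(ensemble(3.0), 3))  # -> 6.0 (dominated by the accurate surrogate)
```

In the paper's workflow the resulting ensemble replaces the expensive SEAR simulator inside the GA's constraint evaluation, which is where the computational savings come from.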

  1. Ensemble of Surrogates-based Optimization for Identifying an Optimal Surfactant-enhanced Aquifer Remediation Strategy at Heterogeneous DNAPL-contaminated Sites

    NASA Astrophysics Data System (ADS)

    Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.

    2015-12-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL), based on an ensemble-of-surrogates optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as the case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was compared against the four stand-alone surrogate models. The average relative error in the average nitrobenzene removal rates between the ES model and the simulation model over 20 test samples was 0.8%, a high approximation accuracy, indicating that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for minimum cost, with the developed ES model embedded as a constraint, and GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation-strategy optimization of DNAPL-contaminated aquifers.

  2. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    To optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is derived from it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy and a dynamic mutation strategy to ensure both global and local search ability. Performance tests were carried out on the CloudSim simulation platform. The experimental results show that the improved differential evolution algorithm reduces cloud computing task execution time and saves user cost, achieving optimal scheduling of cloud computing tasks.
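For reference, the classic DE/rand/1/bin differential evolution loop that such schedulers build on can be sketched in a few lines. The paper's improvements (dynamic selection and mutation strategies) are not reproduced here, and the sphere function stands in for a real makespan/cost fitness:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.5, CR=0.9, gens=100, seed=0):
    """Minimal classic DE/rand/1/bin minimizer. `bounds` is a list of
    (lo, hi) pairs, one per decision variable."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: difference of two random members added to a third.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clamp to bounds
            # Greedy selection: keep the trial only if it is no worse.
            fc = f(trial)
            if fc <= cost[i]:
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# Example: minimize the sphere function (a stand-in for a scheduling fitness).
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
print(x, fx)
```

The paper's variant would replace the fixed F and CR and the greedy selection with its generation-dependent dynamic strategies.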

  3. Development of industry-based strategies for motivating seat-belt usage

    DOT National Transportation Integrated Search

    1983-03-01

    A variety of incentive-based programs to motivate safety belt use were tested during the 18-month grant period in order to define optimal incentive strategies for particular corporate settings. Initial programs provoked important research questions w...

  4. Nearly ideal binary communication in squeezed channels

    NASA Astrophysics Data System (ADS)

    Paris, Matteo G.

    2001-07-01

    We analyze the effect of squeezing the channel in binary communication based on Gaussian states. We show that for coding on pure states, squeezing increases the detection probability at fixed size of the strategy, actually saturating the optimal bound already for moderate signal energy. Using the Neyman-Pearson lemma for fuzzy hypothesis testing, we are also able to analyze the case of mixed states and to find the optimal amount of squeezing that can be effectively employed. We find that optimally squeezed channels are robust against signal mixing and greatly improve the power of the strategy in comparison with coherent channels.

  5. Cost-effectiveness of cervical cancer screening with primary human papillomavirus testing in Norway

    PubMed Central

    Burger, E A; Ortendahl, J D; Sy, S; Kristiansen, I S; Kim, J J

    2012-01-01

    Background: New screening technologies and vaccination against human papillomavirus (HPV), the necessary cause of cervical cancer, may impact optimal approaches to prevent cervical cancer. We evaluated the cost-effectiveness of alternative screening strategies to inform cervical cancer prevention guidelines in Norway. Methods: We leveraged primary epidemiologic and economic data from Norway to contextualise a simulation model of HPV-induced cervical cancer. The current cytology-only screening was compared with strategies involving cytology at younger ages and primary HPV-based screening at older ages (31/34+ years), an option being actively deliberated by the Norwegian government. We varied the switch age, screening interval, and triage strategies for women with HPV-positive results. Uncertainty was evaluated in sensitivity analysis. Results: Current cytology-only screening was less effective and more costly than strategies that involve switching to primary HPV testing at older ages. For unvaccinated women, switching at age 34 years to primary HPV testing every 4 years was optimal given the Norwegian cost-effectiveness threshold ($83 000 per year of life saved). For vaccinated women, a 6-year screening interval was cost-effective. When we considered a wider range of strategies, we found that an earlier switch to HPV testing (at age 31 years) may be preferred. Conclusions: Strategies involving a switch to HPV testing for primary screening in older women are expected to be cost-effective compared with current recommendations in Norway. PMID:22441643

  6. Research on Operation Strategy for Bundled Wind-thermal Generation Power Systems Based on Two-Stage Optimization Model

    NASA Astrophysics Data System (ADS)

    Sun, Congcong; Wang, Zhijie; Liu, Sanming; Jiang, Xiuchen; Sheng, Gehao; Liu, Tianyu

    2017-05-01

    Wind power has the advantages of being clean and non-polluting, and the development of bundled wind-thermal generation power systems (BWTGSs) is one of the important means to improve the wind power accommodation rate and implement the "clean alternative" on the generation side. A two-stage optimization strategy for BWTGSs considering wind speed forecasting results and load characteristics is proposed. By taking short-term wind speed forecasting results on the generation side and load characteristics on the demand side into account, a two-stage optimization model for BWTGSs is formulated. Using the environmental benefit index of BWTGSs as the objective function, and supply-demand balance and generator operation as the constraints, the first-stage optimization model is developed with chance-constrained programming theory. Using the operation cost of BWTGSs as the objective function, the second-stage optimization model is developed with a greedy algorithm. An improved PSO algorithm is employed to solve the model, and numerical tests verify the effectiveness of the proposed strategy.

  7. Optimal placement of tuning masses for vibration reduction in helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1988-01-01

    Described are methods for reducing vibration in helicopter rotor blades by determining optimum sizes and locations of tuning masses through formal mathematical optimization techniques. An optimization procedure is developed which employs the tuning masses and corresponding locations as design variables which are systematically changed to achieve low values of shear without a large mass penalty. The finite-element structural analysis of the blade and the optimization formulation require development of discretized expressions for two performance parameters: modal shaping parameter and modal shear amplitude. Matrix expressions for both quantities and their sensitivity derivatives are developed. Three optimization strategies are developed and tested. The first is based on minimizing the modal shaping parameter which indirectly reduces the modal shear amplitudes corresponding to each harmonic of airload. The second strategy reduces these amplitudes directly, and the third strategy reduces the shear as a function of time during a revolution of the blade. The first strategy works well for reducing the shear for one mode responding to a single harmonic of the airload, but has been found in some cases to be ineffective for more than one mode. The second and third strategies give similar results and show excellent reduction of the shear with a low mass penalty.

  8. Motor planning under temporal uncertainty is suboptimal when the gain function is asymmetric

    PubMed Central

    Ota, Keiji; Shinya, Masahiro; Kudo, Kazutoshi

    2015-01-01

    For optimal action planning, the gain/loss associated with actions and the variability in motor output should both be considered. A number of studies make conflicting claims about the optimality of human action planning but cannot be reconciled because they use different movements and gain/loss functions; the disagreement may also reflect differences in experimental design and in the energetic cost of motor effort. We used a coincident timing task, which requires decision making with constant energetic cost, to test the optimality of participants' timing strategies under four configurations of the gain function. We compared participant strategies to an optimal timing strategy calculated from a Bayesian model that maximizes the expected gain. We found suboptimal timing strategies under two configurations of the gain function characterized by asymmetry, in which higher gain is associated with a higher risk of zero gain. Participants showed a risk-seeking strategy by responding closer than optimal to the time of onset/offset of zero gain. Meanwhile, there was good agreement between the model and actual performance under two configurations of the gain function characterized by symmetry. Our findings show that the human ability to make decisions that must reflect uncertainty in one's own motor output has limits that depend on the configuration of the gain function. PMID:26236227
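The expected-gain comparison can be illustrated with a small Monte Carlo sketch, assuming Gaussian timing variability and a hypothetical asymmetric gain that rises toward a deadline and then drops to zero; none of the numbers below are from the study.

```python
import numpy as np

def expected_gain(aim_time, gain, sigma, n=200_000, seed=0):
    """Monte Carlo estimate of the expected gain when aiming at
    aim_time with Gaussian motor variability of std sigma."""
    rng = np.random.default_rng(seed)
    responses = rng.normal(aim_time, sigma, n)
    return gain(responses).mean()

# Illustrative asymmetric gain: reward grows toward t = 1.0 s,
# then drops to zero past the deadline (higher gain -> higher risk).
def asymmetric_gain(t):
    return np.where(t <= 1.0, np.maximum(t, 0.0) * 100.0, 0.0)

sigma = 0.05  # assumed timing variability (s)
candidates = np.linspace(0.7, 1.0, 61)
scores = [expected_gain(a, asymmetric_gain, sigma) for a in candidates]
optimal_aim = candidates[int(np.argmax(scores))]
# The ideal observer aims earlier than the deadline to hedge against
# overshooting into the zero-gain region.
print(round(optimal_aim, 3))
```

A risk-seeking participant, in these terms, aims closer to the deadline than `optimal_aim`, trading expected gain for a shot at the maximum.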

  9. Robust Airfoil Optimization in High Resolution Design Space

    NASA Technical Reports Server (NTRS)

    Li, Wu; Padula, Sharon L.

    2003-01-01

    Robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of B-spline control points as design variables yet yields a fairly smooth airfoil shape, and (3) it allows the user to trade off the level of optimization against the amount of computing time consumed. The robust optimization method is demonstrated by solving a lift-constrained drag minimization problem for a two-dimensional airfoil in viscous flow with a large number of geometric design variables. Our experience with robust optimization indicates that our strategy produces reasonable airfoil shapes that are similar to the original airfoils, but these new shapes provide drag reduction over the specified range of Mach numbers. We have tested this strategy on a number of advanced airfoil models produced by knowledgeable aerodynamic design team members and found that our strategy produces airfoils as good as or better than any designs produced by traditional design methods.

  10. An External Archive-Guided Multiobjective Particle Swarm Optimization Algorithm.

    PubMed

    Zhu, Qingling; Lin, Qiuzhen; Chen, Weineng; Wong, Ka-Chun; Coello Coello, Carlos A; Li, Jianqiang; Chen, Jianyong; Zhang, Jun

    2017-09-01

    The selection of swarm leaders (i.e., the personal best and global best) is important in the design of a multiobjective particle swarm optimization (MOPSO) algorithm. Such leaders are expected to effectively guide the swarm to approach the true Pareto optimal front. In this paper, we present a novel external archive-guided MOPSO algorithm (AgMOPSO), where the leaders for the velocity update are all selected from the external archive. In our algorithm, multiobjective optimization problems (MOPs) are transformed into a set of subproblems using a decomposition approach, and each particle is then assigned to optimize a subproblem. A novel archive-guided velocity update method is designed to guide the swarm in exploration, and the external archive is also evolved using an immune-based evolutionary strategy. These proposed approaches speed up the convergence of AgMOPSO. The experimental results demonstrate the superiority of AgMOPSO in solving most of the test problems adopted, in terms of two commonly used performance measures. Moreover, the effectiveness of the proposed archive-guided velocity update method and immune-based evolutionary strategy is also experimentally validated on more than 30 test MOPs.
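An archive-guided velocity update of this general kind can be sketched schematically as follows; the coefficient values and the two-guide sampling scheme are illustrative assumptions, not AgMOPSO's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def archive_guided_velocity(v, x, archive, w=0.4, c=1.5):
    """Schematic velocity update in which both guides are sampled
    from an external archive of non-dominated solutions rather than
    from personal/global bests (coefficients are illustrative)."""
    local_guide, global_guide = archive[rng.choice(len(archive), 2)]
    r1, r2 = rng.random(2)
    return (w * v
            + c * r1 * (local_guide - x)
            + c * r2 * (global_guide - x))

# Toy usage: one particle and a small archive of 2-D decision vectors.
x = np.array([0.5, 0.5])
v = np.zeros(2)
archive = np.array([[0.1, 0.9], [0.9, 0.1], [0.4, 0.6]])
v = archive_guided_velocity(v, x, archive)
x = np.clip(x + v, 0.0, 1.0)  # keep the particle within bounds
```

Because every guide lies on the current non-dominated front, the particle is always pulled toward the approximated Pareto set rather than toward a possibly dominated personal best.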

  11. Optimizing point-of-care testing in clinical systems management.

    PubMed

    Kost, G J

    1998-01-01

    The goal of improving medical and economic outcomes calls for leadership based on fundamental principles. The manager of clinical systems works collaboratively within the acute care center to optimize point-of-care testing through systematic approaches such as integrative strategies, algorithms, and performance maps. These approaches are effective and efficacious for critically ill patients. Optimizing point-of-care testing throughout the entire health-care system is inherently more difficult. There is potential to achieve high-quality testing, integrated disease management, and equitable health-care delivery. Despite rapid change and economic uncertainty, a macro-strategic, information-integrated, feedback-systems, outcomes-oriented approach is timely, challenging, effective, and uplifting to the creative human spirit.

  12. Cost-Effectiveness of Screening Individuals With Cystic Fibrosis for Colorectal Cancer.

    PubMed

    Gini, Andrea; Zauber, Ann G; Cenin, Dayna R; Omidvari, Amir-Houshang; Hempstead, Sarah E; Fink, Aliza K; Lowenfels, Albert B; Lansdorp-Vogelaar, Iris

    2017-12-27

    Individuals with cystic fibrosis are at increased risk of colorectal cancer (CRC) compared to the general population, and risk is higher among those who received an organ transplant. We performed a cost-effectiveness analysis to determine optimal CRC screening strategies for patients with cystic fibrosis. We adjusted the existing Microsimulation Screening Analysis-Colon microsimulation model to reflect increased CRC risk and lower life expectancy in patients with cystic fibrosis. Modeling was performed separately for individuals who never received an organ transplant and patients who had received an organ transplant. We modeled 76 colonoscopy screening strategies that varied the age range and screening interval. The optimal screening strategy was determined based on a willingness to pay threshold of $100,000 per life-year gained. Sensitivity and supplementary analyses were performed, including fecal immunochemical test (FIT) as an alternative test, earlier ages of transplantation, and increased rates of colonoscopy complications, to assess whether optimal screening strategies would change. Colonoscopy every 5 years, starting at age 40 years, was the optimal colonoscopy strategy for patients with cystic fibrosis who never received an organ transplant; this strategy prevented 79% of deaths from CRC. Among patients with cystic fibrosis who had received an organ transplant, optimal colonoscopy screening should start at an age of 30 or 35 years, depending on the patient's age at time of transplantation. Annual FIT screening was predicted to be cost-effective for patients with cystic fibrosis. However, the level of accuracy of the FIT in this population is not clear. Using a Microsimulation Screening Analysis-Colon microsimulation model, we found screening of patients with cystic fibrosis for CRC to be cost-effective. Due to the higher risk of CRC in these patients, screening should start at an earlier age with a shorter screening interval.
The findings of this study (especially those on FIT screening) may be limited by restricted evidence available for patients with cystic fibrosis. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.
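The decision rule behind analyses like this one reduces to comparing an incremental cost-effectiveness ratio (ICER) with a willingness-to-pay threshold; the cost and life-year figures below are purely hypothetical.

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per
    life-year gained by the new strategy over the comparator."""
    return (cost_new - cost_old) / (effect_new - effect_old)

wtp_threshold = 100_000  # $ per life-year gained, as in the study

# Hypothetical totals per 1,000 patients: no screening vs. a
# 5-yearly colonoscopy strategy (numbers are illustrative only).
ratio = icer(cost_new=4_200_000, effect_new=18_450,
             cost_old=1_500_000, effect_old=18_400)
cost_effective = ratio <= wtp_threshold
print(round(ratio), cost_effective)  # prints: 54000 True
```

A strategy whose ICER falls below the threshold is deemed cost-effective; among several such strategies, the one with the greatest health gain is typically selected.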

  13. Cost Effectiveness of Screening Individuals With Cystic Fibrosis for Colorectal Cancer.

    PubMed

    Gini, Andrea; Zauber, Ann G; Cenin, Dayna R; Omidvari, Amir-Houshang; Hempstead, Sarah E; Fink, Aliza K; Lowenfels, Albert B; Lansdorp-Vogelaar, Iris

    2018-02-01

    Individuals with cystic fibrosis are at increased risk of colorectal cancer (CRC) compared with the general population, and risk is higher among those who received an organ transplant. We performed a cost-effectiveness analysis to determine optimal CRC screening strategies for patients with cystic fibrosis. We adjusted the existing Microsimulation Screening Analysis-Colon model to reflect increased CRC risk and lower life expectancy in patients with cystic fibrosis. Modeling was performed separately for individuals who never received an organ transplant and patients who had received an organ transplant. We modeled 76 colonoscopy screening strategies that varied the age range and screening interval. The optimal screening strategy was determined based on a willingness to pay threshold of $100,000 per life-year gained. Sensitivity and supplementary analyses were performed, including fecal immunochemical test (FIT) as an alternative test, earlier ages of transplantation, and increased rates of colonoscopy complications, to assess if optimal screening strategies would change. Colonoscopy every 5 years, starting at an age of 40 years, was the optimal colonoscopy strategy for patients with cystic fibrosis who never received an organ transplant; this strategy prevented 79% of deaths from CRC. Among patients with cystic fibrosis who had received an organ transplant, optimal colonoscopy screening should start at an age of 30 or 35 years, depending on the patient's age at time of transplantation. Annual FIT screening was predicted to be cost-effective for patients with cystic fibrosis. However, the level of accuracy of the FIT in this population is not clear. Using a Microsimulation Screening Analysis-Colon model, we found screening of patients with cystic fibrosis for CRC to be cost effective. Because of the higher risk of CRC in these patients, screening should start at an earlier age with a shorter screening interval. 
The findings of this study (especially those on FIT screening) may be limited by restricted evidence available for patients with cystic fibrosis. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.

  14. Counteracting Obstacles with Optimistic Predictions

    ERIC Educational Resources Information Center

    Zhang, Ying; Fishbach, Ayelet

    2010-01-01

    This research tested for counteractive optimism: a self-control strategy of generating optimistic predictions of future goal attainment in order to overcome anticipated obstacles in goal pursuit. In support of the counteractive optimism model, participants in 5 studies predicted better performance, more time invested in goal activities, and lower…

  15. Dispositional optimism and coping with pain.

    PubMed

    Bargiel-Matusiewicz, K; Krzyszkowska, A

    2009-12-07

    The aim of this article is to analyze the relation between dispositional optimism and coping with chronic pain. The study seeks to define the relation between life orientation (optimism vs. pessimism) and coping with pain (beliefs about pain control and the choice of coping strategy). The following questionnaires were used: LOT-R (Life Orientation Test), BPCQ (the Beliefs about Pain Control Questionnaire) and CSQ (the Pain Coping Strategies Questionnaire). The results show that dispositional optimism correlates positively with internal locus of pain control (r = 0.6, P < 0.01), declared coping with pain (r = 0.38, P < 0.05), diverting attention (r = 0.93, P < 0.01), and behavioral activity (r = 0.82, P < 0.01). Dispositional optimism correlates negatively with catastrophizing (r = -0.28, P < 0.05). We conclude that dispositional optimism plays a key role in forming the mechanisms of coping with chronic pain and thereby in improving the psychophysical comfort of patients.

  16. An Examination of Strategy Implementation During Abstract Nonlinguistic Category Learning in Aphasia.

    PubMed

    Vallila-Rohter, Sofia; Kiran, Swathi

    2015-08-01

    Our purpose was to study strategy use during nonlinguistic category learning in aphasia. Twelve control participants without aphasia and 53 participants with aphasia (PWA) completed a computerized feedback-based category learning task consisting of training and testing phases. Accuracy rates of categorization in testing phases were calculated. To evaluate strategy use, strategy analyses were conducted over training and testing phases. Participant data were compared with model data that simulated complex multi-cue, single-feature, and random-pattern strategies. Learning success and strategy use were evaluated within the context of standardized cognitive-linguistic assessments. Categorization accuracy was higher among control participants than among PWA. The majority of control participants implemented suboptimal or optimal multi-cue and single-feature strategies by the testing phases of the experiment. In contrast, a large subgroup of PWA implemented random patterns, or no strategy, during both training and testing phases of the experiment. Person-to-person variability arises not only in category learning ability but also in the strategies implemented to complete category learning tasks. PWA developed effective strategies during category learning tasks less frequently than control participants did. Certain PWA may have impairments of strategy development or feedback processing not captured by language and currently probed cognitive abilities.

  17. Optimal combinations of control strategies and cost-effective analysis for visceral leishmaniasis disease transmission.

    PubMed

    Biswas, Santanu; Subramanian, Abhishek; ELMojtaba, Ibrahim M; Chattopadhyay, Joydev; Sarkar, Ram Rup

    2017-01-01

    Visceral leishmaniasis (VL) is a deadly neglected tropical disease that poses a serious problem in various countries all over the world. Implementation of various intervention strategies fails to control the spread of this disease because of parasite drug resistance and the resistance of sandfly vectors to insecticide sprays. Policy makers therefore need to develop novel strategies or resort to a combination of multiple intervention strategies to control the spread of the disease. To address this issue, we propose an extensive SIR-type model for anthroponotic visceral leishmaniasis transmission with seasonal fluctuations modeled in the form of a periodic sandfly biting rate. Fitting the model to real data reported in South Sudan, we estimate the model parameters and compare the model predictions with known VL cases. Using optimal control theory, we study the effects of popular control strategies, namely drug-based treatment of symptomatic and PKDL-infected individuals, insecticide-treated bednets and spraying of insecticides, on the dynamics of infected human and vector populations. We propose that the strategies remain ineffective in curbing the disease individually, as opposed to the use of optimal combinations of the mentioned strategies. Testing the model for different optimal combinations while considering periodic seasonal fluctuations, we find that the optimal combination of treatment of individuals and insecticide sprays performs well in controlling the disease for the time period of intervention introduced. Performing a cost-effectiveness analysis, we identify that the same strategy also proves to be efficacious and cost-effective. Finally, we suggest that our model would be helpful for policy makers to predict the best intervention strategies for specific time periods and their appropriate implementation for elimination of visceral leishmaniasis.

  18. Dynamic Portfolio Strategy Using Clustering Approach

    PubMed Central

    Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian

    2017-01-01

    The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market. PMID:28129333

  19. Dynamic Portfolio Strategy Using Clustering Approach.

    PubMed

    Ren, Fei; Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian

    2017-01-01

    The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market.
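The rising-days ratio used to label market conditions can be sketched as follows; the 0.55/0.45 thresholds are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def market_condition(index_returns, up=0.55, down=0.45):
    """Label a horizon by the fraction of trading days on which the
    index rose (thresholds are illustrative, not from the paper)."""
    rising_ratio = np.mean(np.asarray(index_returns) > 0)
    if rising_ratio >= up:
        return "drawup"
    if rising_ratio <= down:
        return "drawdown"
    return "stable"

# Toy daily index returns for a selection horizon: 6 of 8 days rise.
returns = [0.01, 0.004, -0.002, 0.008, 0.012, -0.001, 0.003, 0.009]
print(market_condition(returns))  # prints: drawup
```

The strategy then pairs the labels of the selection and investment horizons (e.g. drawup/stable) to decide whether a central or a peripheral portfolio is expected to perform better.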

  20. Sub-problem Optimization With Regression and Neural Network Approximators

    NASA Technical Reports Server (NTRS)

    Guptill, James D.; Hopkins, Dale A.; Patnaik, Surya N.

    2003-01-01

    Design optimization of large systems can be attempted through a sub-problem strategy. In this strategy, the original problem is divided into a number of smaller problems that are clustered together to obtain a sequence of sub-problems. Solution to the large problem is attempted iteratively through repeated solutions to the modest sub-problems. This strategy is applicable to structures and to multidisciplinary systems. For structures, clustering the substructures generates the sequence of sub-problems. For a multidisciplinary system, individual disciplines, accounting for coupling, can be considered as sub-problems. A sub-problem, if required, can be further broken down to accommodate sub-disciplines. The sub-problem strategy is being implemented into the NASA design optimization test bed, referred to as "CometBoards." Neural network and regression approximators are employed for reanalysis and sensitivity analysis calculations at the sub-problem level. The strategy has been implemented in sequential as well as parallel computational environments. This strategy, which attempts to alleviate algorithmic and reanalysis deficiencies, has the potential to become a powerful design tool. However, several issues have to be addressed before its full potential can be harnessed. This paper illustrates the strategy and addresses some issues.

  1. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    PubMed

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98; p < 0.01), with a mean bias of -2.2% and precision of 9.4%. A similar relationship was observed in children (R² = 0.99; p < 0.01). The developed pharmacokinetic model-based sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
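Bias and precision of a sparse-sampling design are commonly reported as the mean and root-mean-square percentage error of its AUC estimates against the full-profile reference; a minimal sketch with hypothetical AUC values (not data from the study):

```python
import numpy as np

def bias_and_precision(auc_sparse, auc_full):
    """Relative bias (mean percentage error) and precision (root
    mean squared percentage error) of sparse-sampling AUC estimates
    against full-profile reference estimates."""
    auc_sparse = np.asarray(auc_sparse, float)
    auc_full = np.asarray(auc_full, float)
    pe = 100.0 * (auc_sparse - auc_full) / auc_full
    return pe.mean(), np.sqrt((pe ** 2).mean())

# Hypothetical AUC estimates (mg*h/L) for five patients:
full   = np.array([10.2, 12.5, 9.8, 14.1, 11.0])
sparse = np.array([10.0, 12.1, 9.9, 13.6, 10.7])
bias, precision = bias_and_precision(sparse, full)
```

A small negative bias with single-digit precision, as in the study's -2.2%/9.4%, indicates the sparse design slightly underestimates exposure but tracks the full profile closely.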

  2. Ant Navigation: Fractional Use of the Home Vector

    PubMed Central

    Cheung, Allen; Hiby, Lex; Narendra, Ajay

    2012-01-01

    Home is a special location for many animals, offering shelter from the elements, protection from predation, and a common place for gathering of the same species. Not surprisingly, many species have evolved efficient, robust homing strategies, which are used as part of each and every foraging journey. A basic strategy used by most animals is to take the shortest possible route home by accruing the net distances and directions travelled during foraging, a strategy well known as path integration. This strategy is part of the navigation toolbox of ants occupying different landscapes. However, when there is a visual discrepancy between test and training conditions, the distance travelled by animals relying on the path integrator varies dramatically between species: from 90% of the home vector to an absolute distance of only 50 cm. We here ask what the theoretically optimal balance between PI-driven and landmark-driven navigation should be. In combination with well-established results from optimal search theory, we show analytically that this fractional use of the home vector is an optimal homing strategy under a variety of circumstances. Assuming there is a familiar route that an ant recognizes, theoretically optimal search should always begin at some fraction of the home vector, depending on the region of familiarity. These results are shown to be largely independent of the search algorithm used. Ant species from different habitats appear to have optimized their navigation strategy based on the availability and nature of navigational information content in their environment. PMID:23209744

  3. Optically Based Rapid Screening Method for Proven Optimal Treatment Strategies Before Treatment Begins

    DTIC Science & Technology

    to rapidly test/screen breast cancer therapeutics as a strategy to streamline drug development and provide individualized treatment. The results...system can therefore be used to streamline pre-clinical drug development, by reducing the number of animals, cost, and time required to screen new drugs

  4. Using genetic algorithms to determine near-optimal pricing, investment and operating strategies in the electric power industry

    NASA Astrophysics Data System (ADS)

    Wu, Dongjun

    Network industries have technologies characterized by a spatial hierarchy, the "network," with capital-intensive interconnections and time-dependent, capacity-limited flows of products and services through the network to customers. This dissertation studies service pricing, investment and business operating strategies for the electric power network. First-best solutions for a variety of pricing and investment problems have been studied. The evaluation of genetic algorithms (GAs, methods based on the idea of natural evolution) as a primary means of solving complicated network problems, with respect to pricing as well as investment and other operating decisions, has been conducted. New constraint-handling techniques in GAs have been studied and tested. The actual application of such constraint-handling techniques in solving practical non-linear optimization problems has been tested on several complex network design problems, with encouraging initial results. Genetic algorithms provide solutions that are feasible and close to optimal when the optimal solution is known; in some instances, the near-optimal solutions found by the proposed GA approach for small problems can only be verified by pushing the limits of currently available non-linear optimization software. The performance is far better than that of several commercially available GA programs, which are generally inadequate for solving any of the problems studied in this dissertation, primarily because of their poor handling of constraints. Genetic algorithms, if carefully designed, seem very promising for solving difficult problems that are intractable by traditional analytic methods.
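One common constraint-handling technique for GAs is a static penalty added to the objective for each violated constraint. The sketch below applies it to a toy one-dimensional problem with a mutation-only, elitist loop; it is illustrative only, not the dissertation's algorithm.

```python
import random

random.seed(1)

def penalized_fitness(x, objective, constraints, penalty=1e3):
    """Static-penalty constraint handling: each violated constraint
    g(x) <= 0 adds penalty * violation to the minimized objective."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + penalty * violation

# Toy problem: minimize (x - 2)^2 subject to x <= 1.5.
obj = lambda x: (x - 2.0) ** 2
cons = [lambda x: x - 1.5]  # g(x) <= 0  <=>  x <= 1.5

# Minimal GA: keep the 10 fittest, spawn 3 mutants from each parent.
pop = [random.uniform(-5, 5) for _ in range(40)]
for _ in range(200):
    pop.sort(key=lambda x: penalized_fitness(x, obj, cons))
    parents = pop[:10]
    pop = parents + [p + random.gauss(0, 0.3) for p in parents for _ in range(3)]
best = min(pop, key=lambda x: penalized_fitness(x, obj, cons))
# best converges toward the constrained optimum at x = 1.5
```

The penalty weight matters: too small and infeasible solutions win; too large and the search struggles to approach active constraint boundaries, which is one reason more sophisticated handling techniques are studied.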

  5. Comprehensive clone screening and evaluation of fed-batch strategies in a microbioreactor and lab scale stirred tank bioreactor system: application on Pichia pastoris producing Rhizopus oryzae lipase

    PubMed Central

    2014-01-01

    Background: In Pichia pastoris bioprocess engineering, classic approaches for clone selection and bioprocess optimization at small/micro scale using the promoter of the alcohol oxidase 1 gene (PAOX1), induced by methanol, present low reproducibility, leading to high time and resource consumption. Results: An automated microfermentation platform (RoboLector) was successfully tested to overcome the chronic problems of clone selection and optimization of fed-batch strategies. Different clones from Mut+ P. pastoris phenotype strains expressing heterologous Rhizopus oryzae lipase (ROL), including a subset also overexpressing the transcription factor HAC1, were tested to select the most promising clones. The RoboLector showed high performance for the selection and optimization of cultivation media with minimal cost and time. Syn6 medium was better than conventional YNB medium in terms of production of heterologous protein. The RoboLector microbioreactor was also tested for different fed-batch strategies with three clones producing different lipase levels. Two mixed-substrate fed-batch strategies were evaluated. The first strategy was the enzymatic release of glucose from a soluble glucose polymer by a glucosidase, with methanol addition every 24 hours. The second strategy used glycerol as a co-substrate jointly with methanol at two different feeding rates. The implementation of these simple fed-batch strategies increased the levels of lipolytic activity 80-fold compared to classical batch strategies used in clone selection. Thus, these strategies minimize the risk of errors in clone selection and increase the detection level of the desired product. Finally, the performance of two fed-batch strategies was compared for lipase production between the RoboLector microbioreactor and a 5-liter stirred tank bioreactor for three selected clones. At both scales, the same clone ranking was achieved. Conclusion: The RoboLector showed excellent performance in clone selection of the P. pastoris Mut+ phenotype. The use of fed-batch strategies using mixed substrate feeds resulted in increased biomass and lipolytic activity. The automated processing of fed-batch strategies by the RoboLector considerably facilitates the operation of fermentation processes, while reducing error-prone clone selection by increasing product titers. The scale-up from microbioreactor to lab-scale stirred tank bioreactor showed an excellent correlation, validating the use of the microbioreactor as a powerful tool for evaluating fed-batch operational strategies. PMID:24606982

  6. Testing models of parental investment strategy and offspring size in ants.

    PubMed

    Gilboa, Smadar; Nonacs, Peter

    2006-01-01

    Parental investment strategies can be fixed or flexible. A fixed strategy predicts making all offspring a single 'optimal' size. Dynamic models predict flexible strategies with more than one optimal size of offspring. Patterns in the distribution of offspring sizes may thus reveal the investment strategy. Static strategies should produce normal distributions. Dynamic strategies should often result in non-normal distributions. Furthermore, variance in morphological traits should be positively correlated with the length of developmental time the traits are exposed to environmental influences. Finally, the type of deviation from normality (i.e., skewed left or right, or platykurtic) should be correlated with the average offspring size. To test the latter prediction, we used simulations to detect significant departures from normality and categorize distribution types. Data from three species of ants strongly support the predicted patterns for dynamic parental investment. Offspring size distributions are often significantly non-normal. Traits fixed earlier in development, such as head width, are less variable than final body weight. The type of distribution observed correlates with mean female dry weight. The overall support for a dynamic parental investment model has implications for life history theory. Predicted conflicts over parental effort, sex investment ratios, and reproductive skew in cooperative breeders follow from assumptions of static parental investment strategies and omnipresent resource limitations. By contrast, with flexible investment strategies such conflicts can be either absent or maladaptive.

  7. Optimal scan strategy for mega-pixel and kilo-gray-level OLED-on-silicon microdisplay.

    PubMed

    Ji, Yuan; Ran, Feng; Ji, Weigui; Xu, Meihua; Chen, Zhangjing; Jiang, Yuxi; Shen, Weixin

    2012-06-10

    The digital pixel driving scheme makes organic light-emitting diode (OLED) microdisplays more immune to pixel luminance variations and simplifies the circuit architecture and design flow compared to the analog pixel driving scheme. Additionally, it is easily applied in fully digital systems. However, the data bottleneck becomes a notable problem as the number of pixels and gray levels grows dramatically. This paper discusses the ability of digital driving to achieve kilo-gray-levels for mega-pixel displays. The optimal scan strategy is proposed for creating ultrahigh gray levels and increasing light efficiency and contrast ratio. Two correction schemes are discussed to improve gray level linearity. A 1280×1024×3 OLED-on-silicon microdisplay with 4096 gray levels is designed based on the optimal scan strategy. The circuit driver is integrated in the silicon backplane chip in a 0.35 μm 3.3 V/6 V dual-voltage, one-polysilicon-layer, four-metal-layer (1P4M) complementary metal-oxide semiconductor (CMOS) process with custom top metal. The design aspects of the optimal scan controller are also discussed. The test results show that the gray level linearity of the correction schemes for the optimal scan strategy is acceptable to the human eye.
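As a rough illustration of how a digital (time-modulated) driving scheme reaches kilo-gray-levels, the sketch below computes binary-weighted subframe durations for a 12-bit (4096-level) frame. The frame period and all names are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of binary-weighted subframe timing for digital
# (pulse-width) gray-scale driving. Frame period is an assumed ~60 Hz value.

def subframe_durations(bits, frame_period_us):
    """Split one frame into binary-weighted subframes (LSB first)."""
    total_weight = (1 << bits) - 1          # e.g. 4095 for 12 bits
    return [frame_period_us * (1 << b) / total_weight for b in range(bits)]

def on_time(gray, durations):
    """Total emission time for a gray level: sum subframes whose bit is set."""
    return sum(d for b, d in enumerate(durations) if gray >> b & 1)

d = subframe_durations(12, 16667.0)   # 12 bits -> 4096 gray levels
assert len(d) == 12
assert on_time(0, d) == 0.0
# Ideal case: emission time scales linearly up to the full frame period.
assert abs(on_time(4095, d) - 16667.0) < 1e-6
```

In this idealized form, gray level maps linearly to emission time; the correction schemes mentioned in the abstract would compensate for the device non-linearities that break this mapping in practice.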

  8. Efficiency, equity and feasibility of strategies to identify the poor: an application to premium exemptions under National Health Insurance in Ghana.

    PubMed

    Jehu-Appiah, Caroline; Aryeetey, Genevieve; Spaan, Ernst; Agyepong, Irene; Baltussen, Rob

    2010-05-01

    This paper outlines potential strategies to identify the poor and assesses their feasibility, efficiency and equity. Analyses are illustrated for the case of premium exemptions under National Health Insurance (NHI) in Ghana. A literature search in Medline was performed to identify strategies to identify the poor. Models were developed incorporating information on demography and poverty, and on the costs and the inclusion and exclusion errors of these strategies, in two regions of Ghana. Proxy means testing (PMT), participatory welfare ranking (PWR), and geographic targeting (GT) are potentially useful strategies to identify the poor, and vary in terms of their efficiency, equity and feasibility. Costs to exempt one poor individual range between US$11.63 and US$66.67, and strategies may exclude up to 25% of the poor. The feasibility of strategies depends on their aptness in rural/urban settings and on the administrative capacity to implement them. A decision framework summarizes the above information to guide policy making. We recommend PMT as the optimal strategy in urbanized settings with relatively low poverty incidence, PWR as the optimal strategy in rural settings with relatively low poverty incidence, and GT as the optimal strategy in settings with high poverty incidence. This paper holds important lessons not only for NHI in Ghana but also for other countries implementing exemption policies. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
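The efficiency metric above (cost to exempt one poor individual) can be illustrated with a toy calculation from a per-person screening cost and an exclusion error rate. The numbers are hypothetical placeholders, not the paper's estimates.

```python
# Illustrative targeting-efficiency calculation. All figures are hypothetical.

def cost_per_poor_exempted(cost_per_person, poverty_rate, exclusion_error):
    """Administrative cost of screening everyone, divided by the fraction of
    the population that is poor AND correctly identified as poor."""
    reached = poverty_rate * (1 - exclusion_error)
    return cost_per_person / reached

# A strategy costing $2 per person screened, 20% poverty incidence,
# and 25% of the poor wrongly excluded:
c = cost_per_poor_exempted(2.0, 0.20, 0.25)
assert abs(c - 13.33) < 0.01   # about $13.33 per poor person exempted
```

The same arithmetic shows why strategy choice depends on local poverty incidence: as the poverty rate rises, the cost per correctly exempted person falls, shifting the ranking of strategies.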

  9. An Examination of Strategy Implementation During Abstract Nonlinguistic Category Learning in Aphasia

    PubMed Central

    Kiran, Swathi

    2015-01-01

    Purpose Our purpose was to study strategy use during nonlinguistic category learning in aphasia. Method Twelve control participants without aphasia and 53 participants with aphasia (PWA) completed a computerized feedback-based category learning task consisting of training and testing phases. Accuracy rates of categorization in testing phases were calculated. To evaluate strategy use, strategy analyses were conducted over training and testing phases. Participant data were compared with model data that simulated complex multi-cue, single feature, and random pattern strategies. Learning success and strategy use were evaluated within the context of standardized cognitive–linguistic assessments. Results Categorization accuracy was higher among control participants than among PWA. The majority of control participants implemented suboptimal or optimal multi-cue and single-feature strategies by testing phases of the experiment. In contrast, a large subgroup of PWA implemented random patterns, or no strategy, during both training and testing phases of the experiment. Conclusions Person-to-person variability arises not only in category learning ability but also in the strategies implemented to complete category learning tasks. PWA less frequently developed effective strategies during category learning tasks than control participants. Certain PWA may have impairments of strategy development or feedback processing not captured by language and currently probed cognitive abilities. PMID:25908438

  10. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications [computational fluid dynamics]

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  11. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem

    PubMed Central

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique that overcomes this weakness of the classical BBO algorithm for QAP by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions within reasonable computational times. Out of 61 benchmark instances tested, the proposed method obtains the best known solutions for 57 of them. PMID:26819585
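A minimal sketch of the kind of hybrid described above: permutation "habitats", a BBO-style migration step, and a short tabu-search pass standing in for the mutation operator. All parameter choices and helper names are assumptions for illustration, not the authors' implementation.

```python
import random

def qap_cost(perm, flow, dist):
    """QAP objective: sum of flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def migrate(perm, donor):
    """BBO-style migration for permutations: adopt the donor's facility at a
    random position, repaired by swapping so the result stays a permutation."""
    child = perm[:]
    i = random.randrange(len(perm))
    j = child.index(donor[i])
    child[i], child[j] = child[j], child[i]
    return child

def tabu_pass(perm, flow, dist, tenure=5, iters=30):
    """Short tabu search over pairwise swaps, replacing BBO's mutation."""
    best, cur = perm[:], perm[:]
    best_c = cur_c = qap_cost(cur, flow, dist)
    tabu = {}
    n = len(perm)
    for t in range(iters):
        moves = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if tabu.get((i, j), -1) < t]          # skip tabu swaps
        if not moves:
            break
        cand = None
        for i, j in moves:
            nxt = cur[:]
            nxt[i], nxt[j] = nxt[j], nxt[i]
            c = qap_cost(nxt, flow, dist)
            if cand is None or c < cand[0]:
                cand = (c, nxt, (i, j))
        cur_c, cur, mv = cand
        tabu[mv] = t + tenure                          # forbid reversing soon
        if cur_c < best_c:
            best, best_c = cur[:], cur_c
    return best, best_c

# Tiny symmetric 3-facility instance (made up for illustration):
flow = [[0, 1, 2], [1, 0, 3], [2, 3, 0]]
dist = [[0, 4, 5], [4, 0, 6], [5, 6, 0]]
best, c = tabu_pass([0, 1, 2], flow, dist)
assert c <= qap_cost([0, 1, 2], flow, dist)
```

In a full hybrid, each generation would apply `migrate` between habitats selected by immigration/emigration rates and then run `tabu_pass` on the offspring.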

  13. Adaptive and Qualitative Changes in Encoding Strategy With Experience: Evidence From the Test-Expectancy Paradigm

    PubMed Central

    Finley, Jason R.; Benjamin, Aaron S.

    2012-01-01

    Three experiments demonstrated learners’ abilities to adaptively and qualitatively accommodate their encoding strategies to the demands of an upcoming test. Stimuli were word pairs. In Experiment 1, test expectancy was induced for either cued recall (of targets given cues) or free recall (of targets only) across 4 study–test cycles of the same test format, followed by a final critical cycle featuring either the expected or the unexpected test format. For final tests of both cued and free recall, participants who had expected that test format outperformed those who had not. This disordinal interaction, supported by recognition and self-report data, demonstrated not mere differences in effort based on anticipated test difficulty, but rather qualitative and appropriate differences in encoding strategies based on expected task demands. Participants also came to appropriately modulate metacognitive monitoring (Experiment 2) and study-time allocation (Experiment 3) across study–test cycles. Item and associative recognition performance, as well as self-report data, revealed shifts in encoding strategies across trials; these results were used to characterize and evaluate the different strategies that participants employed for cued versus free recall and to assess the optimality of participants’ metacognitive control of encoding strategies. Taken together, these data illustrate a sophisticated form of metacognitive control, in which learners qualitatively shift encoding strategies to match the demands of anticipated tests. PMID:22103783

  14. TestSTORM: Simulator for optimizing sample labeling and image acquisition in localization based super-resolution microscopy

    PubMed Central

    Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós

    2014-01-01

    Localization-based super-resolution microscopy image quality depends on several factors: dye choice and labeling strategy, microscope quality, user-defined parameters such as frame rate and frame number, and the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive, so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software development. PMID:24688813

  15. Capacity and optimal collusion attack channels for Gaussian fingerprinting games

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Moulin, Pierre

    2007-02-01

    In content fingerprinting, the same media covertext - image, video, audio, or text - is distributed to many users. A fingerprint, a mark unique to each user, is embedded into each copy of the distributed covertext. In a collusion attack, two or more users may combine their copies in an attempt to "remove" their fingerprints and forge a pirated copy. To trace the forgery back to members of the coalition, we need fingerprinting codes that can reliably identify the fingerprints of those members. Researchers have been focusing on designing or testing fingerprints for Gaussian host signals and the mean square error (MSE) distortion under some classes of collusion attacks, in terms of the detector's error probability in detecting collusion members. For example, under the assumptions of Gaussian fingerprints and Gaussian attacks (the fingerprinted signals are averaged and then the result is passed through a Gaussian test channel), Moulin and Briassouli [1] derived optimal strategies in a game-theoretic framework that uses the detector's error probability as the performance measure for a binary decision problem (whether a user participates in the collusion attack or not); Stone [2] and Zhao et al. [3] studied average and other non-linear collusion attacks for Gaussian-like fingerprints; Wang et al. [4] stated that the average collusion attack is the most efficient one for orthogonal fingerprints; Kiyavash and Moulin [5] derived a mathematical proof of the optimality of the average collusion attack under some assumptions. In this paper, we also consider Gaussian cover signals, the MSE distortion, and memoryless collusion attacks. We do not make any assumption about the fingerprinting codes used other than an embedding distortion constraint. Also, our only assumptions about the attack channel are an expected distortion constraint, a memoryless constraint, and a fairness constraint.
That is, the colluders are allowed to use any arbitrary nonlinear strategy subject to the above constraints. Under those constraints on the fingerprint embedder and the colluders, fingerprinting capacity is obtained as the solution of a mutual-information game involving probability density functions (pdf's) designed by the embedder and the colluders. We show that the optimal fingerprinting strategy is a Gaussian test channel where the fingerprinted signal is the sum of an attenuated version of the cover signal plus a Gaussian information-bearing noise, and the optimal collusion strategy is to average fingerprinted signals possessed by all the colluders and pass the averaged copy through a Gaussian test channel. The capacity result and the optimal strategies are the same for both the private and public games. In the former scenario, the original covertext is available to the decoder, while in the latter setup, the original covertext is available to the encoder but not to the decoder.
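The optimal collusion strategy stated above (average the colluders' copies, then pass the result through a Gaussian test channel) can be sketched directly. The signal values, channel gain, and noise level below are illustrative assumptions, not quantities from the paper.

```python
import random

def average_collusion(copies):
    """Component-wise average of the colluders' fingerprinted signals."""
    n = len(copies)
    return [sum(vals) / n for vals in zip(*copies)]

def gaussian_test_channel(signal, gain, noise_std, rng):
    """y = gain * x + z, with z ~ N(0, noise_std^2) i.i.d. per sample."""
    return [gain * x + rng.gauss(0.0, noise_std) for x in signal]

rng = random.Random(0)
# Two colluders' copies of a 3-sample signal (hypothetical values):
copies = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]]
avg = average_collusion(copies)
assert avg == [2.0, 2.0, 2.0]
forged = gaussian_test_channel(avg, 0.9, 0.1, rng)   # attenuate and add noise
assert len(forged) == 3
```

The optimal embedding described in the abstract has the same Gaussian-test-channel form on the encoder side: an attenuated cover signal plus Gaussian information-bearing noise.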

  16. A Bell-Curved Based Algorithm for Mixed Continuous and Discrete Structural Optimization

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.; Weber, Michael; Sobieszczanski-Sobieski, Jaroslaw

    2001-01-01

    An evolutionary based strategy utilizing two normal distributions to generate children is developed to solve mixed integer nonlinear programming problems. This Bell-Curve Based (BCB) evolutionary algorithm is similar in spirit to (mu + mu) evolutionary strategies and evolutionary programs but with fewer parameters to adjust and no mechanism for self adaptation. First, a new version of BCB to solve purely discrete optimization problems is described and its performance tested against a tabu search code for an actuator placement problem. Next, the performance of a combined version of discrete and continuous BCB is tested on 2-dimensional shape problems and on a minimum weight hub design problem. In the latter case the discrete portion is the choice of the underlying beam shape (I, triangular, circular, rectangular, or U).
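A hedged sketch of bell-curve-based child generation in the spirit described above: the child is drawn from a normal distribution centered between two parents along the line joining them, plus a small normal jitter per coordinate. The two-parent form and the spread parameters are assumptions for illustration, not the paper's formulation.

```python
import random

def bcb_child(p1, p2, along_std=0.5, perp_std=0.1, rng=random):
    """Sample a child near the midpoint of two parents using two normal
    distributions: one along the parent-to-parent line, one as per-coordinate
    jitter. Spread parameters are illustrative assumptions."""
    mid = [(a + b) / 2 for a, b in zip(p1, p2)]
    direction = [b - a for a, b in zip(p1, p2)]
    length = sum(d * d for d in direction) ** 0.5 or 1.0
    unit = [d / length for d in direction]
    t = rng.gauss(0.0, along_std) * length     # offset along the parent line
    child = [m + t * u for m, u in zip(mid, unit)]
    return [c + rng.gauss(0.0, perp_std) for c in child]

# With zero spread the child collapses to the midpoint of the parents:
assert bcb_child([0.0, 0.0], [2.0, 2.0], along_std=0.0, perp_std=0.0) == [1.0, 1.0]
```

For mixed problems, a discrete variable would be sampled separately (e.g. a categorical draw over beam shapes) while the continuous variables use the bell-curve step above.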

  17. Did recent world record marathon runners employ optimal pacing strategies?

    PubMed

    Angus, Simon D

    2014-01-01

    We apply statistical analysis to high-frequency (1 km) split data for the two most recent world-record marathon runs: Run 1 (2:03:59, 28 September 2008) and Run 2 (2:03:38, 25 September 2011). Based on studies in the endurance cycling literature, we develop two principles to approximate 'optimal' pacing in the field marathon. By utilising GPS and weather data, we test, and then de-trend, each athlete's field response to gradient and headwind on course, recovering standardised proxies for power-based pacing traces. The resultant traces were analysed to ascertain whether either runner followed optimal pacing principles, and to characterise any deviations from optimality. Whereas gradient was insignificant, headwind was a significant factor in running speed variability for both runners, with Runner 2 targeting the (optimal) parallel variation principle, whilst Runner 1 did not. After adjusting for these responses, neither runner followed the (optimal) 'even' power pacing principle, with Runner 2's macro-pacing strategy fitting a sinusoidal oscillator with exponentially expanding envelope whilst Runner 1 followed a U-shaped, quadratic form. The study suggests that: (a) better pacing strategy could provide elite marathon runners with an economical pathway to significant performance improvements at world-record level; and (b) the data and analysis herein are consistent with a complex-adaptive model of power regulation.

  18. Real time groove characterization combining partial least squares and SVR strategies: application to eddy current testing

    NASA Astrophysics Data System (ADS)

    Ahmed, S.; Salucci, M.; Miorelli, R.; Anselmi, N.; Oliveri, G.; Calmon, P.; Reboud, C.; Massa, A.

    2017-10-01

    A quasi real-time inversion strategy is presented for groove characterization of a conductive non-ferromagnetic tube structure by exploiting eddy current testing (ECT) signals. The inversion problem is formulated within a non-iterative Learning-by-Examples (LBE) strategy. Within the LBE framework, an efficient training strategy is adopted that combines feature extraction with a customized version of output space filling (OSF) adaptive sampling, in order to obtain an optimal training set during the offline phase. Partial Least Squares (PLS) and Support Vector Regression (SVR) are exploited for feature extraction and prediction, respectively, yielding robust and accurate real-time inversion during the online phase.

  19. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization.

    PubMed

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

    Surrogate-based simulation-optimization is an effective approach for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is the key issue in such research. However, previous studies are generally based on a stand-alone surrogate model and rarely attempt to improve the approximation accuracy of the surrogate model sufficiently by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals between the outputs of the best ES model and the simulation model were lower than 1.5% for 100 testing samples. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
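As a simplified stand-in for the set-pair-weighted ensemble described above, the sketch below blends individual surrogate predictions with weights inversely proportional to each model's validation error. The inverse-error weighting rule and all numbers are illustrative assumptions, not the paper's set pair analysis.

```python
# Illustrative ensemble-surrogate combination (inverse-error weighting as a
# stand-in for set pair weights).

def ensemble_predict(predictions, errors):
    """Weight each surrogate inversely to its validation error, normalize the
    weights, and return the weighted-average prediction."""
    inv = [1.0 / e for e in errors]
    total = sum(inv)
    weights = [w / total for w in inv]
    return sum(w * p for w, p in zip(weights, predictions))

# Three surrogates (e.g. RBFANN, SVR, Kriging) predicting one output, with
# hypothetical validation errors; the lowest-error model dominates the blend:
pred = ensemble_predict([10.0, 12.0, 11.0], [2.0, 4.0, 1.0])
assert abs(pred - 76.0 / 7.0) < 1e-9
```

The appeal of any such ensemble is the same as in the abstract: a cheap combined predictor stands in for the expensive simulation model inside the optimization loop.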

  20. Evaluation of the selection methods used in the exIWO algorithm based on the optimization of multidimensional functions

    NASA Astrophysics Data System (ADS)

    Kostrzewa, Daniel; Josiński, Henryk

    2016-06-01

    The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for the selection of individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for the optimization of multidimensional numerical functions. The optimized functions (Griewank, Rastrigin, and Rosenbrock) are frequently used as benchmarks because of their characteristics.
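The three benchmark functions named above have standard forms, sketched here for reference (global minimum 0 at the origin for Griewank and Rastrigin, and at (1, ..., 1) for Rosenbrock):

```python
import math

def griewank(x):
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0

def rastrigin(x):
    return 10.0 * len(x) + sum(xi * xi - 10.0 * math.cos(2 * math.pi * xi)
                               for xi in x)

def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

assert griewank([0.0] * 5) == 0.0
assert rastrigin([0.0] * 5) == 0.0
assert rosenbrock([1.0] * 5) == 0.0
```

They stress different properties of a metaheuristic: Griewank has many regularly spaced local minima, Rastrigin is highly multimodal, and Rosenbrock has a narrow curved valley that is hard to follow.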

  1. Cost-Effectiveness of Antibiotic Prophylaxis Strategies for Transrectal Prostate Biopsy in an Era of Increasing Antimicrobial Resistance.

    PubMed

    Lee, Kyueun; Drekonja, Dimitri M; Enns, Eva A

    2018-03-01

    To determine the optimal antibiotic prophylaxis strategy for transrectal prostate biopsy (TRPB) as a function of the local antibiotic resistance profile. We developed a decision-analytic model to assess the cost-effectiveness of four antibiotic prophylaxis strategies: ciprofloxacin alone, ceftriaxone alone, ciprofloxacin and ceftriaxone in combination, and directed prophylaxis selection based on susceptibility testing. We used a payer's perspective and estimated the health care costs and quality-adjusted life-years (QALYs) associated with each strategy for a cohort of 66-year-old men undergoing TRPB. Costs and benefits were discounted at 3% annually. Base-case resistance prevalence was 29% to ciprofloxacin and 7% to ceftriaxone, reflecting susceptibility patterns observed at the Minneapolis Veterans Affairs Health Care System. Resistance levels were varied in sensitivity analysis. In the base case, single-agent prophylaxis strategies were dominated. Directed prophylaxis strategy was the optimal strategy at a willingness-to-pay threshold of $50,000/QALY gained. Relative to the directed prophylaxis strategy, the incremental cost-effectiveness ratio of the combination strategy was $123,333/QALY gained over the lifetime time horizon. In sensitivity analysis, single-agent prophylaxis strategies were preferred only at extreme levels of resistance. Directed or combination prophylaxis strategies were optimal for a wide range of resistance levels. Facilities using single-agent antibiotic prophylaxis strategies before TRPB should re-evaluate their strategies unless extremely low levels of antimicrobial resistance are documented. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
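The incremental cost-effectiveness comparison described above reduces to a simple ratio between two non-dominated strategies. The sketch below reproduces the arithmetic with hypothetical cost/QALY pairs chosen only to mirror the reported $123,333/QALY figure; they are not the paper's model inputs.

```python
# Illustrative ICER calculation with hypothetical inputs.

def icer(strategy_a, strategy_b):
    """Incremental cost per QALY gained, moving from A to the costlier,
    more effective strategy B."""
    (cost_a, qaly_a), (cost_b, qaly_b) = strategy_a, strategy_b
    return (cost_b - cost_a) / (qaly_b - qaly_a)

directed = (1000.0, 10.000)      # (expected cost $, expected QALYs) - made up
combination = (1370.0, 10.003)   # costlier, marginally more effective
ratio = icer(directed, combination)
assert abs(ratio - 123333.33) < 1.0
```

A strategy is preferred at a willingness-to-pay threshold when the ICER against the next-cheapest non-dominated option falls below that threshold, which is why the combination strategy loses at $50,000/QALY here.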

  2. Health-related quality of life, optimism, and coping strategies in persons suffering from localized scleroderma.

    PubMed

    Szramka-Pawlak, B; Dańczak-Pazdrowska, A; Rzepa, T; Szewczyk, A; Sadowska-Przytocka, A; Żaba, R

    2013-01-01

    The clinical course of localized scleroderma may consist of bodily deformations, and bodily functions may also be affected. Additionally, the secondary lesions, such as discoloration, contractures, and atrophy, are unlikely to regress. The aforementioned symptoms and functional disturbances may decrease one's quality of life (QoL). Although much has been mentioned in the medical literature regarding QoL in persons suffering from dermatologic diseases, no data specifically describing patients with localized scleroderma exist. The aim of the study was to explore QoL in localized scleroderma patients and to examine their coping strategies in regard to optimism and QoL. The study included 41 patients with localized scleroderma. QoL was evaluated using the SKINDEX questionnaire, and levels of dispositional optimism were assessed using the Life Orientation Test-Revised. In addition, individual coping strategy was determined using the Mini-MAC scale and physical condition was assessed using the Localized Scleroderma Severity Index. The mean QoL score amounted to 51.10 points, with mean scores for individual components as follows: symptoms = 13.49 points, emotions = 21.29 points, and functioning = 16.32 points. A relationship was detected between QoL and the level of dispositional optimism as well as with coping strategies known as anxious preoccupation and helplessness-hopelessness. Higher levels of optimism predicted a higher general QoL. In turn, greater intensity of anxious preoccupied and helpless-hopeless behaviors predicted a lower QoL. Based on these results, it may be stated that localized scleroderma patients have a relatively high QoL, which is accompanied by optimism as well as a lower frequency of behaviors typical of emotion-focused coping strategies.

  3. Development of an Optimal Controller and Validation Test Stand for Fuel Efficient Engine Operation

    NASA Astrophysics Data System (ADS)

    Rehn, Jack G., III

    There are numerous motivations for improvements in automotive fuel efficiency. As concerns over the environment grow at a rate unmatched by hybrid and electric automotive technologies, the need for reductions in fuel consumed by current road vehicles has never been more present. Studies have shown that a major cause of poor fuel consumption in automobiles is improper driving behavior, which cannot be mitigated by purely technological means. The emergence of autonomous driving technologies has provided an opportunity to alleviate this inefficiency by removing the necessity of a driver. Before autonomous technology can be relied upon to reduce gasoline consumption on a large scale, robust programming strategies must be designed and tested. The goal of this thesis work was to design and deploy an autonomous control algorithm to navigate a four-cylinder gasoline combustion engine through a series of changing load profiles in a manner that prioritizes fuel efficiency. The experimental setup is analogous to a passenger vehicle driving over hilly terrain at highway speeds. The proposed approach accomplishes this using a model-predictive, real-time optimization algorithm that was calibrated to the engine. Performance of the optimal control algorithm was tested on the engine against contemporary cruise control. Results indicate that the "efficient" strategy achieved one to two percent reductions in total fuel consumed for all load profiles tested. The consumption data gathered also suggest that further improvements could be realized on a different subject engine using extended models and a slightly modified optimal control approach.

  4. Roadmap to control HBV and HDV epidemics in China

    DOE PAGES

    Goyal, Ashish; Murray, John M.

    2017-04-23

    Hepatitis B virus (HBV) is endemic in China. Almost 10% of HBV infected individuals are also infected with hepatitis D virus (HDV), which has a 5–10 times higher mortality rate than HBV mono-infection. The aim of this manuscript is to devise strategies that can control not only HBV infections but also HDV infections in China under the current health care budget in an optimal manner. Using a mathematical model, an annual budget of 10 billion dollars was optimally allocated among five interventions, namely testing and HBV adult vaccination, treatment for mono-infected and dually-infected individuals, second-line treatment for HBV mono-infections, and awareness programs. As a result, we determine that the optimal strategy is to test and treat both infections as early as possible while applying awareness programs at full intensity. Under this strategy, an additional 19.8 million HBV infections, 1.9 million HDV infections and 0.25 million lives will be saved over the next 10 years, at a cost saving of 79 billion dollars compared with performing no intervention. Introduction of second-line treatment does not add a significant economic burden yet prevents 1.4 million new HBV infections and 15,000 new HDV infections. In conclusion, test and treatment programs are highly efficient in reducing HBV and HDV prevalence in the population. Under the current health budget in China, not only test and treat programs but also awareness programs and second-line treatment can be implemented, minimizing prevalence and mortality and maximizing economic benefits.

  6. Seismic waveform inversion best practices: regional, global and exploration test cases

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan; Tromp, Jeroen

    2016-09-01

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Beyond the choice of strategy, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
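Since the abstract singles out line-search details as a strong driver of computational cost, here is a minimal Armijo backtracking line search of the kind such optimizers rely on. The parameter defaults (`c`, `rho`) are common textbook choices, not values from this study:

```python
# Armijo backtracking line search: shrink the step length alpha until
# f(x + alpha*p) <= f(x) + c * alpha * <grad f(x), p> (sufficient decrease).

def backtracking_line_search(f, grad, x, p, alpha=1.0, c=1e-4, rho=0.5):
    fx = f(x)
    slope = sum(g * d for g, d in zip(grad(x), p))  # directional derivative
    while f([xi + alpha * di for xi, di in zip(x, p)]) > fx + c * alpha * slope:
        alpha *= rho                                # backtrack
    return alpha

# Usage: one steepest-descent step on f(x, y) = x^2 + 10*y^2.
f = lambda v: v[0] ** 2 + 10 * v[1] ** 2
grad = lambda v: [2 * v[0], 20 * v[1]]
x = [1.0, 1.0]
p = [-g for g in grad(x)]            # descent direction
alpha = backtracking_line_search(f, grad, x, p)
x_new = [xi + alpha * di for xi, di in zip(x, p)]
```

For this ill-conditioned quadratic, the full step `alpha = 1` overshoots badly, and backtracking settles on a much smaller step that actually decreases the misfit.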

  7. Fireworks Algorithm with Enhanced Fireworks Interaction.

    PubMed

    Zhang, Bei; Zheng, Yu-Jun; Zhang, Min-Xia; Chen, Sheng-Yong

    2017-01-01

    As a relatively new metaheuristic in swarm intelligence, fireworks algorithm (FWA) has exhibited promising performance on a wide range of optimization problems. This paper aims to improve FWA by enhancing fireworks interaction in three aspects: 1) Developing a new Gaussian mutation operator to make sparks learn from more exemplars; 2) Integrating the regular explosion operator of FWA with the migration operator of biogeography-based optimization (BBO) to increase information sharing; 3) Adopting a new population selection strategy that enables high-quality solutions to have high probabilities of entering the next generation without incurring high computational cost. The combination of the three strategies can significantly enhance fireworks interaction and thus improve solution diversity and suppress premature convergence. Numerical experiments on the CEC 2015 single-objective optimization test problems show the effectiveness of the proposed algorithm. The application to a high-speed train scheduling problem also demonstrates its feasibility in real-world optimization problems.
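For orientation, a sketch of the standard FWA explosion operator that the paper builds on (not the enhanced interaction operators it proposes): fitter fireworks produce more sparks within a smaller explosion amplitude. All numeric settings are illustrative:

```python
import random

random.seed(0)

def explode(fireworks, fitness, max_sparks=10, max_amp=1.0, eps=1e-12):
    """Generate explosion sparks for a minimization problem."""
    fits = [fitness(fw) for fw in fireworks]
    best, worst = min(fits), max(fits)
    sparks = []
    for fw, fit in zip(fireworks, fits):
        # fitter fireworks (smaller fit) get more sparks...
        n = max(1, round(max_sparks * (worst - fit + eps)
                         / sum(worst - f + eps for f in fits)))
        # ...within a smaller explosion amplitude
        amp = max_amp * (fit - best + eps) / sum(f - best + eps for f in fits)
        for _ in range(n):
            sparks.append([x + random.uniform(-amp, amp) for x in fw])
    return sparks

sphere = lambda v: sum(x * x for x in v)
fireworks = [[0.1, 0.2], [2.0, -1.0], [-3.0, 3.0]]
sparks = explode(fireworks, sphere)
```

The best firework here contributes the most sparks, concentrated tightly around it, while the worst contributes a single wide-ranging spark.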

  8. RHODES-ITMS Tempe field test project : implementation and field testing of RHODES, a real-time traffic adaptive control system

    DOT National Transportation Integrated Search

    2001-09-01

    RHODES is a traffic-adaptive signal control system that optimally controls the traffic that is observed in real time. The RHODES-ITMS Program is the application of the RHODES strategy for the two intersections of a freeway-arterial diamond interchang...

  9. Optimal sampling strategies for detecting zoonotic disease epidemics.

    PubMed

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  10. Testing the limits of optimality: the effect of base rates in the Monty Hall dilemma.

    PubMed

    Herbranson, Walter T; Wang, Shanglun

    2014-03-01

    The Monty Hall dilemma is a probability puzzle in which a player tries to guess which of three doors conceals a desirable prize. After an initial selection, one of the nonchosen doors is opened, revealing that it is not a winner, and the player is given the choice of staying with the initial selection or switching to the other remaining door. Pigeons and humans were tested on two variants of the Monty Hall dilemma, in which one of the three doors had either a higher or a lower chance of containing the prize than did the other two options. The optimal strategy in both cases was to initially choose the lowest-probability door available and then switch away from it. Whereas pigeons learned to approximate the optimal strategy, humans failed to do so on both counts: they did not show a preference for low-probability options, and they did not consistently switch. An analysis of performance over the course of training indicated that pigeons learned to perform a sequence of responses on each trial, and that sequence was one that yielded the highest possible rate of reinforcement. Humans, in contrast, continued to vary their responses throughout the experiment, possibly in search of a more complex strategy that would exceed the maximum possible win rate.
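The advantage of switching is easy to verify by simulation. This sketch uses the classic equal-base-rate version of the game rather than the unequal-probability variants the study tested:

```python
import random

random.seed(42)

def play(switch, doors=3):
    """One round of the classic Monty Hall game; returns True on a win."""
    prize = random.randrange(doors)
    choice = random.randrange(doors)
    # host opens a door that is neither the player's pick nor the prize
    opened = next(d for d in range(doors) if d != choice and d != prize)
    if switch:
        choice = next(d for d in range(doors) if d != choice and d != opened)
    return choice == prize

trials = 20000
switch_rate = sum(play(True) for _ in range(trials)) / trials
stay_rate = sum(play(False) for _ in range(trials)) / trials
```

Switching wins whenever the initial pick was wrong, so `switch_rate` converges to 2/3 and `stay_rate` to 1/3.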

  11. How much detail and accuracy is required in plant growth sub-models to address questions about optimal management strategies in agricultural systems?

    PubMed Central

    Renton, Michael

    2011-01-01

    Background and aims: Simulations that integrate sub-models of important biological processes can be used to ask questions about optimal management strategies in agricultural and ecological systems. Building sub-models with more detail and aiming for greater accuracy and realism may seem attractive, but is likely to be more expensive and time-consuming and result in more complicated models that lack transparency. This paper illustrates a general integrated approach for constructing models of agricultural and ecological systems that is based on the principle of starting simple and then directly testing for the need to add additional detail and complexity. Methodology: The approach is demonstrated using LUSO (Land Use Sequence Optimizer), an agricultural system analysis framework based on simulation and optimization. A simple sensitivity analysis and functional perturbation analysis is used to test to what extent LUSO's crop–weed competition sub-model affects the answers to a number of questions at the scale of the whole farming system regarding optimal land-use sequencing strategies and resulting profitability. Principal results: The need for accuracy in the crop–weed competition sub-model within LUSO depended to a small extent on the parameter being varied, but more importantly and interestingly on the type of question being addressed with the model. Only a small part of the crop–weed competition model actually affects the answers to these questions. Conclusions: This study illustrates an example application of the proposed integrated approach for constructing models of agricultural and ecological systems based on testing whether complexity needs to be added to address particular questions of interest. We conclude that this example clearly demonstrates the potential value of the general approach. Advantages of this approach include minimizing costs and resources required for model construction, keeping models transparent and easy to analyse, and ensuring the model is well suited to address the question of interest. PMID:22476477

  12. A novel optimal coordinated control strategy for the updated robot system for single port surgery.

    PubMed

    Bai, Weibang; Cao, Qixin; Leng, Chuntao; Cao, Yang; Fujie, Masakatsu G; Pan, Tiewen

    2017-09-01

    Research into robotic systems for single port surgery (SPS) has become widespread around the world in recent years. A new robot arm system for SPS was developed, but its positioning platform and other hardware components were not efficient. Special features of the developed surgical robot system make safe and efficient teleoperation difficult. A robot arm is combined and used as a new positioning platform, and remote center motion is realized by a new method using active motion control. A new mapping strategy based on kinematics computation is developed, together with a novel optimal coordinated control strategy based on real-time approach to a defined anthropopathic criterion configuration, modeled on the customary relaxed state of human arms and especially the configuration of boxers' habitual preparation posture. The hardware components, control architecture, control system, and mapping strategy of the robotic system have been updated. A novel optimal coordinated control strategy is proposed and tested. The new robot system is more dexterous, intelligent, convenient and safer for preoperative positioning and intraoperative adjustment. The mapping strategy achieves good following and representation for the slave manipulator arms, and the proposed control strategy enables them to complete tasks with higher maneuverability, a lower possibility of self-interference, and freedom from singularities while teleoperating. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Optimization Strategies for Sensor and Actuator Placement

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Kincaid, Rex K.

    1999-01-01

    This paper provides a survey of actuator and sensor placement problems from a wide range of engineering disciplines and a variety of applications. Combinatorial optimization methods are recommended as a means for identifying sets of actuators and sensors that maximize performance. Several sample applications from NASA Langley Research Center, such as active structural acoustic control, are covered in detail. Laboratory and flight tests of these applications indicate that actuator and sensor placement methods are effective and important. Lessons learned in solving these optimization problems can guide future research.

  14. Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations.

    PubMed

    Ahrari, Ali; Deb, Kalyanmoy; Preuss, Mike

    2017-01-01

    During the recent decades, many niching methods have been proposed and empirically verified on some available test problems. They often rely on particular assumptions about the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, normalized Mahalanobis distance, and Ursem's hill-valley function, in order to develop a new tool for multimodal optimization which does not make any of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance to the center of fitter subpopulations and the previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resultant method, called covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared to several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed, which assumes a rough estimate of the desired/expected number of minima. Performance sensitivity to the accuracy of this estimation is also studied by introducing the concept of robust mean peak ratio.
Based on the numerical results using the available and the introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered at the same time.

  15. Influence of signal processing strategy in auditory abilities.

    PubMed

    Melo, Tatiana Mendes de; Bevilacqua, Maria Cecília; Costa, Orozimbo Alves; Moret, Adriane Lima Mortari

    2013-01-01

    The signal processing strategy is a parameter that may influence the auditory performance of cochlear implant users, and it is important to optimize this parameter to provide better speech perception, especially in difficult listening situations. To evaluate individuals' auditory performance using two different signal processing strategies. Prospective study with 11 prelingually deafened children with open-set speech recognition. A within-subjects design was used to compare performance with standard HiRes and HiRes 120 at three different time points. During test sessions, each subject's performance was evaluated by warble-tone sound-field thresholds and speech perception testing, in quiet and in noise. In quiet, children S1, S4, S5 and S7 showed better performance with the HiRes 120 strategy, and children S2, S9 and S11 showed better performance with the HiRes strategy. In noise, it was likewise observed that some children performed better using the HiRes 120 strategy and others with HiRes. Not all children presented the same pattern of response to the different strategies used in this study, which reinforces the need to optimize cochlear implant clinical programming individually.

  16. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2006-12-01

    In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.

  17. Placental alpha-microglobulin-1 and combined traditional diagnostic test: a cost-benefit analysis.

    PubMed

    Echebiri, Nelson C; McDoom, M Maya; Pullen, Jessica A; Aalto, Meaghan M; Patel, Natasha N; Doyle, Nora M

    2015-01-01

    We sought to evaluate if the placental alpha-microglobulin (PAMG)-1 test vs the combined traditional diagnostic test (CTDT) of pooling, nitrazine, and ferning would be a cost-beneficial screening strategy in the setting of potential preterm premature rupture of membranes. A decision analysis model was used to estimate the economic impact of PAMG-1 test vs the CTDT on preterm delivery costs from a societal perspective. Our primary outcome was the annual net cost-benefit per person tested. Baseline probabilities and costs assumptions were derived from published literature. We conducted sensitivity analyses using both deterministic and probabilistic models. Cost estimates reflect 2013 US dollars. Annual net benefit from PAMG-1 was $20,014 per person tested, while CTDT had a net benefit of $15,757 per person tested. If the probability of rupture is <38%, PAMG-1 will be cost-beneficial with an annual net benefit of $16,000-37,000 per person tested, while CTDT will have an annual net benefit of $16,000-19,500 per person tested. If the probability of rupture is >38%, CTDT is more cost-beneficial. Monte Carlo simulations of 1 million trials selected PAMG-1 as the optimal strategy with a frequency of 89%, while CTDT was only selected as the optimal strategy with a frequency of 11%. Sensitivity analyses were robust. Our cost-benefit analysis provides the economic evidence for the adoption of PAMG-1 in diagnosing preterm premature rupture of membranes in uncertain presentations and when CTDT is equivocal at 34 to <37 weeks' gestation. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Decision Modeling in Sleep Apnea: The Critical Roles of Pretest Probability, Cost of Untreated Obstructive Sleep Apnea, and Time Horizon.

    PubMed

    Moro, Marilyn; Westover, M Brandon; Kelly, Jessica; Bianchi, Matt T

    2016-03-01

    Obstructive sleep apnea (OSA) is associated with increased morbidity and mortality, and treatment with positive airway pressure (PAP) is cost-effective. However, the optimal diagnostic strategy remains a subject of debate. Prior modeling studies have not consistently supported the widely held assumption that home sleep testing (HST) is cost-effective. We modeled four strategies: (1) treat no one; (2) treat everyone empirically; (3) treat those testing positive during in-laboratory polysomnography (PSG) via in-laboratory titration; and (4) treat those testing positive during HST with auto-PAP. The population was assumed to lack independent reasons for in-laboratory PSG (such as insomnia, periodic limb movements in sleep, complex apnea). We considered the third-party payer perspective, via both standard (quality-adjusted) and pure cost methods. The preferred strategy depended on three key factors: pretest probability of OSA, cost of untreated OSA, and time horizon. At low prevalence and low cost of untreated OSA, the treat no one strategy was favored, whereas empiric treatment was favored for high prevalence and high cost of untreated OSA. In-laboratory backup for failures in the at-home strategy increased the preference for the at-home strategy. Without laboratory backup in the at-home arm, the in-laboratory strategy was increasingly preferred at longer time horizons. Using a model framework that captures a broad range of clinical possibilities, the optimal diagnostic approach to uncomplicated OSA depends on pretest probability, cost of untreated OSA, and time horizon. Estimating each of these critical factors remains a challenge warranting further investigation. © 2016 American Academy of Sleep Medicine.
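The four strategies compare naturally as expected per-patient costs that depend on pretest probability and the cost of untreated OSA. A toy single-period sketch of that comparison follows; all dollar figures and the shared test sensitivity are made-up illustrations (the paper's model also includes quality adjustment, time horizon, and in-laboratory backup, which are omitted here):

```python
# Toy decision model: expected per-patient cost of each diagnostic
# strategy. Both tests are given the same sensitivity for simplicity,
# so the cheaper home test dominates the lab test in this sketch.

def expected_costs(p_osa, cost_untreated, cost_treat=900,
                   cost_psg=600, cost_hst=250, sensitivity=0.9):
    treated = p_osa * sensitivity          # cases the test catches
    missed = p_osa * (1 - sensitivity)     # false negatives stay untreated
    follow_up = treated * cost_treat + missed * cost_untreated
    return {
        "treat_no_one":   p_osa * cost_untreated,
        "treat_everyone": cost_treat,
        "in_lab_psg":     cost_psg + follow_up,
        "home_hst":       cost_hst + follow_up,
    }

def best_strategy(p_osa, cost_untreated):
    costs = expected_costs(p_osa, cost_untreated)
    return min(costs, key=costs.get)
```

Even this crude model reproduces the abstract's qualitative finding: low prevalence and low untreated cost favor treating no one, high prevalence and high untreated cost favor empiric treatment, and testing wins in between.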

  19. Nice or effective? Social problem solving strategies in patients with major depressive disorder.

    PubMed

    Thoma, Patrizia; Schmidt, Tobias; Juckel, Georg; Norra, Christine; Suchan, Boris

    2015-08-30

    Our study addressed distinct aspects of social problem solving in 28 hospitalized patients with Major Depressive Disorder (MDD) and 28 matched healthy controls. Three scenario-based tests assessed the ability to infer the mental states of story characters in difficult interpersonal situations, the capacity to freely generate good strategies for dealing with such situations and the ability to identify the best solutions among less optimal alternatives. Also, standard tests assessing attention, memory, executive function and trait empathy were administered. Compared to controls, MDD patients showed impaired interpretation of other peoples' sarcastic remarks but not of the mental states underlying other peoples' actions. Furthermore, MDD patients generated fewer strategies that were socially sensitive and practically effective at the same time or at least only socially sensitive. Overall, while the free generation of adequate strategies for difficult social situations was impaired, recognition of optimal solutions among alternatives was spared in MDD patients. Higher generation scores were associated with higher trait empathy and cognitive flexibility scores. We suggest that this specific pattern of impairments ought to be considered in the development of therapies addressing impaired social skills in MDD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Computer-aided diagnostic strategy selection.

    PubMed

    Greenes, R A

    1986-03-01

    Determination of the optimal diagnostic work-up strategy for the patient is becoming a major concern for the practicing physician. Overlap of the indications for various diagnostic procedures, differences in their invasiveness or risk, and high costs have made physicians aware of the need to consider the choice of procedure carefully, as well as its relation to management actions available. In this article, the author discusses research approaches that aim toward development of formal decision analytic methods to allow the physician to determine optimal strategy; clinical algorithms or rules as guides to physician decisions; improved measures for characterizing the performance of diagnostic tests; educational tools for increasing the familiarity of physicians with the concepts underlying these measures and analytic procedures; and computer-based aids for facilitating the employment of these resources in actual clinical practice.

  1. Simple summation rule for optimal fixation selection in visual search.

    PubMed

    Najemnik, Jiri; Geisler, Wilson S

    2009-06-01

    When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W. S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3), 1-14]; however, it seems unlikely that a biological nervous system would implement the computations for Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with a human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher, and produces fixation statistics similar to those of humans.
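The ELM rule itself is a single convolve-and-argmax step. A sketch on a 1-D discretized "retina"; the posterior and detectability profiles below are made-up toy numbers, not human data:

```python
# ELM fixation rule, 1-D sketch: convolve the posterior over target
# location with the squared detectability map (a function of offset
# from fixation) and fixate the maximum of the filtered map.

def elm_fixation(posterior, detectability2):
    """detectability2: squared detectability at offsets -k..k,
    given as a list of length 2k+1 centered on fixation."""
    k = len(detectability2) // 2
    n = len(posterior)
    scores = []
    for fix in range(n):                  # candidate fixation locations
        s = 0.0
        for off in range(-k, k + 1):      # discrete convolution
            loc = fix + off
            if 0 <= loc < n:
                s += posterior[loc] * detectability2[off + k]
        scores.append(s)
    return max(range(n), key=scores.__getitem__)

posterior = [0.05, 0.05, 0.1, 0.4, 0.3, 0.05, 0.05]  # target likely near 3-4
detect2 = [0.2, 1.0, 0.2]                 # detectability falls with eccentricity
```

With this posterior, the filtered map peaks at location 3, so that is where the ELM searcher fixates next.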

  2. Multi-Objective Bidding Strategy for Genco Using Non-Dominated Sorting Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Saksinchai, Apinat; Boonchuay, Chanwit; Ongsakul, Weerakorn

    2010-06-01

    This paper proposes a multi-objective bidding strategy for a generation company (GenCo) in uniform price spot market using non-dominated sorting particle swarm optimization (NSPSO). Instead of using a tradeoff technique, NSPSO is introduced to solve the multi-objective strategic bidding problem considering expected profit maximization and risk (profit variation) minimization. Monte Carlo simulation is employed to simulate rivals' bidding behavior. Test results indicate that the proposed approach can provide the efficient non-dominated solution front effectively. In addition, it can be used as a decision making tool for a GenCo compromising between expected profit and price risk in spot market.
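At the heart of NSPSO is non-dominated sorting: keeping the bids for which no other bid is simultaneously better on both objectives. A minimal sketch of extracting that Pareto front, with illustrative (profit, risk) scores rather than market data:

```python
# Non-dominated (Pareto) front extraction for bids scored as
# (expected profit, risk): maximize profit, minimize risk.

def dominates(a, b):
    """a dominates b if no worse in both objectives and strictly
    better in at least one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

bids = [(100, 30), (120, 45), (90, 10), (110, 45), (80, 50)]
front = pareto_front(bids)
```

Here (110, 45) is dominated by (120, 45) and (80, 50) by (100, 30); the remaining three bids form the profit-risk trade-off curve a GenCo would choose from.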

  3. Optimal generator bidding strategies for power and ancillary services

    NASA Astrophysics Data System (ADS)

    Morinec, Allen G.

    As the electric power industry transitions to a deregulated market, power transactions are made upon price rather than cost. Generator companies are interested in maximizing their profits rather than overall system efficiency. A method to equitably compensate generation providers for real power, and ancillary services such as reactive power and spinning reserve, will ensure a competitive market with an adequate number of suppliers. Optimizing the generation product mix during bidding is necessary to maximize a generator company's profits. The objective of this research work is to determine and formulate appropriate optimal bidding strategies for a generation company in both the energy and ancillary services markets. These strategies should incorporate the capability curves of their generators as constraints to define the optimal product mix and price offered in the day-ahead and real-time spot markets. In order to achieve such a goal, a two-player model was composed to simulate market auctions for power generation. A dynamic game methodology was developed to identify Nash equilibria and mixed-strategy Nash equilibria as optimal generation bidding strategies for two-player non-cooperative variable-sum matrix games with incomplete information. These games integrated the generation product mix of real power, reactive power, and spinning reserve with the generators' capability curves as constraints. The research includes simulations of market auctions, where strategies were tested for generators with different unit constraints, costs, types of competitors, strategies, and demand levels. Studies on the capability of large hydrogen-cooled synchronous generators were utilized to derive useful equations that define the exact shape of the capability curve from the intersections of the arcs defined by the centers and radial vectors of the rotor, stator, and steady-state stability limits. 
The available reactive reserve and spinning reserve were calculated given a generator operating point in the P-Q plane. Four computer programs were developed to automatically perform the market auction simulations using the equal incremental cost rule. The software calculates the payoffs for the two competitors, dispatches six generators, and allocates ancillary services for 64 combinations of bidding strategies, three levels of system demand, and three different types of competitors. Matrix game theory was utilized to calculate Nash equilibrium solutions and mixed-strategy Nash solutions as the optimal generator bidding strategies. A method was devised to incorporate ancillary services into the generation bidding strategy, to assure an adequate supply of ancillary services, and to allocate these necessary resources to the on-line units. The optimal generator bid strategy in a power auction was shown to be the Nash equilibrium solution found in two-player variable-sum matrix games.
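The solution concept used above, a pure-strategy Nash equilibrium of a two-player matrix game, can be found by exhaustive search over strategy pairs. A sketch with a generic prisoner's-dilemma-style payoff pair, not the dissertation's auction payoffs:

```python
# Pure-strategy Nash equilibria of a two-player variable-sum game.
# A[i][j] is the row player's payoff, B[i][j] the column player's,
# when row plays i and column plays j.

def pure_nash_equilibria(A, B):
    """(i, j) is an equilibrium if neither player can gain by
    unilaterally deviating."""
    rows, cols = len(A), len(A[0])
    eqs = []
    for i in range(rows):
        for j in range(cols):
            best_row = all(A[i][j] >= A[k][j] for k in range(rows))
            best_col = all(B[i][j] >= B[i][l] for l in range(cols))
            if best_row and best_col:
                eqs.append((i, j))
    return eqs

A = [[3, 0], [5, 1]]   # row player: strategy 0 = cooperate, 1 = defect
B = [[3, 5], [0, 1]]   # column player
```

For this payoff pair, mutual defection (1, 1) is the unique pure equilibrium, even though mutual cooperation pays both players more, which is exactly the kind of tension equilibrium analysis exposes in bidding games.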

  4. Optimized breeding strategies for multiple trait integration: II. Process efficiency in event pyramiding and trait fixation.

    PubMed

    Peng, Ting; Sun, Xiaochun; Mumm, Rita H

    2014-01-01

    Multiple trait integration (MTI) is a multi-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) through backcross breeding. From a breeding standpoint, MTI involves four steps: single event introgression, event pyramiding, trait fixation, and version testing. This study explores the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events in the light of the overall goal of MTI of recovering equivalent performance in the finished hybrid conversion along with reliable expression of the value-added traits. Using the results to optimize single event introgression (Peng et al. Optimized breeding strategies for multiple trait integration: I. Minimizing linkage drag in single event introgression. Mol Breed, 2013) which produced single event conversions of recurrent parents (RPs) with ≤8 cM of residual non-recurrent parent (NRP) germplasm with ~1 cM of NRP germplasm in the 20 cM regions flanking the event, this study focused on optimizing process efficiency in the second and third steps in MTI: event pyramiding and trait fixation. Using computer simulation and probability theory, we aimed to (1) fit an optimal breeding strategy for pyramiding of eight events into the female RP and seven in the male RP, and (2) identify optimal breeding strategies for trait fixation to create a 'finished' conversion of each RP homozygous for all events. In addition, next-generation seed needs were taken into account for a practical approach to process efficiency. Building on work by Ishii and Yonezawa (Optimization of the marker-based procedures for pyramiding genes from multiple donor lines: I. Schedule of crossing between the donor lines. Crop Sci 47:537-546, 2007a), a symmetric crossing schedule for event pyramiding was devised for stacking eight (seven) events in a given RP. 
Options for trait fixation breeding strategies considered selfing and doubled haploid approaches to achieve homozygosity as well as seed chipping and tissue sampling approaches to facilitate genotyping. With selfing approaches, two generations of selfing rather than one for trait fixation (i.e. 'F2 enrichment' as per Bonnett et al. in Strategies for efficient implementation of molecular markers in wheat breeding. Mol Breed 15:75-85, 2005) were utilized to eliminate bottlenecking due to extremely low frequencies of desired genotypes in the population. The efficiency indicators such as total number of plants grown across generations, total number of marker data points, total number of generations, number of seeds sampled by seed chipping, number of plants requiring tissue sampling, and number of pollinations (i.e. selfing and crossing) were considered in comparisons of breeding strategies. A breeding strategy involving seed chipping and a two-generation selfing approach (SC + SELF) was determined to be the most efficient breeding strategy in terms of time to market and resource requirements. Doubled haploidy may have limited utility in trait fixation for MTI under the defined breeding scenario. This outcome paves the way for optimizing the last step in the MTI process, version testing, which involves hybridization of female and male RP conversions to create versions of the converted hybrid for performance evaluation and possible commercial release.

  5. Optimal Cut-Off Points of Fasting Plasma Glucose for Two-Step Strategy in Estimating Prevalence and Screening Undiagnosed Diabetes and Pre-Diabetes in Harbin, China

    PubMed Central

    Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for a two-step strategy in screening abnormal glucose metabolism and estimating prevalence in the general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2-hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by the two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Using 2003 ADA criteria or 1999 WHO criteria, 12.0% or 9.0% of participants, respectively, were diagnosed with pre-diabetes. The optimal FPG cut-off points for the two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence were reduced to nearly 38% for pre-diabetes and 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in a hyperglycemic condition. Using optimal FPG cut-off points for the two-step strategy in the Chinese population may be more effective and less costly for reducing missed diagnoses of hyperglycemic conditions. PMID:25785585
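    The economics of a two-step strategy have a simple structure: everyone receives the inexpensive FPG test, and only those at or above the cut-off receive the confirmatory OGTT. The sketch below uses hypothetical unit costs and rates (not the study's data) to show how cost per case identified is derived:

```python
def cost_per_case(n, fpg_cost, ogtt_cost, positive_rate, sensitivity, prevalence):
    """Cost per true case found by a two-step screen: everyone gets FPG;
    only FPG-positives get the confirmatory OGTT."""
    total_cost = n * fpg_cost + n * positive_rate * ogtt_cost
    cases_found = n * prevalence * sensitivity
    return total_cost / cases_found

# Hypothetical numbers for illustration only (not the study's figures).
c = cost_per_case(n=1000, fpg_cost=5, ogtt_cost=30,
                  positive_rate=0.2, sensitivity=0.82, prevalence=0.08)
print(round(c, 2))
```

Lowering the FPG cut-off raises both the sensitivity and the positive rate, so the optimal cut-off balances missed cases against confirmatory-test volume.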

  6. Comparison of optimization strategy and similarity metric in atlas-to-subject registration using statistical deformation model

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Murphy, R. J.; Grupp, R. B.; Sato, Y.; Taylor, R. H.; Armand, M.

    2015-03-01

    A robust atlas-to-subject registration using a statistical deformation model (SDM) is presented. The SDM uses statistics of voxel-wise displacement learned from pre-computed deformation vectors of a training dataset. This allows an atlas instance to be directly translated into an intensity volume and compared with a patient's intensity volume. Rigid and nonrigid transformation parameters were simultaneously optimized via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), with image similarity used as the objective function. The algorithm was tested on CT volumes of the pelvis from 55 female subjects. A performance comparison of the CMA-ES and Nelder-Mead downhill simplex optimization algorithms with the mutual information and normalized cross correlation similarity metrics was conducted. Simulation studies using synthetic subjects were performed, as well as leave-one-out cross validation studies. Both studies suggested that mutual information and CMA-ES achieved the best performance. The leave-one-out test demonstrated 4.13 mm error with respect to the true displacement field, and 26,102 function evaluations in 180 seconds, on average.
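    CMA-ES maintains a full covariance matrix over the search space; as a rough illustration of the derivative-free evolution-strategy family it belongs to, here is a minimal (1+1) evolution strategy with 1/5th-success-rule step-size adaptation on a toy objective. The sphere function is only a stand-in for the image-similarity objective; this is not the paper's registration code:

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=500, seed=0):
    """Minimal (1+1) evolution strategy with 1/5th-success-rule step-size
    adaptation; a toy stand-in for CMA-ES."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]  # mutate all coordinates
        fy = f(y)
        if fy < fx:            # offspring better: accept it, widen the search
            x, fx = y, fy
            sigma *= 1.22
        else:                  # offspring worse: shrink the step size
            sigma *= 0.95
    return x, fx

sphere = lambda v: sum(vi * vi for vi in v)   # toy objective to minimize
best, val = one_plus_one_es(sphere, [3.0, -2.0, 1.5])
print(val)
```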

  7. Malaria diagnosis and treatment under the strategy of the integrated management of childhood illness (IMCI): relevance of laboratory support from the rapid immunochromatographic tests of ICT Malaria P.f/P.v and OptiMal.

    PubMed

    Tarimo, D S; Minjas, J N; Bygbjerg, I C

    2001-07-01

    The algorithm developed for the integrated management of childhood illness (IMCI) provides guidelines for the treatment of paediatric malaria. In areas where malaria is endemic, for example, the IMCI strategy may indicate that children who present with fever, a recent history of fever and/or pallor should receive antimalarial chemotherapy. In many holo-endemic areas, it is unclear whether laboratory tests to confirm that such signs are the result of malaria would be very relevant or useful. Children from a holo-endemic region of Tanzania were therefore checked for malarial parasites by microscopy and by using two rapid immunochromatographic tests (RIT) for the diagnosis of malaria (ICT Malaria P.f/P.v and OptiMal). At the time they were tested, each of these children had been targeted for antimalarial treatment (following the IMCI strategy) because of fever and/or pallor. Only 70% of the 395 children classified to receive antimalarial drugs by the IMCI algorithm had malarial parasitaemias (68.4% had Plasmodium falciparum trophozoites, 1.3% only P. falciparum gametocytes, 0.3% P. ovale and 0.3% P. malariae). As indicators of P. falciparum trophozoites in the peripheral blood, fever had a sensitivity of 93.0% and a specificity of 15.5%, whereas pallor had a sensitivity of 72.2% and a specificity of 50.8%. Both RIT had very high sensitivities (100.0% for the ICT and 94.0% for OptiMal), but the specificity of the ICT (74.0%) was significantly lower than that of OptiMal (100.0%). Fever and pallor were significantly associated with P. falciparum asexual parasitaemias that equalled or exceeded the threshold intensity (2000/microl) that has the optimum sensitivity and specificity for the definition of a malarial episode. Diagnostic likelihood ratios (DLR) showed that a positive result in the OptiMal test (DLR = infinity) was a better indication of malaria than a positive result in the ICT (DLR = 3.85).
In fact, OptiMal had a diagnostic reliability (0.93) which approached that of an ideal test and, since it only detects live parasites, OptiMal is superior to the ICT for monitoring therapeutic responses. Although the RIT may seem attractive for use in primary health facilities because relatively inexperienced staff can perform them, the high cost of these tests is prohibitive. In holo-endemic areas, use of RIT or microscopical examination of blood smears may only be relevant when malaria needs to be excluded as a cause of illness (e.g. prior to treatment with toxic or expensive drugs, or during malaria epidemics). Wherever the effective drugs for the first-line treatment of malaria are cheap (e.g. chloroquine and Fansidar), treatment based on clinical diagnosis alone should prove cost-saving in health facilities without microscopy.
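The diagnostic likelihood ratios quoted above follow directly from sensitivity and specificity: DLR+ = sensitivity / (1 − specificity). Plugging in the reported values reproduces the ICT's 3.85, and OptiMal's perfect specificity drives its DLR+ to infinity:

```python
def positive_dlr(sensitivity, specificity):
    """Positive diagnostic likelihood ratio: sens / (1 - spec)."""
    denom = 1.0 - specificity
    return float("inf") if denom == 0 else sensitivity / denom

print(round(positive_dlr(1.00, 0.74), 2))   # ICT      → 3.85
print(positive_dlr(0.94, 1.00))             # OptiMal  → inf
```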

  8. Treatment options for patients with acute myeloid leukemia with a matched sibling donor: a decision analysis.

    PubMed

    Sung, Lillian; Buckstein, Rena; Doyle, John J; Crump, Michael; Detsky, Allan S

    2003-02-01

    The role of allogeneic bone marrow transplantation (BMT) in the consolidation of young adults with acute myeloid leukemia (AML) with matched sibling donors (MSD) is controversial. Although BMT is associated with increased event free survival compared with intensive chemotherapy (CT) consolidation, BMT also is associated with increased treatment-related mortality and likely decreased quality of life and life expectancy in patients who do not develop recurrent disease. The authors used decision analysis to compare three strategies for maximizing quality-adjusted life years (QALYs) in patients with AML in first remission with an MSD: BMT All, BMT None (consolidation CT only), or BMT in high-risk patients, as defined by baseline cytogenetic testing (Test strategy). A second decision-analysis tree was then constructed that compared BMT with CT specifically for patients with intermediate cytogenetics. Using expected QALYs as the outcome measure, the Test, BMT All, and BMT None strategies were associated with 20.10 QALYs, 19.63 QALYs, and 18.38 QALYs, respectively. Thus, the Test strategy, with CT for low-risk patients and BMT for intermediate risk and high-risk patients, was expected to be the optimal strategy. In the intermediate cytogenetic decision analysis, although the expected QALY for BMT recipients was higher compared with CT recipients (19.78 QALYs vs. 18.75 QALYs), because of uncertainty in variable estimates, the optimal choice was less clear. CT consolidation is a reasonable option for patients with AML who have favorable cytogenetics, even if an MSD is available. This model provides a framework from which patients with AML and their physicians can make decisions about consolidation therapy. Copyright 2003 American Cancer Society. DOI: 10.1002/cncr.11098
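    Decision analyses of this kind reduce each strategy to a probability-weighted sum of payoffs at its chance nodes. A minimal sketch with hypothetical branch probabilities and QALY payoffs (illustration only, not the study's model inputs):

```python
def expected_value(branches):
    """Expected payoff of a chance node: sum of prob * payoff.
    Branch probabilities must sum to 1."""
    total_p = sum(p for p, _ in branches)
    assert abs(total_p - 1.0) < 1e-9, "branch probabilities must sum to 1"
    return sum(p * v for p, v in branches)

# Hypothetical branch probabilities and QALY payoffs (illustration only).
bmt = expected_value([(0.15, 2.0),    # treatment-related mortality
                      (0.25, 10.0),   # relapse after BMT
                      (0.60, 28.0)])  # long-term remission
chemo = expected_value([(0.50, 8.0),  # relapse after CT
                        (0.50, 30.0)])
print(bmt, chemo)
```

In a full model each leaf payoff is itself quality-adjusted, and sensitivity analyses vary the probabilities to see whether the preferred strategy flips.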

  9. Simulation based optimized beam velocity in additive manufacturing

    NASA Astrophysics Data System (ADS)

    Vignat, Frédéric; Béraud, Nicolas; Villeneuve, François

    2017-08-01

    Manufacturing good parts with additive technologies relies on melt pool dimensions and temperature, which are controlled by manufacturing strategies often decided on the machine side. Strategies are built on a beam path and a variable energy input. Beam paths are often a mix of contour and hatching strategies filling the contours at each slice. Energy input depends on beam intensity and speed and is determined from simple thermal models to control melt pool dimensions and temperature and ensure porosity-free material. These models take into account variations in the thermal environment such as overhanging surfaces or back-and-forth hatching paths. However, not all situations are correctly handled, and precision is limited. This paper proposes a new method to determine energy input from a full built-chamber 3D thermal simulation. Using the results of the simulation, the energy is modified to keep the melt pool temperature in a predetermined range. The paper first presents an experimental method to determine the optimal temperature range. In a second part, the method to optimize the beam speed from the simulation results is presented. Finally, the optimized beam path is tested in the EBM machine and the built parts are compared with parts built with an ordinary beam path.
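    The core idea, modulating energy input so the simulated melt pool stays inside a temperature window, can be caricatured as a feedback rule on beam speed. The gains, units, and temperatures below are hypothetical placeholders, not the paper's thermal model:

```python
def adjust_speed(speed, temperature, t_low, t_high, step=0.05):
    """Keep the melt-pool temperature in [t_low, t_high]: speeding up lowers
    the energy input per unit length (cooler pool); slowing down raises it.
    Toy rule with a hypothetical proportional step, not the paper's method."""
    if temperature > t_high:
        return speed * (1 + step)   # too hot: move faster, inject less energy
    if temperature < t_low:
        return speed * (1 - step)   # too cold: move slower, inject more energy
    return speed                    # in range: leave the speed unchanged

v = 1000.0  # mm/s, hypothetical starting beam speed
for temp in (2100, 2100, 1850, 1950):   # simulated pool temperatures, K
    v = adjust_speed(v, temp, t_low=1900, t_high=2000)
print(round(v, 2))
```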

  10. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.

    PubMed

    Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform Lack of Fit tests. In addition, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. Here, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations.
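    D-optimality maximizes the determinant of the information matrix X'X. For Scheffé's linear model in three components, the vertex design is minimally supported, and adding the overall centroid as an interior point increases the determinant while also providing a degree of freedom for Lack of Fit. A small pure-Python sketch (illustrative, not the paper's construction):

```python
def det3(m):
    """Determinant of a 3x3 matrix via cofactor expansion."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def info_matrix(design):
    """X'X for Scheffe's linear mixture model: rows of X are the blends."""
    p = len(design[0])
    return [[sum(row[r] * row[s] for row in design) for s in range(p)]
            for r in range(p)]

vertices = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # pure-component blends
centroid = [(1/3, 1/3, 1/3)]                   # one interior point
print(det3(info_matrix(vertices)))             # → 1
print(det3(info_matrix(vertices + centroid)))  # larger: 4/3
```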

  11. Optimal CINAHL search strategies for identifying therapy studies and review articles.

    PubMed

    Wong, Sharon S L; Wilczynski, Nancy L; Haynes, R Brian

    2006-01-01

    To design optimal search strategies for locating sound therapy studies and review articles in CINAHL in the year 2000. An analytic survey was conducted, comparing hand searches of 75 journals with retrievals from CINAHL for 5,020 candidate search terms and 17,900 combinations for therapy and 5,977 combinations for review articles. All articles were rated with purpose and quality indicators. Candidate search strategies were used in CINAHL, and the retrievals were compared with results of the hand searches. The proposed search strategies were treated as "diagnostic tests" for sound studies and the manual review of the literature was treated as the "gold standard." Operating characteristics of the search strategies were calculated. Of the 1,383 articles about treatment, 506 (36.6%) met basic criteria for scientific merit and 127 (17.9%) of the 711 articles classified as a review met the criteria for systematic reviews. For locating sound treatment studies, a three-term strategy maximized sensitivity at 99.4% but with compromised specificity at 58.3%, and a two-term strategy maximized specificity at 98.5% but with compromised sensitivity at 52.0%. For detecting systematic reviews, a three-term strategy maximized sensitivity at 91.3% while keeping specificity high at 95.4%, and a single-term strategy maximized specificity at 99.6% but with compromised sensitivity at 42.5%. Three-term search strategies optimizing sensitivity and specificity achieved these values over 91% for detecting sound treatment studies and over 76% for detecting systematic reviews. Search strategies combining indexing terms and text words can achieve high sensitivity and specificity for retrieving sound treatment studies and review articles in CINAHL.
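    Treating a search strategy as a diagnostic test means computing its operating characteristics from the 2x2 table against the hand-search gold standard. The retrieval counts below are hypothetical, chosen only to illustrate the calculation (they are not the study's data):

```python
def operating_characteristics(tp, fp, fn, tn):
    """Sensitivity, specificity, and precision of a search filter judged
    against a hand-search gold standard (tp/fp/fn/tn = the 2x2 counts)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
    }

# Hypothetical retrieval counts for illustration (not the study's data).
oc = operating_characteristics(tp=503, fp=4200, fn=3, tn=5874)
print({k: round(v, 3) for k, v in oc.items()})
```

Note how a highly sensitive filter can still have low precision when sound studies are rare in the database, which is why the study reports both values for each strategy.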

  12. Scanning laser ophthalmoscopy: optimized testing strategies for psychophysics

    NASA Astrophysics Data System (ADS)

    Van de Velde, Frans J.

    1996-12-01

    Retinal function can be evaluated with the scanning laser ophthalmoscope (SLO). The main advantage is precise localization of the psychophysical stimulus on the retina. Four-alternative forced choice (4AFC) and parameter estimation by sequential testing (PEST) are classic adaptive algorithms that have been optimized for use with the SLO and combined with strategies to correct for small eye movements. Efficient calibration procedures are essential for quantitative microperimetry. These techniques precisely measure visual acuity and retinal sensitivity at distinct locations on the retina. A combined 632 nm and IR Maxwellian view illumination provides maximal transmittance through the ocular media and minimal interference with xanthophyll or hemoglobin. Future modifications of the instrument include the possibility of binocular evaluation, Maxwellian view control, fundus tracking using normalized gray-scale correlation, and microphotocoagulation. The techniques are useful in low vision rehabilitation and the application of lasers to the retina.
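    PEST belongs to the family of adaptive staircase procedures; its full rule set is more elaborate, but a simple up-down staircase that halves its step at each reversal conveys the idea. The starting level, step size, and response sequence below are all hypothetical:

```python
def staircase(responses, start=10.0, step=2.0):
    """Simple up-down staircase: lower the stimulus level after a correct
    response, raise it after an incorrect one, and halve the step at each
    reversal. A simplified flavor of PEST, not the full rule set."""
    level, last = start, None
    for correct in responses:
        direction = -1 if correct else +1
        if last is not None and direction != last:
            step /= 2.0                 # reversal: refine the step size
        level += direction * step
        last = direction
    return level

# True/False = correct/incorrect on successive trials (hypothetical run).
print(staircase([True, True, False, True, False, False, True]))   # → 6.875
```

The final level approximates the threshold; in SLO microperimetry the same update would be applied per retinal test location.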

  13. Quantum chi-squared and goodness of fit testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temme, Kristan; Verstraete, Frank

    2015-01-15

    A quantum mechanical hypothesis test is presented for the hypothesis that a certain setup produces a given quantum state. Although the classical and the quantum problems are very much related to each other, the quantum problem is much richer due to the additional optimization over the measurement basis. A goodness of fit test for i.i.d. quantum states is developed and a max-min characterization for the optimal measurement is introduced. We find the quantum measurement which leads both to the maximal Pitman and Bahadur efficiencies, and determine the associated divergence rates. We discuss the relationship of the quantum goodness of fit test to the problem of estimating multiple parameters from a density matrix. These problems are found to be closely related and we show that the largest error of an optimal strategy, determined by the smallest eigenvalue of the Fisher information matrix, is given by the divergence rate of the goodness of fit test.
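    For orientation, the classical statistic that the quantum test generalizes is Pearson's chi-squared; only this classical baseline is sketched below (the quantum version additionally optimizes over measurement bases):

```python
def chi_squared(observed, expected):
    """Pearson chi-squared goodness-of-fit statistic: sum of (O - E)^2 / E.
    This is only the classical baseline that the quantum test generalizes."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Fair-die example: 60 rolls, 10 expected per face.
obs = [8, 12, 9, 11, 6, 14]
exp = [10.0] * 6
print(chi_squared(obs, exp))
```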

  14. On the design of innovative heterogeneous tests using a shape optimization approach

    NASA Astrophysics Data System (ADS)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    The development of full-field measurement methods has enabled a new trend of mechanical tests. By providing the inhomogeneous strain field from the tests, these techniques are being widely used in sheet metal identification strategies through heterogeneous mechanical tests. This work aims at a heterogeneous mechanical test with an innovative tool/specimen shape, capable of producing rich heterogeneous strain paths that provide extensive information on material behavior. The specimen is found using a shape optimization process in which a dedicated indicator that evaluates the richness of strain information is used. The methodology and results presented here are extended to remove the dependence on specimen geometry and on the geometry parametrization through the use of the Ritz method for boundary value problems. Different curve models, such as Splines, B-Splines and NURBS, are used, and C1 continuity throughout the specimen is guaranteed. Moreover, various optimization methods, deterministic and stochastic, are used in order to find the method, or a combination of methods, able to effectively minimize the cost function.

  15. Methodological aspects of an adaptive multidirectional pattern search to optimize speech perception using three hearing-aid algorithms

    NASA Astrophysics Data System (ADS)

    Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes

    2004-12-01

    In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure were studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal-hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.
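    An adaptive multidirectional pattern search can be sketched as a compass search that expands the step after a successful poll and shrinks it after a failed one, mirroring the step-size trade-off discussed above. The objective and parameters here are toy stand-ins, not the paired-comparison procedure itself:

```python
def compass_search(f, x0, step=1.0, expand=2.0, shrink=0.5,
                   min_step=1e-6, max_iter=1000):
    """Multidirectional pattern search with adaptive step size: poll along
    each coordinate direction, expand the step after a successful sweep,
    shrink it after a failed one, and stop when the step is tiny."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step < min_step:
            break
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:                  # accept any improving poll point
                    x, fx, improved = y, fy, True
        step = step * expand if improved else step * shrink
    return x, fx

quadratic = lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2   # toy objective
x, fx = compass_search(quadratic, [0.0, 0.0])
print([round(xi, 4) for xi in x], round(fx, 8))
```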

  16. Optimizing DER Participation in Inertial and Primary-Frequency Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Zhao, Changhong; Guggilam, Swaroop

    This paper develops an approach to enable the optimal participation of distributed energy resources (DERs) in inertial and primary-frequency response alongside conventional synchronous generators. Leveraging a reduced-order model description of frequency dynamics, DERs' synthetic inertias and droop coefficients are designed to meet time-domain performance objectives of frequency overshoot and steady-state regulation. Furthermore, an optimization-based method centered around classical economic dispatch is developed to ensure that DERs share the power injections for inertial- and primary-frequency response in proportion to their power ratings. Simulations for a modified New England test-case system composed of ten synchronous generators and six instances of the IEEE 37-node test feeder with frequency-responsive DERs validate the design strategy.
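    The proportional-sharing rule described above reduces, in its simplest form, to splitting a required response among DERs by their power ratings. The ratings and response target below are hypothetical, and this sketch ignores the dynamics and dispatch layers of the actual method:

```python
def proportional_shares(ratings, total_response):
    """Split a required frequency-response contribution among DERs in
    proportion to their power ratings (the sharing rule described above)."""
    total = sum(ratings)
    return [total_response * r / total for r in ratings]

# Hypothetical DER ratings (kW) and a 50 kW total primary-response target.
ratings = [10.0, 20.0, 20.0, 50.0]
shares = proportional_shares(ratings, total_response=50.0)
print(shares)   # each DER's share, proportional to its rating
```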

  17. Control of African swine fever epidemics in industrialized swine populations.

    PubMed

    Halasa, Tariq; Bøtner, Anette; Mortensen, Sten; Christensen, Hanne; Toft, Nils; Boklund, Anette

    2016-12-25

    African swine fever (ASF) is a notifiable infectious disease with a high impact on swine health. The disease is endemic in certain regions in the Baltic countries and has spread to Poland, constituting a risk of ASF spread toward Western Europe. Therefore, as part of contingency planning, it is important to explore strategies that can effectively control an epidemic of ASF. In this study, the epidemiological and economic effects of strategies to control the spread of ASF between domestic swine herds were examined using a published model (DTU-DADS-ASF). The control strategies were the basic EU and national strategy (Basic), the basic strategy plus pre-emptive depopulation of neighboring swine herds, and intensive surveillance of herds in the control zones, including testing live or dead animals. Virus spread via wild boar was not modelled. Under the basic control strategy, the median epidemic duration was predicted to be 21 days (5th and 95th percentiles: 1-55 days), the median number of infected herds was predicted to be 3 herds (1-8), and the total costs were predicted to be €326 million (€256-€442 million). Adding pre-emptive depopulation or intensive surveillance by testing live animals resulted in marginal improvements to the control of the epidemics. However, adding testing of dead animals in the protection and surveillance zones was predicted to be the optimal control scenario for an ASF epidemic in industrialized swine populations without contact with wild boar. This optimal scenario reduced the epidemic duration to 9 days (1-38) and the total costs to €294 million (€257-€392 million). Export losses were the driving force of the total costs of the epidemics. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization.

    PubMed

    Jung, Sang-Kyu; McDonald, Karen

    2011-08-16

    Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net.
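    As one concrete example of a predefined optimization algorithm such a tool might offer, the common "one amino acid, one codon" strategy replaces each codon with the host's most frequent synonymous codon. The usage table below is tiny and hypothetical, and this is not necessarily Visual Gene Developer's own implementation:

```python
# Tiny, hypothetical codon-usage table (fraction per amino acid); a real
# table would cover all 61 sense codons for the target expression host.
USAGE = {
    "M": {"ATG": 1.00},
    "K": {"AAA": 0.74, "AAG": 0.26},
    "F": {"TTT": 0.45, "TTC": 0.55},
}

def optimize(protein):
    """'One amino acid, one codon': pick the most frequent synonymous codon."""
    return "".join(max(USAGE[aa], key=USAGE[aa].get) for aa in protein)

print(optimize("MKF"))   # → ATGAAATTC
```

Real optimizers weigh further objectives (mRNA secondary structure, restriction sites, GC content), which is why configurable multi-criteria strategies matter.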

  19. Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization

    PubMed Central

    2011-01-01

    Background Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software called Visual Gene Developer. Results The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Conclusion Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net. PMID:21846353

  20. Simulator for multilevel optimization research

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Young, K. C.

    1986-01-01

    A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.

  1. Economic Analysis of Screening Strategies for Rupture of Silicone Gel Breast Implants

    PubMed Central

    Chung, Kevin C.; Malay, Sunitha; Shauver, Melissa J.; Kim, H. Myra

    2012-01-01

    Background In 2006, the U.S. Food and Drug Administration (FDA) recommended screening of all women with silicone gel breast implants with magnetic resonance imaging (MRI) three years after implantation and every two years thereafter to assess their integrity. The cost of these serial examinations over the lifetime of the breast implants is an added burden to insurance payers and to women. We performed an economic analysis to determine the optimal screening strategies, considering the diagnostic accuracy of the screening tests, the costs of the tests, and the costs of subsequent implant removal. Methods We determined aggregate/pooled values for sensitivity and specificity of the screening tests ultrasound (US) and MRI in detecting silicone breast implant ruptures from data obtained from the published literature. We compiled costs, based on Medicare reimbursements for 2011, for the following elements: imaging modalities, anesthesia, and 3 surgical treatment options for detected ruptures. We used a decision tree to compare three alternative screening strategies, US only, MRI only, and US followed by MRI, in asymptomatic and symptomatic women. Results The cost per rupture of screening and management of rupture with US in asymptomatic women was $1,090, whereas in symptomatic women it was $1,622. The corresponding cost for MRI in asymptomatic women was $2,067, whereas in symptomatic women it was $2,143. The corresponding cost for US followed by MRI in asymptomatic women was $637, whereas in symptomatic women it was $2,908. Conclusion Screening with US followed by MRI was optimal for asymptomatic women, and screening with US alone was optimal for symptomatic women. PMID:22743887
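    The comparison rests on a cost-per-rupture-detected calculation: total screening cost divided by ruptures found. The sketch below uses hypothetical prices, rupture rates, and sensitivities, not the study's Medicare-based figures, to show the arithmetic for a single-test strategy:

```python
def cost_per_rupture_detected(n, rupture_rate, test_cost, sensitivity):
    """Total screening cost divided by ruptures detected, for a strategy
    that applies one test to all n women. Inputs are hypothetical."""
    detected = n * rupture_rate * sensitivity
    return n * test_cost / detected

us = cost_per_rupture_detected(n=1000, rupture_rate=0.08,
                               test_cost=70, sensitivity=0.70)
mri = cost_per_rupture_detected(n=1000, rupture_rate=0.08,
                                test_cost=450, sensitivity=0.90)
print(round(us), round(mri))
```

A sequential US-then-MRI strategy changes the numerator: the expensive test is charged only for the subset flagged by the cheap one, which is how the combined strategy can come out cheapest per rupture.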

  2. Empowering leadership and job crafting: The role of employee optimism.

    PubMed

    Thun, Sylvi; Bakker, Arnold B

    2018-06-08

    The objective of this study was to test the relationship between empowering leadership and job crafting and to examine the moderating role of optimism as a personal resource. We hypothesized that the association between empowering leadership and job crafting would be stronger for employees with high (vs. low) levels of optimism. A total of 331 Norwegian workers from a variety of occupations participated in our study. Results of structural equation modelling analysis generally supported our hypotheses. Empowering leadership was positively related to 3 of the 4 job crafting strategies investigated (increasing structural job resources, increasing social job resources, and increasing challenging job demands; but not reducing hindrance job demands). Moreover, as hypothesized, optimism strengthened the empowering leadership-job crafting relationship for increasing structural resources and increasing challenging demands. The results suggest that empowering leadership is an important antecedent of job crafting strategies, except for reducing hindrance demands. The implications of these findings are discussed. Copyright © 2018 John Wiley & Sons, Ltd.

  3. Test and treat DC: forecasting the impact of a comprehensive HIV strategy in Washington DC.

    PubMed

    Walensky, Rochelle P; Paltiel, A David; Losina, Elena; Morris, Bethany L; Scott, Callie A; Rhode, Erin R; Seage, George R; Freedberg, Kenneth A

    2010-08-15

    The United States and international agencies have signaled their commitment to containing the human immunodeficiency virus (HIV) epidemic via early case identification and linkage to antiretroviral therapy (ART) immediately at diagnosis. We forecast outcomes of this approach if implemented in Washington DC. Using a mathematical model of HIV case detection and treatment, we evaluated combinations of HIV screening and ART initiation strategies. We define current practice as no regular screening program and ART at CD4 counts ≤350 cells/microL, and we define test and treat as annual screening and administration of ART at diagnosis. Outcomes include life expectancy of HIV-infected persons and changes in the population time with transmissible HIV RNA levels. Data, largely from Washington DC, include undiagnosed HIV prevalence of 0.6%, annual incidence of 0.13%, 31% rate of test offer, 60% rate of acceptance, and 50% linkage to care. Input parameters, including optimized ART efficacy, are varied in sensitivity analyses. Projected life expectancies, from an initial mean age of 41 years, are 23.9, 25.0, and 25.6 years for current practice, test and treat, and test and treat with optimized ART, respectively. Compared with current practice, test and treat leads to a 14.7% reduction in time spent with transmissible HIV RNA level in the next 5 years; test and treat with optimized ART results in a 27.3% reduction. An expanded HIV test and treat program in Washington DC will increase life expectancy of HIV-infected patients but will have a modest impact on HIV transmission over the next 5 years and is unlikely to halt the HIV epidemic.

  4. Costs, equity, efficiency and feasibility of identifying the poor in Ghana's National Health Insurance Scheme: empirical analysis of various strategies.

    PubMed

    Aryeetey, Genevieve Cecilia; Jehu-Appiah, Caroline; Spaan, Ernst; Agyepong, Irene; Baltussen, Rob

    2012-01-01

    To analyse the costs and evaluate the equity, efficiency and feasibility of four strategies to identify poor households for premium exemptions in Ghana's National Health Insurance Scheme (NHIS): means testing (MT), proxy means testing (PMT), participatory wealth ranking (PWR) and geographic targeting (GT) in urban, rural and semi-urban settings in Ghana. We conducted the study in 145-147 households per setting with MT as our gold standard strategy. We estimated total costs that included costs of household surveys and cost of premiums paid to the poor, efficiency (cost per poor person identified), equity (number of true poor excluded) and the administrative feasibility of implementation. The cost of exempting one poor individual ranged from US$15.87 to US$95.44; exclusion of the poor ranged between 0% and 73%. MT was most efficient and equitable in rural and urban settings with low-poverty incidence; GT was efficient and equitable in the semi-urban setting with high-poverty incidence. PMT and PWR were less equitable and inefficient although feasible in some settings. We recommend MT as optimal strategy in low-poverty urban and rural settings and GT as optimal strategy in high-poverty semi-urban setting. The study is relevant to other social and developmental programmes that require identification and exemptions of the poor in low-income countries. © 2011 Blackwell Publishing Ltd.
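    The efficiency and equity measures used above reduce to simple ratios once the household counts are in hand. The counts and costs below are hypothetical, not the study's field data:

```python
def targeting_metrics(survey_cost, premium_cost, identified_poor,
                      true_poor, truly_poor_identified):
    """Efficiency = total cost per poor person identified;
    equity (exclusion error) = share of the true poor who are missed."""
    total_cost = survey_cost + premium_cost * identified_poor
    efficiency = total_cost / identified_poor
    exclusion = 1 - truly_poor_identified / true_poor
    return efficiency, exclusion

# Hypothetical counts and costs for one setting (illustration only).
eff, excl = targeting_metrics(survey_cost=2000, premium_cost=10,
                              identified_poor=40, true_poor=50,
                              truly_poor_identified=35)
print(round(eff, 2), round(excl, 2))
```

A strategy can look efficient (low cost per identified person) while still being inequitable if its exclusion error is high, which is why the study reports both.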

  5. Optimism, coping and long-term recovery from coronary artery surgery in women.

    PubMed

    King, K B; Rowe, M A; Kimble, L P; Zerwic, J J

    1998-02-01

Optimism, coping strategies, and psychological and functional outcomes were measured in 55 women undergoing coronary artery surgery. Data were collected in-hospital and at 1, 6, and 12 months after surgery. Optimism was related to positive moods and life satisfaction, and inversely related to negative moods. Few relationships were found between optimism and functional ability. Cognitive coping strategies accounted for a mediating effect between optimism and negative mood. Optimists were more likely to accept their situation, and less likely to use escapism. In turn, these coping strategies were inversely related to negative mood and mediated the relationship between optimism and this outcome. Optimism was not related to problem-focused coping strategies; thus, these coping strategies cannot explain the relationship between optimism and outcomes.

  6. Power plant maintenance scheduling using ant colony optimization: an improved formulation

    NASA Astrophysics Data System (ADS)

    Foong, Wai Kuan; Maier, Holger; Simpson, Angus

    2008-04-01

    It is common practice in the hydropower industry to either shorten the maintenance duration or to postpone maintenance tasks in a hydropower system when there is expected unserved energy based on current water storage levels and forecast storage inflows. It is therefore essential that a maintenance scheduling optimizer can incorporate the options of shortening the maintenance duration and/or deferring maintenance tasks in the search for practical maintenance schedules. In this article, an improved ant colony optimization-power plant maintenance scheduling optimization (ACO-PPMSO) formulation that considers such options in the optimization process is introduced. As a result, both the optimum commencement time and the optimum outage duration are determined for each of the maintenance tasks that need to be scheduled. In addition, a local search strategy is presented in this article to boost the robustness of the algorithm. When tested on a five-station hydropower system problem, the improved formulation is shown to be capable of allowing shortening of maintenance duration in the event of expected demand shortfalls. In addition, the new local search strategy is also shown to have significantly improved the optimization ability of the ACO-PPMSO algorithm.
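The pheromone mechanics underlying any ACO variant are compact enough to sketch. The following is generic bookkeeping (illustrative only; it is not the ACO-PPMSO formulation, whose decisions couple commencement times and outage durations):

```python
# Generic ACO pheromone bookkeeping for one scheduling decision, e.g. the
# candidate commencement times of one maintenance task. Illustrative only.
def update_pheromone(tau, best_choice, rho=0.1, deposit=1.0):
    """Evaporate all trails, then reinforce the option used in the best
    schedule found this iteration."""
    tau = [t * (1 - rho) for t in tau]
    tau[best_choice] += deposit
    return tau

def choice_probabilities(tau):
    """Selection probabilities for the next ant, proportional to pheromone."""
    total = sum(tau)
    return [t / total for t in tau]

tau = update_pheromone([1.0, 1.0, 1.0], best_choice=2)
probs = choice_probabilities(tau)   # option 2 is now favored
```

Repeated evaporation and reinforcement concentrates probability mass on options that keep appearing in good schedules, which is what lets ants "discover" shortened or deferred maintenance windows.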

  7. Optimising reversed-phase liquid chromatographic separation of an acidic mixture on a monolithic stationary phase with the aid of response surface methodology and experimental design.

    PubMed

    Wang, Y; Harrison, M; Clark, B J

    2006-02-10

An optimization strategy for the separation of an acidic mixture by employing a monolithic stationary phase is presented, with the aid of experimental design and response surface methodology (RSM). An orthogonal array design (OAD), OA16(2^15), was used to choose the significant parameters for the optimization. The significant factors were optimized by using a central composite design (CCD), and quadratic models relating the dependent and independent parameters were built. The mathematical models were tested on a number of simulated data sets and had a coefficient of determination R² > 0.97 (n = 16). On applying the optimization strategy, the factor effects were visualized as three-dimensional (3D) response surfaces and contour plots. The optimal condition was achieved in less than 40 min by using the monolithic packing with a mobile phase of methanol/20 mM phosphate buffer pH 2.7 (25.5/74.5, v/v). The method showed good agreement between the experimental data and predicted values throughout the studied parameter space and was suitable for optimization studies on the monolithic stationary phase for acidic compounds.

  8. A reduced energy supply strategy in active vibration control

    NASA Astrophysics Data System (ADS)

    Ichchou, M. N.; Loukil, T.; Bareille, O.; Chamberland, G.; Qiu, J.

    2011-12-01

In this paper, a control strategy is presented and numerically tested. This strategy aims to achieve the potential performance of fully active systems with a reduced energy supply. These energy needs are expected to be comparable to the power demands of semi-active systems, while system performance is intended to be comparable to that of a fully active configuration. The underlying strategy is called 'global semi-active control'. This control approach results from an energy analysis based on management of the optimal control process, encompassing energy storage and its convenient restitution. The proposed strategy tracks a given active control law without any external energy supply by alternating between purely dissipative and energy-demanding phases. The control law is presented here along with an analysis of its properties. A suboptimal form, well suited to practical implementation, is also given. Moreover, a number of numerical experiments are presented in order to validate the approach.

  9. Near-optimal integration of facial form and motion.

    PubMed

    Dobs, Katharina; Ma, Wei Ji; Reddy, Leila

    2017-09-08

Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it is fairly well established that humans integrate low-level cues optimally, weighting each cue in proportion to its relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
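The "optimal strategy" referred to here is usually formalized as inverse-variance (reliability-weighted) cue combination. A minimal sketch of that standard rule (illustrative; not the paper's model code):

```python
def integrate_cues(x1, var1, x2, var2):
    """Reliability-weighted (inverse-variance) combination of two Gaussian
    cues. Returns the optimal estimate and its reduced variance."""
    r1, r2 = 1.0 / var1, 1.0 / var2   # reliabilities
    x_hat = (r1 * x1 + r2 * x2) / (r1 + r2)
    var_hat = 1.0 / (r1 + r2)
    return x_hat, var_hat

# A reliable cue (variance 1.0) pulls the estimate harder than a noisy
# cue (variance 4.0), and the combined variance drops below either alone.
x_hat, var_hat = integrate_cues(0.0, 1.0, 1.0, 4.0)
print(x_hat, var_hat)
```

For facial form and motion, the same arithmetic applies with "cue" standing for the identity evidence carried by each channel; optimality predicts both the weighting and the reliability gain.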

  10. Evaluating Monitoring Strategies to Detect Precipitation-Induced Microbial Contamination Events in Karstic Springs Used for Drinking Water

    PubMed Central

    Besmer, Michael D.; Hammes, Frederik; Sigrist, Jürg A.; Ort, Christoph

    2017-01-01

Monitoring of microbial drinking water quality is a key component for ensuring safety and understanding risk, but conventional monitoring strategies are typically based on low sampling frequencies (e.g., quarterly or monthly). This is of concern because many drinking water sources, such as karstic springs, are often subject to changes in bacterial concentrations on much shorter time scales (e.g., hours to days), for example after precipitation events. Microbial contamination events are crucial from a risk assessment perspective and should therefore be targeted by monitoring strategies to establish both the frequency of their occurrence and the magnitude of bacterial peak concentrations. In this study we used monitoring data from two specific karstic springs. We assessed the performance of conventional monitoring based on historical records and tested a number of alternative strategies based on a high-resolution data set of bacterial concentrations in spring water collected with online flow cytometry (FCM). We quantified the effect of increasing sampling frequency and found that for the specific case studied, at least bi-weekly sampling would be needed to detect precipitation events with a probability of >90%. We then proposed an optimized monitoring strategy with three targeted samples per event, triggered by precipitation measurements. This approach is more effective and efficient than simply increasing overall sampling frequency. It would enable the water utility to (1) analyze any relevant event and (2) limit median underestimation of peak concentrations to approximately 10%. We conclude with a generalized perspective on sampling optimization and argue that the assessment of short-term dynamics causing microbial peak loads initially requires increased sampling/analysis efforts, but can be optimized subsequently to account for limited resources. This offers water utilities and public health authorities systematic ways to evaluate and optimize their current monitoring strategies. PMID:29213255
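The dependence of detection probability on sampling frequency can be illustrated with a back-of-envelope model (a deliberate simplification; the study's estimates come from the high-resolution FCM record, not from this formula):

```python
# Back-of-envelope detection model: if a contamination event lasts
# `event_days` and grab samples are taken every `sampling_interval_days` at
# an arbitrary phase, the chance that at least one sample falls inside the
# event window is roughly min(1, event_days / sampling_interval_days).
def detection_probability(event_days, sampling_interval_days):
    return min(1.0, event_days / sampling_interval_days)

p_monthly = detection_probability(3, 30)     # 3-day event, monthly sampling
p_biweekly = detection_probability(3, 3.5)   # twice-weekly sampling
```

Even this crude model shows why monthly grab samples miss most short events, and why event-triggered sampling (as proposed above) beats a uniform frequency increase: it spends samples only where the events are.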

  12. Testing of Strategies for the Acceleration of the Cost Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponciroli, Roberto; Vilim, Richard B.

The general problem addressed in the Nuclear-Renewable Hybrid Energy System (N-R HES) project is finding the optimum economical dispatch (ED) and capacity planning solutions for the hybrid energy systems. In the present test-problem configuration, the N-R HES unit is composed of three electrical power-generating components, i.e. the Balance of Plant (BOP), the Secondary Energy Source (SES), and the Energy Storage (ES). In addition, there is an Industrial Process (IP), which is devoted to hydrogen generation. At this preliminary stage, the goal is to find the power outputs of each one of the N-R HES unit components (BOP, SES, ES) and the IP hydrogen production level that maximizes the unit profit by simultaneously satisfying individual component operational constraints. The optimization problem is meant to be solved in the Risk Analysis Virtual Environment (RAVEN) framework. The dynamic response of the N-R HES unit components is simulated by using dedicated object-oriented models written in the Modelica modeling language. Though this code coupling provides for very accurate predictions, the ensuing optimization problem is characterized by a very large number of solution variables. To ease the computational burden and to improve the path to a converged solution, a method to better estimate the initial guess for the optimization problem solution was developed. The proposed approach led to the definition of a suitable Monte Carlo-based optimization algorithm (called the preconditioner), which provides an initial guess for the optimal N-R HES power dispatch and the optimal installed capacity for each one of the unit components. The preconditioner samples a set of stochastic power scenarios for each one of the N-R HES unit components, and then for each of them the corresponding value of a suitably defined cost function is evaluated.
After having simulated a sufficient number of power histories, the configuration which ensures the highest profit is selected as the optimal one. The component physical dynamics are represented through suitable ramp constraints, which considerably simplify the numerical solution. In order to test the capabilities of the proposed approach, only the dispatch problem is tackled in the present report, i.e. a reference unit configuration is assumed, and each of the N-R HES unit components is assumed to have a fixed installed capacity. As for the next steps, the main improvement will concern the operation strategy of the ES facility. In particular, in order to describe a more realistic battery commitment strategy, the ES operation will be regulated according to the electricity price forecasts.
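The preconditioner idea, stripped to its core, is sample-and-select. A minimal sketch with a hypothetical profit function (the real evaluation couples Modelica component dynamics and electricity prices inside RAVEN):

```python
import random

# Minimal sketch of the Monte Carlo preconditioner: sample random power
# scenarios, score each with a profit function, and keep the best as a warm
# start for the full optimizer. The quadratic profit function below is
# hypothetical, standing in for the coupled dynamic simulation.
def profit(bop, ses, es):
    return 10 * bop + 6 * ses + 4 * es - 0.02 * (bop**2 + ses**2 + es**2)

def precondition(n_samples, seed=0):
    rng = random.Random(seed)
    best, best_profit = None, float("-inf")
    for _ in range(n_samples):
        # one random power scenario: (BOP, SES, ES) output levels
        candidate = tuple(rng.uniform(0.0, 400.0) for _ in range(3))
        p = profit(*candidate)
        if p > best_profit:
            best, best_profit = candidate, p
    return best, best_profit

guess, value = precondition(10_000)   # handed to the optimizer as a warm start
```

A cheap, coarse scoring function is acceptable here: the preconditioner only needs to land in the right basin, after which the expensive optimizer refines the solution.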

  13. Optimality and stability of symmetric evolutionary games with applications in genetic selection.

    PubMed

    Huang, Yuanyuan; Hao, Yiping; Wang, Min; Zhou, Wen; Wu, Zhijun

    2015-06-01

    Symmetric evolutionary games, i.e., evolutionary games with symmetric fitness matrices, have important applications in population genetics, where they can be used to model for example the selection and evolution of the genotypes of a given population. In this paper, we review the theory for obtaining optimal and stable strategies for symmetric evolutionary games, and provide some new proofs and computational methods. In particular, we review the relationship between the symmetric evolutionary game and the generalized knapsack problem, and discuss the first and second order necessary and sufficient conditions that can be derived from this relationship for testing the optimality and stability of the strategies. Some of the conditions are given in different forms from those in previous work and can be verified more efficiently. We also derive more efficient computational methods for the evaluation of the conditions than conventional approaches. We demonstrate how these conditions can be applied to justifying the strategies and their stabilities for a special class of genetic selection games including some in the study of genetic disorders.
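A common computational probe of the optimality and stability discussed above is replicator dynamics: optimal strategies appear as attracting fixed points. A minimal sketch with a hypothetical symmetric fitness matrix (not the paper's knapsack-based conditions):

```python
# Standard discrete replicator dynamics for a symmetric evolutionary game
# with fitness matrix A. The 2x2 matrix below is a hypothetical example.
def replicator_step(x, A):
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    mean_fitness = sum(x[i] * Ax[i] for i in range(n))
    return [x[i] * Ax[i] / mean_fitness for i in range(n)]

# Symmetric anti-coordination matrix: the interior mixed strategy (1/3, 2/3)
# equalizes fitness across strategies and attracts the dynamics.
A = [[0.0, 2.0],
     [2.0, 1.0]]
x = [0.9, 0.1]
for _ in range(200):
    x = replicator_step(x, A)
```

In the population-genetics reading, `x` is the genotype frequency vector and the stable fixed point is the equilibrium the selection process drives the population toward.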

  14. Optimization based on benefit of regional energy suppliers of distributed generation in active distribution network

    NASA Astrophysics Data System (ADS)

    Huo, Xianxu; Li, Guodong; Jiang, Ling; Wang, Xudong

    2017-08-01

With the development of the electricity market, distributed generation (DG) technology and related policies, regional energy suppliers are encouraged to build DG. Against this background, the concept of the active distribution network (ADN) has been put forward. In this paper, a bi-level model of intermittent DG that considers the benefit of regional energy suppliers is proposed. The objective of the upper level is the maximization of the benefit of regional energy suppliers. On this basis, the lower level is optimized for each scenario. The uncertainties of DG output and user load, as well as four active management measures, which include demand-side management, curtailing the output power of DG, regulating reactive power compensation capacity and regulating the on-load tap changer, are considered. Harmony search algorithm and particle swarm optimization are combined as a hybrid strategy to solve the model. The model and strategy are tested on the IEEE 33-node system, and results of the case study indicate that they successfully increase both the capacity of DG and the benefit of regional energy suppliers.

  15. Sensitivity analysis, approximate analysis, and design optimization for internal and external viscous flows

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.; Korivi, Vamshi M.

    1991-01-01

    A gradient-based design optimization strategy for practical aerodynamic design applications is presented, which uses the 2D thin-layer Navier-Stokes equations. The strategy is based on the classic idea of constructing different modules for performing the major tasks such as function evaluation, function approximation and sensitivity analysis, mesh regeneration, and grid sensitivity analysis, all driven and controlled by a general-purpose design optimization program. The accuracy of aerodynamic shape sensitivity derivatives is validated on two viscous test problems: internal flow through a double-throat nozzle and external flow over a NACA 4-digit airfoil. A significant improvement in aerodynamic performance has been achieved in both cases. Particular attention is given to a consistent treatment of the boundary conditions in the calculation of the aerodynamic sensitivity derivatives for the classic problems of external flow over an isolated lifting airfoil on 'C' or 'O' meshes.
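Aerodynamic sensitivity derivatives of the kind validated here are routinely cross-checked against finite differences of the objective. A generic sketch of that check (an illustration, not the authors' consistently differentiated scheme):

```python
# Generic central-difference sensitivity check: perturb one design variable
# and difference the objective. Used to validate analytic derivatives.
def central_difference(f, x, i, h=1e-6):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2 * h)

# Example objective (hypothetical): f = x0^2 + 3*x1, so df/dx1 = 3 at any point.
g = central_difference(lambda v: v[0]**2 + 3 * v[1], [2.0, 5.0], 1)
```

For CFD-based objectives each evaluation of `f` is a full flow solve, which is why consistent analytic sensitivities, including the grid sensitivity terms the paper emphasizes, are worth the implementation effort.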

  16. Enhancing artificial bee colony algorithm with self-adaptive searching strategy and artificial immune network operators for global optimization.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

Artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in a local optimum when handling functions that have a narrow curving valley, a high eccentric ellipse, or complex multimodal landscapes. To address this, we propose an enhanced ABC algorithm, called EABC, that introduces a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. The simulation results, tested on a suite of unimodal and multimodal benchmark functions, illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments.
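For reference, the neighborhood move that the basic ABC (and hence EABC) builds on is a one-dimension perturbation toward or away from a random neighbor. A sketch of that standard update (the Karaboga rule, not the EABC variant itself):

```python
import random

# Basic ABC employed-bee move: perturb one dimension of a food source using
# a random neighbor, then clip to the search box. Onlooker and scout phases
# of the full algorithm are omitted in this sketch.
def employed_bee_move(x, neighbor, bounds, rng):
    j = rng.randrange(len(x))                 # pick one dimension
    phi = rng.uniform(-1.0, 1.0)
    trial = list(x)
    trial[j] = x[j] + phi * (x[j] - neighbor[j])
    lo, hi = bounds
    trial[j] = min(max(trial[j], lo), hi)     # clip to the search box
    return trial

rng = random.Random(42)
trial = employed_bee_move([1.0, 2.0], [0.5, 3.0], (-5.0, 5.0), rng)
```

Because only one coordinate changes per move, plain ABC explores narrow curving valleys slowly, which is the weakness the self-adaptive strategy in the paper targets.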

  17. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  19. SCCT guidelines on radiation dose and dose-optimization strategies in cardiovascular CT

    PubMed Central

    Halliburton, Sandra S.; Abbara, Suhny; Chen, Marcus Y.; Gentry, Ralph; Mahesh, Mahadevappa; Raff, Gilbert L.; Shaw, Leslee J.; Hausleiter, Jörg

    2012-01-01

    Over the last few years, computed tomography (CT) has developed into a standard clinical test for a variety of cardiovascular conditions. The emergence of cardiovascular CT during a period of dramatic increase in radiation exposure to the population from medical procedures and heightened concern about the subsequent potential cancer risk has led to intense scrutiny of the radiation burden of this new technique. This has hastened the development and implementation of dose reduction tools and prompted closer monitoring of patient dose. In an effort to aid the cardiovascular CT community in incorporating patient-centered radiation dose optimization and monitoring strategies into standard practice, the Society of Cardiovascular Computed Tomography has produced a guideline document to review available data and provide recommendations regarding interpretation of radiation dose indices and predictors of risk, appropriate use of scanner acquisition modes and settings, development of algorithms for dose optimization, and establishment of procedures for dose monitoring. PMID:21723512

  20. The time-efficiency principle: time as the key diagnostic strategy in primary care.

    PubMed

    Irving, Greg; Holden, John

    2013-08-01

    The test and retest opportunity afforded by reviewing a patient over time substantially increases the total gain in certainty when making a diagnosis in low-prevalence settings (the time-efficiency principle). This approach safely and efficiently reduces the number of patients who need to be formally tested in order to make a correct diagnosis for a person. Time, in terms of observed disease trajectory, provides a vital mechanism for achieving this task. It remains the best strategy for delivering near-optimal diagnoses in low-prevalence settings and should be used to its full advantage.
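The certainty gain from re-testing over time can be illustrated with textbook Bayes updating (a simple illustration of the principle, with hypothetical test characteristics, not the authors' formal analysis):

```python
# Bayes update of disease probability after one test-like observation.
# Treating each review of the patient's trajectory as an imperfect "test"
# shows how serial observation compounds diagnostic certainty.
def posterior(prior, sensitivity, specificity, positive=True):
    if positive:
        num = sensitivity * prior
        den = num + (1 - specificity) * (1 - prior)
    else:
        num = (1 - sensitivity) * prior
        den = num + specificity * (1 - prior)
    return num / den

p = 0.02                  # low pre-test probability (primary care setting)
for _ in range(2):        # two consistent observations over time
    p = posterior(p, 0.8, 0.9)
print(p)
```

Starting from a 2% pre-test probability, two consistent observations lift the posterior past 50%, which is the arithmetic behind using time itself as a diagnostic instrument in low-prevalence settings.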

  1. Optimal exploitation strategies for an animal population in a stochastic serially correlated environment

    USGS Publications Warehouse

    Anderson, D.R.

    1974-01-01

Optimal exploitation strategies were studied for an animal population in a stochastic, serially correlated environment. This is a general case and encompasses a number of important cases as simplifications. Data on the mallard (Anas platyrhynchos) were used to explore the exploitation strategies and test several hypotheses because much is known concerning the life history and general ecology of this species and extensive empirical data are available for analysis. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. Desirable properties of an optimal exploitation strategy were defined. A mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. Both the literature and the analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, alternative hypotheses were formulated: (1) exploitation mortality represents a largely additive form of mortality, or (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. Assuming that exploitation is largely an additive force of mortality, optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions.
Optimal exploitation under this hypothesis tends to reduce the variance of the size of the population. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the breeding population. Environmental variables may be somewhat more important than the size of the breeding population to the production of young mallards. In contrast, the size of the breeding population appears to be more important in the exploitation process than is the state of the environment. The form of the exploitation strategy appears to be relatively insensitive to small changes in the production rate. In general, the relative importance of the size of the breeding population may decrease as fecundity increases. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, harvest rate, or designed to maintain a constant breeding population size is inefficient.
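The backbone of the stochastic dynamic programming formulation described above is backward induction over population states. A deterministic toy with hypothetical growth and harvest numbers (the actual model also conditions on a serially correlated environmental state):

```python
# Toy dynamic program: choose a harvest fraction for each population state
# by backward induction. Numbers are hypothetical, not the mallard model.
HARVESTS = [0.0, 0.25, 0.5]        # candidate exploitation rates
STATES = [100.0, 200.0, 300.0]     # breeding-population sizes

def next_state(i, h):
    """Toy transition: no harvest lets the population grow one state,
    heavy harvest shrinks it one state, moderate harvest holds it."""
    if h == 0.0:
        return min(i + 1, len(STATES) - 1)
    if h == 0.5:
        return max(i - 1, 0)
    return i

def solve(horizon, discount=0.95):
    value = [0.0] * len(STATES)
    policy = [0.0] * len(STATES)
    for _ in range(horizon):
        old = value
        value = [0.0] * len(STATES)
        for i, s in enumerate(STATES):
            best, best_h = float("-inf"), 0.0
            for h in HARVESTS:
                total = h * s + discount * old[next_state(i, h)]
                if total > best:
                    best, best_h = total, h
            value[i], policy[i] = best, best_h
    return value, policy

value, policy = solve(horizon=50)
```

The resulting `policy` maps the observed population state to a harvest rate, the direct-feedback-control structure the abstract emphasizes; the stochastic version replaces `old[next_state(i, h)]` with an expectation over environmental transitions.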

  2. Redundancy allocation problem for k-out-of-n systems with a choice of redundancy strategies

    NASA Astrophysics Data System (ADS)

    Aghaei, Mahsa; Zeinal Hamadani, Ali; Abouei Ardakan, Mostafa

    2017-03-01

To increase the reliability of a specific system, using redundant components is a common method, which is called the redundancy allocation problem (RAP). Some RAP studies have focused on k-out-of-n systems. However, all of these studies assumed predetermined active or standby strategies for each subsystem. In this paper, for the first time, we propose a k-out-of-n system with a choice of redundancy strategies. Therefore, a k-out-of-n series-parallel system is considered when the redundancy strategy can be chosen for each subsystem. In other words, in the proposed model, the redundancy strategy is considered as an additional decision variable and an exact method based on integer programming is used to obtain the optimal solution of the problem. As the optimization of RAP belongs to the NP-hard class of problems, a modified version of genetic algorithm (GA) is also developed. The exact method and the proposed GA are implemented on a well-known test problem and the results demonstrate the efficiency of the new approach compared with the previous studies.
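For context, the reliability of a single k-out-of-n subsystem of independent, identical active components has a standard closed form (this sketch covers only the active strategy; the standby strategies the paper optimizes over need a different model):

```python
from math import comb

# Reliability of an active k-out-of-n subsystem: at least k of the n
# independent, identical components (each surviving with probability p)
# must work.
def k_out_of_n_reliability(k, n, p):
    return sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(k, n + 1))

r3 = k_out_of_n_reliability(2, 3, 0.9)   # 2-out-of-3 with p = 0.9
r5 = k_out_of_n_reliability(2, 5, 0.9)   # two extra redundant components
```

Adding redundancy raises `r5` above `r3`; the RAP is precisely the question of where to spend such additions, and with which strategy, under cost and weight constraints.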

  3. Identifying optimum performance trade-offs using a cognitively bounded rational analysis model of discretionary task interleaving.

    PubMed

    Janssen, Christian P; Brumby, Duncan P; Dowell, John; Chater, Nick; Howes, Andrew

    2011-01-01

    We report the results of a dual-task study in which participants performed a tracking and typing task under various experimental conditions. An objective payoff function was used to provide explicit feedback on how participants should trade off performance between the tasks. Results show that participants' dual-task interleaving strategy was sensitive to changes in the difficulty of the tracking task and resulted in differences in overall task performance. To test the hypothesis that people select strategies that maximize payoff, a Cognitively Bounded Rational Analysis model was developed. This analysis evaluated a variety of dual-task interleaving strategies to identify the optimal strategy for maximizing payoff in each condition. The model predicts that the region of optimum performance is different between experimental conditions. The correspondence between human data and the prediction of the optimal strategy is found to be remarkably high across a number of performance measures. This suggests that participants were honing their behavior to maximize payoff. Limitations are discussed. Copyright © 2011 Cognitive Science Society, Inc.

  4. Fast and Accurate Construction of Ultra-Dense Consensus Genetic Maps Using Evolution Strategy Optimization

    PubMed Central

    Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham

    2015-01-01

Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis by ignoring unique markers at the stage of consensus mapping, thereby reducing the mathematical complexity of the problem and in turn allowing larger mapping data sets to be analyzed using global optimization criteria instead of local ones. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and its verification using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high-quality, ultra-dense consensus maps with many thousands of markers per genome. This algorithm utilizes "potentially good orders" in the initial solution and in the new mutation procedures that generate trial solutions, making it possible to obtain a consensus order in reasonable time. The developed algorithm, tested on a wide range of simulated data and real world data (Arabidopsis), outperformed two tested state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943
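The flavor of an evolution-strategy search over marker orders can be sketched with a tiny (1+1)-style hill climber on permutations (illustrative only; the paper's operators exploit "potentially good orders" and a global optimization criterion over multiple populations):

```python
import random

# Tiny (1+1)-style evolution strategy over marker orders: mutate a
# permutation by reversing a random segment and keep the child if it does
# not worsen the total adjacent distance.
def order_cost(order, dist):
    return sum(dist[a][b] for a, b in zip(order, order[1:]))

def es_order_search(dist, iters, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    best = list(range(n))
    rng.shuffle(best)
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        child = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
        if order_cost(child, dist) <= order_cost(best, dist):
            best = child
    return best

# Six markers on a line; pairwise distance = separation. A good order is a
# monotone sweep along the line.
dist = [[abs(a - b) for b in range(6)] for a in range(6)]
best = es_order_search(dist, iters=2000, seed=1)
```

With recombination-based distances in place of `abs(a - b)`, minimizing the same adjacent-distance objective over thousands of markers is what makes the mutation operators and warm starts described in the abstract decisive for runtime.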

  5. A programmable optimization environment using the GAMESS-US and MERLIN/MCL packages. Applications on intermolecular interaction energies

    NASA Astrophysics Data System (ADS)

    Kalatzis, Fanis G.; Papageorgiou, Dimitrios G.; Demetropoulos, Ioannis N.

    2006-09-01

The Merlin/MCL optimization environment and the GAMESS-US package were combined so as to offer an extended and efficient quantum chemistry optimization system, capable of implementing complex optimization strategies for generic molecular modeling problems. A communication and data exchange interface was established between the two packages exploiting all Merlin features such as multiple optimizers, box constraints, user extensions and a high level programming language. An important feature of the interface is its ability to perform dimer computations by eliminating the basis set superposition error using the counterpoise (CP) method of Boys and Bernardi. Furthermore it offers CP-corrected geometry optimizations using analytic derivatives. The unified optimization environment was applied to construct portions of the intermolecular potential energy surface of the weakly bound H-bonded complex C6H6-H2O by utilizing the high level Merlin Control Language. The H-bonded dimer HF-H2O was also studied by CP-corrected geometry optimization. The ab initio electronic structure energies were calculated using the 6-31G** basis set at the Restricted Hartree-Fock and second-order Møller-Plesset levels, while all geometry optimizations were carried out using a quasi-Newton algorithm provided by Merlin. Program summary: Title of program: MERGAM Catalogue identifier: ADYB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYB_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer for which the program is designed and others on which it has been tested: The program is designed for machines running the UNIX operating system.
It has been tested on the following architectures: IA32 (Linux with gcc/g77 v.3.2.3), AMD64 (Linux with the Portland group compilers v.6.0), SUN64 (SunOS 5.8 with the Sun Workshop compilers v.5.2) and SGI64 (IRIX 6.5 with the MIPSpro compilers v.7.4) Installations: University of Ioannina, Greece Operating systems or monitors under which the program has been tested: UNIX Programming language used: ANSI C, ANSI Fortran-77 No. of lines in distributed program, including test data, etc.:11 282 No. of bytes in distributed program, including test data, etc.: 49 458 Distribution format: tar.gz Memory required to execute with typical data: Memory requirements mainly depend on the selection of a GAMESS-US basis set and the number of atoms No. of bits in a word: 32 No. of processors used: 1 Has the code been vectorized or parallelized?: no Nature of physical problem: Multidimensional geometry optimization is of great importance in any ab initio calculation since it usually is one of the most CPU-intensive tasks, especially on large molecular systems. For example, the geometric and energetic description of van der Waals and weakly bound H-bonded complexes requires the construction of related important portions of the multidimensional intermolecular potential energy surface (IPES). So the various held views about the nature of these bonds can be quantitatively tested. Method of solution: The Merlin/MCL optimization environment was interconnected with the GAMESS-US package to facilitate geometry optimization in quantum chemistry problems. The important portions of the IPES require the capability to program optimization strategies. The Merlin/MCL environment was used for the implementation of such strategies. In this work, a CP-corrected geometry optimization was performed on the HF-H 2O complex and an MCL program was developed to study portions of the potential energy surface of the C 6H 6-H 2O complex. 
Restrictions on the complexity of the problem: The Merlin optimization environment and the GAMESS-US package must be installed. The MERGAM interface requires GAMESS-US input files that have been constructed in Cartesian coordinates. This restriction occurs from a design-time requirement to not allow reorientation of atomic coordinates; this rule holds always true when applying the COORD = UNIQUE keyword in a GAMESS-US input file. Typical running time: It depends on the size of the molecular system, the size of the basis set and the method of electron correlation. Execution of the test run took approximately 5 min on a 2.8 GHz Intel Pentium CPU.

  6. Emergency strategy optimization for the environmental control system in manned spacecraft

    NASA Astrophysics Data System (ADS)

    Li, Guoxiang; Pang, Liping; Liu, Meng; Fang, Yufeng; Zhang, Helin

    2018-02-01

    It is very important for the environmental control system (ECS) of a manned spacecraft to be able to reconfigure its operation strategy in emergency conditions. In this article, a multi-objective optimization problem is formulated to design the optimal emergency strategy for an ECS under an insufficient power supply. The maximum ECS lifetime and the minimum power consumption are chosen as the optimization objectives. Adjustable key variables are chosen as the optimization variables, which together represent the reconfigured emergency strategy. The non-dominated sorting genetic algorithm-II (NSGA-II) is adopted to solve this multi-objective optimization problem. Optimization runs are conducted at four different carbon dioxide partial pressure control levels. The results show that the Pareto-optimal frontiers obtained from this multi-objective optimization capture the trade-off between the lifetime and the power consumption of the ECS. Hence, a preferred emergency operation strategy can be recommended for situations in which the power supply suddenly becomes insufficient.
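The non-dominated sorting at the heart of NSGA-II rests on Pareto dominance. A minimal sketch for a two-objective problem of this shape (maximize lifetime, minimize power); the candidate values are invented for illustration:

```python
def dominates(a, b):
    """a dominates b if a is no worse in both objectives and strictly
    better in at least one (lifetime: larger is better; power: smaller)."""
    lifetime_a, power_a = a
    lifetime_b, power_b = b
    no_worse = lifetime_a >= lifetime_b and power_a <= power_b
    strictly_better = lifetime_a > lifetime_b or power_a < power_b
    return no_worse and strictly_better

def pareto_front(candidates):
    """Keep the candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# (lifetime in days, power in kW) -- illustrative placeholder strategies
strategies = [(30, 5.0), (25, 4.0), (30, 6.0), (20, 4.5), (35, 7.0)]
print(pareto_front(strategies))
```

The surviving set is the Pareto-optimal frontier from which a preferred strategy can then be picked by a decision maker.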

  7. Optimizing participation of children with autism spectrum disorder experiencing sensory challenges: a clinical reasoning framework.

    PubMed

    Ashburner, Jill K; Rodger, Sylvia A; Ziviani, Jenny M; Hinder, Elizabeth A

    2014-02-01

    Remedial sensory interventions currently lack supportive evidence and can be challenging to implement for families and clinicians. It may be timely to shift the focus to optimizing participation of children with autism spectrum disorders (ASD) through accommodation and self-regulation of their sensory differences. A framework to guide practitioners in selecting strategies is proposed based on clinical reasoning considerations, including (a) research evidence, (b) client- and family-centredness, (c) practice contexts, (d) occupation-centredness, and (e) risks. Information-sharing with families and coaching constitute the basis for intervention. Specific strategies are identified where sensory aversions or seeking behaviours, challenges with modulation of arousal, or sensory-related behaviours interfere with participation. Self-regulatory strategies are advocated. The application of universal design principles to shared environments is also recommended. The implications of this framework for future research, education, and practice are discussed. The clinical utility of the framework now needs to be tested.

  8. Decision Modeling in Sleep Apnea: The Critical Roles of Pretest Probability, Cost of Untreated Obstructive Sleep Apnea, and Time Horizon

    PubMed Central

    Moro, Marilyn; Westover, M. Brandon; Kelly, Jessica; Bianchi, Matt T.

    2016-01-01

    Study Objectives: Obstructive sleep apnea (OSA) is associated with increased morbidity and mortality, and treatment with positive airway pressure (PAP) is cost-effective. However, the optimal diagnostic strategy remains a subject of debate. Prior modeling studies have not consistently supported the widely held assumption that home sleep testing (HST) is cost-effective. Methods: We modeled four strategies: (1) treat no one; (2) treat everyone empirically; (3) treat those testing positive during in-laboratory polysomnography (PSG) via in-laboratory titration; and (4) treat those testing positive during HST with auto-PAP. The population was assumed to lack independent reasons for in-laboratory PSG (such as insomnia, periodic limb movements in sleep, complex apnea). We considered the third-party payer perspective, via both standard (quality-adjusted) and pure cost methods. Results: The preferred strategy depended on three key factors: pretest probability of OSA, cost of untreated OSA, and time horizon. At low prevalence and low cost of untreated OSA, the treat no one strategy was favored, whereas empiric treatment was favored for high prevalence and high cost of untreated OSA. In-laboratory backup for failures in the at-home strategy increased the preference for the at-home strategy. Without laboratory backup in the at-home arm, the in-laboratory strategy was increasingly preferred at longer time horizons. Conclusion: Using a model framework that captures a broad range of clinical possibilities, the optimal diagnostic approach to uncomplicated OSA depends on pretest probability, cost of untreated OSA, and time horizon. Estimating each of these critical factors remains a challenge warranting further investigation. Citation: Moro M, Westover MB, Kelly J, Bianchi MT. Decision modeling in sleep apnea: the critical roles of pretest probability, cost of untreated obstructive sleep apnea, and time horizon. J Clin Sleep Med 2016;12(3):409–418. PMID:26518699

  9. An implementation of differential evolution algorithm for inversion of geoelectrical data

    NASA Astrophysics Data System (ADS)

    Balkaya, Çağlayan

    2013-11-01

    Differential evolution (DE), a population-based evolutionary algorithm (EA), has been implemented to invert self-potential (SP) and vertical electrical sounding (VES) data sets. The algorithm uses three operators, mutation, crossover and selection, similar to the genetic algorithm (GA). Mutation is the most important operator for the success of DE. Three commonly used mutation strategies, DE/best/1 (strategy 1), DE/rand/1 (strategy 2) and DE/rand-to-best/1 (strategy 3), were applied together with a binomial-type crossover. The evolution cycle of DE was realized without boundary constraints. For the test studies performed with SP data, in addition to noise-free and noisy synthetic data sets, two field data sets were considered, observed over the sulfide ore body in the Malachite mine (Colorado) and over the ore bodies in the Neem-Ka Thana copper belt (India). VES test studies were carried out using synthetically produced resistivity data representing a three-layered earth model and a field data set from Gökçeada (Turkey), which displays a seawater infiltration problem. The mutation strategies mentioned above were also extensively tested on the synthetic and field data sets under consideration. Of these, strategy 1 was found to be the most effective for parameter estimation, providing lower computational cost together with good accuracy. The solutions obtained by DE for the synthetic SP cases were quite consistent with those of particle swarm optimization (PSO), a population-based optimization algorithm more widely used in geophysics than DE. Estimated parameters of the SP and VES data were also compared with those obtained from the Metropolis-Hastings (M-H) sampling algorithm, based on simulated annealing (SA) without cooling, to clarify uncertainties in the solutions. The comparison shows that DE performs fast approximate posterior sampling for low-dimensional inverse geophysical problems.
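A minimal sketch of DE with the DE/best/1 mutation strategy and binomial crossover, illustrated on the sphere function rather than a geophysical misfit; the control parameters (F, CR, population size) are typical defaults, not the paper's settings, and, as in the abstract, no boundary constraints are enforced during evolution:

```python
import random

def de_best_1(objective, dim, pop_size=20, F=0.8, CR=0.9,
              bounds=(-5.0, 5.0), generations=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    for _ in range(generations):
        best = pop[fit.index(min(fit))]
        for i in range(pop_size):
            r1, r2 = rng.sample([j for j in range(pop_size) if j != i], 2)
            j_rand = rng.randrange(dim)   # force at least one mutant gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    # DE/best/1 mutation: v = best + F * (x_r1 - x_r2)
                    trial.append(best[j] + F * (pop[r1][j] - pop[r2][j]))
                else:
                    trial.append(pop[i][j])   # binomial crossover keeps gene
            f_trial = objective(trial)
            if f_trial <= fit[i]:             # greedy selection
                pop[i], fit[i] = trial, f_trial
    return min(fit)

sphere = lambda x: sum(v * v for v in x)
print(de_best_1(sphere, dim=3))  # converges toward 0
```

DE/best/1 is the greediest of the three strategies, which matches the abstract's observation that it reaches good accuracy at lower computational cost, at some risk of premature convergence on harder landscapes.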

  10. Optimization through satisficing with prospects

    NASA Astrophysics Data System (ADS)

    Oyo, Kuratomo; Takahashi, Tatsuji

    2017-07-01

    As the broadening scope of reinforcement learning calls for rational and more efficient heuristics, we test a satisficing strategy named RS, based on the theory of bounded rationality, which takes into account the limited resources of agents. In K-armed bandit problems, despite having a simpler form than previous formalizations of satisficing, RS shows better-than-optimal performance when the optimal aspiration level is given. We also show that RS scales with the number of actions, K, and adapts in the face of an infinite number of actions. It may be an efficient means for online learning in complex or real environments.
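A hedged sketch of one common formalization of the RS value in this line of work, RS_i = n_i (mean_i - aspiration) with an argmax choice rule; the arm probabilities and aspiration level are illustrative, and this is a reading of the satisficing idea, not the paper's exact algorithm:

```python
import random

def rs_bandit(arm_probs, aspiration, steps=5000, seed=0):
    """K-armed Bernoulli bandit driven by RS_i = n_i * (mean_i - aspiration).
    Arms whose running mean exceeds the aspiration are reinforced;
    otherwise less-tried arms have the less negative RS and get explored."""
    rng = random.Random(seed)
    K = len(arm_probs)
    n = [0] * K         # pull counts
    mean = [0.0] * K    # running reward means
    for _ in range(steps):
        rs = [n[i] * (mean[i] - aspiration) for i in range(K)]
        i = rs.index(max(rs))                 # satisficing choice
        reward = 1.0 if rng.random() < arm_probs[i] else 0.0
        n[i] += 1
        mean[i] += (reward - mean[i]) / n[i]  # incremental mean update
    return n

# Aspiration set between the best (0.8) and second-best (0.6) arms:
# the "optimal aspiration level" regime described in the abstract.
counts = rs_bandit([0.4, 0.6, 0.8], aspiration=0.7)
print(counts)
```

With the aspiration placed between the best and second-best reward rates, only the best arm satisfices in the long run, so its pull count comes to dominate without any explicit exploration schedule.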

  11. Optimizing Oral Bioavailability in Drug Discovery: An Overview of Design and Testing Strategies and Formulation Options.

    PubMed

    Aungst, Bruce J

    2017-04-01

    For discovery teams working toward new, orally administered therapeutic agents, one requirement is to attain adequate systemic exposure after oral dosing, which is best accomplished when oral bioavailability is optimized. This report summarizes the bioavailability challenges currently faced in drug discovery, and the design and testing methods and strategies currently utilized to address the challenges. Profiling of discovery compounds usually includes separate assessments of solubility, permeability, and susceptibility to first-pass metabolism, which are the 3 most likely contributors to incomplete oral bioavailability. An initial assessment of absorption potential may be made computationally, and high throughput in vitro assays are typically performed to prioritize compounds for in vivo studies. The initial pharmacokinetic study is a critical decision point in compound evaluation, and the importance of the effect the dosing vehicle or formulation can have on oral bioavailability, especially for poorly water soluble compounds, is emphasized. Dosing vehicles and bioavailability-enabling formulations that can be used for discovery and preclinical studies are described. Optimizing oral bioavailability within a chemical series or for a lead compound requires identification of the barrier limiting bioavailability, and methods used for this purpose are outlined. Finally, a few key guidelines are offered for consideration when facing the challenges of optimizing oral bioavailability in drug discovery. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  12. A hybrid Q-learning sine-cosine-based strategy for addressing the combinatorial test suite minimization problem

    PubMed Central

    Zamli, Kamal Z.; Din, Fakhrud; Bures, Miroslav

    2018-01-01

    The sine-cosine algorithm (SCA) is a new population-based meta-heuristic algorithm. In addition to exploiting sine and cosine functions to perform local and global searches (hence the name sine-cosine), the SCA introduces several random and adaptive parameters to facilitate the search process. Although it shows promising results, the search process of the SCA is vulnerable to local minima/maxima due to the adoption of a fixed switch probability and the bounded magnitude of the sine and cosine functions (from -1 to 1). In this paper, we propose a new hybrid Q-learning sine-cosine-based strategy, called the Q-learning sine-cosine algorithm (QLSCA). Within the QLSCA, we eliminate the switching probability. Instead, we rely on the Q-learning algorithm (based on the penalty and reward mechanism) to dynamically identify the best operation during runtime. Additionally, we integrate two new operations (Lévy flight motion and crossover) into the QLSCA to facilitate jumping out of local minima/maxima and enhance the solution diversity. To assess its performance, we adopt the QLSCA for the combinatorial test suite minimization problem. Experimental results reveal that the QLSCA is statistically superior with regard to test suite size reduction compared to recent state-of-the-art strategies, including the original SCA, the particle swarm test generator (PSTG), adaptive particle swarm optimization (APSO) and the cuckoo search strategy (CS) at the 95% confidence level. However, concerning the comparison with discrete particle swarm optimization (DPSO), there is no significant difference in performance at the 95% confidence level. On a positive note, the QLSCA statistically outperforms the DPSO in certain configurations at the 90% confidence level. PMID:29771918
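The operator-selection idea, replacing a fixed switch probability with a Q-table over search operations updated by reward and penalty, can be sketched as follows. The operator names, reward scheme, simulated improvement rates, and parameters are illustrative stand-ins, not the paper's exact design:

```python
import random

OPS = ["sine", "cosine", "levy_flight", "crossover"]

def select_op(q, eps, rng):
    """Epsilon-greedy choice over the operator Q-values."""
    if rng.random() < eps:
        return rng.randrange(len(OPS))
    return max(range(len(OPS)), key=q.__getitem__)

def q_update(q, op, reward, alpha=0.05, gamma=0.9):
    # Stateless Q-learning update: Q(op) += alpha*(r + gamma*max Q - Q(op))
    q[op] += alpha * (reward + gamma * max(q) - q[op])

rng = random.Random(42)
q = [0.0] * len(OPS)
# Simulated search loop: pretend the Levy-flight move improves the
# incumbent solution 70% of the time and the other moves 20%.
improve_prob = {"sine": 0.2, "cosine": 0.2, "levy_flight": 0.7,
                "crossover": 0.2}
for _ in range(5000):
    op = select_op(q, eps=0.1, rng=rng)
    improved = rng.random() < improve_prob[OPS[op]]
    q_update(q, op, reward=1.0 if improved else -1.0)  # reward/penalty
best_op = OPS[max(range(len(OPS)), key=q.__getitem__)]
print(best_op)
```

The Q-values drift toward the operator that most often improves the incumbent solution, so the switch between operations is learned at runtime rather than fixed in advance.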

  13. A hybrid Q-learning sine-cosine-based strategy for addressing the combinatorial test suite minimization problem.

    PubMed

    Zamli, Kamal Z; Din, Fakhrud; Ahmed, Bestoun S; Bures, Miroslav

    2018-01-01

    The sine-cosine algorithm (SCA) is a new population-based meta-heuristic algorithm. In addition to exploiting sine and cosine functions to perform local and global searches (hence the name sine-cosine), the SCA introduces several random and adaptive parameters to facilitate the search process. Although it shows promising results, the search process of the SCA is vulnerable to local minima/maxima due to the adoption of a fixed switch probability and the bounded magnitude of the sine and cosine functions (from -1 to 1). In this paper, we propose a new hybrid Q-learning sine-cosine-based strategy, called the Q-learning sine-cosine algorithm (QLSCA). Within the QLSCA, we eliminate the switching probability. Instead, we rely on the Q-learning algorithm (based on the penalty and reward mechanism) to dynamically identify the best operation during runtime. Additionally, we integrate two new operations (Lévy flight motion and crossover) into the QLSCA to facilitate jumping out of local minima/maxima and enhance the solution diversity. To assess its performance, we adopt the QLSCA for the combinatorial test suite minimization problem. Experimental results reveal that the QLSCA is statistically superior with regard to test suite size reduction compared to recent state-of-the-art strategies, including the original SCA, the particle swarm test generator (PSTG), adaptive particle swarm optimization (APSO) and the cuckoo search strategy (CS) at the 95% confidence level. However, concerning the comparison with discrete particle swarm optimization (DPSO), there is no significant difference in performance at the 95% confidence level. On a positive note, the QLSCA statistically outperforms the DPSO in certain configurations at the 90% confidence level.

  14. Cost-effectiveness of cervical cancer screening in women living with HIV in South Africa: A mathematical modeling study.

    PubMed

    Campos, Nicole G; Lince-Deroche, Naomi; Chibwesha, Carla J; Firnhaber, Cynthia; Smith, Jennifer S; Michelow, Pam; Meyer-Rath, Gesine; Jamieson, Lise; Jordaan, Suzette; Sharma, Monisha; Regan, Catherine; Sy, Stephen; Liu, Gui; Tsu, Vivien; Jeronimo, Jose; Kim, Jane J

    2018-06-15

    Women with HIV face an increased risk of human papillomavirus (HPV) acquisition and persistence, cervical intraepithelial neoplasia, and invasive cervical cancer. Our objective was to determine the cost-effectiveness of different cervical cancer screening strategies among women with HIV in South Africa. We modified a mathematical model of HPV infection and cervical disease to reflect co-infection with HIV. The model was calibrated to epidemiologic data from HIV-infected women in South Africa. Clinical and economic data were drawn from in-country data sources. The model was used to project reductions in the lifetime risk of cervical cancer and incremental cost-effectiveness ratios (ICERs) of Pap and HPV DNA screening and management algorithms beginning at HIV diagnosis, at one-, two-, or three-year intervals. Strategies with an ICER below South Africa's 2016 per capita GDP (US$5,270) were considered 'cost-effective.' HPV testing followed by treatment (test-and-treat) at two-year intervals was the most effective strategy that was also cost-effective, reducing lifetime cancer risk by 56.6% with an ICER of US$3,010 per year of life saved (YLS). Other cost-effective strategies included Pap (referral threshold: HSIL+) at one-, two-, and three-year intervals, and HPV test-and-treat at three-year intervals. Pap (ASCUS+), HPV testing with 16/18 genotyping, and HPV testing with Pap or visual triage of HPV-positive women were less effective and more costly than the alternatives. Considering per capita GDP as the benchmark for cost-effectiveness, HPV test-and-treat is optimal in South Africa. At lower cost-effectiveness benchmarks, Pap (HSIL+) would be optimal. This is an open access article distributed under the terms of the Creative Commons Attribution License 4.0 (CC BY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
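The ICER ranking used to compare screening strategies is simple to state in code; the costs and effects below are invented placeholders, not the study's results, with only the GDP threshold taken from the abstract:

```python
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost per extra unit of effect (e.g. per YLS)."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# (cost per woman, years of life saved per woman) -- illustrative only
no_screening = (0.0, 0.0)
hpv_3yr = (90.0, 0.030)
hpv_2yr = (120.0, 0.040)

print(icer(*hpv_3yr, *no_screening))  # cost per YLS vs no screening
print(icer(*hpv_2yr, *hpv_3yr))       # incremental: 2-yr vs next-best 3-yr
threshold = 5270.0                     # 2016 per capita GDP benchmark
print(icer(*hpv_2yr, *hpv_3yr) <= threshold)
```

Note that each strategy is properly compared against the next-best non-dominated alternative, not only against doing nothing; a strategy whose incremental ratio exceeds the chosen benchmark is dropped.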

  15. Optimizing diffusion of an online computer tailored lifestyle program: a study protocol.

    PubMed

    Schneider, Francine; van Osch, Liesbeth A D M; Kremers, Stef P J; Schulz, Daniela N; van Adrichem, Mathieu J G; de Vries, Hein

    2011-06-20

    Although the Internet is a promising medium for offering lifestyle interventions to large numbers of people at relatively low cost and effort, actual exposure rates of these interventions fail to meet the high expectations. Since the public health impact of an intervention is determined by its efficacy and the level of exposure to it, it is imperative to invest effort in optimal dissemination. The present project attempts to optimize the dissemination process of a new online computer-tailored generic lifestyle program by carefully studying the adoption process and developing a strategy to achieve sustained use of the program. A prospective study will be conducted to yield relevant information concerning the adoption process by studying the level of adoption of the program, the determinants involved in adoption, and the characteristics of adopters and non-adopters as well as satisfied and unsatisfied users. Furthermore, a randomized controlled trial will be conducted to test the effectiveness of a proactive strategy using periodic e-mail prompts in optimizing sustained use of the new program. Closely mapping the adoption process will provide insight into the characteristics of adopters and non-adopters and of satisfied and unsatisfied users. This insight can be used to further optimize the program by making it more suitable for a wider range of users, or to develop adjusted interventions to attract subgroups of users that are not reached by or satisfied with the initial intervention. Furthermore, by comparing a proactive strategy using periodic prompts with a reactive strategy for stimulating sustained use of the intervention and, possibly, behaviour change, specific recommendations on the use and application of prompts in online lifestyle interventions can be developed. Dutch Trial Register NTR1786 and Medical Ethics Committee of Maastricht University and the University Hospital Maastricht (NL2723506809/MEC0903016).

  16. Multidimensional optimal droop control for wind resources in DC microgrids

    NASA Astrophysics Data System (ADS)

    Bunker, Kaitlyn J.

    Two important and upcoming technologies, microgrids and electricity generation from wind resources, are increasingly being combined. Various control strategies can be implemented, and droop control provides a simple option that does not require communication between microgrid components. Eliminating the communication system removes a potential single point of failure, which is especially important in the remote, islanded microgrids considered in this work. However, traditional droop control does not allow the microgrid to utilize much of the power available from the wind. This dissertation presents a novel droop control strategy, which implements a droop surface in a higher dimension than the traditional strategy. The droop control relationship then depends on two variables: the dc microgrid bus voltage and the current wind speed. An approach for optimizing this droop control surface to meet a given objective, for example utilizing all of the power available from a wind resource, is proposed and demonstrated. Various cases are used to test the proposed optimal high-dimension droop control method and demonstrate its function. First, the use of linear multidimensional droop control without optimization is demonstrated through simulation. Next, an optimal high-dimension droop control surface is implemented with a simple dc microgrid containing two sources and one load. Various cases of changing load and wind speed are investigated using simulation and hardware-in-the-loop techniques. Optimal multidimensional droop control is then demonstrated with a wind resource in a full dc microgrid example containing an energy storage device as well as multiple sources and loads. Finally, the optimal high-dimension droop control method is applied with a solar resource, using a load model developed for a military patrol base application. The operation of the proposed control is again investigated using simulation and hardware-in-the-loop techniques.

  17. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making.

    PubMed

    Aitchison, Laurence; Bang, Dan; Bahrami, Bahador; Latham, Peter E

    2015-10-01

    Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people's confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.

  18. Study of the Polarization Strategy for Electron Cyclotron Heating Systems on HL-2M

    NASA Astrophysics Data System (ADS)

    Zhang, F.; Huang, M.; Xia, D. H.; Song, S. D.; Wang, J. Q.; Huang, B.; Wang, H.

    2016-06-01

    As important components integrated into the transmission lines of electron cyclotron heating systems, polarizers are mainly used to obtain the desired polarization for highly efficient coupling between electron cyclotron waves and plasma. The polarization strategy for the 105-GHz electron cyclotron heating systems of the HL-2M tokamak is studied in this paper. Considering that the polarizers need high efficiency, stability, and low loss to realize arbitrary polarization states, two sinusoidal-grooved polarizers, a linear polarizer and an elliptical polarizer, are designed with the coordinate transformation method. The parameters of the two sinusoidal-grooved polarizers, the groove period p and depth d, are optimized by a phase-difference analysis method to achieve almost arbitrary polarization. Finally, the optimized polarizers are manufactured and their polarization characteristics are tested on a low-power test platform. The experimental results agree well with the numerical calculations, indicating that the designed polarizers can meet the polarization requirements of the electron cyclotron heating systems of the HL-2M tokamak.

  19. Competition among cooperators: Altruism and reciprocity

    PubMed Central

    Danielson, Peter

    2002-01-01

    Levine argues that neither self-interest nor altruism explains experimental results in bargaining and public goods games. Subjects' preferences appear also to be sensitive to their opponents' perceived altruism. Sethi and Somanathan provide a general account of reciprocal preferences that survive under evolutionary pressure. Although a wide variety of reciprocal strategies pass this evolutionary test, Sethi and Somanathan conjecture that fewer are likely to survive when reciprocal strategies compete with each other. This paper develops evolutionary agent-based models to test their conjecture in cases where reciprocal preferences can differ in a variety of games. We confirm that reciprocity is necessary but not sufficient for optimal cooperation. We explore the theme of competition among reciprocal cooperators and display three interesting emergent organizations: racing to the “moral high ground,” unstable cycles of preference change, and, when we implement reciprocal mechanisms, hierarchies resulting from exploiting fellow cooperators. If reciprocity is a basic mechanism facilitating cooperation, we can expect interaction that evolves around it to be complex, non-optimal, and resistant to change. PMID:12011403

  20. An analytical study of composite laminate lay-up using search algorithms for maximization of flexural stiffness and minimization of springback angle

    NASA Astrophysics Data System (ADS)

    Singh, Ranjan Kumar; Rinawa, Moti Lal

    2018-04-01

    The residual stresses arising in fiber-reinforced laminates during curing in closed molds lead to dimensional changes in the composites after their removal from the molds and cooling. One of these dimensional changes of angle sections is called springback. Parameters such as lay-up, stacking sequence, material system, cure temperature, and thickness play an important role in it. In the present work, we attempt to optimize the lay-up and stacking sequence for maximization of flexural stiffness and minimization of springback angle. Search algorithms are employed to obtain the best sequence through a repair strategy such as swap. A new search algorithm, termed the lay-up search algorithm (LSA), is also proposed as an extension of the permutation search algorithm (PSA). The efficacy of the PSA and LSA is tested on laminates with a range of lay-ups, and a computer code implementing the above schemes is developed in MATLAB. Strategies for multi-objective optimization using search algorithms are also suggested and tested.
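A hedged sketch of the swap-based permutation search described above: repeatedly try pairwise swaps of plies and keep a swap that improves the objective. The "stiffness" objective here is a toy surrogate (0-degree plies far from the midplane contribute most to bending stiffness, with a z^3-style weight), not a real laminate analysis:

```python
import math

def bending_score(layup):
    """Toy flexural-stiffness surrogate: a ply counts more the farther
    it sits from the laminate midplane (weight ~ |z1^3 - z0^3|/3)."""
    n = len(layup)
    score = 0.0
    for k, angle in enumerate(layup):
        z0, z1 = k - n / 2, k + 1 - n / 2       # ply through-thickness span
        weight = abs(z1 ** 3 - z0 ** 3) / 3
        score += weight * math.cos(math.radians(angle)) ** 4
    return score

def swap_search(layup):
    """Greedy pairwise-swap improvement until no swap helps."""
    layup = list(layup)
    best = bending_score(layup)
    improved = True
    while improved:
        improved = False
        for i in range(len(layup)):
            for j in range(i + 1, len(layup)):
                layup[i], layup[j] = layup[j], layup[i]      # try swap
                s = bending_score(layup)
                if s > best + 1e-12:
                    best, improved = s, True                  # keep it
                else:
                    layup[i], layup[j] = layup[j], layup[i]  # undo
    return layup, best

start = [90, 45, 0, 0, 0, 0, 45, 90]
optimized, score = swap_search(start)
print(optimized, score)
```

Because this surrogate is a sum of per-position weight times per-ply value, pairwise swaps are guaranteed to reach the global optimum (rearrangement inequality): the search moves the stiff 0° plies to the outermost positions.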

  1. A Revised Simplex Method for Test Construction Problems. Research Report 90-5.

    ERIC Educational Resources Information Center

    Adema, Jos J.

    Linear programming models with 0-1 variables are useful for the construction of tests from an item bank. Most solution strategies for these models start with solving the relaxed 0-1 linear programming model, allowing the 0-1 variables to take on values between 0 and 1. Then, a 0-1 solution is found by just rounding, optimal rounding, or a…

  2. Optimizing low impact development (LID) for stormwater runoff treatment in urban area, Korea: Experimental and modeling approach.

    PubMed

    Baek, Sang-Soo; Choi, Dong-Ho; Jung, Jae-Woon; Lee, Hyung-Jin; Lee, Hyuk; Yoon, Kwang-Sik; Cho, Kyung Hwa

    2015-12-01

    Continued urbanization and development result in an increase in impervious area and in surface runoff carrying pollutants. One of the greatest issues in pollutant emission is the first flush effect (FFE), i.e. a greater discharge rate of pollutant mass in the early part of a storm. Low impact development (LID) practices have been proposed as a promising strategy to control urban stormwater runoff and pollution in the urban ecosystem. However, substantial experimental and modeling effort is required to test LID characteristics and propose an adequate guideline for optimizing LID management. In this study, we propose a novel methodology to optimize the sizes of different types of LID, based on intensive stormwater monitoring and numerical modeling at a commercial site in Korea. The proposed methodology optimizes LID size in an attempt to moderate the FFE on a receiving waterbody; the main objective of the optimization is therefore to minimize the mass first flush (MFF), an indicator for quantifying the FFE. The optimal sizes of six different LIDs ranged from 1.2 mm to 3.0 mm in terms of runoff depth, which significantly moderates the FFE. We hope that the newly proposed methodology can be instructive for establishing LID strategies to mitigate the FFE. Copyright © 2015 Elsevier Ltd. All rights reserved.
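The mass first flush indicator used as the optimization target is commonly defined as the fraction of pollutant mass delivered within the first fraction of runoff volume, normalized by that volume fraction. A sketch with invented hydrograph/pollutograph samples (the sampling data and the 30% cutoff are illustrative):

```python
def mff(volumes, masses, volume_fraction=0.3):
    """MFF_n = (cumulative mass fraction at n% of volume) / n.
    Values above 1 indicate a first flush."""
    total_v, total_m = sum(volumes), sum(masses)
    target_v = volume_fraction * total_v
    cum_v = cum_m = 0.0
    for v, m in zip(volumes, masses):
        if cum_v + v >= target_v:
            # interpolate within the sampling interval that crosses n%
            frac = (target_v - cum_v) / v
            cum_m += frac * m
            break
        cum_v += v
        cum_m += m
    return (cum_m / total_m) / volume_fraction

# strong first flush: most pollutant mass arrives in the early samples
volumes = [10, 10, 10, 10, 10]
masses = [40, 25, 15, 12, 8]
print(mff(volumes, masses))
```

A uniformly mixed storm gives MFF = 1; the skewed pollutograph above gives 1.75 at the 30% volume cutoff, and sizing LIDs to intercept the early runoff pushes this value back toward 1.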

  3. Pathway-based predictive approaches for non-animal assessment of acute inhalation toxicity.

    PubMed

    Clippinger, Amy J; Allen, David; Behrsing, Holger; BéruBé, Kelly A; Bolger, Michael B; Casey, Warren; DeLorme, Michael; Gaça, Marianna; Gehen, Sean C; Glover, Kyle; Hayden, Patrick; Hinderliter, Paul; Hotchkiss, Jon A; Iskandar, Anita; Keyser, Brian; Luettich, Karsta; Ma-Hock, Lan; Maione, Anna G; Makena, Patrudu; Melbourne, Jodie; Milchak, Lawrence; Ng, Sheung P; Paini, Alicia; Page, Kathryn; Patlewicz, Grace; Prieto, Pilar; Raabe, Hans; Reinke, Emily N; Roper, Clive; Rose, Jane; Sharma, Monita; Spoo, Wayne; Thorne, Peter S; Wilson, Daniel M; Jarabek, Annie M

    2018-06-20

    New approaches are needed to assess the effects of inhaled substances on human health. These approaches will be based on mechanisms of toxicity, an understanding of dosimetry, and the use of in silico modeling and in vitro test methods. In order to accelerate wider implementation of such approaches, development of adverse outcome pathways (AOPs) can help identify and address gaps in our understanding of relevant parameters for model input and mechanisms, and optimize non-animal approaches that can be used to investigate key events of toxicity. This paper describes the AOPs and the toolbox of in vitro and in silico models that can be used to assess the key events leading to toxicity following inhalation exposure. Because the optimal testing strategy will vary depending on the substance of interest, here we present a decision tree approach to identify an appropriate non-animal integrated testing strategy that incorporates consideration of a substance's physicochemical properties, relevant mechanisms of toxicity, and available in silico models and in vitro test methods. This decision tree can facilitate standardization of the testing approaches. Case study examples are presented to provide a basis for proof-of-concept testing to illustrate the utility of non-animal approaches to inform hazard identification and risk assessment of humans exposed to inhaled substances. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.

  4. Multidisciplinary Design Optimization for Aeropropulsion Engines and Solid Modeling/Animation via the Integrated Force Method

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The grant closure report is organized as follows. The first chapter describes the two research areas, design optimization and solid mechanics. Ten journal publications are listed in the second chapter. Five highlights are the subject matter of chapter three. CHAPTER 1. The Design Optimization Test Bed CometBoards. CHAPTER 2. Solid Mechanics: Integrated Force Method of Analysis. CHAPTER 3. Five Highlights: Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft; Neural Network and Regression Soft Model Extended for PX-300 Aircraft Engine; Engine with Regression and Neural Network Approximators Designed; Cascade Optimization Strategy with Neural Network and Regression Approximations Demonstrated on a Preliminary Aircraft Engine Design; Neural Network and Regression Approximations Used in Aircraft Design.

  5. Performance of Nonlinear Finite-Difference Poisson-Boltzmann Solvers

    PubMed Central

    Cai, Qin; Hsieh, Meng-Juei; Wang, Jun; Luo, Ray

    2014-01-01

    We implemented and optimized seven finite-difference solvers for the full nonlinear Poisson-Boltzmann equation in biomolecular applications, including four relaxation methods, one conjugate gradient method, and two inexact Newton methods. The performance of the seven solvers was extensively evaluated with a large number of nucleic acids and proteins. Worth noting is the inexact Newton method in our analysis. We investigated the role of linear solvers in its performance by incorporating the incomplete Cholesky conjugate gradient and the geometric multigrid into its inner linear loop. We tailored and optimized both linear solvers for faster convergence rate. In addition, we explored strategies to optimize the successive over-relaxation method to reduce its convergence failures without too much sacrifice in its convergence rate. Specifically we attempted to adaptively change the relaxation parameter and to utilize the damping strategy from the inexact Newton method to improve the successive over-relaxation method. Our analysis shows that the nonlinear methods accompanied with a functional-assisted strategy, such as the conjugate gradient method and the inexact Newton method, can guarantee convergence in the tested molecules. Especially the inexact Newton method exhibits impressive performance when it is combined with highly efficient linear solvers that are tailored for its special requirement. PMID:24723843
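The damped Newton idea the authors borrow from the inexact Newton method can be illustrated on a one-dimensional nonlinear Poisson-Boltzmann model problem. The sketch below is a minimal illustration, not the paper's solver: it discretizes u'' = κ²·sinh(u) with finite differences and applies Newton iterations whose step is halved until the residual norm decreases. All parameter values (grid size, κ, boundary potentials) are arbitrary choices for the demo.

```python
import numpy as np

def solve_nonlinear_pb(n=99, kappa=1.0, u_left=2.0, u_right=0.0,
                       tol=1e-8, max_iter=50):
    """Damped Newton for the 1-D model problem u'' = kappa^2 * sinh(u)."""
    h = 1.0 / (n + 1)
    u = np.linspace(u_left, u_right, n + 2)[1:-1]  # linear initial guess

    def residual(u):
        # Second difference with Dirichlet boundary values folded in.
        full = np.concatenate(([u_left], u, [u_right]))
        lap = (full[:-2] - 2.0 * full[1:-1] + full[2:]) / h**2
        return lap - kappa**2 * np.sinh(u)

    for _ in range(max_iter):
        f = residual(u)
        norm_f = np.linalg.norm(f)
        if norm_f < tol:
            break
        # Tridiagonal Jacobian of the residual (dense here for brevity).
        J = (np.diag(np.full(n, -2.0 / h**2 - kappa**2 * np.cosh(u)))
             + np.diag(np.full(n - 1, 1.0 / h**2), 1)
             + np.diag(np.full(n - 1, 1.0 / h**2), -1))
        delta = np.linalg.solve(J, -f)
        # Damping: halve the step until the residual actually shrinks.
        lam = 1.0
        while (np.linalg.norm(residual(u + lam * delta)) >= norm_f
               and lam > 1e-6):
            lam *= 0.5
        u = u + lam * delta
    return u, np.linalg.norm(residual(u))

u, res = solve_nonlinear_pb()
print(res)  # residual norm after convergence
```

The damping safeguard is what lets the iteration survive the steep sinh nonlinearity far from the solution; near the solution the full step is always accepted and quadratic convergence takes over.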

  6. ReacKnock: Identifying Reaction Deletion Strategies for Microbial Strain Optimization Based on Genome-Scale Metabolic Network

    PubMed Central

    Xu, Zixiang; Zheng, Ping; Sun, Jibin; Ma, Yanhe

    2013-01-01

    Gene knockout has been used as a common strategy to improve microbial strains for producing chemicals. Several algorithms are available to predict the target reactions to be deleted. Most of them apply mixed integer bi-level linear programming (MIBLP) based on metabolic networks, and use duality theory to transform the bi-level optimization problem of a large-scale MIBLP into a single-level program. However, the validity of this transformation was not proved. The solution of an MIBLP depends on the structure of the inner problem. If the inner problem is continuous, the Karush-Kuhn-Tucker (KKT) method can be used to reformulate the MIBLP as a single-level problem. We adopt the KKT technique in our algorithm ReacKnock to attack the intractable solution of the MIBLP, demonstrated with the genome-scale metabolic network model of E. coli for producing various chemicals such as succinate, ethanol, and threonine. Compared to previous methods, our algorithm is fast, stable, and reliable in finding the optimal solutions for all the chemical products tested, and able to provide all the alternative deletion strategies that lead to the same industrial objective. PMID:24348984
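ReacKnock itself collapses the bi-level MIBLP into a single level via KKT conditions; for intuition, the underlying "maximize growth in the inner problem, then read off product yield" structure can be sketched on a toy three-flux network by brute-force enumeration of knockouts, with the inner LP solved by vertex enumeration. Everything below (the network, bounds, and flux names) is a made-up illustration, not the genome-scale E. coli model from the paper, and enumeration is a stand-in for the paper's KKT reformulation.

```python
import itertools
import numpy as np

# Toy network: growth v_g produces redox that must be re-oxidized either by
# respiration v_r or by a product-forming pathway v_p (flux balance row),
# subject to a shared carbon budget v_g + v_p <= 10.
FLUXES = ["v_g", "v_p", "v_r"]
EQ = np.array([[1.0, -1.0, -1.0]])            # v_g - v_p - v_r = 0
INEQ_A = np.array([[1.0, 1.0, 0.0],           # carbon: v_g + v_p <= 10
                   [-1.0, 0.0, 0.0],          # v_g >= 0
                   [0.0, -1.0, 0.0],          # v_p >= 0
                   [0.0, 0.0, -1.0]])         # v_r >= 0
INEQ_B = np.array([10.0, 0.0, 0.0, 0.0])

def inner_fba(knockout=None):
    """Maximize growth by enumerating vertices of the toy flux polytope.

    Among growth-optimal vertices, report the one with maximal product,
    mimicking the outer objective's tie-breaking."""
    A, b = INEQ_A, INEQ_B
    if knockout is not None:
        k = FLUXES.index(knockout)
        row = np.zeros(3)
        row[k] = 1.0                          # add v_k <= 0
        A, b = np.vstack([A, row]), np.append(b, 0.0)
    best = None
    for i, j in itertools.combinations(range(len(A)), 2):
        M = np.vstack([EQ, A[i], A[j]])
        if abs(np.linalg.det(M)) < 1e-9:
            continue
        v = np.linalg.solve(M, np.array([0.0, b[i], b[j]]))
        if np.all(A @ v <= b + 1e-9):         # feasible vertex
            cand = (v[0], v[1])               # (growth, product)
            if best is None or cand > best:
                best = cand
    return best

# Outer loop: try each single knockout, keep the one with best product at
# the growth-optimal inner solution (requiring nonzero growth).
results = {ko: inner_fba(ko) for ko in [None] + FLUXES}
viable = {ko: gp for ko, gp in results.items() if gp[0] > 1e-6}
best_ko = max(viable, key=lambda ko: viable[ko][1])
print(best_ko, viable[best_ko])
```

In this toy, the wild type routes all redox through respiration and makes no product, while deleting v_r forces growth and product to be coupled, which is the qualitative effect knockout-design algorithms search for.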

  7. Adaptive bi-level programming for optimal gene knockouts for targeted overproduction under phenotypic constraints

    PubMed Central

    2013-01-01

    Background Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely used in modern metabolic engineering. The flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies has recently been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. Methods In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock, while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wild-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Results Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies. PMID:23368729
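The adaptive piecewise linearization at the heart of MOMAKnock can be illustrated on a single quadratic term: greedily insert breakpoints where the chord approximation of f(x) = x² is worst, until a tolerance is met. This is a schematic of the general idea only, with hypothetical interval and tolerance values, not the paper's algorithm for the full quadratic program.

```python
def adaptive_piecewise_linearize(f, a, b, tol):
    """Greedy breakpoint insertion: split the segment whose midpoint
    chord error is largest until every segment's error is below tol."""
    xs = [a, b]
    while True:
        xs.sort()
        # Chord-vs-function error at each segment midpoint.
        errs = []
        for lo, hi in zip(xs[:-1], xs[1:]):
            mid = 0.5 * (lo + hi)
            chord = 0.5 * (f(lo) + f(hi))
            errs.append((abs(chord - f(mid)), mid))
        worst, where = max(errs)
        if worst < tol:
            return xs, worst
        xs.append(where)

xs, err = adaptive_piecewise_linearize(lambda x: x * x, -2.0, 2.0, tol=0.05)
print(len(xs), err)
```

Each split quarters the chord error of a quadratic segment, so refinement concentrates breakpoints only where the curvature actually matters — the property that makes an adaptive scheme cheaper than a uniform linearization grid.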

  8. Adaptive bi-level programming for optimal gene knockouts for targeted overproduction under phenotypic constraints.

    PubMed

    Ren, Shaogang; Zeng, Bo; Qian, Xiaoning

    2013-01-01

    Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely used in modern metabolic engineering. The flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies has recently been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock, while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wild-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. 
The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies.

  9. Extensions of D-optimal Minimal Designs for Symmetric Mixture Models

    PubMed Central

    Raghavarao, Damaraju; Chervoneva, Inna

    2017-01-01

    The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space and to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
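The trade-off the authors navigate is easy to compute directly for Scheffé's linear model in three components: a minimally supported vertex design maximizes the per-observation information determinant, while adding the overall centroid buys interior prediction and lack-of-fit degrees of freedom at some cost in per-observation D-efficiency. The numbers below are a self-contained illustration, not taken from the paper.

```python
import numpy as np

def d_criterion(points):
    """Per-observation D-criterion det(X'X / n) for Scheffe's linear
    mixture model, whose model matrix is just the component proportions."""
    X = np.asarray(points, dtype=float)
    n = len(X)
    return np.linalg.det(X.T @ X / n)

vertices = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # minimally supported design
augmented = vertices + [(1/3, 1/3, 1/3)]       # plus the overall centroid

d_min = d_criterion(vertices)    # det(I/3) = 1/27
d_aug = d_criterion(augmented)   # det((I + J/9)/4) = (4/3)/64 = 1/48
print(d_min, d_aug)
```

The vertex design wins on the D-criterion alone; the point of the paper's extensions is that the small loss (1/27 versus 1/48 here) purchases the ability to test lack of fit and predict in the interior of the simplex.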

  10. Efficient receiver tuning using differential evolution strategies

    NASA Astrophysics Data System (ADS)

    Wheeler, Caleb H.; Toland, Trevor G.

    2016-08-01

    Differential evolution (DE) is a powerful and computationally inexpensive optimization strategy that can be used to search an entire parameter space or to converge quickly on a solution. The Kilopixel Array Pathfinder Project (KAPPa) is a heterodyne receiver system delivering 5 GHz of instantaneous bandwidth in the tuning range of 645-695 GHz. The fully automated KAPPa receiver test system finds optimal receiver tuning using performance feedback and DE. We present an adaptation of DE for use in rapid receiver characterization. The KAPPa DE algorithm is written in Python 2.7 and is fully integrated with the KAPPa instrument control, data processing, and visualization code. KAPPa develops the technologies needed to realize heterodyne focal plane arrays containing 1000 pixels. Finding optimal receiver tuning by investigating large parameter spaces is one of many challenges facing the characterization phase of KAPPa, and it is a difficult task to perform by hand. Characterizing or tuning in an automated fashion, without the need for human intervention, is desirable for future large-scale arrays. While many optimization strategies exist, DE is ideal under time and performance constraints because it can be set to converge to a solution rapidly with minimal computational overhead. We discuss how DE is utilized in the KAPPa system, evaluate its performance, and consider how the KAPPa DE system might be applied to future 1000-pixel array receivers.
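The core DE loop behind automated receiver tuning is compact; a generic DE/rand/1/bin sketch in Python (the paper's implementation language, though modern Python 3 here) is shown below, minimizing a stand-in objective. The objective, population size, and control parameters F and CR are illustrative placeholders; a real tuning loop would replace the objective with receiver performance feedback.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimizer over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: three distinct random members, none equal to i.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            # Binomial crossover with one guaranteed mutant coordinate.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial only if it is no worse.
            f_trial = objective(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

x_best, f_best = differential_evolution(lambda x: float(np.sum(x**2)),
                                        bounds=[(-5, 5)] * 5)
print(f_best)
```

Because each candidate evaluation is independent within a generation, the loop parallelizes naturally across pixels or bias points, which is what makes DE attractive for large-array characterization.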

  11. OPTIMIZATION OF INTEGRATED URBAN WET-WEATHER CONTROL STRATEGIES

    EPA Science Inventory

    An optimization method for urban wet weather control (WWC) strategies is presented. The developed optimization model can be used to determine the most cost-effective strategies for the combination of centralized storage-release systems and distributed on-site WWC alternatives. T...

  12. The optimal dynamic immunization under a controlled heterogeneous node-based SIRS model

    NASA Astrophysics Data System (ADS)

    Yang, Lu-Xing; Draief, Moez; Yang, Xiaofan

    2016-05-01

    Dynamic immunizations, under which the state of the propagation network of electronic viruses can be changed by adjusting the control measures, are regarded as an alternative to static immunizations. This paper addresses the optimal dynamical immunization under the widely accepted SIRS assumption. First, based on a controlled heterogeneous node-based SIRS model, an optimal control problem capturing the optimal dynamical immunization is formulated. Second, the existence of an optimal dynamical immunization scheme is shown, and the corresponding optimality system is derived. Next, some numerical examples are given to show that an optimal immunization strategy can be worked out by numerically solving the optimality system, from which it is found that the network topology has a complex impact on the optimal immunization strategy. Finally, the difference between a payoff and the minimum payoff is estimated in terms of the deviation of the corresponding immunization strategy from the optimal immunization strategy. The proposed optimal immunization scheme is justified, because it can achieve a low level of infections at a low cost.

  13. Comparison of design strategies for a three-arm clinical trial with time-to-event endpoint: Power, time-to-analysis, and operational aspects.

    PubMed

    Asikanius, Elina; Rufibach, Kaspar; Bahlo, Jasmin; Bieska, Gabriele; Burger, Hans Ulrich

    2016-11-01

    To optimize resources, randomized clinical trials with multiple arms can be an attractive option to simultaneously test various treatment regimens in pharmaceutical drug development. The motivation for this work was the successful conduct and positive final outcome of a three-arm randomized clinical trial primarily assessing whether obinutuzumab plus chlorambucil in patients with chronic lymphocytic lymphoma and coexisting conditions is superior to chlorambucil alone based on a time-to-event endpoint. The inference strategy of this trial was based on a closed testing procedure. We compare this strategy to three potential alternatives to run a three-arm clinical trial with a time-to-event endpoint. The primary goal is to quantify the differences between these strategies in terms of the time it takes until the first analysis and thus potential approval of a new drug, number of required events, and power. Operational aspects of implementing the various strategies are discussed. In conclusion, using a closed testing procedure results in the shortest time to the first analysis with a minimal loss in power. Therefore, closed testing procedures should be part of the statistician's standard clinical trials toolbox when planning multiarm clinical trials. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
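The closed testing principle favored here can be stated in a few lines: an elementary hypothesis is rejected only if every intersection hypothesis containing it is rejected by a valid local test. The sketch below uses Bonferroni local tests (which makes the closure equivalent to Holm's step-down procedure); the p-values are hypothetical, not trial data.

```python
from itertools import combinations

def closed_test(p_values, alpha=0.05):
    """Closed testing with Bonferroni local tests.

    H_i is rejected iff every intersection hypothesis H_S with i in S is
    rejected, where H_S is rejected if min_{j in S} p_j <= alpha/|S|."""
    m = len(p_values)

    def local_reject(subset):
        return min(p_values[i] for i in subset) <= alpha / len(subset)

    rejected = []
    for i in range(m):
        if all(local_reject(s)
               for k in range(1, m + 1)
               for s in combinations(range(m), k)
               if i in s):
            rejected.append(i)
    return rejected

# Two comparisons against control in a three-arm trial (hypothetical p-values).
print(closed_test([0.010, 0.040]))  # both elementary hypotheses rejected
print(closed_test([0.030, 0.040]))  # global intersection not rejected: neither
```

The closure guarantees strong familywise error control regardless of which local test is plugged in; more powerful local tests (e.g., ones exploiting the correlation of test statistics across arms) simply slot into `local_reject`.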

  14. Optimal sequential measurements for bipartite state discrimination

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.; Weir, Graeme

    2017-05-01

    State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.

  15. A Comparison Study of Item Exposure Control Strategies in MCAT

    ERIC Educational Resources Information Center

    Mao, Xiuzhen; Ozdemir, Burhanettin; Wang, Yating; Xiu, Tao

    2016-01-01

    Four item selection indexes with and without exposure control are evaluated and compared in multidimensional computerized adaptive testing (CAT). The four item selection indices are D-optimality, Posterior expectation Kullback-Leibler information (KLP), the minimized error variance of the linear combination score with equal weight (V1), and the…

  16. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    PubMed Central

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing technology of high-resolution image airborne sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and the Fuzzy Clustering. The DSA is an optimization approach, which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms simple classifiers used for the combination and some combined strategies, including a scheme based on the fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  17. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).

  18. Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen

    2016-03-31

    In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is ensuring economic and efficient utilization of DERs, while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time-scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.

  19. Adapted random sampling patterns for accelerated MRI.

    PubMed

    Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf

    2011-02-01

    Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions like polynomials of different order to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out and yields superior results to non-optimized model parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
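The core of the approach — turning the power spectrum of reference data into a sampling density — takes only a few lines. The sketch below uses a synthetic reference image rather than real MRI data, and the image size, noise level, and acceleration factor are arbitrary demo choices.

```python
import numpy as np

def adapted_sampling_mask(reference, n_samples, seed=0):
    """Draw a k-space sampling mask whose density follows the power
    spectrum of a reference image (no parametric density model)."""
    rng = np.random.default_rng(seed)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(reference))) ** 2
    prob = (spectrum / spectrum.sum()).ravel()
    picks = rng.choice(prob.size, size=n_samples, replace=False, p=prob)
    mask = np.zeros(prob.size, dtype=bool)
    mask[picks] = True
    return mask.reshape(reference.shape)

# Synthetic smooth reference (Gaussian blob plus noise): spectral energy
# concentrates around DC, so low frequencies are sampled densely.
rng0 = np.random.default_rng(1)
y, x = np.mgrid[-32:32, -32:32]
reference = (np.exp(-(x**2 + y**2) / (2 * 4.0**2))
             + 0.01 * rng0.standard_normal((64, 64)))
mask = adapted_sampling_mask(reference, n_samples=410)  # ~10% of 64x64
center = int(mask[24:40, 24:40].sum())  # 16x16 block around DC
corner = int(mask[:16, :16].sum())
print(mask.sum(), center, corner)
```

Because the density is read directly off the reference spectrum, there is no model-order parameter to tune — the variable-density behavior (dense center, sparse periphery) emerges from the data themselves.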

  20. Spatial optimization of operationally relevant large fire confine and point protection strategies: Model development and test cases

    Treesearch

    Yu Wei; Matthew P. Thompson; Jessica R. Haas; Gregory K. Dillon; Christopher D. O’Connor

    2018-01-01

    This study introduces a large fire containment strategy that builds upon recent advances in spatial fire planning, notably the concept of potential wildland fire operation delineations (PODs). Multiple PODs can be clustered together to form a “box” that is referred as the “response POD” (or rPOD). Fire lines would be built along the boundary of an rPOD to contain a...

  1. Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications

    NASA Astrophysics Data System (ADS)

    Zu, Yue

    Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on single-agent systems, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which leads to great improvement in system fault tolerance. Thus, a task within a multi-agent system can be completed even in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel. Hence, the computational complexity is greatly reduced by distributed algorithms in multi-agent systems. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which successfully overcomes the drawbacks of using multicast due to bandwidth limitations. Distributed algorithms have been applied to solve a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first application, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body. Estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs. The proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for highway network travel time minimization. Compared with the uncontrolled case and a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.

  2. Fast Optimization of LiMgMnOx/La2O3 Catalysts for the Oxidative Coupling of Methane.

    PubMed

    Li, Zhinian; He, Lei; Wang, Shenliang; Yi, Wuzhong; Zou, Shihui; Xiao, Liping; Fan, Jie

    2017-01-09

    The development of an efficient catalyst for the oxidative coupling of methane (OCM) reaction represents a grand challenge in the direct conversion of methane into other useful products. Here, we report that a newly developed combinatorial approach can be used for ultrafast optimization of La2O3-based multicomponent metal oxide catalysts in the OCM reaction. This new approach, which integrates inkjet-printing-assisted synthesis (IJP-A) with a multidimensional group testing strategy (m-GT), neatly takes the place of the conventional high-throughput synthesis-and-screen experiment. Within just a week, 2048 formulated LiMgMnOx-La2O3 catalysts in a 64·8·8·8·8 = 262,144 compositional space were fabricated by IJP-A in a four-round synthesis-and-screen process, and an optimized formulation was successfully identified through only 4·8 = 32 tests via the m-GT screening strategy. The screening process identifies the most promising ternary composition region as Li0-0.48Mg0-6.54Mn0-0.62-La100Ox, with an external C2 yield of 10.87% at 700 °C. This C2 yield is twice that of pure nano-La2O3. The good performance of the optimized catalyst formulation was validated by manual preparation, which further proves the effectiveness of the new combinatorial methodology for fast discovery of heterogeneous catalysts.
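Although the paper's m-GT screen works over multiple composition dimensions at once, the logic of group testing is easiest to see in one dimension: a single "hit" among N formulations can be located with about log2(N) pooled tests instead of N individual ones. The sketch below is a generic binary-splitting demo with a made-up oracle, not the paper's protocol.

```python
def find_positive(n_items, pooled_test):
    """Locate the single positive item by halving the candidate pool.

    `pooled_test(items)` returns True iff the pool contains the positive."""
    candidates = list(range(n_items))
    tests_used = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        tests_used += 1
        candidates = half if pooled_test(half) else candidates[len(half):]
    return candidates[0], tests_used

# Made-up oracle: formulation 42 of 64 is the active one.
ACTIVE = 42
item, tests = find_positive(64, lambda pool: ACTIVE in pool)
print(item, tests)  # 42 located in 6 pooled tests instead of 64 individual ones
```

The same logarithmic saving, applied round by round along each composition axis, is what lets a 262,144-point space be narrowed down with 32 catalytic tests.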

  3. Learning stochastic reward distributions in a speeded pointing task.

    PubMed

    Seydell, Anna; McCann, Brian C; Trommershäuser, Julia; Knill, David C

    2008-04-23

    Recent studies have shown that humans effectively take into account task variance caused by intrinsic motor noise when planning fast hand movements. However, previous evidence suggests that humans have greater difficulty accounting for arbitrary forms of stochasticity in their environment, both in economic decision making and sensorimotor tasks. We hypothesized that humans can learn to optimize movement strategies when environmental randomness can be experienced and thus implicitly learned over several trials, especially if it mimics the kinds of randomness for which subjects might have generative models. We tested the hypothesis using a task in which subjects had to rapidly point at a target region partly covered by three stochastic penalty regions introduced as "defenders." At movement completion, each defender jumped to a new position drawn randomly from fixed probability distributions. Subjects earned points when they hit the target, unblocked by a defender, and lost points otherwise. Results indicate that after approximately 600 trials, subjects approached optimal behavior. We further tested whether subjects simply learned a set of stimulus-contingent motor plans or the statistics of defenders' movements by training subjects with one penalty distribution and then testing them on a new penalty distribution. Subjects immediately changed their strategy to achieve the same average reward as subjects who had trained with the second penalty distribution. These results indicate that subjects learned the parameters of the defenders' jump distributions and used this knowledge to optimally plan their hand movements under conditions involving stochastic rewards and penalties.
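The optimal-movement-planning account being tested can be reduced to a one-dimensional toy computation: given Gaussian endpoint noise, choose the aim point that maximizes expected gain over a reward region abutting a penalty region. All numbers below (region edges, noise σ, payoffs) are illustrative, not the experiment's geometry.

```python
import math

def expected_gain(aim, sigma=0.2,
                  target=(0.0, 1.0), target_reward=1.0,
                  penalty=(-0.5, 0.0), penalty_cost=-5.0):
    """Expected score of aiming at `aim` under Gaussian endpoint noise."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    p_hit = Phi((target[1] - aim) / sigma) - Phi((target[0] - aim) / sigma)
    p_pen = Phi((penalty[1] - aim) / sigma) - Phi((penalty[0] - aim) / sigma)
    return target_reward * p_hit + penalty_cost * p_pen

# Grid search for the optimal aim point.
aims = [i / 1000.0 for i in range(-500, 1501)]
best_aim = max(aims, key=expected_gain)
print(best_aim)  # shifted right of the target center, away from the penalty
```

The optimal aim lands to the right of the target's center: the asymmetric penalty makes it worth trading some hit probability for a wider margin from the penalty region, which is the signature of reward-optimal planning that the experiments look for in subjects' endpoints.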

  4. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.

  5. Multi-step optimization strategy for fuel-optimal orbital transfer of low-thrust spacecraft

    NASA Astrophysics Data System (ADS)

    Rasotto, M.; Armellin, R.; Di Lizia, P.

    2016-03-01

    An effective method for the design of fuel-optimal transfers in two- and three-body dynamics is presented. The optimal control problem is formulated using calculus of variation and primer vector theory. This leads to a multi-point boundary value problem (MPBVP), characterized by complex inner constraints and a discontinuous thrust profile. The first issue is addressed by embedding the MPBVP in a parametric optimization problem, thus allowing a simplification of the set of transversality constraints. The second problem is solved by representing the discontinuous control function by a smooth function depending on a continuation parameter. The resulting trajectory optimization method can deal with different intermediate conditions, and no a priori knowledge of the control structure is required. Test cases in both the two- and three-body dynamics show the capability of the method in solving complex trajectory design problems.

  6. Cost-Effectiveness of One-Time Hepatitis C Screening Strategies Among Adolescents and Young Adults in Primary Care Settings.

    PubMed

    Assoumou, Sabrina A; Tasillo, Abriana; Leff, Jared A; Schackman, Bruce R; Drainoni, Mari-Lynn; Horsburgh, C Robert; Barry, M Anita; Regis, Craig; Kim, Arthur Y; Marshall, Alison; Saxena, Sheel; Smith, Peter C; Linas, Benjamin P

    2018-01-18

    High hepatitis C virus (HCV) rates have been reported in young people who inject drugs (PWID). We evaluated the clinical benefit and cost-effectiveness of testing among youth seen in communities with a high overall number of reported HCV cases. We developed a decision analytic model to project quality-adjusted life years (QALYs), costs (2016 US$), and incremental cost-effectiveness ratios (ICERs) of 9 strategies for 1-time testing among 15- to 30-year-olds seen at urban community health centers. Strategies differed in 3 ways: targeted vs routine testing, rapid finger stick vs standard venipuncture, and ordered by physician vs by counselor/tester using standing orders. We performed deterministic and probabilistic sensitivity analyses (PSA) to evaluate uncertainty. Compared to targeted risk-based testing (current standard of care), routine testing increased the lifetime medical cost by $80 and discounted QALYs by 0.0013 per person. Across all strategies, rapid testing provided higher QALYs at a lower cost per QALY gained and was always preferred. Counselor-initiated routine rapid testing was associated with an ICER of $71,000/QALY gained. Results were sensitive to offer and result receipt rates. Counselor-initiated routine rapid testing was cost-effective (ICER <$100,000/QALY) unless the prevalence of PWID was <0.59%, HCV prevalence among PWID was <16%, reinfection rate was >26 cases per 100 person-years, or reflex confirmatory testing followed all reactive venipuncture diagnostics. In PSA, routine rapid testing was the optimal strategy in 90% of simulations. Routine rapid HCV testing among 15- to 30-year-olds may be cost-effective when the prevalence of PWID is >0.59%. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.
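The ICER arithmetic underlying comparisons like these is a one-line quotient; the sketch below plugs in the abstract's per-person increments for routine versus targeted testing ($80 cost, 0.0013 QALY) and checks the result against a willingness-to-pay threshold. The abstract's $71,000/QALY figure belongs to a specific strategy within the full model, so the simple quotient here is illustrative only.

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio in $ per QALY gained."""
    return delta_cost / delta_qaly

# Per-person increments of routine over targeted testing, from the abstract.
ratio = icer(delta_cost=80.0, delta_qaly=0.0013)
print(round(ratio))      # ~61538 $/QALY
print(ratio < 100_000)   # below a common willingness-to-pay threshold
```

Sensitivity analyses in such studies amount to recomputing this quotient while varying the inputs (prevalence, reinfection rate, offer rates) that drive the cost and QALY increments.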

  7. Mutation testing for directing upfront targeted therapy and post-progression combination therapy strategies in lung adenocarcinoma

    PubMed Central

    Salgia, Ravi

    2016-01-01

    ABSTRACT Introduction: Advances in the biology of non-small-cell lung cancer, especially adenocarcinoma, reveal multiple molecular subtypes driving oncogenesis. Accordingly, individualized targeted therapeutics are based on mutational diagnostics. Areas covered: Advances in strategies and techniques for individualized treatment, particularly of adenocarcinoma, are described through literature review. Approved therapies are established for some molecular subsets, with new driver mutations emerging that represent increasing proportions of patients. Actionable mutations are de novo oncogenic drivers or acquired resistance mediators, and mutational profiling is important for directing therapy. Patients should be monitored for emerging actionable resistance mutations. Liquid biopsy and associated multiplex diagnostics will be important means to monitor patients during treatment. Expert commentary: Outcomes with targeted agents may be improved by integrating mutation screens during treatment to optimize subsequent therapy. In order for this to be translated into impactful patient benefit, appropriate platforms and strategies need to be optimized and then implemented universally. PMID:27139190

  8. Efficient Raman sideband cooling of trapped ions to their motional ground state

    NASA Astrophysics Data System (ADS)

    Che, H.; Deng, K.; Xu, Z. T.; Yuan, W. H.; Zhang, J.; Lu, Z. H.

    2017-07-01

    Efficient cooling of trapped ions is a prerequisite for various applications of the ions in precision spectroscopy, quantum information, and coherence control. Raman sideband cooling is an effective method to cool the ions to their motional ground state. We investigate both numerically and experimentally the optimization of Raman sideband cooling strategies and propose an efficient one, which can simplify the experimental setup as well as reduce the number of cooling pulses. Several cooling schemes are tested and compared through numerical simulations. The simulation result shows that the fixed-width pulses and varied-width pulses have almost the same efficiency for both the first-order and the second-order Raman sideband cooling. The optimized strategy is verified experimentally. A single 25Mg+ ion is trapped in a linear Paul trap and Raman sideband cooled, and the achieved average vibrational quantum numbers under different cooling strategies are evaluated. A good agreement between the experimental result and the simulation result is obtained.

  9. Control strategy of maximum vertical jumps: The preferred countermovement depth may not be fully optimized for jump height.

    PubMed

    Mandic, Radivoj; Knezevic, Olivera M; Mirkov, Dragan M; Jaric, Slobodan

    2016-09-01

    The aim of the present study was to explore the control strategy of maximum countermovement jumps with respect to the preferred countermovement depth preceding the concentric jump phase. Elite basketball players and physically active non-athletes were tested on jumps performed with and without an arm swing, while the countermovement depth was varied over an interval of almost 30 cm around its preferred value. The results consistently revealed a countermovement depth 5.1-11.2 cm smaller than the optimum one, and this difference was more prominent in non-athletes. In addition, although these differences had a marked effect on the recorded force and power output, they reduced jump height by only 0.1-1.2 cm. Therefore, the studied control strategy may not be based solely on the countermovement depth that maximizes jump height. Moreover, the comparison of the two groups does not support the concept of a dual-task strategy based on a trade-off between maximizing jump height and minimizing jumping quickness, which should be more prominent in athletes who routinely need to jump quickly. Further research could explore whether the observed phenomenon is based on other optimization principles, such as the minimization of effort and energy expenditure. Nevertheless, future routine testing procedures should take into account that the control strategy of maximum countermovement jumps is not fully based on maximizing jump height, while the countermovement depth markedly confounds the relationship between jump height and the assessed force and power output of leg muscles.

  10. Optimal management of adults with pharyngitis – a multi-criteria decision analysis

    PubMed Central

    Singh, Sonal; Dolan, James G; Centor, Robert M

    2006-01-01

    Background Current practice guidelines offer different management recommendations for adults presenting with a sore throat. The key issue is the extent to which the clinical likelihood of a Group A streptococcal infection should affect patient management decisions. To help resolve this issue, we conducted a multi-criteria decision analysis using the Analytic Hierarchy Process. Methods We defined optimal patient management using four criteria: 1) reduce symptom duration; 2) prevent infectious complications, local and systemic; 3) minimize antibiotic side effects, minor and anaphylaxis; and 4) achieve prudent use of antibiotics, avoiding both over-use and under-use. In our baseline analysis we assumed that all criteria and sub-criteria were equally important except minimizing anaphylactic side effects, which was judged very strongly more important than minimizing minor side effects. Management strategies included: a) No test, No treatment; b) Perform a rapid strep test and treat if positive; c) Perform a throat culture and treat if positive; d) Perform a rapid strep test and treat if positive; if negative obtain a throat culture and treat if positive; and e) treat without further tests. We defined four scenarios based on the likelihood of group A streptococcal infection using the Centor score, a well-validated clinical index. Published data were used to estimate the likelihoods of clinical outcomes and the test operating characteristics of the rapid strep test and throat culture for identifying group A streptococcal infections. Results Using the baseline assumptions, no testing and no treatment is preferred for patients with Centor scores of 1; two strategies – culture and treat if positive and rapid strep with culture of negative results – are equally preferable for patients with Centor scores of 2; and rapid strep with culture of negative results is the best management strategy for patients with Centor scores 3 or 4. 
These results are sensitive to the priorities assigned to the decision criteria, especially avoiding over-use versus under-use of antibiotics, and the population prevalence of Group A streptococcal pharyngitis. Conclusion The optimal clinical management of adults with sore throat depends on both the clinical probability of a group A streptococcal infection and clinical judgments that incorporate individual patient and practice circumstances. PMID:16533386
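
    The expected consequences of each management strategy can be sketched from the pretest probability implied by the Centor score and the rapid test's operating characteristics. A toy calculation with hypothetical sensitivity, specificity, and pretest probability (not the paper's AHP weights or published estimates):

```python
def treat_outcomes(p_gas, sens, spec):
    """Probability of antibiotic treatment among infected and uninfected
    patients for three simple strategies, given the pretest probability of
    group A strep (p_gas) and rapid-test sensitivity/specificity."""
    # Strategy a) no test, no treatment
    no_treat = {"treated_gas": 0.0, "treated_no_gas": 0.0}
    # Strategy e) treat everybody without testing
    treat_all = {"treated_gas": p_gas, "treated_no_gas": 1 - p_gas}
    # Strategy b) rapid strep test, treat if positive
    rapid = {"treated_gas": p_gas * sens,
             "treated_no_gas": (1 - p_gas) * (1 - spec)}
    return no_treat, treat_all, rapid

# Hypothetical inputs: Centor 3-level pretest probability, rapid-test accuracy
no_treat, treat_all, rapid = treat_outcomes(p_gas=0.35, sens=0.85, spec=0.95)
```

    Weighing "treated_gas" (appropriate use) against "treated_no_gas" (over-use) under criterion priorities is exactly the trade-off the multi-criteria analysis formalizes.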

  11. Multifunctional Mesoscale Observing Networks.

    NASA Astrophysics Data System (ADS)

    Dabberdt, Walter F.; Schlatter, Thomas W.; Carr, Frederick H.; Friday, Elbert W. Joe; Jorgensen, David; Koch, Steven; Pirone, Maria; Ralph, F. Martin; Sun, Juanzhen; Welsh, Patrick; Wilson, James W.; Zou, Xiaolei

    2005-07-01

    More than 120 scientists, engineers, administrators, and users met on 8–10 December 2003 in a workshop format to discuss the needs for enhanced three-dimensional mesoscale observing networks. Improved networks are seen as being critical to advancing numerical and empirical modeling for a variety of mesoscale applications, including severe weather warnings and forecasts, hydrology, air-quality forecasting, chemical emergency response, transportation safety, energy management, and others. The participants shared a clear and common vision for the observing requirements: existing two-dimensional mesoscale measurement networks do not provide observations of the type, frequency, and density that are required to optimize mesoscale prediction and nowcasts. To be viable, mesoscale observing networks must serve multiple applications, and the public, private, and academic sectors must all actively participate in their design and implementation, as well as in the creation and delivery of value-added products. The mesoscale measurement challenge can best be met by an integrated approach that considers all elements of an end-to-end solution—identifying end users and their needs, designing an optimal mix of observations, defining the balance between static and dynamic (targeted or adaptive) sampling strategies, establishing long-term test beds, and developing effective implementation strategies. Detailed recommendations are provided pertaining to nowcasting, numerical prediction and data assimilation, test beds, and implementation strategies.


  12. Learning Grasp Strategies Composed of Contact Relative Motions

    NASA Technical Reports Server (NTRS)

    Platt, Robert, Jr.

    2007-01-01

    Of central importance to grasp synthesis algorithms are the assumptions made about the object to be grasped and the sensory information that is available. Many approaches avoid the issue of sensing entirely by assuming that complete information is available. In contrast, this paper proposes an approach to grasp synthesis expressed in terms of units of control that simultaneously change the contact configuration and sense information about the object and the relative manipulator-object pose. These units of control, known as contact relative motions (CRMs), allow the grasp synthesis problem to be recast as an optimal control problem where the goal is to find a strategy for executing CRMs that leads to a grasp in the fewest steps. An experiment is described that uses Robonaut, the NASA-JSC space humanoid, to show that CRMs are a viable means of synthesizing grasps. However, because of the limited amount of information that a single CRM can sense, the optimal control problem may be partially observable. This paper proposes expressing the problem as a k-order Markov Decision Process (MDP) and solving it using Reinforcement Learning. This approach is tested in a simulation of a two-contact manipulator that learns to grasp an object. Grasp strategies learned in simulation are tested on the physical Robonaut platform and are found to consistently lead to successful grasp configurations.

  13. Stochastic DG Placement for Conservation Voltage Reduction Based on Multiple Replications Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui

    2015-06-01

    Conservation voltage reduction (CVR) and distributed-generation (DG) integration are popular strategies implemented by utilities to improve energy efficiency. This paper investigates the interactions between CVR and DG placement to minimize load consumption in distribution networks, while keeping the lowest voltage level within the predefined range. The optimal placement of DG units is formulated as a stochastic optimization problem considering the uncertainty of DG outputs and load consumptions. A sample average approximation algorithm-based technique is developed to solve the formulated problem effectively. A multiple replications procedure is developed to test the stability of the solution and calculate the confidence interval of the gap between the candidate solution and the optimal solution. The proposed method has been applied to the IEEE 37-bus distribution test system with different scenarios. The numerical results indicate that the implementations of CVR and DG, if combined, can achieve significant energy savings.
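
    The multiple replications procedure can be illustrated on a toy stochastic program. The sketch below substitutes a newsvendor-style problem for the DG placement model (an assumption for illustration): each replication solves a sample average approximation exactly and yields one estimate of a candidate solution's optimality gap, from which a confidence interval is formed.

```python
import random
import statistics

def cost(q, demand, cu=2.0, co=1.0):
    """Newsvendor cost: underage penalty cu, overage penalty co."""
    return cu * max(demand - q, 0.0) + co * max(q - demand, 0.0)

def replication_gap(q_hat, rng, n=500, cu=2.0, co=1.0):
    """One SAA replication: sample demands, solve the sample problem exactly
    (empirical critical quantile), return candidate-vs-optimum objective gap."""
    demands = sorted(rng.gauss(100, 20) for _ in range(n))
    q_star = demands[int(n * cu / (cu + co))]   # minimizer of the sample problem
    obj = lambda q: sum(cost(q, d, cu, co) for d in demands) / n
    return obj(q_hat) - obj(q_star)             # always >= 0

def mrp_confidence_interval(q_hat, replications=30, seed=1):
    """Gap estimate and upper confidence bound over independent replications."""
    rng = random.Random(seed)
    gaps = [replication_gap(q_hat, rng) for _ in range(replications)]
    mean = statistics.fmean(gaps)
    half = 1.96 * statistics.stdev(gaps) / replications ** 0.5
    return mean, mean + half

gap, upper = mrp_confidence_interval(q_hat=110.0)
```

    A small upper confidence bound certifies that the candidate solution is near-optimal without ever solving the true stochastic problem.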

  14. Optimal marker placement in hadrontherapy: intelligent optimization strategies with augmented Lagrangian pattern search.

    PubMed

    Altomare, Cristina; Guglielmann, Raffaella; Riboldi, Marco; Bellazzi, Riccardo; Baroni, Guido

    2015-02-01

    In high precision photon radiotherapy and in hadrontherapy, it is crucial to minimize the occurrence of geometrical deviations with respect to the treatment plan in each treatment session. To this end, point-based infrared (IR) optical tracking for patient set-up quality assessment is performed. Such tracking depends on the placement of external fiducial points. The main purpose of our work is to propose a new algorithm based on simulated annealing and augmented Lagrangian pattern search (SAPS), which is able to take into account prior knowledge, such as spatial constraints, during the optimization process. The SAPS algorithm was tested on data related to head and neck and pelvic cancer patients who were fitted with external surface markers for IR optical tracking applied for preliminary patient set-up correction. The integrated algorithm was tested considering optimality measures obtained with Computed Tomography (CT) images (i.e. the ratio between the so-called target registration error and fiducial registration error, TRE/FRE) and assessing the marker spatial distribution. Comparison was performed with randomly selected marker configurations and with the GETS algorithm (Genetic Evolutionary Taboo Search), also taking into account the presence of organs at risk. The results obtained with SAPS highlight improvements with respect to the other approaches: (i) the TRE/FRE ratio decreases; (ii) the marker distribution satisfies both marker visibility and spatial constraints. We have also investigated how the TRE/FRE ratio is influenced by the number of markers, obtaining significant TRE/FRE reduction with respect to the random configurations when a high number of markers is used. The SAPS algorithm is a valuable strategy for fiducial configuration optimization in IR optical tracking applied for patient set-up error detection and correction in radiation therapy, showing that taking prior knowledge into account is valuable in this optimization process.
Further work will be focused on the computational optimization of the SAPS algorithm toward fast point-of-care applications. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Decoupled CFD-based optimization of efficiency and cavitation performance of a double-suction pump

    NASA Astrophysics Data System (ADS)

    Škerlavaj, A.; Morgut, M.; Jošt, D.; Nobile, E.

    2017-04-01

    In this study the impeller geometry of a double-suction pump ensuring the best performance in terms of hydraulic efficiency and resistance to cavitation is determined using an optimization strategy driven by the modeFRONTIER optimization platform. The different impeller shapes (designs) are modified according to the optimization parameters and tested with computational fluid dynamics (CFD) software, namely ANSYS CFX. The simulations are performed using a decoupled approach, where only the impeller domain region is numerically investigated, for computational convenience. The flow losses in the volute are estimated on the basis of the velocity distribution at the impeller outlet. The best designs are then validated with the computationally more expensive full-geometry CFD model. The overall results show that the proposed approach is suitable for quick impeller shape optimization.

  16. Optimal strategy analysis based on robust predictive control for inventory system with random demand

    NASA Astrophysics Data System (ADS)

    Saputra, Aditya; Widowati, Sutrisno

    2017-12-01

    In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e. the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed with generated random inventory data in MATLAB, where the inventory level must be controlled as closely as possible to a chosen set point. The results show that the robust predictive control model provides the optimal strategy, i.e. the optimal product volume that should be purchased, and that the inventory level followed the given set point.
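
    A certainty-equivalence version of such a controller can be sketched in a few lines: each period the order volume is chosen so that the *expected* next inventory level equals the set point. This is a simplification of robust predictive control (one-step horizon, no robustness margin), with hypothetical demand parameters:

```python
import random

def simulate(setpoint=50.0, mean_demand=20.0, periods=60, seed=7):
    """Certainty-equivalence sketch of predictive inventory control:
    order enough that the expected next level hits the set point."""
    rng = random.Random(seed)
    x = 0.0                                        # initial inventory level
    levels = []
    for _ in range(periods):
        u = max(0.0, setpoint - x + mean_demand)   # order volume this period
        d = max(0.0, rng.gauss(mean_demand, 5.0))  # random demand realisation
        x = x + u - d                              # inventory balance dynamics
        levels.append(x)
    return levels

levels = simulate()
avg_tail = sum(levels[10:]) / len(levels[10:])     # average after transient
```

    After the initial transient, the level fluctuates around the set point with the demand noise passed straight through, which is what the set-point-tracking plots in such studies show.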

  17. An algorithm for testing the efficient market hypothesis.

    PubMed

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of the EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as the Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter, which give buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied to the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period have difficulty producing positive results in the testing period, which is consistent with the efficient market hypothesis (EMH).
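
    The indicators named above have standard textbook definitions; a minimal sketch follows (the genetic parameter search itself is not shown, and the periods and the rising price series are illustrative):

```python
def ema(prices, n):
    """Exponential moving average with smoothing factor 2/(n+1)."""
    alpha = 2.0 / (n + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26):
    """MACD line: fast EMA minus slow EMA."""
    return [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]

def rsi(prices, n=14):
    """Relative Strength Index over the last n price changes (simple-average form)."""
    deltas = [b - a for a, b in zip(prices, prices[1:])]
    avg_gain = sum(max(d, 0.0) for d in deltas[-n:]) / n
    avg_loss = sum(max(-d, 0.0) for d in deltas[-n:]) / n
    if avg_loss == 0:
        return 100.0                       # only gains: maximally overbought
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)

prices = [100 + 0.1 * i for i in range(40)]   # illustrative rising series
```

    A genetic algorithm would search over the period parameters (and buy/sell thresholds on these indicator values) for profitability in the training window.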

  19. Sensor-Based Optimized Control of the Full Load Instability in Large Hydraulic Turbines

    PubMed Central

    Presas, Alexandre; Valero, Carme; Egusquiza, Eduard

    2018-01-01

    Hydropower plants are of paramount importance for the integration of intermittent renewable energy sources in the power grid. In order to match the energy generated and consumed, large hydraulic turbines have to work under off-design conditions, which may lead to dangerous unstable operating points involving the hydraulic, mechanical and electrical systems. Under these conditions, the stability of the grid and the safety of the power plant itself can be compromised. For many Francis turbines one of these critical points, which usually limits the maximum output power, is the full load instability. Therefore, these machines usually work far away from this unstable point, reducing the effective operating range of the unit. In order to extend the operating range of the machine, working closer to this point with a reasonable safety margin, it is of paramount importance to monitor and control relevant parameters of the unit, which have to be obtained with an accurate sensor acquisition strategy. Within the framework of a large EU project, field tests in a large Francis turbine located in Canada (rated power of 444 MW) have been performed. Many different sensors were used to monitor several working parameters of the unit over its full operating range. For these tests, more than 80 signals, including ten different types of sensors and several operating signals that define the operating point of the unit, were acquired simultaneously. The present study focuses on the optimization of the acquisition strategy, which includes the type, number, location, and acquisition frequency of the sensors and the corresponding signal analysis to detect the full load instability and to prevent the unit from reaching this point. A systematic approach to determine this strategy has been followed. It has been found that some indicators obtained with different types of sensors are linearly correlated with the oscillating power. 
The optimized strategy has been determined based on the correlation characteristics (linearity, sensitivity and reactivity), the simplicity of the installation and the necessary acquisition frequency. Finally, an economical and easily implementable protection system based on the resulting optimized acquisition strategy is proposed. This system, which can be used in a generic Francis turbine with a similar full load instability, permits one to extend the operating range of the unit by working close to the instability with a reasonable safety margin. PMID:29601512
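
    The correlation screening described above, ranking candidate indicators by how linearly they track the oscillating power, can be sketched with Pearson's r. The sensor names and time series below are invented for illustration:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank_indicators(indicators, power):
    """Rank candidate sensor indicators by |linear correlation| with the
    oscillating-power amplitude (most linear first)."""
    scores = {name: pearson(vals, power) for name, vals in indicators.items()}
    return sorted(scores, key=lambda n: abs(scores[n]), reverse=True)

# Hypothetical indicator series vs. oscillating-power amplitude
power = [1.0, 2.0, 3.0, 4.0, 5.0]
indicators = {
    "draft_tube_pressure": [2.1, 3.9, 6.2, 8.0, 9.9],   # nearly linear in power
    "bearing_vibration":   [5.0, 1.0, 4.0, 2.0, 3.0],   # weakly related
}
best = rank_indicators(indicators, power)[0]
```

    In the study, sensitivity and reactivity are weighed alongside linearity; this sketch covers only the linearity criterion.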

  20. Sensor-Based Optimized Control of the Full Load Instability in Large Hydraulic Turbines.

    PubMed

    Presas, Alexandre; Valentin, David; Egusquiza, Mònica; Valero, Carme; Egusquiza, Eduard

    2018-03-30

    Hydropower plants are of paramount importance for the integration of intermittent renewable energy sources in the power grid. In order to match the energy generated and consumed, large hydraulic turbines have to work under off-design conditions, which may lead to dangerous unstable operating points involving the hydraulic, mechanical and electrical systems. Under these conditions, the stability of the grid and the safety of the power plant itself can be compromised. For many Francis turbines one of these critical points, which usually limits the maximum output power, is the full load instability. Therefore, these machines usually work far away from this unstable point, reducing the effective operating range of the unit. In order to extend the operating range of the machine, working closer to this point with a reasonable safety margin, it is of paramount importance to monitor and control relevant parameters of the unit, which have to be obtained with an accurate sensor acquisition strategy. Within the framework of a large EU project, field tests in a large Francis turbine located in Canada (rated power of 444 MW) have been performed. Many different sensors were used to monitor several working parameters of the unit over its full operating range. For these tests, more than 80 signals, including ten different types of sensors and several operating signals that define the operating point of the unit, were acquired simultaneously. The present study focuses on the optimization of the acquisition strategy, which includes the type, number, location, and acquisition frequency of the sensors and the corresponding signal analysis to detect the full load instability and to prevent the unit from reaching this point. A systematic approach to determine this strategy has been followed. It has been found that some indicators obtained with different types of sensors are linearly correlated with the oscillating power. 
The optimized strategy has been determined based on the correlation characteristics (linearity, sensitivity and reactivity), the simplicity of the installation and the necessary acquisition frequency. Finally, an economical and easily implementable protection system based on the resulting optimized acquisition strategy is proposed. This system, which can be used in a generic Francis turbine with a similar full load instability, permits one to extend the operating range of the unit by working close to the instability with a reasonable safety margin.

  1. Mouse manipulation through single-switch scanning.

    PubMed

    Blackstien-Adler, Susie; Shein, Fraser; Quintal, Janet; Birch, Shae; Weiss, Patrice L Tamar

    2004-01-01

    Given the current extensive reliance on the graphical user interface, independent access to computer software requires that users be able to manipulate a pointing device of some type (e.g., mouse, trackball) or be able to emulate a mouse by some other means (e.g., scanning). The purpose of the present study was to identify one or more optimal single-switch scanning mouse emulation strategies. Four alternative scanning strategies (continuous Cartesian, discrete Cartesian, rotational, and hybrid quadrant/continuous Cartesian) were selected for testing based on current market availability as well as on theoretical considerations of their potential speed and accuracy. Each strategy was evaluated using a repeated measures study design by means of a test program that permitted mouse emulation via any one of four scanning strategies in a motivating environment; response speed and accuracy could be automatically recorded and considered in view of the motor, cognitive, and perceptual demands of each scanning strategy. Ten individuals whose disabilities required them to operate a computer via single-switch scanning participated in the study. Results indicated that Cartesian scanning was the preferred and most effective scanning strategy. There were no significant differences between results from the Continuous Cartesian and Discrete Cartesian scanning strategies. Rotational scanning was quite slow with respect to the other strategies, although it was equally accurate. Hybrid Quadrant scanning improved access time but at the cost of fewer correct selections. These results demonstrated the importance of testing and comparing alternate single-switch scanning strategies.

  2. Mixed-Strategy Chance Constrained Optimal Control

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.

    2013-01-01

    This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.
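
    The claim that mixing at most two deterministic actions suffices can be illustrated on a toy problem: a cheap action that violates the chance constraint too often and a safe but expensive one. Randomizing between them meets the risk bound exactly at a lower expected cost than the best feasible deterministic action. All costs and risk levels below are invented:

```python
def best_mixed_strategy(actions, risk_bound):
    """Toy illustration of mixed-strategy chance constrained control: mix at
    most two deterministic actions so the expected violation probability meets
    the bound at minimum expected cost. actions: [(cost, violation_prob)]."""
    # Best purely deterministic feasible cost, for comparison
    feasible = [(c, r) for c, r in actions if r <= risk_bound]
    best = min(feasible)[0] if feasible else float("inf")
    # Try every pair, with the mixing weight that exactly meets the bound
    for ca, ra in actions:
        for cb, rb in actions:
            if ra == rb:
                continue
            if min(ra, rb) <= risk_bound <= max(ra, rb):
                p = (risk_bound - rb) / (ra - rb)   # weight on action (ca, ra)
                if 0.0 <= p <= 1.0:
                    best = min(best, p * ca + (1 - p) * cb)
    return best

# Cheap-but-risky action vs safe-but-costly action; risk bound 0.1
actions = [(1.0, 0.2), (3.0, 0.0)]
cost = best_mixed_strategy(actions, risk_bound=0.1)
```

    Here the only feasible deterministic action costs 3.0, while a 50/50 mix meets the 0.1 risk bound at expected cost 2.0, mirroring the convexification argument in the abstract.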

  3. Optimal vaccination strategies and rational behaviour in seasonal epidemics.

    PubMed

    Doutor, Paulo; Rodrigues, Paula; Soares, Maria do Céu; Chalub, Fabio A C C

    2016-12-01

    We consider a SIRS model with time dependent transmission rate. We assume time dependent vaccination which confers the same immunity as natural infection. We study two types of vaccination strategies: (i) optimal vaccination, in the sense that it minimizes the effort of vaccination in the set of vaccination strategies for which, for any sufficiently small perturbation of the disease free state, the number of infectious individuals is monotonically decreasing; (ii) Nash-equilibria strategies where all individuals simultaneously minimize the joint risk of vaccination versus the risk of the disease. The former case corresponds to an optimal solution for mandatory vaccinations, while the second corresponds to the equilibrium to be expected if vaccination is fully voluntary. We are able to show the existence of both optimal and Nash strategies in a general setting. In general, these strategies will not be functions but Radon measures. For specific forms of the transmission rate, we provide explicit formulas for the optimal and the Nash vaccination strategies.

  4. Optimal Keno Strategies and the Central Limit Theorem

    ERIC Educational Resources Information Center

    Johnson, Roger W.

    2006-01-01

    For the casino game Keno we determine optimal playing strategies. To decide such optimal strategies, both exact (hypergeometric) and approximate probability calculations are used. The approximate calculations are obtained via the Central Limit Theorem and simulation, and an important lesson about the application of the Central Limit Theorem is…
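
    The exact probabilities referred to above are hypergeometric: the player marks k of 80 numbers, the house draws 20, and the number of "catches" follows a hypergeometric law. A sketch with an invented paytable:

```python
from math import comb

def keno_catch_pmf(spots, drawn=20, total=80):
    """Exact hypergeometric distribution of the number of catches when the
    player marks `spots` numbers and the house draws `drawn` of `total`."""
    return [comb(drawn, m) * comb(total - drawn, spots - m) / comb(total, spots)
            for m in range(spots + 1)]

def expected_payout(spots, paytable):
    """Expected value of a one-unit ticket given a payout per catch count."""
    pmf = keno_catch_pmf(spots)
    return sum(p * paytable.get(m, 0.0) for m, p in enumerate(pmf))

# Hypothetical 4-spot paytable: units returned for 2, 3, and 4 catches
ev = expected_payout(4, {2: 1.0, 3: 5.0, 4: 120.0})
```

    Comparing such expected values across numbers of spots is how an optimal playing strategy is decided; the Central Limit Theorem enters when approximating aggregate results over many tickets.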

  5. Improving flood forecasting capability of physically based distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2015-10-01

    Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive model parameters from the terrain properties directly, so there was no need to calibrate model parameters; unfortunately, the uncertainties associated with this parameter derivation are very high, which has impacted their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the PSO algorithm, to test its competence and to improve its performance; the second is to explore the possibility of improving the catchment flood forecasting capability of physically based distributed hydrological models by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved Particle Swarm Optimization (PSO) algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting a linearly decreasing inertia weight strategy and an arccosine function strategy to adjust the acceleration coefficients. 
This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can largely improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
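
    The two schedule improvements named above can be sketched in a generic PSO loop. The exact arccosine form for the acceleration coefficients is an assumption here (c1 decaying and c2 growing along an arccos curve), as is the sphere objective standing in for the flood-forecasting model error:

```python
import math
import random

def pso(f, dim, lo, hi, n_particles=20, n_iter=30, seed=3):
    """PSO sketch with linearly decreasing inertia weight and arccosine-shaped
    acceleration coefficients (assumed forms of the improvements described)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w_max, w_min, c_max, c_min = 0.9, 0.4, 2.5, 0.5
    for t in range(n_iter):
        frac = t / max(1, n_iter - 1)
        w = w_max - (w_max - w_min) * frac       # linear inertia weight decay
        s = math.acos(2 * frac - 1) / math.pi    # 1 -> 0 along an arccos curve
        c1 = c_min + (c_max - c_min) * s         # cognitive coefficient decays
        c2 = c_min + (c_max - c_min) * (1 - s)   # social coefficient grows
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere test function stands in for the calibration objective
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2, lo=-5.0, hi=5.0)
```

    The particle count (20) and iteration count (30) match the values the abstract reports as appropriate for the Liuxihe model.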

  6. Optimal GENCO bidding strategy

    NASA Astrophysics Data System (ADS)

    Gao, Feng

    Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example: Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, as well as Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on Microeconomics analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple period situation. The equilibrium condition using discrete time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. 
A traditional optimization model may not be adequate for the distributed, large-scale, and complex energy market. This research compares the performance and search paths of different artificial life techniques such as Genetic Algorithm (GA), Evolutionary Programming (EP), and Particle Swarm (PS), and looks for a suitable method to emulate Generation Companies' (GENCOs) bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment. A profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research has not paid special attention to the piecewise staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming. The proposed algorithm is able to handle actual piecewise staircase energy offer curves. The proposed method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.
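
    The staircase offer-curve idea in this abstract can be illustrated with a toy calculation (block quantities, offer prices, and marginal costs below are invented, and `dispatch_and_profit` is our hypothetical helper, not the GenBidding tool):

```python
# Hypothetical sketch: dispatch and profit for a piecewise staircase
# energy offer curve. Each block is (quantity_MW, offer_price); blocks
# whose offer price does not exceed the market clearing price are
# dispatched and earn the clearing price.

def dispatch_and_profit(blocks, marginal_costs, clearing_price):
    """Return (dispatched_MW, profit) for a staircase offer curve."""
    dispatched = 0.0
    profit = 0.0
    for (qty, offer), cost in zip(blocks, marginal_costs):
        if offer <= clearing_price:      # this block clears the market
            dispatched += qty
            profit += qty * (clearing_price - cost)
    return dispatched, profit

# Three offer blocks (MW, $/MWh) and their true marginal costs ($/MWh).
blocks = [(100, 20.0), (50, 30.0), (50, 45.0)]
costs = [18.0, 25.0, 40.0]
mw, pi = dispatch_and_profit(blocks, costs, clearing_price=35.0)
print(mw, pi)  # 150.0 2200.0
```

    At a $35/MWh clearing price, the first two blocks clear and the expensive third block is withheld, which is exactly the shape a staircase-aware bidding optimizer exploits.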

  7. Age differences in coupling of intraindividual variability in mnemonic strategies and practice-related associative recall improvements.

    PubMed

    Hertzog, Christopher; Lövdén, Martin; Lindenberger, Ulman; Schmiedek, Florian

    2017-09-01

    The importance of encoding strategies for associative recall is well established, but there have been no studies of aging and intraindividual variability (IAV) in strategy use during extended practice. We observed strategy use and cued-recall test performance over 101 days of practice in 101 younger adults (M = 25.6 years) and 103 older adults (M = 71.3 years) sandwiched by a pretest and posttest battery including an associative recall test. Each practice session included 2 lists of 12 number-noun paired-associate (PA) items (e.g., 23-DOGS), presented for brief exposures titrated to maintain below-ceiling performance throughout practice. Participants reported strategy use (e.g., rote repetition, imagery) after each test. Substantial IAV in strategy use was detected that was coupled with performance; lists studied with normatively effective strategies (e.g., imagery) generated higher PA recall than lists studied with less effective strategies (e.g., rote repetition). In comparison to younger adults, older adults' practice (a) relied more on repetition and less on effective strategies, (b) showed lower levels of IAV in effective strategy use, and (c) had lower within-person strategy-recall coupling, especially late in practice. Individual differences in pretest-posttest gains in PA recall were predicted by average level of effective strategy use in young adults but by strategy-recall coupling in older adults. Results are consistent with the hypothesis that experiencing variability in strategic outcomes during practice helps hone the effectiveness of strategic encoding behavior, and that older adults' reduced degree of pretest-posttest gains is influenced by lower likelihood of using and optimizing effective strategies through practice.
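
    The strategy-recall coupling measure described above can be sketched as a within-person correlation between list-level strategy use and recall; the data and helper below are hypothetical illustrations, not the study's actual analysis:

```python
# Illustrative sketch (invented data): within-person strategy-recall
# coupling, computed as each participant's correlation between list-level
# effective-strategy use (1 = imagery, 0 = rote) and recall, averaged
# across participants.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# One row per participant: (strategy code per list, recall per list).
participants = [
    ([1, 1, 0, 0, 1, 0], [10, 9, 4, 5, 8, 3]),
    ([0, 1, 0, 1, 1, 0], [6, 7, 5, 8, 9, 4]),
]
coupling = [pearson(s, r) for s, r in participants]
mean_coupling = sum(coupling) / len(coupling)
print(round(mean_coupling, 3))
```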

  8. Design Space Approach for Preservative System Optimization of an Anti-Aging Eye Fluid Emulsion.

    PubMed

    Lourenço, Felipe Rebello; Francisco, Fabiane Lacerda; Ferreira, Márcia Regina Spuri; Andreoli, Terezinha De Jesus; Löbenberg, Raimar; Bou-Chacra, Nádia

    2015-01-01

    The use of preservatives must be optimized in order to ensure the efficacy of an antimicrobial system as well as the product safety. Despite the wide variety of preservatives, the synergistic or antagonistic effects of their combinations are not well established, and this remains an issue in the development of pharmaceutical and cosmetic products. The purpose of this paper was to establish a design space using a simplex-centroid approach to achieve the lowest effective concentration of 3 preservatives (methylparaben, propylparaben, and imidazolidinyl urea) and EDTA for an emulsion cosmetic product. Twenty-two formulae of emulsion differing only by imidazolidinyl urea (A: 0.00 to 0.30% w/w), methylparaben (B: 0.00 to 0.20% w/w), propylparaben (C: 0.00 to 0.10% w/w) and EDTA (D: 0.00 to 0.10% w/w) concentrations were prepared. The preservatives and EDTA were tested alone and in binary, ternary, and quaternary combinations. Aliquots of these formulae were inoculated with several microorganisms. An electrochemical method was used to determine microbial burden immediately after inoculation and after 2, 4, 8, 12, 24, 48, and 168 h. An optimization strategy was used to obtain the concentrations of preservatives and EDTA resulting in the most effective preservative system against all microorganisms simultaneously. The use of preservatives and EDTA in combination has the advantage of exhibiting a potential synergistic effect against a wider spectrum of microorganisms. Based on graphic and optimization strategies, we proposed a new formula containing a quaternary combination (A: 55%; B: 30%; C: 5% and D: 10% w/w), which complies with the specification of a conventional challenge test. A design space approach was successfully employed in the optimization of concentrations of preservatives and EDTA in an emulsion cosmetic product.
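
    As a rough illustration of the simplex-centroid approach mentioned above, the following sketch enumerates the design points for four mixture factors (the factor labels follow the abstract; the function is our own illustration, not the authors' software):

```python
# Sketch of a simplex-centroid design for four mixture factors
# (A: imidazolidinyl urea, B: methylparaben, C: propylparaben, D: EDTA).
# Each design point blends a non-empty subset of factors in equal
# proportions; for 4 factors this yields 2**4 - 1 = 15 mixture points.
from itertools import combinations

def simplex_centroid(factors):
    points = []
    for k in range(1, len(factors) + 1):
        for subset in combinations(factors, k):
            share = 1.0 / k
            points.append({f: (share if f in subset else 0.0)
                           for f in factors})
    return points

design = simplex_centroid(["A", "B", "C", "D"])
print(len(design))  # 15 candidate blends
```

    Each point's proportions sum to one, which is the defining constraint of a mixture design.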

  9. Driver electronics design and control for a total artificial heart linear motor.

    PubMed

    Unthan, Kristin; Cuenca-Navalon, Elena; Pelletier, Benedikt; Finocchiaro, Thomas; Steinseifer, Ulrich

    2018-01-27

    For any implantable device, size and efficiency are critical properties. Thus, a linear motor for a Total Artificial Heart was optimized with a focus on driver electronics and control strategies. Hardware requirements were defined from the power supply and motor setup. Four full bridges were chosen for the power electronics. Shunt resistors were set up for current measurement. Unipolar and bipolar switching for power electronics control were compared regarding current ripple and power losses. Here, unipolar switching showed smaller current ripple and required less power to create the necessary motor forces. Based on calculations for minimal power losses, the Lorentz force was distributed across the actuator's four coils. The distribution was determined as the ratio of effective magnetic flux through each coil, captured on a force test rig. Static and dynamic measurements under physiological conditions analyzed the interaction of control and hardware; all efficiencies were over 89%. In conclusion, the designed electronics, optimized control strategy and applied current distribution create the required motor force and perform optimally under physiological conditions. The developed driver electronics and control offer optimized size and efficiency for any implantable or portable device with multiple independent motor coils.
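
    The unipolar-versus-bipolar comparison can be sketched with the textbook H-bridge ripple formulas under idealized assumptions (supply voltage, duty cycle, period, and inductance below are invented, not the paper's measured values):

```python
# Idealized sketch (not the paper's actual circuit values): peak-to-peak
# inductor current ripple of an H-bridge coil drive. With supply V, duty
# cycle d, switching period T and coil inductance L, unipolar switching
# applies 0/+V while bipolar applies -V/+V, doubling the ripple at the
# same duty cycle.

def ripple_unipolar(V, d, T, L):
    return V * d * (1 - d) * T / L

def ripple_bipolar(V, d, T, L):
    return 2 * V * d * (1 - d) * T / L

V, d, T, L = 24.0, 0.6, 50e-6, 1e-3   # hypothetical drive parameters
print(ripple_unipolar(V, d, T, L), ripple_bipolar(V, d, T, L))
```

    The factor-of-two gap (plus the doubled effective ripple frequency of unipolar PWM) is consistent with the abstract's finding that unipolar switching gave smaller ripple and lower losses.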

  10. Real-time maneuver optimization of space-based robots in a dynamic environment: Theory and on-orbit experiments

    NASA Astrophysics Data System (ADS)

    Chamitoff, Gregory E.; Saenz-Otero, Alvar; Katz, Jacob G.; Ulrich, Steve; Morrell, Benjamin J.; Gibbens, Peter W.

    2018-01-01

    This paper presents the development of a real-time path-planning optimization approach to controlling the motion of space-based robots. The algorithm is capable of planning three dimensional trajectories for a robot to navigate within complex surroundings that include numerous static and dynamic obstacles, path constraints and performance limitations. The methodology employs a unique transformation that enables rapid generation of feasible solutions for complex geometries, making it suitable for application to real-time operations and dynamic environments. This strategy was implemented on the Synchronized Position Hold Engage Reorient Experimental Satellite (SPHERES) test-bed on the International Space Station (ISS), and experimental testing was conducted onboard the ISS during Expedition 17 by the first author. Lessons learned from the on-orbit tests were used to further refine the algorithm for future implementations.

  11. Suboptimal LQR-based spacecraft full motion control: Theory and experimentation

    NASA Astrophysics Data System (ADS)

    Guarnaccia, Leone; Bevilacqua, Riccardo; Pastorelli, Stefano P.

    2016-05-01

    This work introduces a real time suboptimal control algorithm for six-degree-of-freedom spacecraft maneuvering based on a State-Dependent-Algebraic-Riccati-Equation (SDARE) approach and real-time linearization of the equations of motion. The control strategy is sub-optimal since the gains of the linear quadratic regulator (LQR) are re-computed at each sample time. The cost function of the proposed controller has been compared with the one obtained via a general purpose optimal control software, showing, on average, an increase in control effort of approximately 15%, compensated by real-time implementability. Lastly, the paper presents experimental tests on a hardware-in-the-loop six-degree-of-freedom spacecraft simulator, designed for testing new guidance, navigation, and control algorithms for nano-satellites in a one-g laboratory environment. The tests show the real-time feasibility of the proposed approach.
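
    The idea of recomputing LQR gains at each sample time can be illustrated on a scalar system (the dynamics and weights below are assumed for illustration; the paper's controller operates on the full six-degree-of-freedom model):

```python
# Minimal scalar illustration (not the paper's 6-DOF model) of the
# per-sample-time LQR recomputation idea: iterate the discrete algebraic
# Riccati equation for x[k+1] = a*x[k] + b*u[k] with cost
# sum(q*x**2 + r*u**2), then form the feedback gain k.

def dlqr_scalar(a, b, q, r, iters=200):
    p = q
    for _ in range(iters):
        k = a * b * p / (r + b * b * p)
        p = q + a * a * p - a * b * p * k   # Riccati recursion
    return k, p

a, b, q, r = 1.1, 0.5, 1.0, 0.1        # assumed linearized dynamics/weights
k, p = dlqr_scalar(a, b, q, r)
print(abs(a - b * k) < 1.0)            # closed loop is stabilized
```

    In an SDARE-style controller this computation is repeated every sample with freshly linearized (a, b), trading a modest optimality loss for real-time feasibility.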

  12. Design of optimal groundwater remediation systems under flexible environmental-standard constraints.

    PubMed

    Fan, Xing; He, Li; Lu, Hong-Wei; Li, Jing

    2015-01-01

    In developing optimal groundwater remediation strategies, limited effort has been devoted to addressing uncertainty in environmental quality standards. When such uncertainty is not considered, either overly optimistic or overly pessimistic optimization strategies may be developed, likely leading to rigid remediation designs. This study advances a mathematical programming modeling approach for optimizing groundwater remediation design. This approach not only prevents the formulation of overly optimistic and overly pessimistic strategies but also provides a satisfaction level that indicates the degree to which the environmental quality standard is satisfied. Therefore, the approach can be expected to be significantly more acceptable to decision makers than approaches that ignore standard uncertainty. The proposed approach is applied to a petroleum-contaminated site in western Canada. Results from the case study show that (1) the peak benzene concentrations can always satisfy the environmental standard under the optimal strategy, (2) the pumping rates of all wells decrease under a relaxed standard or long-term remediation approach, (3) the pumping rates are less affected by environmental quality constraints under short-term remediation, and (4) increasingly flexible environmental standards have a reduced effect on the optimal remediation strategy.
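
    The satisfaction level mentioned above can be sketched as a simple fuzzy membership over a flexible standard (the linear shape and the benzene limits below are our assumptions, not the paper's calibrated model):

```python
# Hedged sketch of the "satisfaction level" idea: treat the environmental
# quality standard as flexible, so a contaminant concentration maps to a
# degree (0..1) to which the standard is satisfied rather than pass/fail.

def satisfaction_level(conc, strict_limit, relaxed_limit):
    """1.0 fully satisfied below the strict limit, 0.0 above the relaxed
    limit, linear in between (a simple fuzzy-membership assumption)."""
    if conc <= strict_limit:
        return 1.0
    if conc >= relaxed_limit:
        return 0.0
    return (relaxed_limit - conc) / (relaxed_limit - strict_limit)

# Hypothetical benzene standard: strict 5 ug/L, relaxed up to 8 ug/L.
print(satisfaction_level(6.5, 5.0, 8.0))  # 0.5
```

    An optimizer can then trade remediation cost against this satisfaction level instead of treating the standard as a hard cutoff.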

  13. The cost-effectiveness of screening for colorectal cancer.

    PubMed

    Telford, Jennifer J; Levy, Adrian R; Sambrook, Jennifer C; Zou, Denise; Enns, Robert A

    2010-09-07

    Published decision analyses show that screening for colorectal cancer is cost-effective. However, because of the number of tests available, the optimal screening strategy in Canada is unknown. We estimated the incremental cost-effectiveness of 10 strategies for colorectal cancer screening, as well as no screening, incorporating quality of life, noncompliance and data on the costs and benefits of chemotherapy. We used a probabilistic Markov model to estimate the costs and quality-adjusted life expectancy of 50-year-old average-risk Canadians without screening and with screening by each test. We populated the model with data from the published literature. We calculated costs from the perspective of a third-party payer, with inflation to 2007 Canadian dollars. Of the 10 strategies considered, we focused on three tests currently being used for population screening in some Canadian provinces: low-sensitivity guaiac fecal occult blood test, performed annually; fecal immunochemical test, performed annually; and colonoscopy, performed every 10 years. These strategies reduced the incidence of colorectal cancer by 44%, 65% and 81%, and mortality by 55%, 74% and 83%, respectively, compared with no screening. These strategies generated incremental cost-effectiveness ratios of $9159, $611 and $6133 per quality-adjusted life year, respectively. The findings were robust to probabilistic sensitivity analysis. Colonoscopy every 10 years yielded the greatest net health benefit. Screening for colorectal cancer is cost-effective over conventional levels of willingness to pay. Annual high-sensitivity fecal occult blood testing, such as a fecal immunochemical test, or colonoscopy every 10 years offer the best value for the money in Canada.
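
    The comparison above ultimately reduces to incremental cost-effectiveness ratios; a toy version with invented per-person costs and QALYs (not the paper's calibrated Markov inputs) looks like this:

```python
# Simplified, illustrative sketch of the screening comparison (all
# numbers hypothetical, not the paper's calibrated inputs): the
# incremental cost-effectiveness ratio (ICER) of a screening strategy
# relative to no screening.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Per-person lifetime cost ($) and quality-adjusted life years (QALYs).
no_screen = (2800.0, 24.10)
fit_annual = (3050.0, 24.35)          # fecal immunochemical test, annual
ratio = icer(*fit_annual, *no_screen)
print(round(ratio))  # 1000 dollars per QALY gained (toy numbers)
```

    A strategy is judged cost-effective when this ratio falls below the payer's willingness-to-pay threshold, which is the criterion applied in the abstract.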

  14. Strategies to induce broadly protective antibody responses to viral glycoproteins.

    PubMed

    Krammer, F

    2017-05-01

    Currently, several universal/broadly protective influenza virus vaccine candidates are under development. Many of these vaccines are based on strategies to induce protective antibody responses against the surface glycoproteins of antigenically and genetically diverse influenza viruses. These strategies might also be applicable to surface glycoproteins of a broad range of other important viral pathogens. Areas covered: Common strategies include sequential vaccination with divergent antigens, multivalent approaches, vaccination with glycan-modified antigens, vaccination with minimal antigens and vaccination with antigens that have centralized/optimized sequences. Here we review these strategies and the underlying concepts. Furthermore, challenges, feasibility and applicability to other viral pathogens are discussed. Expert commentary: Several broadly protective/universal influenza virus vaccine strategies will be tested in humans in the coming years. If successful in terms of safety and immunological readouts, they will move forward into efficacy trials. In the meantime, successful vaccine strategies might also be applied to other antigenically diverse viruses of concern.

  15. Detection of Abnormal Muscle Activations during Walking Following Spinal Cord Injury (SCI)

    ERIC Educational Resources Information Center

    Wang, Ping; Low, K. H.; McGregor, Alison H.; Tow, Adela

    2013-01-01

    In order to identify optimal rehabilitation strategies for spinal cord injury (SCI) participants, assessment of impaired walking is required to detect, monitor and quantify movement disorders. In the proposed assessment, ten healthy and seven SCI participants were recruited to perform an over-ground walking test at slow walking speeds. SCI…

  16. 'Who is the ideal candidate?': decisions and issues relating to visual neuroprosthesis development, patient testing and neuroplasticity

    NASA Astrophysics Data System (ADS)

    Merabet, Lotfi B.; Rizzo, Joseph F., III; Pascual-Leone, Alvaro; Fernandez, Eduardo

    2007-03-01

    Appropriate delivery of electrical stimulation to intact visual structures can evoke patterned sensations of light in individuals who have been blind for many years. This pivotal finding has lent credibility to the concept of restoring functional vision by artificial means. As numerous groups worldwide pursue human clinical testing with visual prosthetic devices, it is becoming increasingly clear that there remains a considerable gap between the challenges of prosthetic device development and the rehabilitative strategies needed to implement this new technology in patients. An important area of future work will be the development of appropriate pre- and post-implantation measures of performance and establishing candidate selection criteria in order to quantify technical advances, guide future device design and optimize therapeutic success. We propose that the selection of an 'ideal' candidate should also be considered within the context of the variable neuroplastic changes that follow vision loss. Specifically, an understanding of the adaptive and compensatory changes that occur within the brain could assist in guiding the development of post-implantation rehabilitative strategies and optimize behavioral outcomes.

  17. Experimental validation of an integrated controls-structures design methodology for a class of flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliott, Kenny B.; Joshi, Suresh M.; Walz, Joseph E.

    1994-01-01

    This paper describes the first experimental validation of an optimization-based integrated controls-structures design methodology for a class of flexible space structures. The Controls-Structures-Interaction (CSI) Evolutionary Model, a laboratory test bed at Langley, is redesigned based on the integrated design methodology with two different dissipative control strategies. The redesigned structure is fabricated, assembled in the laboratory, and experimentally compared with the original test structure. Design guides are proposed and used in the integrated design process to ensure that the resulting structure can be fabricated. Experimental results indicate that the integrated design requires greater than 60 percent less average control power (by thruster actuators) than the conventional control-optimized design while maintaining the required line-of-sight performance, thereby confirming the analytical findings about the superiority of the integrated design methodology. Amenability of the integrated design structure to other control strategies is considered and evaluated analytically and experimentally. This work also demonstrates the capabilities of the Langley-developed design tool CSI DESIGN which provides a unified environment for structural and control design.

  18. Identification of vehicle suspension parameters by design optimization

    NASA Astrophysics Data System (ADS)

    Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.

    2014-05-01

    The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or sometimes unavailable. This article proposes an efficient approach to identify the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation evolution strategy (CMA-ES) is utilized to improve the agreement between simulation and experiment in the kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. A case study employing a MacPherson strut suspension system is modelled in a multi-body dynamic system. Three kinematic and compliance tests are examined, namely, vertical parallel wheel travel, opposite wheel travel and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. Then, a dynamic summation of rank values is used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective optimization problem. The optimized results show a significant improvement in the correlation between the simulated and experimental models. Once an accurate representation of the vehicle suspension model is achieved, further analysis, such as ride and handling performance, can be implemented for further optimization.
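
    The rank-summation step can be sketched as follows (candidate scores are invented; the paper's dynamic summation of rank values is more elaborate):

```python
# Sketch of the rank-summation idea used to collapse many objectives into
# one pseudo-objective: each candidate parameter set is ranked on every
# objective (lower error = better) and its ranks are summed. Data are
# hypothetical.

def rank_sum(scores):
    """scores[i][j] = error of candidate i on objective j (minimize)."""
    n_obj = len(scores[0])
    totals = [0] * len(scores)
    for j in range(n_obj):
        order = sorted(range(len(scores)), key=lambda i: scores[i][j])
        for rank, i in enumerate(order):
            totals[i] += rank
    return totals

# Three candidate parameter sets scored on four K&C test objectives.
errors = [[0.9, 0.2, 0.4, 0.5],
          [0.3, 0.5, 0.2, 0.6],
          [0.5, 0.1, 0.9, 0.3]]
print(rank_sum(errors))  # [5, 4, 3] -> lowest total is best compromise
```

    Because ranks are scale-free, this aggregation avoids hand-tuning weights across objectives with very different units.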

  19. The cost of illness attributable to diabetic foot and cost-effectiveness of secondary prevention in Peru.

    PubMed

    Cárdenas, María Kathia; Mirelman, Andrew J; Galvin, Cooper J; Lazo-Porras, María; Pinto, Miguel; Miranda, J Jaime; Gilman, Robert H

    2015-10-26

    Diabetes mellitus is a public health challenge worldwide, and roughly 25% of patients with diabetes in developing countries will develop at least one foot ulcer during their lifetime. The gravest outcome of an ulcerated foot is amputation, leading to premature death and larger economic costs. This study aimed to estimate the economic costs of diabetic foot in high-risk patients in Peru in 2012 and to model the cost-effectiveness of a year-long preventive strategy for foot ulceration including: sub-optimal care (baseline), standard care as recommended by the International Diabetes Federation, and standard care plus daily self-monitoring of foot temperature. A decision tree model using a population prevalence-based approach was used to calculate the costs and the incremental cost-effectiveness ratio (ICER). Outcome measures were deaths and major amputations, uncertainty was tested with a one-way sensitivity analysis. The direct costs for prevention and management with sub-optimal care for high-risk diabetics is around US$74.5 million dollars in a single year, which decreases to US$71.8 million for standard care and increases to US$96.8 million for standard care plus temperature monitoring. The implementation of a standard care strategy would avert 791 deaths and is cost-saving in comparison to sub-optimal care. For standard care plus temperature monitoring compared to sub-optimal care the ICER rises to US$16,124 per death averted and averts 1,385 deaths. Diabetic foot complications are highly costly and largely preventable in Peru. The implementation of a standard care strategy would lead to net savings and avert deaths over a one-year period. More intensive prevention strategies such as incorporating temperature monitoring may also be cost-effective.

  20. A novel metaheuristic for continuous optimization problems: Virus optimization algorithm

    NASA Astrophysics Data System (ADS)

    Liang, Yun-Chia; Rodolfo Cuevas Juarez, Josue

    2016-01-01

    A novel metaheuristic for continuous optimization problems, named the virus optimization algorithm (VOA), is introduced and investigated. VOA is an iterative, population-based method that imitates the behaviour of viruses attacking a living cell. The number of viruses grows at each replication and is controlled by an immune system (a so-called 'antivirus') to prevent the explosive growth of the virus population. The viruses are divided into two classes (strong and common) to balance the exploitation and exploration effects. The performance of the VOA is validated through a set of eight benchmark functions, which are also subject to rotation and shifting effects to test its robustness. Extensive comparisons were conducted with over 40 well-known metaheuristic algorithms and their variations, such as artificial bee colony, artificial immune system, differential evolution, evolutionary programming, evolutionary strategy, genetic algorithm, harmony search, invasive weed optimization, memetic algorithm, particle swarm optimization and simulated annealing. The results showed that the VOA is a viable solution for continuous optimization.
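
    A toy sketch of the VOA mechanics described above, with invented parameters rather than the authors' tuned settings, might look like this:

```python
# Toy sketch of the virus optimization algorithm's mechanics (parameters
# and operators are our simplifications): strong viruses replicate more,
# and an "antivirus" step truncates the population back to a fixed cap.
import random

def voa_minimize(f, dim, iters=200, cap=20, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(cap)]
    for _ in range(iters):
        pop.sort(key=f)
        offspring = []
        for i, v in enumerate(pop):
            n_rep = 3 if i < cap // 5 else 1   # strong viruses replicate more
            for _ in range(n_rep):
                offspring.append([x + rng.gauss(0, 0.3) for x in v])
        pop = sorted(pop + offspring, key=f)[:cap]   # antivirus culls growth
    return pop[0]

sphere = lambda v: sum(x * x for x in v)
best = voa_minimize(sphere, dim=3)
print(round(sphere(best), 4))
```

    The cap plays the role of the immune system: replication multiplies candidates each generation, and truncation selection keeps the population from exploding.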

  1. Incorporating Objective Function Information Into the Feasibility Rule for Constrained Evolutionary Optimization.

    PubMed

    Wang, Yong; Wang, Bing-Chuan; Li, Han-Xiong; Yen, Gary G

    2016-12-01

    When solving constrained optimization problems by evolutionary algorithms, an important issue is how to balance constraints and objective function. This paper presents a new method to address the above issue. In our method, after generating an offspring for each parent in the population by making use of differential evolution (DE), the well-known feasibility rule is used to compare the offspring and its parent. Since the feasibility rule prefers constraints to objective function, the objective function information has been exploited as follows: if the offspring cannot survive into the next generation and if the objective function value of the offspring is better than that of the parent, then the offspring is stored into a predefined archive. Subsequently, the individuals in the archive are used to replace some individuals in the population according to a replacement mechanism. Moreover, a mutation strategy is proposed to help the population jump out of a local optimum in the infeasible region. Note that, in the replacement mechanism and the mutation strategy, the comparison of individuals is based on objective function. In addition, the information of objective function has also been utilized to generate offspring in DE. By the above processes, this paper achieves an effective balance between constraints and objective function in constrained evolutionary optimization. The performance of our method has been tested on two sets of benchmark test functions, namely, 24 test functions at IEEE CEC2006 and 18 test functions with 10-D and 30-D at IEEE CEC2010. The experimental results have demonstrated that our method shows better or at least competitive performance against other state-of-the-art methods. Furthermore, the advantage of our method increases with the increase of the number of decision variables.
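
    The feasibility rule and the archive idea can be sketched as follows (function names and the example solutions are ours, not the paper's code):

```python
# Sketch of Deb's feasibility rule referenced above, plus the archive
# step that keeps objective information from being discarded: feasible
# beats infeasible, smaller violation beats larger, and only then does
# the objective value decide (minimization).

def feasibility_rule(parent, child):
    """Each solution is (objective_value, constraint_violation >= 0).
    Return the preferred solution."""
    f_p, v_p = parent
    f_c, v_c = child
    if v_c == 0 and v_p == 0:
        return child if f_c < f_p else parent
    if v_c != v_p:
        return child if v_c < v_p else parent
    return child if f_c < f_p else parent

archive = []
parent, child = (5.0, 0.0), (3.0, 0.2)      # child infeasible, better f
survivor = feasibility_rule(parent, child)
if survivor == parent and child[0] < parent[0]:
    archive.append(child)                    # objective info is not wasted
print(survivor, archive)
```

    The archived child can later replace population members, which is how the method balances constraints against the objective function.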

  2. Research on the application of the vehicle network in the optimization of the automobile supply chain

    NASA Astrophysics Data System (ADS)

    Jing, Xuelei; Jia, Baoxian

    2017-09-01

    The Internet of Vehicles is a core supporting technology for several application areas with great development potential, including intelligent transportation, environmental monitoring, goods tracking and the smart grid. To improve the adaptability of data distribution so that it can be used in urban, rural, highway and other vehicle-networking scenarios, this study tests technical means of accurately estimating the parameters that characterize different vehicle-network scenes, and then applies a different distribution strategy to each scenario. Taking into account the limited capacity for data distribution in the vehicle network, the paper adopts a customer-oriented idea to optimize the simulation.

  3. Individualized strategy for clopidogrel suspension in patients undergoing off-pump coronary surgery for acute coronary syndrome: a case-control study.

    PubMed

    Mannacio, Vito; Meier, Pascal; Antignano, Anita; Di Tommaso, Luigi; De Amicis, Vincenzo; Vosa, Carlo

    2014-10-01

    An increasing number of patients presenting for urgent coronary surgery have been exposed to clopidogrel, which constitutes a risk of bleeding and related events. Based on the wide variability in clopidogrel response and platelet function recovery after cessation, we evaluated the role of point-of-care platelet function testing to define the optimal time for off-pump coronary artery bypass graft (CABG) surgery in a case-control study. Three equally matched groups (300 patients in total) undergoing isolated off-pump CABG for acute coronary syndrome were compared. Group A were treated with clopidogrel and prospectively underwent a strategy guided by platelet function testing. Outcomes were compared with 2 propensity score matched groups: group B underwent CABG after the currently recommended 5 days without clopidogrel; group C were never exposed to clopidogrel. Patients in group A had reduced postoperative bleeding compared with those in group B (523±202 mL vs 851±605 mL; P<.001) and a lower number of units packed red blood cells (PRBCs) transfused during the postoperative hospital stay (1.2±1.6 units vs 1.9±1.8 units; P=.004). Postoperative bleeding and the number of units of PRBCs transfused were similar in group A and group C. There was no difference in blood-derived products and platelet consumption, mortality, or the need for reoperation among the groups. Patients in group A waited 3.6±1.7 days for surgery. The strategy used for group A saved 280 days of hospital stay in total. The strategy guided by platelet function testing for off-pump CABG offers improved guidance for optimal timing of CABG in patients treated with clopidogrel. This strategy significantly reduces postoperative bleeding and blood consumption, and has a shorter waiting time for surgery than current clinical practice.

  4. Multiple objective optimization in reliability demonstration test

    DOE PAGES

    Lu, Lu; Anderson-Cook, Christine Michaela; Li, Mingyang

    2016-10-01

    Reliability demonstration tests are usually performed in product design or validation processes to demonstrate whether a product meets specified requirements on reliability. For binomial demonstration tests, the zero-failure test has been most commonly used due to its simplicity and use of minimum sample size to achieve an acceptable consumer’s risk level. However, this test can often result in unacceptably high risk for producers as well as a low probability of passing the test even when the product has good reliability. This paper explicitly explores the interrelationship between multiple objectives that are commonly of interest when planning a demonstration test and proposes structured decision-making procedures using a Pareto front approach for selecting an optimal test plan based on simultaneously balancing multiple criteria. Different strategies are suggested for scenarios with different user priorities and graphical tools are developed to help quantify the trade-offs between choices and to facilitate informed decision making. As a result, potential impacts of some subjective user inputs on the final decision are studied to offer insights and useful guidance for general applications.
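
    The consumer's/producer's risk tension described above can be illustrated for the zero-failure test (our sketch, not the paper's Pareto-front machinery):

```python
# Sketch of the zero-failure binomial demonstration test trade-off: the
# smallest sample size meeting a consumer's risk bound can still leave a
# large producer's risk even when true reliability is good.
from math import ceil, log

def zero_failure_n(r_lower, consumer_risk):
    """Smallest n with r_lower**n <= consumer_risk (all n units pass)."""
    return ceil(log(consumer_risk) / log(r_lower))

n = zero_failure_n(r_lower=0.90, consumer_risk=0.10)
producer_risk = 1 - 0.97 ** n   # chance a good (R=0.97) product fails
print(n, round(producer_risk, 3))
```

    With a reliability lower bound of 0.90 and 10% consumer's risk, 22 zero-failure units are required, yet a product with true reliability 0.97 still fails the test almost half the time, which is exactly the multi-objective tension the Pareto-front approach is designed to expose.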

  5. Optimality of the barrier strategy in de Finetti's dividend problem for spectrally negative Lévy processes: An alternative approach

    NASA Astrophysics Data System (ADS)

    Yin, Chuancun; Wang, Chunwei

    2009-11-01

    The optimal dividend problem proposed in de Finetti [1] is to find the dividend-payment strategy that maximizes the expected discounted value of dividends which are paid to the shareholders until the company is ruined. Avram et al. [9] studied the case when the risk process is modelled by a general spectrally negative Lévy process and Loeffen [10] gave sufficient conditions under which the optimal strategy is of the barrier type. Recently Kyprianou et al. [11] strengthened the result of Loeffen [10] which established a larger class of Lévy processes for which the barrier strategy is optimal among all admissible ones. In this paper we use an analytical argument to re-investigate the optimality of barrier dividend strategies considered in the three recent papers.
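
    For context, the standard scale-function formulas behind the barrier strategy can be sketched as follows (our hedged summary of results in this literature, using the q-scale function W^{(q)}, not a reproduction of the paper's argument):

```latex
% Expected discounted dividends under a barrier strategy at level b,
% for a spectrally negative Levy process with q-scale function W^{(q)}:
v_b(x) =
\begin{cases}
  \dfrac{W^{(q)}(x)}{W^{(q)\prime}(b)}, & 0 \le x \le b,\\[1.5ex]
  x - b + \dfrac{W^{(q)}(b)}{W^{(q)\prime}(b)}, & x > b.
\end{cases}
% A candidate optimal barrier is the last global minimum of W^{(q)\prime}:
b^{*} = \sup\bigl\{ b \ge 0 : W^{(q)\prime}(b) \le W^{(q)\prime}(x)
        \ \text{for all } x \ge 0 \bigr\}.
```

    Loeffen-type sufficient conditions amount to regularity of W^{(q)\prime} (e.g. convexity, which holds when the Lévy measure has a completely monotone density), under which the barrier strategy at b* is optimal among all admissible strategies.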

  6. Optimum allocation of test resources and comparison of breeding strategies for hybrid wheat.

    PubMed

    Longin, C Friedrich H; Mi, Xuefei; Melchinger, Albrecht E; Reif, Jochen C; Würschum, Tobias

    2014-10-01

    The use of a breeding strategy combining the evaluation of line per se performance with testcross performance maximizes annual selection gain for hybrid wheat breeding. Recent experimental studies confirmed a high commercial potential for hybrid wheat, requiring the design of optimum breeding strategies. Our objectives were to (1) determine the optimum allocation of the type and number of testers, the number of test locations and the number of doubled haploid lines for different breeding strategies, (2) identify the best breeding strategy and (3) elaborate key parameters for an efficient hybrid wheat breeding program. We performed model calculations using the selection gain for grain yield as the target variable to optimize the number of lines, testers and test locations in four different breeding strategies. A breeding strategy (BS2) combining the evaluation of line per se performance and general combining ability (GCA) had a far larger annual selection gain across all considered scenarios than a breeding strategy (BS1) focusing only on GCA. In the combined strategy, the production of testcross seed conducted in parallel with the first yield trial for line per se performance (BS2rapid) resulted in a further increase of the annual selection gain. For the current situation in hybrid wheat, the relative superiority of the strategy BS2rapid amounted to 67% in annual selection gain compared to BS1. By varying a large number of parameters, we identified the high costs of hybrid seed production and the low variance of GCA in hybrid wheat breeding as key parameters limiting selection gain in BS2rapid.
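
    The annual-selection-gain comparison can be caricatured with the breeder's equation (all numbers below are assumed for illustration and do not reproduce the paper's model calculations):

```python
# Back-of-envelope sketch of comparing breeding strategies by annual
# selection gain: gain = i * r * sigma_g / L, with selection intensity i,
# selection accuracy r, genetic standard deviation sigma_g, and cycle
# length L in years. All inputs are invented for illustration.

def annual_gain(intensity, accuracy, sigma_g, cycle_years):
    return intensity * accuracy * sigma_g / cycle_years

bs1 = annual_gain(2.06, 0.60, 0.5, 5)        # GCA-only strategy, slower
bs2rapid = annual_gain(2.06, 0.75, 0.5, 4)   # combined + parallel testcross
print(round(bs2rapid / bs1, 2))  # 1.56
```

    Shortening the cycle and raising accuracy both enter the ratio multiplicatively, which is why the combined rapid strategy dominates in the paper's scenarios.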

  7. Optimal management strategies in variable environments: Stochastic optimal control methods

    USGS Publications Warehouse

    Williams, B.K.

    1985-01-01

    Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. 
Similarities could be seen in the influence of both the discount rate and the climatic patterns on optimal harvest strategies. In general, decreases in either the discount rate or in the frequency of favorable weather patterns led to a more conservative defoliation policy. This did not hold, however, for plants in states of low vigor. Optimal control for shadscale and winterfat tended to stabilize on a policy of heavy defoliation stress, followed by one or more seasons of rest. Big sagebrush required a policy of heavy summer defoliation when sufficient active shoot material is present at the beginning of the growing season. The comparison of fixed and optimal strategies indicated considerable improvement in defoliation yields when optimal strategies are followed. The superior performance was attributable to increased defoliation of plants in states of high vigor. Improvements were found for both discounted and undiscounted yields.
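
The finite-state, finite-action, infinite-horizon Markov decision process underlying the optimization model can be illustrated with standard value iteration; the two-action, three-state defoliation example below (transition probabilities, rewards, discount factor) is entirely hypothetical, not taken from the study:

```python
import numpy as np

# Toy defoliation MDP: states = plant-vigor levels (low/medium/high),
# actions = rest vs. heavy defoliation. P[a, s, s'] and R[a, s] are
# hypothetical placeholders, not fitted to any shrub species.
P = np.array([
    [[0.9, 0.1, 0.0], [0.3, 0.6, 0.1], [0.0, 0.3, 0.7]],   # rest
    [[0.6, 0.4, 0.0], [0.1, 0.5, 0.4], [0.0, 0.1, 0.9]],   # heavy defoliation
])
R = np.array([
    [0.0, 0.2, 0.4],   # rest: little harvest yield
    [0.1, 0.8, 1.5],   # heavy: yield rises with vigor
])
gamma = 0.95           # discount factor for future returns

def value_iteration(P, R, gamma, tol=1e-8):
    """Return the optimal value function and policy of a discounted MDP."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * P @ V      # Q[a, s] = R[a, s] + gamma * E[V(s')]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

V, policy = value_iteration(P, R, gamma)
```

Discounted versus time-averaged objectives, as compared in the paper, correspond to different choices of the criterion in this same framework.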

  8. CMOST: an open-source framework for the microsimulation of colorectal cancer screening strategies.

    PubMed

    Prakash, Meher K; Lang, Brian; Heinrich, Henriette; Valli, Piero V; Bauerfeind, Peter; Sonnenberg, Amnon; Beerenwinkel, Niko; Misselwitz, Benjamin

    2017-06-05

    Colorectal cancer (CRC) is a leading cause of cancer-related mortality. CRC incidence and mortality can be reduced by several screening strategies, including colonoscopy, but randomized CRC prevention trials face significant obstacles such as the need for large study populations with long follow-up. Therefore, CRC screening strategies will likely be designed and optimized based on computer simulations. Several computational microsimulation tools have been reported for estimating efficiency and cost-effectiveness of CRC prevention. However, none of these tools is publicly available. There is a need for an open source framework to answer practical questions including testing of new screening interventions and adapting findings to local conditions. We developed and implemented a new microsimulation model, Colon Modeling Open Source Tool (CMOST), for modeling the natural history of CRC, simulating the effects of CRC screening interventions, and calculating the resulting costs. CMOST facilitates automated parameter calibration against epidemiological adenoma prevalence and CRC incidence data. Predictions of CMOST were highly similar compared to a large endoscopic CRC prevention study as well as predictions of existing microsimulation models. We applied CMOST to calculate the optimal timing of a screening colonoscopy. CRC incidence and mortality are reduced most efficiently by a colonoscopy between the ages of 56 and 59; while discounted life years gained (LYG) is maximal at 49-50 years. With a dwell time of 13 years, the most cost-effective screening is at 59 years, at $17,211 discounted USD per LYG. While cost-efficiency varied according to dwell time it did not influence the optimal time point of screening interventions within the tested range. Predictions of CMOST are highly similar compared to a randomized CRC prevention trial as well as those of other microsimulation tools. 
This open source tool will enable health-economics analyses for various countries, health-care scenarios and CRC prevention strategies. CMOST is freely available under the GNU General Public License at https://gitlab.com/misselwb/CMOST.

  9. Optimal Sensor Allocation for Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann

    2004-01-01

Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration, etc.) to measure the system parameters. Efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as the weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes the fault diagnosability, subject to specified weight, volume, power, and cost constraints, is required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
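
The allocation problem above is essentially a (multidimensional) knapsack: maximize diagnosability subject to resource budgets. A minimal sketch with a single cost budget and made-up sensor gains and costs; the paper itself resorts to Lagrangian relaxation for instances too large for exhaustive search:

```python
from itertools import combinations

# Hypothetical catalog: (name, diagnosability gain, cost). The budget is
# a made-up single constraint standing in for weight/volume/power/cost.
sensors = [("temp", 4.0, 3), ("pressure", 5.0, 4),
           ("vibration", 6.0, 5), ("current", 3.0, 2)]
budget = 9

def best_allocation(sensors, budget):
    """Exhaustive search over sensor subsets (fine for small catalogs)."""
    best_gain, best_set = 0.0, ()
    for r in range(len(sensors) + 1):
        for subset in combinations(sensors, r):
            cost = sum(s[2] for s in subset)
            gain = sum(s[1] for s in subset)
            if cost <= budget and gain > best_gain:
                best_gain, best_set = gain, tuple(s[0] for s in subset)
    return best_gain, best_set

gain, chosen = best_allocation(sensors, budget)
```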

  10. Interrelations of stress, optimism and control in older people's psychological adjustment.

    PubMed

    Bretherton, Susan Jane; McLean, Louise Anne

    2015-06-01

    To investigate the influence of perceived stress, optimism and perceived control of internal states on the psychological adjustment of older adults. The sample consisted of 212 older adults, aged between 58 and 103 (M = 80.42 years, SD = 7.31 years), living primarily in retirement villages in Melbourne, Victoria. Participants completed the Perceived Stress Scale, Life Orientation Test-Revised, Perceived Control of Internal States Scale and the World Health Organisation Quality of Life-Bref. Optimism significantly mediated the relationship between older people's perceived stress and psychological health, and perceived control of internal states mediated the relationships among stress, optimism and psychological health. The variables explained 49% of the variance in older people's psychological adjustment. It is suggested that strategies to improve optimism and perceived control may improve the psychological adjustment of older people struggling to adapt to life's stressors. © 2014 ACOTA.

  11. Real-time discrete suboptimal control for systems with input and state delays: Experimental tests on a dehydration process.

    PubMed

    Rodríguez-Guerrero, Liliam; Santos-Sánchez, Omar-Jacobo; Cervantes-Escorcia, Nicolás; Romero, Hugo

    2017-11-01

This article presents a suboptimal control strategy with finite horizon for affine nonlinear discrete systems with both state and input delays. The Dynamic Programming Approach is used to obtain the suboptimal control sequence, but in order to avoid the computation of the Bellman functional, a numerical approximation of this function is proposed in every step. The feasibility of our proposal is demonstrated via an experimental test on a dehydration process, and the results obtained show good performance and behavior of the process. Then, to demonstrate the benefits of this kind of control strategy, the results are compared with a non-optimal control strategy, specifically with results produced by an industrial Proportional Integral Derivative (PID) Honeywell controller tuned using the Ziegler-Nichols method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Effects of disulfiram on choice behavior in a rodent gambling task: association with catecholamine levels.

    PubMed

    Di Ciano, Patricia; Manvich, Daniel F; Pushparaj, Abhiram; Gappasov, Andrew; Hess, Ellen J; Weinshenker, David; Le Foll, Bernard

    2018-01-01

    Gambling disorder is a growing societal concern, as recognized by its recent classification as an addictive disorder in the DSM-5. Case reports have shown that disulfiram reduces gambling-related behavior in humans. The purpose of the present study was to determine whether disulfiram affects performance on a rat gambling task, a rodent version of the Iowa gambling task in humans, and whether any changes were associated with alterations in dopamine and/or norepinephrine levels. Rats were administered disulfiram prior to testing on the rat gambling task or prior to analysis of dopamine or norepinephrine levels in brain homogenates. Rats in the behavioral task were divided into two subgroups (optimal vs suboptimal) based on their baseline levels of performance in the rat gambling task. Rats in the optimal group chose the advantageous strategy more, and rats in the suboptimal group (a parallel to problem gambling) chose the disadvantageous strategy more. Rats were not divided into optimal or suboptimal groups prior to neurochemical analysis. Disulfiram administered 2 h, but not 30 min, before the task dose-dependently improved choice behavior in the rats with an initial disadvantageous "gambling-like" strategy, while having no effect on the rats employing an advantageous strategy. The behavioral effects of disulfiram were associated with increased striatal dopamine and decreased striatal norepinephrine. These findings suggest that combined actions on dopamine and norepinephrine may be a useful treatment for gambling disorders.

  13. Cell-Mediated Immunity to Target the Persistent Human Immunodeficiency Virus Reservoir

    PubMed Central

    Montaner, Luis J.

    2017-01-01

Effective clearance of virally infected cells requires the sequential activity of innate and adaptive immunity effectors. In human immunodeficiency virus (HIV) infection, naturally induced cell-mediated immune responses rarely eradicate infection. However, optimized immune responses could potentially be leveraged in HIV cure efforts if epitope escape and lack of sustained effector memory responses were to be addressed. Here we review leading HIV cure strategies that harness cell-mediated control against HIV in stably suppressed antiretroviral-treated subjects. We focus on strategies that may maximize target recognition and eradication by the sequential activation of a reconstituted immune system, together with delivery of optimal T-cell responses that can eliminate the reservoir and serve as means to maintain control of HIV spread in the absence of antiretroviral therapy (ART). As evidenced by the evolution of ART, we argue that a combination of immune-based strategies will be a superior path to cell-mediated HIV control and eradication. Available data from several human pilot trials already identify target strategies that may maximize antiviral pressure by joining innate and engineered T cell responses toward testing for sustained HIV remission and/or cure. PMID:28520969

  14. A FABP-ulous 'rule out' strategy? Heart fatty acid binding protein and troponin for rapid exclusion of acute myocardial infarction.

    PubMed

    Body, Richard; McDowell, Garry; Carley, Simon; Wibberley, Christopher; Ferguson, Jamie; Mackway-Jones, Kevin

    2011-08-01

    Many Emergency Departments (EDs) utilise 'triple marker' testing with CK-MB, myoglobin and troponin I (cTnI) to exclude acute myocardial infarction (AMI) within hours of presentation. We evaluated the ability of 8 biomarkers to rapidly exclude AMI at the point of presentation and investigated whether 'triple marker' testing represents the optimal multimarker strategy. We recruited patients who presented to the ED with suspected cardiac chest pain occurring within 24 h. Blood was drawn at the time of presentation. Diagnostic value was assessed by calculating the area under the ROC curve (AUC) and a multivariate model was constructed by logistic regression. The primary outcome was a diagnosis of AMI, established by ≥12-h troponin testing in all patients. 705 included patients underwent venepuncture a median of 3.5 h after symptom onset. Heart fatty acid binding protein (H-FABP) had an AUC of 0.86 (95% CI 0.82-0.90), which was significantly higher than any other biomarker including cTnI. While no single biomarker could enable exclusion of AMI, multivariate analysis identified cTnI and H-FABP as the optimal biomarker combination. Combined with clinical risk stratification, this strategy had a sensitivity of 96.9%, specificity of 54.7%, PPV 32.4% and NPV 98.8%. We have derived an algorithm that would enable AMI to be immediately excluded in 315 (44.7%) patients at the cost of missing 6 AMIs per 1000 patients treated. While the risk is likely to be unacceptable for clinical implementation, we have highlighted an area for future development using serial testing and increasingly sensitive assays. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
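
The reported figures (sensitivity 96.9%, specificity 54.7%, PPV 32.4%, NPV 98.8%) are standard 2x2 diagnostic accuracy measures. The helper below computes them from a confusion table, using illustrative counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic accuracy measures for a rule-out test."""
    sensitivity = tp / (tp + fn)   # fraction of AMIs detected
    specificity = tn / (tn + fp)   # fraction of non-AMIs ruled out
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Illustrative counts only (not from the study):
sens, spec, ppv, npv = diagnostic_metrics(tp=96, fp=200, fn=4, tn=300)
```

For a rule-out strategy, NPV is the figure that matters most: it bounds the rate of missed AMIs among discharged patients.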

  15. Field Test of Wake Steering at an Offshore Wind Farm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleming, Paul; Annoni, Jennifer; Shah, Jigar J.

In this paper, a field test of wake steering control is presented. The field test is the result of a collaboration between the National Renewable Energy Laboratory (NREL) and Envision Energy, a smart energy management company and turbine manufacturer. In the campaign, an array of turbines within an operating commercial offshore wind farm in China had the normal yaw controller modified to implement wake steering according to a yaw control strategy. The strategy was designed using NREL wind farm models, including a computational fluid dynamics model, SOWFA, for understanding wake dynamics and an engineering model, FLORIS, for yaw control optimization. Results indicate that, within the certainty afforded by the data, the wake-steering controller was successful in increasing power capture, by amounts similar to those predicted from the models.

  16. Field Test of Wake Steering at an Offshore Wind Farm

    DOE PAGES

    Fleming, Paul; Annoni, Jennifer; Shah, Jigar J.; ...

    2017-02-06

In this paper, a field test of wake steering control is presented. The field test is the result of a collaboration between the National Renewable Energy Laboratory (NREL) and Envision Energy, a smart energy management company and turbine manufacturer. In the campaign, an array of turbines within an operating commercial offshore wind farm in China had the normal yaw controller modified to implement wake steering according to a yaw control strategy. The strategy was designed using NREL wind farm models, including a computational fluid dynamics model, SOWFA, for understanding wake dynamics and an engineering model, FLORIS, for yaw control optimization. Results indicate that, within the certainty afforded by the data, the wake-steering controller was successful in increasing power capture, by amounts similar to those predicted from the models.

  17. Pumping strategies for management of a shallow water table: The value of the simulation-optimization approach

    USGS Publications Warehouse

    Barlow, P.M.; Wagner, B.J.; Belitz, K.

    1996-01-01

    The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.
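
Simulation-optimization of pumping is often posed as a linear program over a response matrix extracted from the flow model: minimize total pumping while each control point receives enough drawdown. The sketch below uses a hypothetical 2-well, 2-control-point response matrix and a brute-force grid search in place of an LP solver, relying on the standard linear-superposition assumption for drawdowns:

```python
import numpy as np

# Hypothetical response matrix: drawdown at 2 control points per unit
# pumping at 2 managed wells (from a calibrated flow model in practice).
A = np.array([[0.8, 0.3],
              [0.2, 0.9]])
target = np.array([1.0, 1.0])   # required water-table drawdown at each point

# Minimize total pumping q1 + q2 subject to A @ q >= target, by
# brute-force search over a pumping-rate grid (an LP solver would be
# used for realistic numbers of wells and control points).
rates = np.linspace(0.0, 2.0, 201)
best = None
for q1 in rates:
    for q2 in rates:
        q = np.array([q1, q2])
        if np.all(A @ q >= target - 1e-12):
            total = q1 + q2
            if best is None or total < best[0]:
                best = (total, q)

total_pumping, q_opt = best
```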

  18. Optimizing hydraulic fracture design in the diatomite formation, Lost Hills Field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, D.G.; Klins, M.A.; Manrique, J.F.

    1996-12-31

Since 1988, over 1.3 billion pounds of proppant have been placed in the Lost Hills Field of Kern County, California, in over 2700 hydraulic fracture treatments involving investments of about $150 million. In 1995, systematic reevaluation of the standard, field trial-based fracture design began. Reservoir, geomechanical, and hydraulic fracture characterization; production and fracture modeling; sensitivity analysis; and field test results were integrated to optimize designs with regard to proppant volume, proppant ramps, and perforating strategy. The results support a reduction in proppant volume from 2500 to 1700 lb/ft, which will save about $50,000 per well, totalling over $3 million per year. Vertical coverage was found to be a key component of fracture quality, which could be optimized by eliminating perforations from lower stress intervals, reducing the total number of perforations, and reducing peak slurry loading from 16 to 12 ppa. A relationship between variations in lithology, pore pressure, and stress was observed. Point-source perforating strategies were investigated and variable multiple fracture behavior was observed. The discussed approach has application in areas where stresses are variable; pay zones are thick; hydraulic fracture design is based primarily on empirical, trial-and-error field test results; and effective, robust predictive models involving real-data feedback have not been incorporated into the design improvement process.

  19. An optimized proportional-derivative controller for the human upper extremity with gravity.

    PubMed

    Jagodnik, Kathleen M; Blana, Dimitra; van den Bogert, Antonie J; Kirsch, Robert F

    2015-10-15

    When Functional Electrical Stimulation (FES) is used to restore movement in subjects with spinal cord injury (SCI), muscle stimulation patterns should be selected to generate accurate and efficient movements. Ideally, the controller for such a neuroprosthesis will have the simplest architecture possible, to facilitate translation into a clinical setting. In this study, we used the simulated annealing algorithm to optimize two proportional-derivative (PD) feedback controller gain sets for a 3-dimensional arm model that includes musculoskeletal dynamics and has 5 degrees of freedom and 22 muscles, performing goal-oriented reaching movements. Controller gains were optimized by minimizing a weighted sum of position errors, orientation errors, and muscle activations. After optimization, gain performance was evaluated on the basis of accuracy and efficiency of reaching movements, along with three other benchmark gain sets not optimized for our system, on a large set of dynamic reaching movements for which the controllers had not been optimized, to test ability to generalize. Robustness in the presence of weakened muscles was also tested. The two optimized gain sets were found to have very similar performance to each other on all metrics, and to exhibit significantly better accuracy, compared with the three standard gain sets. All gain sets investigated used physiologically acceptable amounts of muscular activation. It was concluded that optimization can yield significant improvements in controller performance while still maintaining muscular efficiency, and that optimization should be considered as a strategy for future neuroprosthesis controller design. Published by Elsevier Ltd.
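
Gain optimization by simulated annealing, as used in the study, can be sketched on a toy plant. The double-integrator dynamics, cost function, and cooling schedule below are placeholders, not the paper's 5-degree-of-freedom musculoskeletal model:

```python
import math
import random

random.seed(1)

def step_response_cost(kp, kd, steps=400, dt=0.01):
    """Simulate a unit step on a toy double integrator under PD control;
    cost = integrated squared position error (the plant is a stand-in
    for the arm model, not a model of it)."""
    x, v, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        u = kp * (1.0 - x) - kd * v   # PD law on position error
        v += u * dt
        x += v * dt
        cost += (1.0 - x) ** 2 * dt
    return cost

def anneal(cost_fn, x0, t0=1.0, cooling=0.995, iters=2000):
    """Basic simulated annealing over a 2-D gain vector (kp, kd)."""
    x = list(x0)
    best = cur = cost_fn(*x)
    best_x = list(x)
    t = t0
    for _ in range(iters):
        cand = [max(0.0, g + random.gauss(0.0, 0.5)) for g in x]
        c = cost_fn(*cand)
        # accept improvements always, worsenings with Boltzmann probability
        if c < cur or random.random() < math.exp((cur - c) / t):
            x, cur = cand, c
            if c < best:
                best, best_x = c, list(cand)
        t *= cooling
    return best_x, best

gains, cost = anneal(step_response_cost, [1.0, 1.0])
```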

  20. Precision cancer immunotherapy: optimizing dendritic cell-based strategies to induce tumor antigen-specific T-cell responses against individual patient tumors.

    PubMed

    Osada, Takuya; Nagaoka, Koji; Takahara, Masashi; Yang, Xiao Yi; Liu, Cong-Xiao; Guo, Hongtao; Roy Choudhury, Kingshuk; Hobeika, Amy; Hartman, Zachary; Morse, Michael A; Lyerly, H Kim

    2015-05-01

Most dendritic cell (DC)-based vaccines have loaded the DC with defined antigens, but loading with autologous tumor-derived antigens would generate DCs that activate personalized tumor-specific T-cell responses. We hypothesized that DC matured with an optimized combination of reagents and loaded with tumor-derived antigens using a clinically feasible electroporation strategy would induce potent antitumor immunity. We first studied the effects on DC maturation and antigen presentation of the addition of picibanil (OK432) to a combination of zoledronic acid, tumor necrosis factor-α, and prostaglandin E2. Using DC matured with the optimized combination, we tested 2 clinically feasible sources of autologous antigen for electroloading, total tumor mRNA or total tumor lysate, to determine which stimulated more potent antigen-specific T cells in vitro and activated more potent antitumor immunity in vivo. The combination of tumor necrosis factor-α/prostaglandin E2/zoledronic acid/OK432 generated DC with high expression of maturation markers and antigen-specific T-cell stimulatory function in vitro. Mature DC electroloaded with tumor-derived mRNA [mRNA electroporated dendritic cell (EPDC)] induced greater expansion of antigen-specific T cells in vitro than DC electroloaded with tumor lysate (lysate EPDC). In a therapeutic model of MC38-carcinoembryonic antigen colon cancer-bearing mice, vaccination with mRNA EPDC induced the most efficient anti-carcinoembryonic antigen cellular immune response, which significantly suppressed tumor growth. In conclusion, mature DC electroloaded with tumor-derived mRNA are a potent cancer vaccine, especially useful when specific tumor antigens for vaccination have not been identified, allowing autologous tumor, or, if unavailable, allogeneic cell lines, to be used as an unbiased source of antigen. Our data support clinical testing of this strategy.

  1. Human place and response learning: navigation strategy selection, pupil size and gaze behavior.

    PubMed

    de Condappa, Olivier; Wiener, Jan M

    2016-01-01

    In this study, we examined the cognitive processes and ocular behavior associated with on-going navigation strategy choice using a route learning paradigm that distinguishes between three different wayfinding strategies: an allocentric place strategy, and the egocentric associative cue and beacon response strategies. Participants approached intersections of a known route from a variety of directions, and were asked to indicate the direction in which the original route continued. Their responses in a subset of these test trials allowed the assessment of strategy choice over the course of six experimental blocks. The behavioral data revealed an initial maladaptive bias for a beacon response strategy, with shifts in favor of the optimal configuration place strategy occurring over the course of the experiment. Response time analysis suggests that the configuration strategy relied on spatial transformations applied to a viewpoint-dependent spatial representation, rather than direct access to an allocentric representation. Furthermore, pupillary measures reflected the employment of place and response strategies throughout the experiment, with increasing use of the more cognitively demanding configuration strategy associated with increases in pupil dilation. During test trials in which known intersections were approached from different directions, visual attention was directed to the landmark encoded during learning as well as the intended movement direction. Interestingly, the encoded landmark did not differ between the three navigation strategies, which is discussed in the context of initial strategy choice and the parallel acquisition of place and response knowledge.

  2. Research on the strategy of underwater united detection fusion and communication using multi-sensor

    NASA Astrophysics Data System (ADS)

    Xu, Zhenhua; Huang, Jianguo; Huang, Hai; Zhang, Qunfei

    2011-09-01

To solve the distributed detection fusion problem in underwater target detection when the signal-to-noise ratio (SNR) of the acoustic channel is low, a new strategy for united detection fusion and communication using multiple sensors was proposed. The performance of detection fusion was studied and compared, based on the Neyman-Pearson principle, when the binary phase shift keying (BPSK) and on-off keying (OOK) modes were used by the local sensors. A comparative simulation and analysis between the optimal likelihood ratio test and the proposed strategy was completed, and both the theoretical analysis and the simulation indicate that the proposed strategy can improve detection performance effectively. In theory, the proposed strategy of united detection fusion and communication is of great significance for the establishment of an underwater target detection system.
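
For binary local decisions with known sensor operating points, the optimal likelihood ratio test used as the benchmark reduces to the classical Chair-Varshney fusion rule. A sketch with hypothetical local detection and false-alarm probabilities:

```python
import math

# Hypothetical local-sensor operating points (not from the paper):
pd = [0.80, 0.70, 0.90]    # local probabilities of detection
pfa = [0.10, 0.05, 0.15]   # local probabilities of false alarm

def fusion_llr(decisions, pd, pfa):
    """Chair-Varshney log-likelihood ratio for fusing binary local
    decisions u_i: each sensor contributes log(Pd/Pfa) if it fired,
    log((1-Pd)/(1-Pfa)) if it did not."""
    llr = 0.0
    for u, d, f in zip(decisions, pd, pfa):
        llr += math.log(d / f) if u == 1 else math.log((1.0 - d) / (1.0 - f))
    return llr

# Fusion rule: declare a target when the LLR exceeds a threshold chosen
# for the desired global false-alarm rate (0.0 here is arbitrary).
llr = fusion_llr([1, 0, 1], pd, pfa)
detected = llr > 0.0
```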

  3. Symbiosis-Based Alternative Learning Multi-Swarm Particle Swarm Optimization.

    PubMed

    Niu, Ben; Huang, Huali; Tan, Lijing; Duan, Qiqi

    2017-01-01

Inspired by the mutual cooperation of symbiosis in natural ecosystems, this paper proposes a new variant of PSO, named Symbiosis-based Alternative Learning Multi-swarm Particle Swarm Optimization (SALMPSO). A learning probability to select one exemplar out of the center positions, the local best position, and the historical best position, including the experience of internal and external multiple swarms, is used to keep the diversity of the population. Two different levels of social interaction within and between multiple swarms are proposed. In the search process, particles not only exchange social experience with others from their own sub-swarms, but are also influenced by the experience of particles from other fellow sub-swarms. According to the different exemplars and learning strategies, this model is instantiated as four variants of SALMPSO, and a set of 15 test functions with 10, 30 and 50 dimensions is used to compare them with several existing PSO variants. Experimental results demonstrate that the alternative learning strategy in each SALMPSO version exhibits better performance in terms of convergence speed and optimal values on most multimodal functions in our simulation.
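
The global-best particle swarm update that SALMPSO builds on can be sketched in a few lines; the parameters and test function below are generic textbook choices, not the paper's benchmark settings:

```python
import random

random.seed(0)

def pso(f, dim=5, n_particles=20, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO: velocity blends inertia, attraction to
    the personal best, and attraction to the swarm's global best."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)   # simple unimodal test function
best, best_val = pso(sphere)
```

SALMPSO's contribution is to replace the single gbest exemplar with a probabilistic choice among several exemplars drawn from multiple sub-swarms.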

  4. Lateral Flow Assay Based on Paper-Hydrogel Hybrid Material for Sensitive Point-of-Care Detection of Dengue Virus.

    PubMed

    Choi, Jane Ru; Yong, Kar Wey; Tang, Ruihua; Gong, Yan; Wen, Ting; Yang, Hui; Li, Ang; Chia, Yook Chin; Pingguan-Murphy, Belinda; Xu, Feng

    2017-01-01

    Paper-based devices have been broadly used for the point-of-care detection of dengue viral nucleic acids due to their simplicity, cost-effectiveness, and readily observable colorimetric readout. However, their moderate sensitivity and functionality have limited their applications. Despite the above-mentioned advantages, paper substrates are lacking in their ability to control fluid flow, in contrast to the flow control enabled by polymer substrates (e.g., agarose) with readily tunable pore size and porosity. Herein, taking the benefits from both materials, the authors propose a strategy to create a hybrid substrate by incorporating agarose into the test strip to achieve flow control for optimal biomolecule interactions. As compared to the unmodified test strip, this strategy allows sensitive detection of targets with an approximately tenfold signal improvement. Additionally, the authors showcase the potential of functionality improvement by creating multiple test zones for semi-quantification of targets, suggesting that the number of visible test zones is directly proportional to the target concentration. The authors further demonstrate the potential of their proposed strategy for clinical assessment by applying it to their prototype sample-to-result test strip to sensitively and semi-quantitatively detect dengue viral RNA from the clinical blood samples. This proposed strategy holds significant promise for detecting various targets for diverse future applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. An adaptive sharing elitist evolution strategy for multiobjective optimization.

    PubMed

    Costa, Lino; Oliveira, Pedro

    2003-01-01

    Almost all approaches to multiobjective optimization are based on Genetic Algorithms (GAs), and implementations based on Evolution Strategies (ESs) are very rare. Thus, it is crucial to investigate how ESs can be extended to multiobjective optimization, since they have, in the past, proven to be powerful single objective optimizers. In this paper, we present a new approach to multiobjective optimization, based on ESs. We call this approach the Multiobjective Elitist Evolution Strategy (MEES) as it incorporates several mechanisms, like elitism, that improve its performance. When compared with other algorithms, MEES shows very promising results in terms of performance.

  6. Optimization model of vaccination strategy for dengue transmission

    NASA Astrophysics Data System (ADS)

    Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.

    2014-02-01

Dengue fever is an emerging tropical and subtropical disease caused by dengue virus infection. Vaccination is a preventive measure against epidemics in a population. The host-vector model is modified to include a vaccination factor to prevent the occurrence of a dengue epidemic in a population. An optimal vaccination strategy using a non-linear objective function is proposed. Genetic algorithm programming techniques are combined with the fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, an appropriate vaccination strategy, obtained by minimizing the cost function so as to reduce the size of the epidemic, is analyzed. A numerical simulation for some specific cases of the vaccination strategy is shown.
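
Coupling an epidemic model with fourth-order Runge-Kutta integration, as the paper does, can be sketched with a reduced SIR model that includes a vaccination rate; all parameters below are illustrative, and the paper's actual model is a fuller host-vector system:

```python
# Reduced SIR model with vaccination rate nu; parameters illustrative only.
beta, gamma_r, nu = 0.5, 0.2, 0.05

def deriv(state):
    """Right-hand side of the SIR-with-vaccination ODEs (fractions)."""
    s, i, r = state
    return (-beta * s * i - nu * s,       # susceptibles: infection + vaccination
            beta * s * i - gamma_r * i,   # infected: new cases - recoveries
            gamma_r * i + nu * s)         # removed: recovered + vaccinated

def rk4_step(state, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(state)
    k2 = deriv(tuple(x + 0.5 * h * k for x, k in zip(state, k1)))
    k3 = deriv(tuple(x + 0.5 * h * k for x, k in zip(state, k2)))
    k4 = deriv(tuple(x + h * k for x, k in zip(state, k3)))
    return tuple(x + h * (a + 2 * b + 2 * c + d) / 6.0
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (0.99, 0.01, 0.0)      # initial susceptible, infected, removed
for _ in range(1000):          # integrate 100 time units with h = 0.1
    state = rk4_step(state, 0.1)
```

An outer optimizer (the paper uses a genetic algorithm) would then search over the vaccination rate to minimize a cost combining vaccination effort and epidemic size.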

  7. Optimal resource allocation strategy for two-layer complex networks

    NASA Astrophysics Data System (ADS)

    Ma, Jinlong; Wang, Lixin; Li, Sufeng; Duan, Congwen; Liu, Yu

    2018-02-01

    We study traffic dynamics on two-layer complex networks, focusing on delivery capacity allocation to enhance traffic capacity as measured by the critical value Rc. Given a limited packet-delivering capacity, we propose a delivery capacity allocation strategy that balances the capacities of non-hub and hub nodes to optimize the data flow. At the optimal value of the parameter αc, the maximal network capacity is reached because the proposed allocation strategy assigns most nodes an appropriate share of the delivery capacity. Our work may help network service providers design networks with optimal traffic dynamics.
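
A degree-based allocation rule of this flavor can be sketched as follows; the power-law form and the parameter alpha are illustrative stand-ins for the paper's strategy, which tunes the hub vs. non-hub balance.

```python
def allocate_capacity(degrees, total_capacity, alpha):
    """Split a fixed delivery budget across nodes in proportion to
    degree**alpha: alpha=0 gives a uniform split, alpha=1 a split
    proportional to degree. Illustrative rule, not the paper's exact one."""
    weights = [k ** alpha for k in degrees]
    total_weight = sum(weights)
    return [total_capacity * w / total_weight for w in weights]

degrees = [1, 2, 2, 10]  # one hub, three non-hub nodes
uniform = allocate_capacity(degrees, 30, alpha=0.0)
biased = allocate_capacity(degrees, 30, alpha=1.0)
```

Sweeping alpha and measuring the resulting Rc (via simulation) would locate the optimum αc described in the abstract.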

  8. Optimization strategies based on sequential quadratic programming applied for a fermentation process for butanol production.

    PubMed

    Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens

    2009-11-01

    In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units, as follows: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that, although both strategies produced very similar solutions, the strategy based on the deterministic model suffered from convergence failures and high computational time. The strategy based on the statistical model proved robust and fast, making it more suitable for the flash fermentation process and recommended for real-time applications coupling optimization and control.
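
The nonlinear programming formulation can be illustrated with SciPy's SLSQP implementation of sequential quadratic programming. The objective and conversion constraint below are toy quadratic/linear stand-ins, not the fermentation model; variable names (dilution rate d, recycle ratio r) are hypothetical.

```python
from scipy.optimize import minimize

def neg_productivity(x):
    """Toy surrogate "productivity", negated for minimization."""
    d, r = x
    return -(d * (1.0 - d) + 0.5 * r * (1.0 - r))

def conversion_residual(x):
    """Zero when the toy "conversion" hits the 0.9 target."""
    d, r = x
    return 0.9 - (0.5 + 0.3 * d + 0.4 * r)

res = minimize(neg_productivity, x0=[0.3, 0.3], method="SLSQP",
               bounds=[(0.0, 1.0), (0.0, 1.0)],
               constraints=[{"type": "eq", "fun": conversion_residual}])
```

The same pattern applies whether the objective is evaluated from a deterministic process model or from a fitted statistical (response-surface) model; only the function bodies change.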

  9. "RCL-Pooling Assay": A Simplified Method for the Detection of Replication-Competent Lentiviruses in Vector Batches Using Sequential Pooling.

    PubMed

    Corre, Guillaume; Dessainte, Michel; Marteau, Jean-Brice; Dalle, Bruno; Fenard, David; Galy, Anne

    2016-02-01

    Nonreplicative recombinant HIV-1-derived lentiviral vectors (LV) are increasingly used in gene therapy of various genetic diseases, infectious diseases, and cancer. Before they are used in humans, preparations of LV must undergo extensive quality control testing. In particular, testing of LV must demonstrate the absence of replication-competent lentiviruses (RCL) with suitable methods, on representative fractions of vector batches. Current methods based on cell culture are challenging because high titers of vector batches translate into high volumes of cell culture to be tested in RCL assays. As vector batch size and titers are continuously increasing because of the improvement of production and purification methods, it became necessary for us to modify the current RCL assay based on the detection of p24 in cultures of indicator cells. Here, we propose a practical optimization of this method using a pairwise pooling strategy enabling easier testing of higher vector inoculum volumes. These modifications significantly decrease material handling and operator time, leading to a cost-effective method, while maintaining optimal sensitivity of the RCL testing. This optimized "RCL-pooling assay" improves the feasibility of quality control of large-scale batches of clinical-grade LV while maintaining the same sensitivity.
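
The arithmetic benefit of pooling is easy to sketch. Assuming a simple matrix (row/column) pooling layout — the published pairwise scheme may differ in detail — the first-round assay count drops sharply relative to testing each aliquot individually:

```python
import math

def matrix_pooling_tests(n):
    """First-round pool count when n aliquots are arranged in a
    near-square grid and each row pool and each column pool is
    assayed once (illustrative scheme, not the exact published design).
    A positive row/column intersection flags the candidate aliquot."""
    rows = math.ceil(math.sqrt(n))
    cols = math.ceil(n / rows)
    return rows + cols

individual = 96                      # one assay per aliquot
pooled = matrix_pooling_tests(96)    # row + column pools only
```

Only pools that test positive trigger follow-up assays, so for batches expected to be RCL-negative the pooled workload stays close to this first-round count.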

  10. Power-balancing instantaneous optimization energy management for a novel series-parallel hybrid electric bus

    NASA Astrophysics Data System (ADS)

    Sun, Dongye; Lin, Xinyou; Qin, Datong; Deng, Tao

    2012-11-01

    Energy management (EM) is a core technique of a hybrid electric bus (HEB) for optimizing fuel economy and is unique to the corresponding configuration. Existing control strategies seldom consider battery power management together with internal combustion engine power management. In this paper, a power-balancing instantaneous optimization (PBIO) energy management control strategy is proposed for a novel series-parallel hybrid electric bus. According to the characteristics of the novel series-parallel architecture, the switching boundary condition between series and parallel modes as well as the control rules of the power-balancing strategy are developed. An equivalent fuel model of the battery is implemented and combined with the engine fuel consumption to form the objective function, which minimizes fuel consumption at each sampled time and coordinates the real-time power distribution between the engine and the battery. To validate that the proposed strategy is effective and reasonable, a forward model is built in Matlab/Simulink for simulation, and a dSPACE AutoBox is applied as the controller for hardware-in-the-loop bench testing. Both the simulation and hardware-in-the-loop results demonstrate that the proposed strategy sustains the battery SOC within its operational range and keeps the engine operating near its peak efficiency region. The fuel economy of the series-parallel hybrid electric bus (SPHEB) improved by up to 30.73% compared with the prototype bus, and the PBIO strategy reduced fuel consumption by up to 12.38% relative to a rule-based strategy. The proposed research shows that the PBIO algorithm is applicable in real time, improves the efficiency of the SPHEB system, and suits complicated configurations well.

  11. Communication strategies for enhancing understanding of the behavioral implications of genetic and biomarker tests for disease risk: the role of coherence.

    PubMed

    Cameron, Linda D; Marteau, Theresa M; Brown, Paul M; Klein, William M P; Sherman, Kerry A

    2012-06-01

    Individuals frequently have difficulty understanding how behavior can reduce genetically-conferred risk for diseases such as colon cancer. With increasing opportunities to purchase genetic tests, communication strategies are needed for presenting information in ways that optimize comprehension and adaptive behavior. Using the Common-Sense Model, we tested the efficacy of a strategy for providing information about the relationships (links) among the physiological processes underlying disease risk and protective action on understanding, protective action motivations, and willingness to purchase tests. We tested the generalizability of the strategy's effects across varying risk levels, for genetic tests versus tests of a non-genetic biomarker, and when using graphic and numeric risk formats. In an internet-based experiment, 749 adults from four countries responded to messages about a hypothetical test for colon cancer risk. Messages varied by Risk-Action Link Information (provision or no provision of information describing how a low-fat diet reduces risk given positive results, indicating presence of a gene fault), Risk Increment (20%, 50%, or 80% risk given positive results), Risk Format (numeric or graphic presentation of risk increments), and Test Type (genetic or enzyme). Providing risk-action link information enhanced beliefs of coherence (understanding how a low-fat diet reduces risk) and response efficacy (low-fat diets effectively reduce risk) and lowered appraisals of anticipated risk of colon cancer given positive results. These effects held across risk increments, risk formats, and test types. For genetic tests, provision of risk-action link information reduced the amount individuals were willing to pay for testing. Brief messages explaining how action can reduce genetic and biomarker-detected risks can promote beliefs motivating protective action. 
By enhancing understanding of behavioral control, they may reduce the perceived value of genetic risk information.

  12. Developing a conservation strategy to maximize persistence of an endangered freshwater mussel species while considering management effectiveness and cost

    USGS Publications Warehouse

    Smith, David R.; McRae, Sarah E.; Augspurger, Tom; Ratcliffe, Judith A.; Nichols, Robert B.; Eads, Chris B.; Savidge, Tim; Bogan, Arthur E.

    2015-01-01

    We used a structured decision-making process to develop conservation strategies to increase persistence of Dwarf Wedgemussel (Alasmidonta heterodon) in North Carolina, USA, while accounting for uncertainty in management effectiveness and considering costs. Alternative conservation strategies were portfolios of management actions that differed by location of management actions on the landscape. Objectives of the conservation strategy were to maximize species persistence, maintain genetic diversity, maximize public support, and minimize management costs. We compared 4 conservation strategies: 1) the ‘status quo’ strategy represented current management, 2) the ‘protect the best’ strategy focused on protecting the best populations in the Tar River basin, 3) the ‘expand the distribution’ strategy focused on management of extant populations and establishment of new populations in the Neuse River basin, and 4) the ‘hybrid’ strategy combined elements of each strategy to balance conservation in the Tar and Neuse River basins. A population model informed requirements for population management, and experts projected performance of alternative strategies over a 20-y period. The optimal strategy depended on the relative value placed on competing objectives, which can vary among stakeholders. The protect the best and hybrid strategies were optimal across a wide range of relative values with 2 exceptions: 1) if minimizing management cost was of overriding concern, then status quo was optimal, or 2) if maximizing population persistence in the Neuse River basin was emphasized, then expand the distribution strategy was optimal. The optimal strategy was robust to uncertainty in management effectiveness. Overall, the structured decision process can help identify the most promising strategies for endangered species conservation that maximize conservation benefit given the constraint of limited funding.

  13. Developing interpretable models with optimized set reduction for identifying high risk software components

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.

    1993-01-01

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.

  14. A guide to multi-objective optimization for ecological problems with an application to cackling goose management

    USGS Publications Warehouse

    Williams, Perry J.; Kendall, William L.

    2017-01-01

    Choices in ecological research and management are the result of balancing multiple, often competing, objectives. Multi-objective optimization (MOO) is a formal decision-theoretic framework for solving multiple objective problems. MOO is used extensively in other fields including engineering, economics, and operations research. However, its application for solving ecological problems has been sparse, perhaps due to a lack of widespread understanding. Thus, our objective was to provide an accessible primer on MOO, including a review of methods common in other fields, a review of their application in ecology, and a demonstration on an applied resource management problem. A large class of methods for solving MOO problems can be separated into two strategies: modelling preferences pre-optimization (the a priori strategy), or modelling preferences post-optimization (the a posteriori strategy). The a priori strategy requires describing preferences among objectives without knowledge of how preferences affect the resulting decision. In the a posteriori strategy, the decision maker simultaneously considers a set of solutions (the Pareto optimal set) and makes a choice based on the trade-offs observed in the set. We describe several methods for modelling preferences pre-optimization, including the bounded objective function method, the lexicographic method, and the weighted-sum method. We discuss modelling preferences post-optimization through examination of the Pareto optimal set. We applied each MOO strategy to the natural resource management problem of selecting a population target for cackling goose (Branta hutchinsii minima) abundance. Cackling geese provide food security to Native Alaskan subsistence hunters in the goose's nesting area, but depredate crops on private agricultural fields in wintering areas. 
We developed objective functions to represent the competing objectives related to the cackling goose population target and identified an optimal solution first using the a priori strategy, and then by examining trade-offs in the Pareto set using the a posteriori strategy. We used four approaches for selecting a final solution within the a posteriori strategy; the most common optimal solution, the most robust optimal solution, and two solutions based on maximizing a restricted portion of the Pareto set. We discuss MOO with respect to natural resource management, but MOO is sufficiently general to cover any ecological problem that contains multiple competing objectives that can be quantified using objective functions.
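
The two strategies can be sketched in a few lines; the candidate objective vectors below are hypothetical (persistence, harvest) scores, not values from the cackling goose analysis.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_set(solutions):
    """A posteriori strategy: keep only non-dominated objective vectors."""
    return [s for s in solutions if not any(dominates(t, s) for t in solutions)]

def weighted_sum(solutions, weights):
    """A priori strategy: scalarize with fixed preference weights."""
    return max(solutions, key=lambda s: sum(w * x for w, x in zip(weights, s)))

# Hypothetical (persistence, harvest) scores for four candidate targets
cands = [(0.9, 0.2), (0.7, 0.6), (0.4, 0.8), (0.3, 0.3)]
front = pareto_set(cands)                        # trade-off set to examine
choice = weighted_sum(cands, weights=(0.7, 0.3)) # one solution, given weights
```

The a priori route commits to the weights up front; the a posteriori route defers that value judgment until the trade-offs in `front` are visible.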

  15. Simultaneous versus sequential optimal experiment design for the identification of multi-parameter microbial growth kinetics as a function of temperature.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2010-05-21

    Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. 
Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable.

  16. Intelligent fault recognition strategy based on adaptive optimized multiple centers

    NASA Astrophysics Data System (ADS)

    Zheng, Bo; Li, Yan-Feng; Huang, Hong-Zhong

    2018-06-01

    For recognition principles based on a single optimized center, an important issue is that data with a nonlinear separatrix cannot be recognized accurately. To solve this problem, a novel recognition strategy based on adaptively optimized multiple centers is proposed in this paper. This strategy recognizes data sets with a nonlinear separatrix using multiple centers. Meanwhile, priority levels are introduced into the multi-objective optimization, which considers recognition accuracy, the number of optimized centers, and distance relationships. According to the characteristics of the data, the priority levels are adjusted to adapt the number of optimized centers while preserving accuracy. The proposed method is compared with other methods, including the support vector machine (SVM), neural networks, and the Bayesian classifier. The results demonstrate that the proposed strategy has equal or better recognition ability across data with different distribution characteristics.

  17. Long-Run Savings and Investment Strategy Optimization

    PubMed Central

    Gerrard, Russell; Guillén, Montserrat; Pérez-Marín, Ana M.

    2014-01-01

    We focus on automatic strategies to optimize life cycle savings and investment. Classical optimal savings theory establishes that, given the level of risk aversion, a saver would keep the same relative amount invested in risky assets at any given time. We show that, when optimizing lifecycle investment, performance and risk assessment have to take into account the investor's risk aversion and the maximum amount the investor could lose, simultaneously. When risk aversion and maximum possible loss are considered jointly, an optimal savings strategy is obtained, which follows from constant rather than relative absolute risk aversion. This result is fundamental to prove that if risk aversion and the maximum possible loss are both high, then holding a constant amount invested in the risky asset is optimal for a standard lifetime saving/pension process and outperforms some other simple strategies. Performance comparisons are based on downside risk-adjusted equivalence that is used in our illustration. PMID:24711728

  19. Seeing light at the end of the tunnel: Positive prospective mental imagery and optimism in depression.

    PubMed

    Ji, Julie L; Holmes, Emily A; Blackwell, Simon E

    2017-01-01

    Optimism is associated with positive outcomes across many health domains, from cardiovascular disease to depression. However, we know little about cognitive processes underlying optimism in psychopathology. The present study tested whether the ability to vividly imagine positive events in one's future was associated with dispositional optimism in a sample of depressed adults. Cross-sectional and longitudinal analyses were conducted, using baseline (all participants, N=150) and follow-up data (participants in the control condition only, N=63) from a clinical trial (Blackwell et al., 2015). Vividness of positive prospective imagery, assessed on a laboratory-administered task at baseline, was significantly associated with both current optimism levels at baseline and future (seven months later) optimism levels, including when controlling for potential confounds. Even when depressed, those individuals able to envision a brighter future were more optimistic, and regained optimism more quickly over time, than those less able to do so at baseline. Strategies to increase the vividness of positive prospective imagery may aid development of mental health interventions to boost optimism.

  20. Active and Reactive Power Optimal Dispatch Associated with Load and DG Uncertainties in Active Distribution Network

    NASA Astrophysics Data System (ADS)

    Gao, F.; Song, X. H.; Zhang, Y.; Li, J. F.; Zhao, S. S.; Ma, W. Q.; Jia, Z. Y.

    2017-05-01

    In order to reduce the adverse effects of uncertainty on optimal dispatch in an active distribution network, an optimal dispatch model based on chance-constrained programming is proposed in this paper. In this model, the active and reactive power of DG can be dispatched with the aim of reducing the operating cost. The effect of the operation strategy on cost is reflected in the objective, which contains the cost of network loss, DG curtailment, DG reactive power ancillary service, and power quality compensation. At the same time, the probabilistic constraints reflect the degree of operational risk. The optimal dispatch model is then simplified into a series of single-stage models, which avoids high variable dimensionality and improves convergence speed. The single-stage model is solved using a combination of particle swarm optimization (PSO) and the point estimate method (PEM). Finally, the proposed optimal dispatch model and method are verified on the IEEE 33-bus test system.

  1. A strategy to identify linker-based modules for the allosteric regulation of antibody-antigen binding affinities of different scFvs

    PubMed Central

    Thie, Holger

    2017-01-01

    ABSTRACT Antibody single-chain variable fragments (scFvs) are used in a variety of applications, such as for research, diagnosis and therapy. Essential for these applications is the extraordinary specificity, selectivity and affinity of antibody paratopes, which can also be used for efficient protein purification. However, this use is hampered by the high affinity for the protein to be purified because harsh elution conditions, which may impair folding, integrity or viability of the eluted biomaterials, are typically required. In this study, we developed a strategy to obtain structural elements that provide allosteric modulation of the affinities of different antibody scFvs for their antigen. To identify suitable allosteric modules, a complete set of cyclic permutations of calmodulin variants was generated and tested for modulation of the affinity when substituting the linker between VH and VL. Modulation of affinity induced by addition of different calmodulin-binding peptides at physiologic conditions was demonstrated for 5 of 6 tested scFvs of different specificities and antigens ranging from cell surface proteins to haptens. In addition, a variety of different modulator peptides were tested. Different structural solutions were found in respect of the optimal calmodulin permutation, the optimal peptide and the allosteric effect for scFvs binding to different antigen structures. Significantly, effective linker modules were identified for scFvs with both VH-VL and VL-VH architecture. The results suggest that this approach may offer a rapid, paratope-independent strategy to provide allosteric regulation of affinity for many other antibody scFvs. PMID:28055297

  2. An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization

    DOE PAGES

    Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.

    2017-04-17

    We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.

  4. Development of a model to simulate infection dynamics of Mycobacterium bovis in cattle herds in the United States

    PubMed Central

    Smith, Rebecca L.; Schukken, Ynte H.; Lu, Zhao; Mitchell, Rebecca M.; Grohn, Yrjo T.

    2013-01-01

    Objective To develop a mathematical model to simulate infection dynamics of Mycobacterium bovis in cattle herds in the United States and predict efficacy of the current national control strategy for tuberculosis in cattle. Design Stochastic simulation model. Sample Theoretical cattle herds in the United States. Procedures A model of within-herd M bovis transmission dynamics following introduction of 1 latently infected cow was developed. Frequency- and density-dependent transmission modes and 3 tuberculin-test based culling strategies (no test-based culling, constant (annual) testing with test-based culling, and the current strategy of slaughterhouse detection-based testing and culling) were investigated. Results were evaluated for 3 herd sizes over a 10-year period and validated via simulation of known outbreaks of M bovis infection. Results On the basis of 1,000 simulations (1000 herds each) at replacement rates typical for dairy cattle (0.33/y), median time to detection of M bovis infection in medium-sized herds (276 adult cattle) via slaughterhouse surveillance was 27 months after introduction, and 58% of these herds would spontaneously clear the infection prior to that time. Sixty-two percent of medium-sized herds without intervention and 99% of those managed with constant test-based culling were predicted to clear infection < 10 years after introduction. The model predicted observed outbreaks best for frequency-dependent transmission, and probability of clearance was most sensitive to replacement rate. Conclusions and Clinical Relevance Although modeling indicated the current national control strategy was sufficient for elimination of M bovis infection from dairy herds after detection, slaughterhouse surveillance was not sufficient to detect M bovis infection in all herds and resulted in subjectively delayed detection, compared with the constant testing method. Further research is required to economically optimize this strategy. PMID:23865885

  5. A particle swarm optimization variant with an inner variable learning strategy.

    PubMed

    Wu, Guohua; Pedrycz, Witold; Ma, Manhao; Qiu, Dishan; Li, Haifeng; Liu, Jin

    2014-01-01

    Although Particle Swarm Optimization (PSO) has demonstrated competitive performance in solving global optimization problems, it exhibits some limitations when dealing with optimization problems with high dimensionality and complex landscape. In this paper, we integrate some problem-oriented knowledge into the design of a certain PSO variant. The resulting novel PSO algorithm with an inner variable learning strategy (PSO-IVL) is particularly efficient for optimizing functions with symmetric variables. Symmetric variables of the optimized function have to satisfy a certain quantitative relation. Based on this knowledge, the inner variable learning (IVL) strategy helps the particle to inspect the relation among its inner variables, determine the exemplar variable for all other variables, and then make each variable learn from the exemplar variable in terms of their quantitative relations. In addition, we design a new trap detection and jumping out strategy to help particles escape from local optima. The trap detection operation is employed at the level of individual particles whereas the trap jumping out strategy is adaptive in its nature. Experimental simulations completed for some representative optimization functions demonstrate the excellent performance of PSO-IVL. The effectiveness of the PSO-IVL stresses a usefulness of augmenting evolutionary algorithms by problem-oriented domain knowledge.
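
For reference, a bare-bones global-best PSO (without the inner variable learning or trap-escape extensions, which are specific to the paper) can be sketched as:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, seed=1):
    """Canonical global-best particle swarm optimization."""
    rng = random.Random(seed)
    w, c1, c2 = 0.72, 1.49, 1.49  # commonly used inertia/acceleration weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # personal bests
    gbest = min(pbest, key=f)              # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest

best = pso_minimize(lambda v: sum(x * x for x in v), dim=4)
```

The IVL strategy would add a step after the velocity update in which each particle's variables learn from an exemplar variable according to known inter-variable relations.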

  6. IDH mutation assessment of glioma using texture features of multimodal MR images

    NASA Astrophysics Data System (ADS)

    Zhang, Xi; Tian, Qiang; Wu, Yu-Xia; Xu, Xiao-Pan; Li, Bao-Juan; Liu, Yi-Xiong; Liu, Yang; Lu, Hong-Bing

    2017-03-01

    Purpose: To 1) find effective texture features from multimodal MRI that can distinguish IDH mutant and wild-type status, and 2) propose a radiomic strategy for preoperatively detecting IDH mutation in patients with glioma. Materials and Methods: 152 patients with glioma were retrospectively included from the Cancer Genome Atlas. Corresponding pre- and post-contrast T1-weighted images, T2-weighted images, and fluid-attenuation inversion recovery images from the Cancer Imaging Archive were analyzed. Specific statistical tests were applied to analyze the different kinds of baseline information of LrGG patients. Finally, 168 texture features were derived from multimodal MRI per patient. The support vector machine-based recursive feature elimination (SVM-RFE) and classification strategy was then adopted to find the optimal feature subset and build identification models for detecting IDH mutation. Results: Among the 152 patients, 92 and 60 were confirmed to be IDH wild-type and mutant, respectively. Statistical analysis showed that patients without IDH mutation were significantly older than patients with IDH mutation (p<0.01), and the distribution of some histological subtypes was significantly different between the IDH wild-type and mutant groups (p<0.01). After SVM-RFE, 15 optimal features were determined for IDH mutation detection. The accuracy, sensitivity, specificity, and AUC after SVM-RFE and parameter optimization were 82.2%, 85.0%, 78.3%, and 0.841, respectively. Conclusion: This study presented a radiomic strategy for noninvasively discriminating IDH mutation in patients with glioma. It effectively incorporated various texture features from multimodal MRI and an SVM-based classification strategy. The results suggested that the features selected by SVM-RFE had greater potential for identifying IDH mutation. The proposed radiomics strategy could facilitate clinical decision making for patients with glioma.
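
The recursive feature elimination loop at the heart of SVM-RFE can be sketched as follows. Here an ordinary least-squares fit stands in for the linear SVM weight vector (the ranking criterion is the same: drop the feature with the smallest absolute weight), and the synthetic data are illustrative.

```python
import numpy as np

def rfe(X, y, n_keep):
    """Recursive feature elimination: repeatedly fit a linear model and
    drop the feature with the smallest absolute weight. A least-squares
    fit stands in for the linear SVM used in SVM-RFE."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        keep.pop(int(np.argmin(np.abs(w))))  # eliminate weakest feature
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))
# two informative features (1 and 5) plus noise
y = 2.0 * X[:, 1] - 1.5 * X[:, 5] + 0.1 * rng.normal(size=120)
selected = rfe(X, y, n_keep=2)
```

In the study's setting, the same loop would run with a linear SVM classifier and cross-validated accuracy guiding the choice of the final subset size (15 features).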

  7. Model-Based Battery Management Systems: From Theory to Practice

    NASA Astrophysics Data System (ADS)

    Pathak, Manan

    Lithium-ion batteries are now extensively being used as the primary storage source. Capacity and power fade and slow recharging times are key issues that restrict their use in many applications. Battery management systems are critical to address these issues, along with ensuring battery safety. This dissertation focuses on exploring various control strategies using detailed physics-based electrochemical models developed previously for lithium-ion batteries, which could be used in advanced battery management systems. Optimal charging profiles for minimizing capacity fade based on SEI-layer formation are derived, and the benefits of using such control strategies are shown by experimentally testing them on a 16 Ah NMC-based pouch cell. This dissertation also explores different time-discretization strategies for non-linear models, which give an improved order of convergence for optimal control problems. Lastly, this dissertation explores a physics-based model for predicting the linear impedance of a battery, and develops a freeware tool that is extremely robust and computationally fast. Such a code could be used for estimating transport, kinetic, and material properties of the battery based on the linear impedance spectra.

  8. New Evidence for Strategic Differences between Static and Dynamic Search Tasks: An Individual Observer Analysis of Eye Movements

    PubMed Central

    Dickinson, Christopher A.; Zelinsky, Gregory J.

    2013-01-01

    Two experiments are reported that further explore the processes underlying dynamic search. In Experiment 1, observers’ oculomotor behavior was monitored while they searched for a randomly oriented T among oriented L distractors under static and dynamic viewing conditions. Despite similar search slopes, eye movements were less frequent and more spatially constrained under dynamic viewing relative to static, with misses also increasing more with target eccentricity in the dynamic condition. These patterns suggest that dynamic search involves a form of sit-and-wait strategy in which search is restricted to a small group of items surrounding fixation. To evaluate this interpretation, we developed a computational model of a sit-and-wait process hypothesized to underlie dynamic search. In Experiment 2 we tested this model by varying fixation position in the display and found that display positions optimized for a sit-and-wait strategy resulted in higher d′ values relative to a less optimal location. We conclude that different strategies, and therefore underlying processes, are used to search static and dynamic displays. PMID:23372555
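
    The d′ comparison in Experiment 2 rests on standard signal-detection arithmetic. A minimal sketch follows, with made-up hit and false-alarm counts (not the paper's data):

```python
# Hypothetical illustration of the d-prime (d') sensitivity measure used to
# compare fixation positions; the counts below are invented for illustration.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).
    A log-linear correction avoids infinite z-scores at rates of 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# A display position optimized for a sit-and-wait strategy should yield a
# higher d' than a less optimal location.
d_opt = d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42)
d_poor = d_prime(hits=30, misses=20, false_alarms=15, correct_rejections=35)
print(d_opt > d_poor)
```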

  9. The WOMEN study: what is the optimal method for ischemia evaluation in women? A multi-center, prospective, randomized study to establish the optimal method for detection of coronary artery disease (CAD) risk in women at an intermediate-high pretest likelihood of CAD: study design.

    PubMed

    Mieres, Jennifer H; Shaw, Leslee J; Hendel, Robert C; Heller, Gary V

    2009-01-01

    Coronary artery disease remains the leading cause of morbidity and mortality in women. The optimal non-invasive test for evaluation of ischemic heart disease in women is unknown. Although current guidelines support the choice of the exercise tolerance test (ETT) as a first-line test for women with a normal baseline ECG and adequate exercise capabilities, supportive data for this recommendation are controversial. The What is the optimal method for ischemia evaluation in women? (WOMEN) study was designed to determine the optimal non-invasive strategy for CAD risk detection in intermediate- and high-risk women presenting with chest pain or equivalent symptoms suggestive of ischemic heart disease. The study will prospectively compare the 2-year event rates in women capable of performing exercise treadmill testing or Tc-99m tetrofosmin SPECT myocardial perfusion imaging (MPI). The study will enroll women presenting for the evaluation of chest pain or anginal equivalent symptoms who are capable of performing >5 METs of exercise and are at intermediate-high pretest risk for ischemic heart disease, randomized to either ETT testing alone or with Tc-99m tetrofosmin SPECT MPI. The null hypothesis for this project is that the exercise ECG has the same negative predictive value for risk detection as gated myocardial perfusion SPECT in women. The primary aim is to compare 2-year cardiac event rates in women randomized to SPECT MPI with those randomized to ETT. The WOMEN study seeks to provide objective information for guidelines on the evaluation of symptomatic women with an intermediate-high likelihood of CAD.

  10. Convergent evolution of vascular optimization in kelp (Laminariales).

    PubMed

    Drobnitch, Sarah Tepler; Jensen, Kaare H; Prentice, Paige; Pittermann, Jarmila

    2015-10-07

    Terrestrial plants and mammals, although separated by a great evolutionary distance, have each arrived at a highly conserved body plan in which universal allometric scaling relationships govern the anatomy of vascular networks and key functional metabolic traits. The universality of allometric scaling suggests that these phyla have each evolved an 'optimal' transport strategy that has been overwhelmingly adopted by extant species. To truly evaluate the dominance and universality of vascular optimization, however, it is critical to examine other, lesser-known, vascularized phyla. The brown algae (Phaeophyceae) are one such group: as distantly related to plants as mammals, they have convergently evolved a plant-like body plan and a specialized phloem-like transport network. To evaluate possible scaling and optimization in the kelp vascular system, we developed a model of optimized transport anatomy and tested it with measurements of the giant kelp, Macrocystis pyrifera, which is among the largest and most successful of macroalgae. We also evaluated three classical allometric relationships pertaining to plant vascular tissues with a diverse sampling of kelp species. Macrocystis pyrifera displays strong scaling relationships between all tested vascular parameters and agrees with our model; other species within the Laminariales display weak or inconsistent vascular allometries. The lack of universal scaling in the kelps and the presence of optimized transport anatomy in M. pyrifera raise important questions about the evolution of optimization and the possible competitive advantage conferred by optimized vascular systems to multicellular phyla. © 2015 The Author(s).

  11. Optimal In-Hospital and Discharge Medical Therapy in Acute Coronary Syndromes in Kerala: Results from the Kerala ACS Registry

    PubMed Central

    Huffman, Mark D; Prabhakaran, Dorairaj; Abraham, AK; Krishnan, Mangalath Narayanan; Nambiar, C. Asokan; Mohanan, Padinhare Purayil

    2013-01-01

    Background: In-hospital and post-discharge treatment rates for acute coronary syndrome (ACS) remain low in India. However, little is known about the prevalence and predictors of the package of optimal ACS medical care in India. Our objective was to define the prevalence, predictors, and impact of optimal in-hospital and discharge medical therapy in the Kerala ACS Registry of 25,718 admissions. Methods and Results: We defined optimal in-hospital ACS medical therapy as receiving the following five medications: aspirin, clopidogrel, heparin, beta-blocker, and statin. We defined optimal discharge ACS medical therapy as receiving all of the above therapies except heparin. Comparisons by optimal vs. non-optimal ACS care were made via Student's t test for continuous variables and chi-square test for categorical variables. We created random effects logistic regression models to evaluate the association between GRACE risk score variables and optimal in-hospital or discharge medical therapy. Optimal in-hospital and discharge medical care was delivered in 40% and 46% of admissions, respectively. Wide variability in both in-hospital and discharge medical care was present, with few hospitals reaching consistently high (>90%) levels. Patients receiving optimal in-hospital medical therapy had an adjusted OR (95%CI)=0.93 (0.71, 1.22) for in-hospital death and an adjusted OR (95%CI)=0.79 (0.63, 0.99) for MACE. Patients who received optimal in-hospital medical care were far more likely to receive optimal discharge care (adjusted OR [95%CI]=10.48 [9.37, 11.72]). Conclusions: Strategies to improve in-hospital and discharge medical therapy are needed to improve local process-of-care measures and improve ACS outcomes in Kerala. PMID:23800985

  12. Power Control for Direct-Driven Permanent Magnet Wind Generator System with Battery Storage

    PubMed Central

    Guang, Chu Xiao; Ying, Kong

    2014-01-01

    The objective of this paper is to construct a wind generator system (WGS) loss model that addresses the losses of the wind turbine and the generator, aiming to optimize the maximum effective output power and turbine speed. Given that the wind generator system has inertia and is nonlinear, the dynamic model takes advantage of the duty cycle of the Buck converter and employs feedback linearization to design the optimized turbine-speed tracking controller and the load power controller. On this basis, the paper proposes a dual-mode dynamic coordination strategy based on the auxiliary load to reduce the influence of mode conversion on the lifetime of the battery. Optimized speed and rapid power tracking, as well as the reduction of redundant power during mode conversion, were tested on a 5 kW wind generator system test platform. Using the generator output power as the capture target was also shown to be effective. PMID:25050405

  13. Power control for direct-driven permanent magnet wind generator system with battery storage.

    PubMed

    Guang, Chu Xiao; Ying, Kong

    2014-01-01

    The objective of this paper is to construct a wind generator system (WGS) loss model that addresses the losses of the wind turbine and the generator, aiming to optimize the maximum effective output power and turbine speed. Given that the wind generator system has inertia and is nonlinear, the dynamic model takes advantage of the duty cycle of the Buck converter and employs feedback linearization to design the optimized turbine-speed tracking controller and the load power controller. On this basis, the paper proposes a dual-mode dynamic coordination strategy based on the auxiliary load to reduce the influence of mode conversion on the lifetime of the battery. Optimized speed and rapid power tracking, as well as the reduction of redundant power during mode conversion, were tested on a 5 kW wind generator system test platform. Using the generator output power as the capture target was also shown to be effective.

  14. Optimal single-shot strategies for discrimination of quantum measurements

    NASA Astrophysics Data System (ADS)

    Sedlák, Michal; Ziman, Mário

    2014-11-01

    We study discrimination of m quantum measurements in the scenario when the unknown measurement with n outcomes can be used only once. We show that ancilla-assisted discrimination procedures provide a nontrivial advantage over simple (ancilla-free) schemes for perfect distinguishability, and we prove that inevitably m ≤ n. We derive necessary and sufficient conditions for perfect distinguishability of general binary measurements. We show that the optimization of the discrimination of projective qubit measurements and their mixtures with white noise is equivalent to the discrimination of specific quantum states. In particular, the optimal protocol for discrimination of projective qubit measurements with fixed failure rate (exploiting a maximally entangled test state) is described. While minimum-error discrimination of two projective qubit measurements can be realized without any need of entanglement, we show that discrimination of three projective qubit measurements requires a bipartite probe state. Moreover, when the measurements are not projective, non-maximally entangled test states can outperform the maximally entangled ones. Finally, we rephrase the unambiguous discrimination of measurements as a quantum key distribution protocol.
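
    The reduction from measurement discrimination to state discrimination invokes the classical Helstrom bound for minimum-error discrimination of two states. A small sketch with arbitrary example states (not taken from the paper):

```python
# Helstrom bound: optimal single-shot success probability for distinguishing
# two density matrices rho0, rho1 with prior probabilities p0, p1.
import numpy as np

def helstrom_success(rho0, rho1, p0=0.5):
    """Optimal success probability 1/2 * (1 + || p0*rho0 - p1*rho1 ||_1)."""
    p1 = 1.0 - p0
    gamma = p0 * rho0 - p1 * rho1
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1.0 + trace_norm)

# Projectors onto |0> and |+>: non-orthogonal, so not perfectly distinguishable.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(ket0, ket0)
rho1 = np.outer(ketp, ketp)

p_succ = helstrom_success(rho0, rho1)
print(round(p_succ, 4))  # 0.5 * (1 + 1/sqrt(2)) ~ 0.8536
```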

  15. Data analytics and optimization of an ice-based energy storage system for commercial buildings

    DOE PAGES

    Luo, Na; Hong, Tianzhen; Li, Hui; ...

    2017-07-25

    Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system's performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by a Sequential Quadratic Programming optimization algorithm, was developed to minimize the TES system's operating costs. Three heuristic strategies were also investigated for comparison with the proposed strategy, and the results demonstrate its superiority to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimized control strategy in real TES system operation.
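
    The SQP-based scheduling idea can be sketched with SciPy's SLSQP solver. Everything below is invented for illustration (time-of-use prices, load profile, storage limits); it is not the study's model.

```python
# Toy TES scheduling sketch: choose hourly charge (+) / discharge (-) rates
# to minimize electricity cost under time-of-use prices, using SciPy's SLSQP
# implementation of sequential quadratic programming.
import numpy as np
from scipy.optimize import minimize

hours = 24
price = np.where((np.arange(hours) >= 12) & (np.arange(hours) < 18), 0.30, 0.10)
cooling_load = 50.0 + 30.0 * np.sin(np.pi * (np.arange(hours) - 6) / 12).clip(0)

def cost(u):
    # Grid power = building cooling load + TES charging - TES discharging.
    grid = cooling_load + u
    return float(price @ grid)

cons = [
    # Storage is cyclic over the day: total charge equals total discharge.
    {"type": "eq", "fun": lambda u: u.sum()},
    # State of charge stays within [0, 200] kWh (initial SOC 100 kWh).
    {"type": "ineq", "fun": lambda u: 100.0 + np.cumsum(u)},
    {"type": "ineq", "fun": lambda u: 200.0 - (100.0 + np.cumsum(u))},
]
bounds = [(-40.0, 40.0)] * hours   # charge/discharge rate limits, kW

res = minimize(cost, np.zeros(hours), method="SLSQP",
               bounds=bounds, constraints=cons)
baseline = float(price @ cooling_load)   # cost with no storage dispatch
print(res.fun < baseline)
```

    The optimizer shifts discharging into the peak-price window and charging into off-peak hours, which is the mechanism behind the reported cost savings.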

  16. Data analytics and optimization of an ice-based energy storage system for commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Na; Hong, Tianzhen; Li, Hui

    Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system's performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by a Sequential Quadratic Programming optimization algorithm, was developed to minimize the TES system's operating costs. Three heuristic strategies were also investigated for comparison with the proposed strategy, and the results demonstrate its superiority to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimized control strategy in real TES system operation.

  17. Adaptive management for subsurface pressure and plume control in application to geological CO2 storage

    NASA Astrophysics Data System (ADS)

    Gonzalez-Nicolas, A.; Cihan, A.; Birkholzer, J. T.; Petrusak, R.; Zhou, Q.; Riestenberg, D. E.; Trautz, R. C.; Godec, M.

    2016-12-01

    Industrial-scale injection of CO2 into the subsurface can cause reservoir pressure increases that must be properly controlled to prevent any potential environmental impact. Excessive pressure buildup in the reservoir may result in groundwater contamination stemming from leakage through conductive pathways, such as improperly plugged abandoned wells or distant faults, as well as the potential for fault reactivation and possibly seal breaching. Brine extraction is a viable approach for managing formation pressure, effective stress, and plume movement during industrial-scale CO2 injection projects. The main objectives of this study are to investigate different pressure-management strategies involving active brine extraction and passive pressure relief wells. Adaptive optimized management of CO2 storage projects utilizes advanced automated optimization algorithms and suitable process models, integrating monitoring, forward modeling, inversion modeling, and optimization through an iterative process. In this study, we employ an adaptive framework to understand primarily how initial site characterization and the frequency of model updates (calibration) and optimization calculations, which control extraction rates based on monitoring data, affect the accuracy and success of the management without violating pressure-buildup constraints in the subsurface reservoir system. We will present results of applying the adaptive framework to test the appropriateness of different management strategies for a realistic field injection project.

  18. WE-DE-201-01: BEST IN PHYSICS (THERAPY): A Fast Multi-Target Inverse Treatment Planning Strategy Optimizing Dosimetric Measures for High-Dose-Rate (HDR) Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guthier, C; University Medical Center Mannheim, Mannheim; Harvard Medical School, Boston, MA

    Purpose: Inverse treatment planning (ITP) for interstitial HDR brachytherapy of gynecologic cancers seeks to maximize coverage of the clinical target volumes (tumor and vagina) while respecting dose-volume-histogram related dosimetric measures (DMs) for organs at risk (OARs). Commercially available ITP tools do not support DM-based planning because it is computationally too expensive to solve. In this study we present a novel approach that allows fast ITP for gynecologic cancers based on DMs for the first time. Methods: This novel strategy is an optimization model based on a smooth DM-based objective function. The smooth approximation is achieved by utilizing a logistic function for the evaluation of DMs. The resulting nonconvex and constrained optimization problem is then optimized with a BFGS algorithm. The model was evaluated using the implant geometry extracted from 20 patient treatment plans under an IRB-approved retrospective study. For each plan, the final DMs were evaluated and compared to the original clinical plans. The CTVs were the contoured tumor volume and the contoured surface of the vagina. Statistical significance was evaluated with a one-sided paired Wilcoxon signed-rank test. Results: As did the clinical plans, all generated plans fulfilled the defined DMs for OARs. The proposed strategy showed a statistically significant improvement (p<0.001) in coverage of the tumor and vagina, with absolute improvements of related DMs of (6.9 ± 7.9)% and (28.2 ± 12.0)%, respectively. This was achieved with a statistically significant (p<0.01) decrease of the high-dose-related DM for the tumor. The runtime of the optimization was (2.3 ± 2.0) seconds. Conclusion: We demonstrated using clinical data that our novel approach allows rapid DM-based optimization with improved coverage of CTVs with fewer hot spots. Being up to three orders of magnitude faster than the current clinical practice, the method dramatically shortens planning time.

  19. Intelligent Space Tube Optimization for speeding ground water remedial design.

    PubMed

    Kalwij, Ineke M; Peralta, Richard C

    2008-01-01

    An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space tube data, and optimizes using an advanced genetic algorithm (AGA) with NSS. Stage 1 speeds evaluating assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include space tube radius and number of strategies used to train NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the NSS-AGA stage 1 optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable for many heuristic optimization settings in which the numerical simulator is computationally intensive, and one would like to reduce that burden.

  20. Optimal strategy for controlling the spread of Plasmodium Knowlesi malaria: Treatment and culling

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Hasan, Yahya Abu; Abdullah, Farah Aini

    2015-05-01

    Plasmodium knowlesi malaria is a parasitic mosquito-borne disease caused by the eukaryotic protist Plasmodium knowlesi, transmitted by the mosquito Anopheles leucosphyrus to humans and macaques. We developed and analyzed a deterministic mathematical model for the transmission of Plasmodium knowlesi malaria in humans and macaques. Optimal control theory is applied to investigate optimal strategies for controlling the spread of Plasmodium knowlesi malaria using treatment and culling as control strategies. The conditions for optimal control of Plasmodium knowlesi malaria are derived using Pontryagin's Maximum Principle. Finally, numerical simulations suggest that a combination of the control strategies is the best way to control the disease in any community.

  1. Optimal Control Strategy Design Based on Dynamic Programming for a Dual-Motor Coupling-Propulsion System

    PubMed Central

    Zhang, Shuo; Zhang, Chengning; Han, Guangwei; Wang, Qinghui

    2014-01-01

    A dual-motor coupling-propulsion electric bus (DMCPEB) is modeled, and its optimal control strategy is studied in this paper. The necessary dynamic features of subsystem energy losses are modeled. A dynamic programming (DP) technique is applied to find the optimal control strategy, including the upshift threshold, downshift threshold, and power-split ratio between the main motor and auxiliary motor. Improved control rules are extracted from the DP-based control solution, forming near-optimal control strategies. Simulation results demonstrate that a significant reduction in energy loss during dual-motor coupling-propulsion system (DMCPS) operation is realized without increasing the frequency of mode switches. PMID:25540814

  2. Optimal control strategy design based on dynamic programming for a dual-motor coupling-propulsion system.

    PubMed

    Zhang, Shuo; Zhang, Chengning; Han, Guangwei; Wang, Qinghui

    2014-01-01

    A dual-motor coupling-propulsion electric bus (DMCPEB) is modeled, and its optimal control strategy is studied in this paper. The necessary dynamic features of subsystem energy losses are modeled. A dynamic programming (DP) technique is applied to find the optimal control strategy, including the upshift threshold, downshift threshold, and power-split ratio between the main motor and auxiliary motor. Improved control rules are extracted from the DP-based control solution, forming near-optimal control strategies. Simulation results demonstrate that a significant reduction in energy loss during dual-motor coupling-propulsion system (DMCPS) operation is realized without increasing the frequency of mode switches.
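
    The DP idea of jointly choosing a mode and a power-split ratio can be sketched in a few lines. This is a minimal stand-in, not the authors' vehicle model: the demand profile, loss coefficients, and switch penalty are all invented.

```python
# Minimal dynamic-programming sketch: over a discretized demand profile,
# choose at each step a mode (main motor only vs. dual-motor coupling) and a
# power-split ratio to minimize total energy loss, with a small penalty on
# mode switches. All numbers are hypothetical.
demand = [10, 25, 60, 80, 55, 20, 5]        # kW demanded at each step
splits = [0.0, 0.25, 0.5, 0.75, 1.0]        # share carried by auxiliary motor
SWITCH_PENALTY = 0.5                        # cost added per mode switch

def loss(p_main, p_aux):
    # Quadratic copper-loss-style model; the auxiliary motor is assumed
    # smaller and lossier (invented coefficients).
    return 0.002 * p_main**2 + 0.004 * p_aux**2

def dp(demand):
    # value[mode] = (best cost so far, decision path) ending in that mode.
    value = {0: (0.0, []), 1: (0.0, [])}    # 0 = main only, 1 = coupled
    for p in demand:
        nxt = {}
        for mode in (0, 1):
            best = None
            for prev, (c, path) in value.items():
                options = [0.0] if mode == 0 else splits
                for s in options:
                    step = loss((1 - s) * p, s * p)
                    total = c + step + (SWITCH_PENALTY if prev != mode else 0)
                    if best is None or total < best[0]:
                        best = (total, path + [(mode, s)])
            nxt[mode] = best
        value = nxt
    return min(value.values())

cost, path = dp(demand)
all_main_only = 0.002 * sum(p**2 for p in demand)
print(cost < all_main_only, len(path) == len(demand))
```

    A real DP pass would run over a full drive cycle and a finer state grid; near-optimal rules (e.g. shift thresholds) are then read off the optimal decision path.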

  3. Optimal investment strategies and hedging of derivatives in the presence of transaction costs (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Muratore-Ginanneschi, Paolo

    2005-05-01

    Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contribution were previously derived in collaboration with E. Aurell.

  4. A Simulation-Optimization Model for the Management of Seawater Intrusion

    NASA Astrophysics Data System (ADS)

    Stanko, Z.; Nishikawa, T.

    2012-12-01

    Seawater intrusion is a common problem in coastal aquifers where excessive groundwater pumping can lead to chloride contamination of a freshwater resource. Simulation-optimization techniques have been developed to determine optimal management strategies while mitigating seawater intrusion. The simulation models are often density-independent groundwater-flow models that may assume a sharp interface and/or use equivalent freshwater heads. The optimization methods are often linear-programming (LP) based techniques that require simplifications of the real-world system. However, seawater intrusion is a highly nonlinear, density-dependent flow and transport problem, which requires the use of nonlinear-programming (NLP) or global-optimization (GO) techniques. NLP approaches are difficult because of the need for gradient information; therefore, we have chosen a GO technique for this study. Specifically, we have coupled a multi-objective genetic algorithm (GA) with a density-dependent groundwater-flow and transport model to simulate and identify strategies that optimally manage seawater intrusion. GA is a heuristic approach, often chosen when seeking optimal solutions to highly complex and nonlinear problems where LP or NLP methods cannot be applied. The GA utilized in this study is the Epsilon-Nondominated Sorted Genetic Algorithm II (ɛ-NSGAII), which can approximate a pareto-optimal front between competing objectives. This algorithm has several key features: real and/or binary variable capabilities; an efficient sorting scheme; preservation and diversity of good solutions; dynamic population sizing; constraint handling; parallelizable implementation; and user controlled precision for each objective. The simulation model is SEAWAT, the USGS model that couples MODFLOW with MT3DMS for variable-density flow and transport. ɛ-NSGAII and SEAWAT were efficiently linked together through a C-Fortran interface.
The simulation-optimization model was first tested by using a published density-independent flow model test case that was originally solved using a sequential LP method with the USGS's Ground-Water Management Process (GWM). For the problem formulation, the objective is to maximize net groundwater extraction, subject to head and head-gradient constraints. The decision variables are pumping rates at fixed wells and the system's state is represented with freshwater hydraulic head. The results of the proposed algorithm were similar to the published results (within 1%); discrepancies may be attributed to differences in the simulators and inherent differences between LP and GA. The GWM test case was then extended to a density-dependent flow and transport version. As formulated, the optimization problem is infeasible because of the density effects on hydraulic head. Therefore, the sum of the squared constraint violation (SSC) was used as a second objective. The result is a pareto curve showing optimal pumping rates versus the SSC. Analysis of this curve indicates that a similar net-extraction rate to the test case can be obtained with a minor violation in vertical head-gradient constraints. This study shows that a coupled ɛ-NSGAII/SEAWAT model can be used for the management of groundwater seawater intrusion. In the future, the proposed methodology will be applied to a real-world seawater intrusion and resource management problem for Santa Barbara, CA.
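
    The core multi-objective idea, treating total extraction and the sum of squared constraint violations (SSC) as competing objectives and extracting the non-dominated set, can be illustrated schematically. The linear "head response" below is a made-up stand-in for the SEAWAT simulator, not the study's model.

```python
# Schematic Pareto analysis: maximize total extraction, minimize SSC,
# then keep only the non-dominated candidate pumping strategies.
import numpy as np

rng = np.random.default_rng(1)
n_wells, n_candidates = 4, 200
pumping = rng.uniform(0.0, 10.0, size=(n_candidates, n_wells))

# Hypothetical linear response: heads drop with pumping; constraint h >= 0.
base_head = np.array([5.0, 6.0, 4.0, 7.0])
response = 0.4 * np.eye(n_wells) + 0.1
heads = base_head - pumping @ response
ssc = np.square(np.minimum(heads, 0.0)).sum(axis=1)   # objective 2: minimize
extraction = pumping.sum(axis=1)                      # objective 1: maximize

def pareto_front(f_max, f_min):
    """Indices not dominated in (maximize f_max, minimize f_min)."""
    idx = []
    for i in range(len(f_max)):
        dominated = np.any((f_max >= f_max[i]) & (f_min <= f_min[i]) &
                           ((f_max > f_max[i]) | (f_min < f_min[i])))
        if not dominated:
            idx.append(i)
    return idx

front = pareto_front(extraction, ssc)
print(0 < len(front) <= n_candidates)
```

    In the study this trade-off curve is what lets the analyst pick a pumping strategy whose constraint violation is acceptably small; a real GA would also evolve the candidates rather than sample them at random.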

  5. Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method

    NASA Astrophysics Data System (ADS)

    Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen

    2008-03-01

    The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.
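
    The two strategies, a cheap RBF surrogate standing in for expensive analyses and a binary PSO searching discrete designs, can be sketched together. The test function, encoding, and all PSO parameters below are invented for illustration; this is not the paper's structural model.

```python
# Conceptual RBF-surrogate + binary-PSO sketch: fit an RBF approximation to
# an "expensive" response, then let BPSO search binary design vectors using
# only the cheap surrogate.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

def expensive_response(x):
    # Stand-in for a time-history analysis; x is a design vector in [0, 1]^d.
    return np.sum((x - 0.3) ** 2, axis=-1)

# Strategy 1: fit an RBF network on a small sample of exact analyses.
d = 5
samples = rng.uniform(size=(200, d))
surrogate = RBFInterpolator(samples, expensive_response(samples))

# Strategy 2: BPSO over binary strings (each bit maps a variable to 0.0 or
# 1.0 here, for brevity; real designs would use finer encodings).
n_particles, iters = 20, 40
pos = rng.integers(0, 2, size=(n_particles, d)).astype(float)
vel = np.zeros((n_particles, d))
pbest, pbest_f = pos.copy(), surrogate(pos)
for _ in range(iters):
    gbest = pbest[np.argmin(pbest_f)]
    vel = (0.7 * vel
           + 1.5 * rng.random((n_particles, d)) * (pbest - pos)
           + 1.5 * rng.random((n_particles, d)) * (gbest - pos))
    prob = 1.0 / (1.0 + np.exp(-vel))           # sigmoid transfer function
    pos = (rng.random((n_particles, d)) < prob).astype(float)
    f = surrogate(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]

best = pbest[np.argmin(pbest_f)]
print(best.shape)
```

    The saving comes from evaluating the surrogate thousands of times while the exact analysis is run only for the training sample.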

  6. The GTC: a convenient test bench for ELT demonstrations

    NASA Astrophysics Data System (ADS)

    Rodriguez Espinosa, Jose M.; Hammersley, Peter L.; Martinez-Roger, Carlos

    2004-07-01

    The Gran Telescopio Canarias (GTC) is being assembled at the Observatorio del Roque de los Muchachos (ORM) on the island of La Palma. First light is expected in early 2005, with the first science observations late in 2005. The GTC, being a segmented primary mirror telescope, could be employed for testing several technological aspects relevant to the future generation of Extremely Large Telescopes (ELTs). In the short term, the mass production of aspheric mirror segments can be examined in detail and improvements made along the way, or planned for the future. Indeed, the GTC segments are now entering a chain-production scheme. Later on, different strategies for the control aspects of the primary mirror can be explored to optimize the optical performance of segmented telescopes. Moreover, the entire GTC active optics system can offer a learning tool for testing various strategies and their application to ELTs.

  7. Autoimmune diagnostics: the technology, the strategy and the clinical governance.

    PubMed

    Bizzaro, Nicola; Tozzoli, Renato; Villalta, Danilo

    2015-02-01

    In recent years, there has been a profound change in autoimmune diagnostics. From long, tiring and inaccurate manual methods, the art of diagnostics has turned to modern, rapid and automated technology. New antibody tests have been developed, and almost all autoimmune diseases now have some specific diagnostic markers. The current need to make the most of available economic and human resources has led to the production of diagnostic algorithms and guidelines designed for the optimal strategic use of the tests and to increase diagnostic appropriateness. An important role in this scenario is assumed by the laboratory autoimmunologist, whose task is not only to govern the analytical phase, but also to help clinicians correctly choose the most suitable test for each clinical situation and to provide consultancy support. In this review, we summarize recent advances in technology, describe the diagnostic strategies and highlight the current role of the laboratory autoimmunologist in the clinical governance of autoimmune diagnostics.

  8. Higher criticism approach to detect rare variants using whole genome sequencing data

    PubMed Central

    2014-01-01

    Because of the low statistical power of single-variant tests for whole genome sequencing (WGS) data, association tests for variant groups are a key approach for genetic mapping. To address the sparse and weak genetic effects to be detected, the higher criticism (HC) approach has been proposed and theoretically proven optimal for detecting such effects. Here we develop a strategy to apply the HC approach to WGS data in which rare variants constitute the majority. Using Genetic Analysis Workshop 18 "dose" genetic data with simulated phenotypes, we assess the performance of HC under a variety of strategies for grouping variants and collapsing rare variants. The HC approach is compared with the minimal p-value method and the sequence kernel association test. The results show that the HC approach is preferred for detecting weak genetic effects. PMID:25519367
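    The HC statistic at the core of this approach is straightforward to compute from ordered p-values. Below is a minimal sketch of the standard form HC = max_i sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i))); it illustrates the statistic only, not the workshop's grouping-and-collapsing pipeline:

```python
import math

def higher_criticism(pvalues):
    """Higher criticism statistic over sorted p-values:
    HC = max_i sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i)))."""
    n = len(pvalues)
    terms = []
    for i, pi in enumerate(sorted(pvalues), start=1):
        if 0 < pi < 1:  # skip degenerate p-values
            terms.append(math.sqrt(n) * (i / n - pi) / math.sqrt(pi * (1 - pi)))
    return max(terms)

# A sparse, strong signal (one very small p-value) yields a large HC score,
# while near-uniform p-values yield a small one.
sparse = higher_criticism([0.001, 0.40, 0.55, 0.70, 0.85])
null = higher_criticism([0.15, 0.35, 0.55, 0.75, 0.95])
```

    The maximand is largest at the few smallest p-values, which is why HC is sensitive to sparse and weak effects that a single-variant test misses.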

  9. Modeling joint restoration strategies for interdependent infrastructure systems.

    PubMed

    Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P

    2018-01-01

    Life in the modern world depends on multiple critical services provided by infrastructure systems that are interdependent at multiple levels. To respond effectively to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for describing (i) the structure of interdependent infrastructure systems and (ii) their interaction processes are presented; both consider failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model is proposed that determines an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems.

  10. Use and optimization of a dual-flowrate loading strategy to maximize throughput in protein-a affinity chromatography.

    PubMed

    Ghose, Sanchayita; Nagrath, Deepak; Hubbard, Brian; Brooks, Clayton; Cramer, Steven M

    2004-01-01

    The effect of an alternate strategy employing two different flowrates during loading was explored as a means of increasing system productivity in Protein-A chromatography. The loading strategy was evaluated using a chromatographic model that accurately predicted experimental breakthrough curves for this Protein-A system. A gradient-based optimization routine was carried out to establish the optimal loading conditions (initial and final flowrates and switching time). The two-step loading strategy (a higher flowrate during the initial stages followed by a lower flowrate) was evaluated for an Fc-fusion protein and was found to result in significant improvements in process throughput. In an extension of this optimization routine, dynamic loading capacity and productivity were simultaneously optimized using a weighted objective function, and this result was compared to that obtained with a single flowrate. Again, the dual-flowrate strategy was found to be superior.

  11. Improving flood forecasting capability of physically based distributed hydrological models by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2016-01-01

    Physically based distributed hydrological models (hereafter PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assign terrain data and precipitation to the individual cells. They are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. Early on, PBDHMs were assumed to derive model parameters directly from terrain properties, so that no parameter calibration would be needed; unfortunately, the uncertainties associated with this derivation are very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: first, to propose a parameter optimization method for PBDHMs in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, and to test and improve its performance; second, to explore the possibility of improving the flood forecasting capability of PBDHMs by parameter optimization. Based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that can be used for all PBDHMs. Then, with the Liuxihe model, a physically based distributed hydrological model developed for catchment flood forecasting, as the study model, an improved PSO algorithm is developed for its parameter optimization. The improvements include adoption of a linearly decreasing inertia weight strategy to change the inertia weight and an arccosine function strategy to adjust the acceleration coefficients. 
    The method has been tested in two catchments of different sizes in southern China. The results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can substantially improve the model's capability in catchment flood forecasting, demonstrating that parameter optimization is necessary to improve the flood forecasting capability of PBDHMs. It was also found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model flood forecasting are 20 and 30, respectively.
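    As a concrete illustration of the two PSO improvements named above, the sketch below implements a linearly decreasing inertia weight (0.9 to 0.4) and an arccosine schedule for the acceleration coefficients. The arccosine form and the coefficient ranges here are plausible assumptions for illustration only; the paper's exact formulas may differ, and a simple sphere function stands in for the Liuxihe calibration objective.

```python
import math
import random

def pso(objective, dim, bounds, n_particles=20, max_iter=30, seed=1):
    """PSO with a linearly decreasing inertia weight (0.9 -> 0.4) and
    acceleration coefficients adjusted along an arccosine curve
    (an assumed form, not necessarily the paper's exact formula)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(max_iter):
        w = 0.9 - (0.9 - 0.4) * t / max_iter            # linearly decreasing inertia
        s = math.acos(-1 + 2 * t / max_iter) / math.pi  # 1 -> 0 along an arccosine curve
        c1 = 0.5 + 2.0 * s        # cognitive coefficient shrinks over time
        c2 = 0.5 + 2.0 * (1 - s)  # social coefficient grows over time
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the calibration objective (sphere function), with the
# particle number (20) and evolution number (30) reported in the abstract.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

    Shifting weight from the cognitive to the social term as the inertia decays moves the swarm from exploration toward exploitation, which is the stated intent of both improvements.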

  12. Optimal teaching strategy in periodic impulsive knowledge dissemination system.

    PubMed

    Liu, Dan-Qing; Wu, Zhen-Qiang; Wang, Yu-Xin; Guo, Qiang; Liu, Jian-Guo

    2017-01-01

    Accurately describing the knowledge dissemination process is important for enhancing the performance of personalized education. In this study, considering the effect of periodic teaching activities on the learning process, we propose a periodic impulsive knowledge dissemination system to reproduce the knowledge dissemination process. As the objective function we put forward learning effectiveness, the outcome of a trade-off between the benefits and costs of knowledge dissemination. We then investigate the optimal teaching strategy that maximizes learning effectiveness, i.e., the optimal effect of knowledge dissemination as shaped by the teaching activities. We solve this dynamic optimization problem with optimal control theory and obtain the optimization system. Finally, we solve this system numerically for several practical examples to make the conclusions intuitive and specific. The optimal teaching strategy proposed in this paper can be applied widely to optimization problems in personalized education and is beneficial for enhancing the effect of knowledge dissemination.

  13. Optimal teaching strategy in periodic impulsive knowledge dissemination system

    PubMed Central

    Liu, Dan-Qing; Wu, Zhen-Qiang; Wang, Yu-Xin; Guo, Qiang

    2017-01-01

    Accurately describing the knowledge dissemination process is important for enhancing the performance of personalized education. In this study, considering the effect of periodic teaching activities on the learning process, we propose a periodic impulsive knowledge dissemination system to reproduce the knowledge dissemination process. As the objective function we put forward learning effectiveness, the outcome of a trade-off between the benefits and costs of knowledge dissemination. We then investigate the optimal teaching strategy that maximizes learning effectiveness, i.e., the optimal effect of knowledge dissemination as shaped by the teaching activities. We solve this dynamic optimization problem with optimal control theory and obtain the optimization system. Finally, we solve this system numerically for several practical examples to make the conclusions intuitive and specific. The optimal teaching strategy proposed in this paper can be applied widely to optimization problems in personalized education and is beneficial for enhancing the effect of knowledge dissemination. PMID:28665961

  14. Deficit of state-dependent risk attitude modulation in gambling disorder

    PubMed Central

    Fujimoto, A; Tsurumi, K; Kawada, R; Murao, T; Takeuchi, H; Murai, T; Takahashi, H

    2017-01-01

    Gambling disorder (GD) is often considered a problem of trait-like risk preference. However, the symptoms of GD cannot be fully understood by this trait view. In the present study, we hypothesized that GD patients also have problems with flexible control of risk attitude (state-dependent strategy optimization), and aimed to investigate the mechanisms underlying the abnormal risk-taking of GD. To address this issue, we tested GD patients without comorbidity (GD group: n=21) and age-matched healthy control participants (HC group: n=29) in a multi-step gambling task, in which participants needed to clear a 'block quota' (required units to clear a block, 1000-7000 units) in 20 choices, and conducted a task-functional magnetic resonance imaging (fMRI) experiment. Behavioral analysis indeed revealed a less flexible risk-attitude change in the GD group; the GD group failed to avoid risky choices in a specific quota range (low-quota condition), in which a risky strategy was not optimal for meeting the quota. Accordingly, fMRI analysis highlighted diminished functioning of the dorsolateral prefrontal cortex (dlPFC), which has been heavily implicated in cognitive flexibility. To our knowledge, the present study provides the first empirical evidence of a deficit of state-dependent strategy optimization in GD. Focusing on flexible control of risk attitude under quota may contribute to a better understanding of the psychopathology of GD. PMID:28375207

  15. Deficit of state-dependent risk attitude modulation in gambling disorder.

    PubMed

    Fujimoto, A; Tsurumi, K; Kawada, R; Murao, T; Takeuchi, H; Murai, T; Takahashi, H

    2017-04-04

    Gambling disorder (GD) is often considered a problem of trait-like risk preference. However, the symptoms of GD cannot be fully understood by this trait view. In the present study, we hypothesized that GD patients also have problems with flexible control of risk attitude (state-dependent strategy optimization), and aimed to investigate the mechanisms underlying the abnormal risk-taking of GD. To address this issue, we tested GD patients without comorbidity (GD group: n=21) and age-matched healthy control participants (HC group: n=29) in a multi-step gambling task, in which participants needed to clear a 'block quota' (required units to clear a block, 1000-7000 units) in 20 choices, and conducted a task-functional magnetic resonance imaging (fMRI) experiment. Behavioral analysis indeed revealed a less flexible risk-attitude change in the GD group; the GD group failed to avoid risky choices in a specific quota range (low-quota condition), in which a risky strategy was not optimal for meeting the quota. Accordingly, fMRI analysis highlighted diminished functioning of the dorsolateral prefrontal cortex (dlPFC), which has been heavily implicated in cognitive flexibility. To our knowledge, the present study provides the first empirical evidence of a deficit of state-dependent strategy optimization in GD. Focusing on flexible control of risk attitude under quota may contribute to a better understanding of the psychopathology of GD.

  16. Multi-mode energy management strategy for fuel cell electric vehicles based on driving pattern identification using learning vector quantization neural network algorithm

    NASA Astrophysics Data System (ADS)

    Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong

    2018-06-01

    The development of fuel cell electric vehicles can, to a certain extent, alleviate worldwide energy and environmental issues. Because a single energy management strategy cannot cope with the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving condition recognition technology, comprising a driving-pattern recognizer and a multi-mode energy management controller. A learning vector quantization (LVQ) neural network is introduced to design the driving-pattern recognizer from a vehicle's driving information. The multi-mode strategy can automatically switch to the genetic-algorithm-optimized thermostat strategy under specific driving conditions, according to the condition recognition results. Simulation experiments were carried out after validating the model on a dynamometer test bench. The simulation results show that the proposed strategy achieves better economic performance than the single-mode thermostat strategy under dynamic driving conditions.
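    The LVQ1 update rule underlying such a pattern recognizer is compact: the prototype nearest to a training sample is pulled toward the sample when their labels agree and pushed away otherwise. The sketch below is a minimal illustration; the driving features (mean speed, acceleration standard deviation) and the class labels are hypothetical stand-ins for the paper's driving-information inputs.

```python
def train_lvq1(samples, labels, prototypes, proto_labels, lr=0.2, epochs=30):
    """LVQ1: move the nearest prototype toward a sample if labels agree,
    away if they disagree; the learning rate decays each epoch."""
    for e in range(epochs):
        rate = lr * (1 - e / epochs)
        for x, y in zip(samples, labels):
            j = min(range(len(prototypes)),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(x, prototypes[k])))
            sign = 1.0 if proto_labels[j] == y else -1.0
            prototypes[j] = [p + sign * rate * (a - p) for a, p in zip(x, prototypes[j])]
    return prototypes

def predict(x, prototypes, proto_labels):
    j = min(range(len(prototypes)),
            key=lambda k: sum((a - b) ** 2 for a, b in zip(x, prototypes[k])))
    return proto_labels[j]

# Hypothetical driving features: (mean speed km/h, accel std m/s^2);
# label 0 = urban stop-and-go, 1 = highway cruising.
samples = [(25, 1.8), (30, 2.1), (20, 1.5), (95, 0.4), (105, 0.3), (88, 0.6)]
labels = [0, 0, 0, 1, 1, 1]
protos = train_lvq1(samples, labels, [[28.0, 2.0], [100.0, 0.5]], [0, 1])
```

    Once trained, the recognizer's output label would select the energy management mode (e.g., switching to the thermostat strategy for the matching condition).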

  17. Microgrids and distributed generation systems: Control, operation, coordination and planning

    NASA Astrophysics Data System (ADS)

    Che, Liang

    Distributed energy resources (DERs), which include distributed generation (DG), distributed energy storage systems, and adjustable loads, are key components in microgrid operations. A microgrid is a small electric power system integrated with on-site DERs to serve all or some portion of the local load, connected to the utility grid through the point of common coupling (PCC). Microgrids can operate in both grid-connected mode and island mode. The structure and components of hierarchical control for the microgrid at Illinois Institute of Technology (IIT) are discussed and analyzed, and case studies address the reliable and economic operation of the IIT microgrid. The simulation results demonstrate that hierarchical control and the coordination strategy for DERs are an effective way of optimizing the economic operation and reliability of microgrids. The benefits and challenges of DC microgrids are addressed with a DC model of the IIT microgrid. We present a hierarchical control strategy comprising primary, secondary, and tertiary controls for the economic operation and resilience of a DC microgrid. The simulation results verify that the proposed coordinated strategy is an effective way of ensuring the resilient response of DC microgrids to emergencies and optimizing their economic operation at steady state. The concept and prototype of a community microgrid interconnecting multiple microgrids in a community are proposed, with two contributions. For coordination, a novel three-level hierarchical strategy is proposed to coordinate optimal power exchanges among neighboring microgrids. For planning, a multi-microgrid interconnection planning framework using a probabilistic minimal cut-set (MCS) based iterative methodology is proposed to enhance the economic, resilience, and reliability signals in multi-microgrid operations. 
    The implementation of high-reliability microgrids requires proper protection schemes that function effectively in both grid-connected and island modes. This chapter presents a communication-assisted four-level hierarchical protection strategy for high-reliability microgrids and tests the proposed strategy on a loop-structured microgrid. The simulation results demonstrate the proposed strategy to be an effective and efficient option for microgrid protection. Additionally, microgrid topology ought to be optimally planned; to address topology planning, a graph-partitioning and integer-programming integrated methodology is proposed. This work is not included in the dissertation; interested readers can refer to our related publication.

  18. Health benefit modelling and optimization of vehicular pollution control strategies

    NASA Astrophysics Data System (ADS)

    Sonawane, Nayan V.; Patil, Rashmi S.; Sethi, Virendra

    2012-12-01

    This study asserts that the evaluation of pollution reduction strategies should be approached on the basis of health benefits. The framework presented can be used for decision making on the basis of cost effectiveness when strategies are applied concurrently. Several vehicular pollution control strategies have been proposed in the literature for effective management of urban air pollution. The effectiveness of these strategies has mostly been studied one at a time, on the basis of change in pollution concentration. The adequacy and practicality of such an approach is examined in the present work, and the respective benefits of these strategies are assessed when they are implemented simultaneously. An integrated model has been developed that can be used as a tool for optimal prioritization of various pollution management strategies. The model estimates the health benefits associated with specific control strategies. ISC-AERMOD View has been used to provide the cause-effect relation between control options and change in ambient air quality. BenMAP, developed by the U.S. EPA, has been applied to estimate the health and economic benefits associated with various management strategies. Health benefits have been valued for the impact indicators of premature mortality, hospital admissions and respiratory syndrome. An optimization model has been developed to maximize overall social benefits by determining optimized percentage implementations for multiple strategies. The model has been applied to the vehicular sector in a suburban region of Mumbai. Several control scenarios have been considered, such as revised emission standards and electric, CNG, LPG and hybrid vehicles. Reductions in concentration and the resultant health benefits for the pollutants CO, NOx and particulate matter are estimated for the different control scenarios. 
Finally, an optimization model has been applied to determine optimized percentage implementation of specific control strategies with maximization of social benefits, when these strategies are applied simultaneously.

  19. Inverse modeling of rainfall infiltration with a dual permeability approach using different matrix-fracture coupling variants.

    NASA Astrophysics Data System (ADS)

    Blöcher, Johanna; Kuraz, Michal

    2017-04-01

    In this contribution we propose implementations of the dual permeability model with different inter-domain exchange descriptions, together with metaheuristic optimization algorithms for parameter identification and mesh optimization. We compare variants of the coupling term with different numbers of parameters to test whether a reduction of parameters is feasible. This can reduce parameter uncertainty in inverse modeling, but also allows for different conceptual models of the matrix-fracture coupling. The variants of the dual permeability model are implemented in 1D and 2D in the open-source library DRUtES, written in Fortran 2003/2008. For parameter identification we use adaptations of particle swarm optimization (PSO) and teaching-learning-based optimization (TLBO), population-based metaheuristics with different learning strategies. These are high-level stochastic search algorithms that do not require gradient information or a convex search space. Despite increasing computing power and parallel processing, an overly fine mesh is not feasible for parameter identification, which creates the need for a mesh that optimizes both accuracy and simulation time. We use a bi-objective PSO algorithm to generate a Pareto front of meshes that accounts for both objectives. The dual permeability model and the optimization algorithms were tested on virtual data and on field TDR sensor readings. The TDR readings showed a very steep increase during rapid rainfall events and a subsequent steep decrease, theorized to be an effect of artificial macroporous envelopes surrounding the TDR sensors, which create an anomalous region with distinct local soil hydraulic properties. One of our objectives is to test how well the dual permeability model can describe this infiltration behavior and which coupling term is most suitable.
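    The inter-domain exchange descriptions being compared are variants of a first-order coupling term. A minimal sketch, assuming a linear flux Gamma_w = alpha * (h_f - h_m) between fracture and matrix heads and constant storage coefficients (illustrative only, not DRUtES's actual Richards-equation formulation):

```python
def simulate_exchange(h_m, h_f, alpha=0.5, c_m=1.0, c_f=0.2, dt=0.01, steps=1000):
    """First-order matrix-fracture coupling: flux Gamma_w = alpha * (h_f - h_m)
    moves water from the higher-head domain to the lower one; c_m and c_f are
    linear storage coefficients for the matrix and fracture domains."""
    history = [(h_m, h_f)]
    for _ in range(steps):
        gamma = alpha * (h_f - h_m)  # exchange flux (positive: fracture -> matrix)
        h_m += dt * gamma / c_m      # matrix gains exactly what the fracture loses
        h_f -= dt * gamma / c_f
        history.append((h_m, h_f))
    return history

# Rapid infiltration fills the fracture domain first (h_f = 0); the matrix
# (h_m = -2) then draws water from it until the heads equilibrate.
hist = simulate_exchange(h_m=-2.0, h_f=0.0)
```

    Adding parameters to alpha (e.g., making it depend on saturation or interface geometry) gives the richer coupling variants whose identifiability the study compares.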

  20. Limitations and mechanisms influencing the migratory performance of soaring birds

    Treesearch

    Tricia A. Miller; Brooks Robert P.; Michael J. Lanzone; David Brandes; Jeff Cooper; Junior A. Tremblay; Jay Wilhelm; Adam Duerr; Todd E. Katzner

    2016-01-01

    Migration is costly in terms of time, energy and safety. Optimal migration theory suggests that individual migratory birds will choose between these three costs depending on their motivation and available resources. To test hypotheses about use of migratory strategies by large soaring birds, we used GPS telemetry to track 18 adult, 13 sub-adult and 15 juvenile Golden...

  1. Variable fidelity robust optimization of pulsed laser orbital debris removal under epistemic uncertainty

    NASA Astrophysics Data System (ADS)

    Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan

    2016-04-01

    A variable fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization, which optimizes the performance while simultaneously maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm, based on a steepest-descent-like strategy with proper orthogonal decomposition (POD), is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost of the expensive model: when solutions fall into the trust region, the analytical model is used instead. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It is shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.

  2. Acceleration for 2D time-domain elastic full waveform inversion using a single GPU card

    NASA Astrophysics Data System (ADS)

    Jiang, Jinpeng; Zhu, Peimin

    2018-05-01

    Full waveform inversion (FWI) is a challenging procedure due to the high computational cost of the modeling, especially in the elastic case. The graphics processing unit (GPU) has become a popular device for high-performance computing (HPC). To reduce the long computation time, we design and implement GPU-based 2D elastic FWI (EFWI) in the time domain using a single GPU card. We parallelize the forward modeling and gradient calculations using the CUDA programming language. To overcome the relatively small global memory on the GPU, a boundary saving strategy is exploited to reconstruct the forward wavefield. Moreover, the L-BFGS optimization method used in the inversion improves the convergence of the misfit function, and a multiscale inversion strategy is employed in the workflow to obtain accurate inversion results. In our tests, the single-GPU implementations achieve >15 times speedup in forward modeling and about 12 times speedup in gradient calculation compared with eight-core CPU implementations optimized with OpenMP, and the GPU results are verified to be sufficiently accurate by comparison with the CPU results.
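    The boundary saving strategy can be illustrated on a 1D scalar-wave toy problem: during forward modeling only the cells inside the absorbing boundary strips are stored at each time step (plus the final two snapshots), and the earlier wavefields are rebuilt by running the time-reversible interior scheme backward while overwriting the strips with their saved values. This plain-Python sketch mirrors the idea only; the actual EFWI code is an elastic CUDA implementation.

```python
import math

def laplacian(u, i):
    return u[i + 1] - 2 * u[i] + u[i - 1]

def forward(u0, u1, c2, damp, nt):
    """Forward leapfrog; store only the damped-zone cells ('boundary strips')
    at every step, plus the final two full snapshots."""
    n = len(u0)
    zone = [i for i in range(n) if damp[i] < 1.0]
    saved = [{i: u0[i] for i in zone}, {i: u1[i] for i in zone}]
    up, u = u0[:], u1[:]
    for _ in range(nt):
        new = [0.0] * n
        for i in range(1, n - 1):
            new[i] = 2 * u[i] - up[i] + c2 * laplacian(u, i)
        for i in zone:
            new[i] *= damp[i]  # absorbing taper: breaks reversibility here only
        saved.append({i: new[i] for i in zone})
        up, u = u, new
    return up, u, saved        # last two snapshots + boundary strips

def reconstruct(u_prev_last, u_last, saved, c2, n):
    """Rebuild earlier wavefields by time-reversing the interior scheme and
    overwriting the damped-zone cells with their saved values."""
    s_next, s_cur = u_last, u_prev_last
    for t in range(len(saved) - 3, -1, -1):
        cand = [0.0] * n
        for i in range(1, n - 1):
            cand[i] = 2 * s_cur[i] - s_next[i] + c2 * laplacian(s_cur, i)
        for i, v in saved[t].items():
            cand[i] = v
        s_next, s_cur = s_cur, cand
    return s_cur               # reconstructed initial wavefield

n, nt, c2 = 101, 120, 0.25
damp = [0.95 if (i < 10 or i >= n - 10) else 1.0 for i in range(n)]
u0 = [math.exp(-0.1 * (i - n // 2) ** 2) for i in range(n)]
rebuilt = reconstruct(*forward(u0, u0, c2, damp, nt), c2=c2, n=n)
```

    Storing O(boundary x nt) values instead of O(n x nt) is what makes the gradient computation fit in the GPU's limited global memory.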

  3. Impact of respiratory motion on worst-case scenario optimized intensity modulated proton therapy for lung cancers.

    PubMed

    Liu, Wei; Liao, Zhongxing; Schild, Steven E; Liu, Zhong; Li, Heng; Li, Yupeng; Park, Peter C; Li, Xiaoqiang; Stoker, Joshua; Shen, Jiajian; Keole, Sameer; Anand, Aman; Fatyga, Mirek; Dong, Lei; Sahoo, Narayan; Vora, Sujay; Wong, William; Zhu, X Ronald; Bues, Martin; Mohan, Radhe

    2015-01-01

    We compared conventionally optimized intensity modulated proton therapy (IMPT) treatment plans against worst-case scenario optimized treatment plans for lung cancer. The comparison of the 2 IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient setup, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. For each of the 9 lung cancer cases, 2 treatment plans were created that accounted for treatment uncertainties in 2 different ways. The first used the conventional method: delivery of prescribed dose to the planning target volume that is geometrically expanded from the internal target volume (ITV). The second used a worst-case scenario optimization scheme that addressed setup and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of changes in patient anatomy attributable to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phase and absolute differences between these phases. The mean plan evaluation metrics of the 2 groups were compared with 2-sided paired Student t tests. Without respiratory motion considered, we affirmed that worst-case scenario optimization is superior to planning target volume-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, worst-case scenario optimization still achieved more robust dose distributions to respiratory motion for targets and comparable or even better plan optimality (D95% ITV, 96.6% vs 96.1% [P = .26]; D5%- D95% ITV, 10.0% vs 12.3% [P = .082]; D1% spinal cord, 31.8% vs 36.5% [P = .035]). Worst-case scenario optimization led to superior solutions for lung IMPT. 
Despite the fact that worst-case scenario optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
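    The worst-case (minimax) optimization idea can be sketched with a toy beamlet-weight problem: each uncertainty scenario has its own dose-influence matrix, and the objective penalizes the scenario whose dose deviates most from the prescription. The matrices, sizes, and subgradient-descent solver below are illustrative assumptions, not the clinical optimizer.

```python
def worst_case_objective(w, scenarios, prescription):
    """For each uncertainty scenario s, dose = D_s @ w; the objective is the
    largest squared deviation from the prescription over all scenarios."""
    worst = None
    for D in scenarios:
        dose = [sum(Dij * wj for Dij, wj in zip(row, w)) for row in D]
        val = sum((d - p) ** 2 for d, p in zip(dose, prescription))
        if worst is None or val > worst[0]:
            worst = (val, D, dose)
    return worst

def optimize(w, scenarios, prescription, lr=0.02, iters=300):
    """Subgradient descent: step along the gradient of the currently worst scenario."""
    for _ in range(iters):
        _, D, dose = worst_case_objective(w, scenarios, prescription)
        for j in range(len(w)):
            grad = sum(2 * (dose[i] - prescription[i]) * D[i][j] for i in range(len(dose)))
            w[j] = max(0.0, w[j] - lr * grad)  # beamlet weights stay nonnegative
    return w

# Two hypothetical range-shift scenarios for 2 voxels x 2 beamlets.
scenarios = [[[1.0, 0.2], [0.2, 1.0]],
             [[0.8, 0.3], [0.3, 0.9]]]
prescription = [1.0, 1.0]
w0 = [0.0, 0.0]
f_before = worst_case_objective(w0, scenarios, prescription)[0]
w = optimize(w0[:], scenarios, prescription)
f_after = worst_case_objective(w, scenarios, prescription)[0]
```

    Optimizing the worst scenario directly, rather than a geometrically expanded target, is what gives the robustly optimized plans their resistance to setup and range errors.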

  4. SU-E-T-452: Impact of Respiratory Motion On Robustly-Optimized Intensity-Modulated Proton Therapy to Treat Lung Cancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Schild, S; Bues, M

    Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases, two treatment plans were created accounting for treatment uncertainties in two different ways. The first used the conventional method: delivery of the prescribed dose to the planning target volume (PTV) geometrically expanded from the internal target volume (ITV). The second employed the worst-case robust optimization scheme that addressed set-up and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of changes in patient anatomy due to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to dose distributions more robust to respiratory motion for targets and comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT. 
    Despite the fact that robust optimization did not explicitly account for respiratory motion, it produced motion-resistant treatment plans. However, further research is needed to incorporate respiratory motion into IMPT robust optimization.

  5. When to throw the switch: The adaptiveness of modifying emotion regulation strategies based on affective and physiological feedback.

    PubMed

    Birk, Jeffrey L; Bonanno, George A

    2016-08-01

    Particular emotion regulation (ER) strategies are beneficial in certain contexts, but little is known about the adaptiveness of switching strategies after implementing an initial strategy. Research and theory on regulatory flexibility suggest that people switch strategies dynamically and that internal states provide feedback indicating when switches are appropriate. Frequent switching may predict positive outcomes among people who respond to this feedback. We investigated whether internal feedback (particularly corrugator activity, heart rate, or subjective negative intensity) guides people to switch to an optimal (i.e., distraction) but not nonoptimal (i.e., reappraisal) strategy for regulating strong emotion. We also tested whether switching frequency and responsiveness to internal feedback (RIF) together predict well-being. While attempting to regulate emotion elicited by unpleasant pictures, participants could switch to an optimal (Study 1; reappraisal-to-distraction order; N = 90) or nonoptimal (Study 2; distraction-to-reappraisal order; N = 95) strategy for high-arousal emotion. A RIF score for each emotion measure indexed the relative strength of emotion during the initial phase for trials on which participants later switched strategies. As hypothesized, negative intensity, corrugator activity, and the magnitude of heart rate deceleration during this early phase were higher on switch than maintain trials in Study 1 only. Critically, in Study 1 only, greater switching frequency predicted higher and lower life satisfaction for participants with high and low corrugator RIF, respectively, even after controlling for reappraisal success. Individual differences in RIF may contribute to subjective well-being provided that the direction of strategy switching aligns well with regulatory preferences for high emotion. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Effects of strategy on visual working memory capacity

    PubMed Central

    Bengson, Jesse J.; Luck, Steven J.

    2015-01-01

    Substantial evidence suggests that individual differences in estimates of working memory capacity reflect differences in how effectively people use their intrinsic storage capacity. This suggests that estimated capacity could be increased by instructions that encourage more effective encoding strategies. The present study tested this by giving different participants explicit strategy instructions in a change detection task. Compared to a condition in which participants were simply told to do their best, we found that estimated capacity was increased for participants who were instructed to remember the entire visual display, even at set sizes beyond their capacity. However, no increase in estimated capacity was found for a group that was told to focus on a subset of the items in supracapacity arrays. This finding confirms the hypothesis that encoding strategies may influence visual working memory performance, and it is contrary to the hypothesis that the optimal strategy is to filter out any items beyond the storage capacity. PMID:26139356

  7. Effects of strategy on visual working memory capacity.

    PubMed

    Bengson, Jesse J; Luck, Steven J

    2016-02-01

    Substantial evidence suggests that individual differences in estimates of working memory capacity reflect differences in how effectively people use their intrinsic storage capacity. This suggests that estimated capacity could be increased by instructions that encourage more effective encoding strategies. The present study tested this by giving different participants explicit strategy instructions in a change detection task. Compared to a condition in which participants were simply told to do their best, we found that estimated capacity was increased for participants who were instructed to remember the entire visual display, even at set sizes beyond their capacity. However, no increase in estimated capacity was found for a group that was told to focus on a subset of the items in supracapacity arrays. This finding confirms the hypothesis that encoding strategies may influence visual working memory performance, and it is contrary to the hypothesis that the optimal strategy is to filter out any items beyond the storage capacity.
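
    Estimated capacity in single-probe change-detection tasks of this kind is conventionally computed with Cowan's K formula, K = N x (hit rate - false-alarm rate), where N is the set size. A minimal sketch with hypothetical numbers (not data from this study):

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K: estimated number of items held in visual working memory
    in a single-probe change-detection task."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical observer tested at set size 8: detects 70% of changes,
# false-alarms on 15% of no-change trials.
k = cowans_k(8, 0.70, 0.15)
print(round(k, 2))  # 4.4 -> roughly four items' worth of information
```

    Comparing K across instruction groups, as in this study, then amounts to comparing these per-participant estimates.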

  8. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy development, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies address the context of multiple conflicting objectives at each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable hybrid evolutionary-local-search algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to the 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche for evolutionary algorithms in solving such difficult problems of practical importance, compared with their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming, and we hope it will motivate EMO and other researchers to pay more attention to this important and difficult problem-solving activity.
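
    The nested structure described above can be illustrated on a toy single-objective bilevel problem (a hypothetical quadratic example, not one of the paper's test problems): every candidate upper-level decision x requires a full lower-level solve before the upper-level objective can even be evaluated, which is what makes nested procedures expensive for realistic models.

```python
import numpy as np

def lower_level_optimum(x):
    # Lower-level problem: min_y (y - x)^2, solvable exactly here: y*(x) = x.
    return x

def upper_level_objective(x, y):
    # Upper-level objective F(x, y) = x^2 + (y - 1)^2.
    return x**2 + (y - 1.0)**2

# Nested procedure: each upper-level candidate triggers a lower-level solve.
xs = np.linspace(-2.0, 2.0, 4001)
F = np.array([upper_level_objective(x, lower_level_optimum(x)) for x in xs])
x_best = xs[F.argmin()]
print(round(x_best, 3))  # 0.5, the bilevel optimum of this toy problem
```

    With y*(x) = x substituted, F reduces to x^2 + (x - 1)^2, minimized at x = 0.5; the grid search recovers this.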

  9. Sequential ensemble-based optimal design for parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling EnKF, information theory, and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS), and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problem.
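
    The core idea of entropy-based design can be sketched on a scalar toy problem (all numbers hypothetical, and far simpler than the unsaturated flow cases in the paper): fit a Gaussian to the prior ensemble, apply the Kalman variance update for each candidate observation, and pick the design with the largest Shannon entropy difference between prior and posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior ensemble of a scalar parameter (hypothetical).
ensemble = rng.normal(loc=1.0, scale=0.5, size=500)

# Candidate observations y = h * theta + noise, differing in sensitivity h
# and noise variance r.
candidates = [(0.2, 0.04), (1.0, 0.04), (1.0, 0.5)]  # (h, r) pairs

def entropy_difference(var_prior, h, r):
    # Kalman variance update: var_post = var - (var*h)^2 / (h^2*var + r).
    var_post = var_prior - (var_prior * h) ** 2 / (h**2 * var_prior + r)
    # Entropy drop of a Gaussian: 0.5 * ln(var_prior / var_post).
    return 0.5 * np.log(var_prior / var_post)

var_prior = ensemble.var()
scores = [entropy_difference(var_prior, h, r) for h, r in candidates]
best = int(np.argmax(scores))
print(best)  # 1: high sensitivity plus low noise is most informative
```

    The entropy difference reduces to 0.5 * ln(1 + h^2 * var / r), so the design maximizing h^2 / r always wins in this scalar case.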

  10. The key role of extinction learning in anxiety disorders: behavioral strategies to enhance exposure-based treatments.

    PubMed

    Pittig, Andre; van den Berg, Linda; Vervliet, Bram

    2016-01-01

    Extinction learning is a major mechanism for fear reduction by means of exposure. Current research targets innovative strategies to enhance fear extinction and thereby optimize exposure-based treatments for anxiety disorders. This selective review updates novel behavioral strategies that may provide cutting-edge clinical implications. Recent studies provide further support for two types of enhancement strategies. Procedural enhancement strategies implemented during extinction training translate to how exposure exercises may be conducted to optimize fear extinction. These strategies mostly focus on a maximized violation of dysfunctional threat expectancies and on reducing context and stimulus specificity of extinction learning. Flanking enhancement strategies target periods before and after extinction training and inform optimal preparation and post-processing of exposure exercises. These flanking strategies focus on the enhancement of learning in general, memory (re-)consolidation, and memory retrieval. Behavioral strategies to enhance fear extinction may provide powerful clinical applications to further maximize the efficacy of exposure-based interventions. However, future replications, mechanistic examinations, and translational studies are warranted to verify long-term effects and naturalistic utility. Future directions also comprise the interplay of optimized fear extinction with (avoidance) behavior and motivational antecedents of exposure.

  11. Cleanser, Detergent, Personal Care Product, and Pretreatment Evaluation

    NASA Technical Reports Server (NTRS)

    Adam, Niklas; Carrier, Chris; Vega, Leticia; Casteel, Michael; Verostko, Chuck; Pickering, Karen

    2011-01-01

    The purpose of the Cleanser, Detergent, Personal Care Product, and Pretreatment Evaluation & Selection task is to identify the optimal combination of personal hygiene products, crew activities, and pretreatment strategies to provide the crew with sustainable life support practices and a comfortable habitat. Minimal energy, mass, and crew time inputs are desired to recycle wastewater during long duration missions. This document will provide a brief background on the work this past year supporting the ELS Distillation Comparison Test, issues regarding use of the hygiene products originally chosen for the test, methods and results used to select alternative products, and lessons learned from testing.

  12. Cleanser, Detergent, Personal Care Product Pretreatment Evaluation

    NASA Technical Reports Server (NTRS)

    Adam, Niklas

    2010-01-01

    The purpose of the Cleanser, Detergent, Personal Care Product, and Pretreatment Evaluation & Selection task is to identify the optimal combination of personal hygiene products, crew activities, and pretreatment strategies to provide the crew with sustainable life support practices and a comfortable habitat. Minimal energy, mass, and crew time inputs are desired to recycle wastewater during long duration missions. This document will provide a brief background on the work this past year supporting the ELS Distillation Comparison Test, issues regarding use of the hygiene products originally chosen for the test, methods and results used to select alternative products, and lessons learned from testing.

  13. Optimization of European call options considering physical delivery network and reservoir operation rules

    NASA Astrophysics Data System (ADS)

    Cheng, Wei-Chen; Hsu, Nien-Sheng; Cheng, Wen-Ming; Yeh, William W.-G.

    2011-10-01

    This paper develops alternative strategies for European call options for water purchase under hydrological uncertainty that water resources managers can use for decision making. Each alternative strategy maximizes its own objective over a selected sequence of future hydrology characterized by its exceedance probability. Water trade provides flexibility and enhances water distribution system reliability. However, water trade between two parties in a regional water distribution system involves many issues, such as the delivery network, reservoir operation rules, storage space, demand, water availability, uncertainty, and any existing contracts. An option is a security giving the right to buy or sell an asset; in our case, the asset is water. We extend a flow-path-based water distribution model to include reservoir operation rules. The model simultaneously considers the physical distribution network as well as the relationships between water sellers and buyers. We first test the model extension. Then we apply the proposed optimization model for European call options to the Tainan water distribution system in southern Taiwan. The formulation lends itself to a mixed integer linear programming model. We use the weighting method to formulate a composite objective function for the multiobjective problem. The proposed methodology provides water resources managers with an overall picture of water trade strategies and the consequences of each strategy. The results from the case study indicate that the strategy associated with a streamflow exceedance probability of 50% or smaller should be adopted as the reference strategy for the Tainan water distribution system.

  14. Multilevel Optimization Framework for Hierarchical Stiffened Shells Accelerated by Adaptive Equivalent Strategy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong

    2017-06-01

    In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. First, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of the asymptotic homogenization (NIAH) method. Based on the NSSM, an adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to decide self-adaptively which hierarchy of the structure should be made equivalent, according to the critical buckling mode rapidly predicted by the NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum, by contrast with a single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are indicated by comparison with a single equivalent strategy.

  15. Cheating experience: Guiding novices to adopt the gaze strategies of experts expedites the learning of technical laparoscopic skills.

    PubMed

    Vine, Samuel J; Masters, Rich S W; McGrath, John S; Bright, Elizabeth; Wilson, Mark R

    2012-07-01

    Previous research has demonstrated that trainees can be taught (via explicit verbal instruction) to adopt the gaze strategies of expert laparoscopic surgeons. The current study examined a software template designed to guide trainees to adopt expert gaze control strategies passively, without being provided with explicit instructions. We examined 27 novices (who had no laparoscopic training) performing 50 learning trials of a laparoscopic training task in either a discovery-learning (DL) group or a gaze-training (GT) group while wearing an eye tracker to assess gaze control. The GT group performed trials using a surgery-training template (STT); software that is designed to guide expert-like gaze strategies by highlighting the key locations on the monitor screen. The DL group had a normal, unrestricted view of the scene on the monitor screen. Both groups then took part in a nondelayed retention test (to assess learning) and a stress test (under social evaluative threat) with a normal view of the scene. The STT was successful in guiding the GT group to adopt an expert-like gaze strategy (displaying more target-locking fixations). Adopting expert gaze strategies led to an improvement in performance for the GT group, which outperformed the DL group in both retention and stress tests (faster completion time and fewer errors). The STT is a practical and cost-effective training interface that automatically promotes an optimal gaze strategy. Trainees who are trained to adopt the efficient target-locking gaze strategy of experts gain a performance advantage over trainees left to discover their own strategies for task completion. Copyright © 2012 Mosby, Inc. All rights reserved.

  16. Perceptual support promotes strategy generation: Evidence from equation solving.

    PubMed

    Alibali, Martha W; Crooks, Noelle M; McNeil, Nicole M

    2017-08-30

    Over time, children shift from using less optimal strategies for solving mathematics problems to using better ones. But why do children generate new strategies? We argue that they do so when they begin to encode problems more accurately; therefore, we hypothesized that perceptual support for correct encoding would foster strategy generation. Fourth-grade students solved mathematical equivalence problems (e.g., 3 + 4 + 5 = 3 + __) in a pre-test. They were then randomly assigned to one of three perceptual support conditions or to a Control condition. Participants in all conditions completed three mathematical equivalence problems with feedback about correctness. Participants in the experimental conditions received perceptual support (i.e., highlighting in red ink) for accurately encoding the equal sign, the right side of the equation, or the numbers that could be added to obtain the correct solution. Following this intervention, participants completed a problem-solving post-test. Among participants who solved the problems incorrectly at pre-test, those who received perceptual support for correctly encoding the equal sign were more likely to generate new, correct strategies for solving the problems than were those who received feedback only. Thus, perceptual support for accurate encoding of a key problem feature promoted generation of new, correct strategies. Statement of Contribution What is already known on this subject? With age and experience, children shift to using more effective strategies for solving math problems. Problem encoding also improves with age and experience. What the present study adds? Support for encoding the equal sign led children to generate correct strategies for solving equations. Improvements in problem encoding are one source of new strategies. © 2017 The British Psychological Society.

  17. Optimal Dynamic Advertising Strategy Under Age-Specific Market Segmentation

    NASA Astrophysics Data System (ADS)

    Krastev, Vladimir

    2011-12-01

    We consider the model proposed by Faggian and Grosset for determining the advertising efforts and goodwill in the long run of a company under age segmentation of consumers. By reducing this model to optimal control subproblems, we find the optimal advertising strategy and goodwill.

  18. Piston Bowl Optimization for RCCI Combustion in a Light-Duty Multi-Cylinder Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson, Reed M; Curran, Scott; Wagner, Robert M

    2012-01-01

    Reactivity Controlled Compression Ignition (RCCI) is an engine combustion strategy that produces low NO{sub x} and PM emissions with high thermal efficiency. Previous RCCI research has been conducted on single-cylinder heavy-duty engines. The current study investigates RCCI operation in a light-duty multi-cylinder engine at 3 operating points. These operating points were chosen by the Ad Hoc working group to cover a range of conditions seen in the US EPA light-duty FTP test and to simulate operation in that test. The fueling strategy for the engine experiments consisted of in-cylinder fuel blending using port fuel injection (PFI) of gasoline and early-cycle, direct injection (DI) of diesel fuel. At these 3 points, the stock engine configuration is compared to operation with both the original equipment manufacturer (OEM) and custom-machined pistons designed for RCCI operation. The pistons were designed with assistance from the KIVA 3V computational fluid dynamics (CFD) code. By using a genetic algorithm optimization in conjunction with KIVA, the piston bowl profile was optimized for dedicated RCCI operation to reduce unburned fuel emissions and piston bowl surface area. By reducing these parameters, the thermal efficiency of the engine was improved while maintaining low NOx and PM emissions. Results show that with the new piston bowl profile and an optimized injection schedule, RCCI brake thermal efficiency was increased from 37%, with the stock EURO IV configuration, to 40% at the 2,600 rev/min, 6.9 bar BMEP condition, and the NOx and PM emissions targets were met without the need for exhaust after-treatment.

  19. Parametric, nonparametric and parametric modelling of a chaotic circuit time series

    NASA Astrophysics Data System (ADS)

    Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.

    2000-09-01

    The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In validating a proposed model, one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and the model output are caused by an inappropriate model, by bad estimates of the parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and, if they are rejected, to suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit, where we obtain an extremely accurate reconstruction of the observed attractor.

  20. Towards a systematic nationwide screening strategy for MODY.

    PubMed

    Shields, Beverley; Colclough, Kevin

    2017-04-01

    MODY is an early-onset monogenic form of diabetes. Correctly identifying MODY is of considerable importance, as diagnosing the specific genetic subtype can inform the optimal treatment, with many patients being able to discontinue unnecessary insulin treatment. Diagnostic molecular genetic testing to confirm MODY is expensive, so screening strategies are required to identify the most appropriate patients for testing. In this issue of Diabetologia, Johansson and colleagues (DOI: 10.1007/s00125-016-4167-1) describe a nationwide systematic screening approach to identify individuals with MODY in the paediatric age range. They focused testing on patients negative for both GAD and islet antigen 2 (IA-2) islet autoantibodies, thereby ruling out those with markers of type 1 diabetes, the most common form of diabetes in this age group. This commentary discusses the advantages and limitations of the approach, and the caution required when interpreting variants of uncertain pathogenicity identified from testing whole populations rather than targeting only patients with a strong MODY phenotype.

  1. Optimal growth trajectories with finite carrying capacity.

    PubMed

    Caravelli, F; Sindoni, L; Caccioli, F; Ududec, C

    2016-08-01

    We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.

  2. Optimal growth trajectories with finite carrying capacity

    NASA Astrophysics Data System (ADS)

    Caravelli, F.; Sindoni, L.; Caccioli, F.; Ududec, C.

    2016-08-01

    We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.
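
    For the geometric Brownian motion case quoted above, the Kelly result can be checked in a few lines: with drift mu and volatility sigma, the expected log-growth rate of a constantly invested fraction f is g(f) = f*mu - 0.5*f^2*sigma^2, maximized at f* = mu/sigma^2. A short numerical sketch (parameter values illustrative):

```python
import numpy as np

# Expected log-growth rate when a constant fraction f of resources is
# exposed to a geometric Brownian motion with drift mu, volatility sigma:
#   g(f) = f*mu - 0.5 * f**2 * sigma**2
# The Kelly criterion says this is maximized at f* = mu / sigma**2.
mu, sigma = 0.08, 0.20

f = np.linspace(0.0, 4.0, 4001)
g = f * mu - 0.5 * f**2 * sigma**2
f_star = f[g.argmax()]
print(round(f_star, 6))  # 2.0, which equals mu / sigma**2
```

    The finite-carrying-capacity generalization in the paper replaces this constant optimal fraction with an optimal trajectory of the control parameter.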

  3. Cell-Mediated Immunity to Target the Persistent Human Immunodeficiency Virus Reservoir.

    PubMed

    Riley, James L; Montaner, Luis J

    2017-03-15

    Effective clearance of virally infected cells requires the sequential activity of innate and adaptive immunity effectors. In human immunodeficiency virus (HIV) infection, naturally induced cell-mediated immune responses rarely eradicate infection. However, optimized immune responses could potentially be leveraged in HIV cure efforts if epitope escape and lack of sustained effector memory responses were to be addressed. Here we review leading HIV cure strategies that harness cell-mediated control against HIV in stably suppressed antiretroviral-treated subjects. We focus on strategies that may maximize target recognition and eradication by the sequential activation of a reconstituted immune system, together with delivery of optimal T-cell responses that can eliminate the reservoir and serve as means to maintain control of HIV spread in the absence of antiretroviral therapy (ART). As evidenced by the evolution of ART, we argue that a combination of immune-based strategies will be a superior path to cell-mediated HIV control and eradication. Available data from several human pilot trials already identify target strategies that may maximize antiviral pressure by joining innate and engineered T cell responses toward testing for sustained HIV remission and/or cure. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.

  4. Towards a theory of tiered testing.

    PubMed

    Hansson, Sven Ove; Rudén, Christina

    2007-06-01

    Tiered testing is an essential part of any resource-efficient strategy for the toxicity testing of a large number of chemicals, which is required, for instance, in the risk management of general (industrial) chemicals. In spite of this, no general theory seems to be available for the combination of single tests into efficient tiered testing systems. A first outline of such a theory is developed here. It is argued that chemical, toxicological, and decision-theoretical knowledge should be combined in the construction of such a theory. A decision-theoretical approach for the optimization of test systems is introduced. It is based on expected utility maximization, with simplified assumptions covering factual and value-related information that is usually missing in the development of test systems.
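
    The expected-utility framing can be sketched with a two-tier example (all costs, error rates, and the penalty term below are illustrative assumptions, not values from the paper): run the expensive confirmatory assay on everything, or screen cheaply first and confirm only screen-positives. Which tier structure minimizes expected cost depends on the prior probability that a chemical is toxic, echoing the prior-dependence of the GMOtrack strategies above.

```python
def expected_cost(p_toxic, strategy, c_screen=1.0, c_confirm=20.0,
                  sens=0.95, spec=0.85, miss_penalty=500.0):
    """Expected cost per chemical for two tiered-testing strategies.

    'confirm_all': expensive confirmatory assay on every chemical.
    'screen_first': cheap screen, confirm only screen-positives; screen
    false negatives slip through and incur a penalty term.
    All numbers are hypothetical.
    """
    if strategy == "confirm_all":
        return c_confirm
    p_screen_pos = p_toxic * sens + (1 - p_toxic) * (1 - spec)
    return (c_screen + p_screen_pos * c_confirm
            + p_toxic * (1 - sens) * miss_penalty)

for p in (0.01, 0.5):
    a = expected_cost(p, "confirm_all")
    b = expected_cost(p, "screen_first")
    print(p, "screen_first" if b < a else "confirm_all")
```

    With these numbers, screening first wins at low prevalence (0.01) while confirming everything wins at high prevalence (0.5), so no single tier structure is optimal for all portfolios of chemicals.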

  5. Optimal charge control strategies for stationary photovoltaic battery systems

    NASA Astrophysics Data System (ADS)

    Li, Jiahao; Danzer, Michael A.

    2014-07-01

    Battery systems coupled to photovoltaic (PV) modules fulfill one major function: they locally decouple PV generation and consumption of electrical power, leading to two major effects. First, they reduce the grid load, especially at peak times, and therewith reduce the necessity of network expansion. Second, they increase self-consumption in households and therewith help to reduce energy expenses. For the management of PV batteries, charge control strategies need to be developed that meet the goals of both the distribution system operators and the local power producers. In this work, optimal control strategies for various optimization goals are developed on the basis of predicted household load and PV generation profiles, using the method of dynamic programming. The resulting charge curves are compared and the essential differences discussed. Finally, a multi-objective optimization shows that charge control strategies can be derived that take all optimization goals into account.
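
    A dynamic-programming charge controller of this general kind can be sketched on a toy problem (all profiles, prices, and the battery model below are hypothetical; conversion losses and power limits are ignored): discretize the battery state, then run a backward recursion over the forecast horizon to minimize grid-import cost.

```python
import numpy as np

# Hypothetical forecasts over twelve steps of one day.
pv    = np.array([0, 0, 1, 3, 4, 4, 3, 1, 0, 0, 0, 0], float)  # kWh/step
load  = np.array([1, 1, 1, 1, 1, 1, 2, 2, 3, 3, 2, 1], float)  # kWh/step
price = np.array([.2, .2, .2, .2, .2, .2, .3, .3, .4, .4, .3, .2])  # cost/kWh

levels = np.arange(0.0, 5.5, 0.5)      # discretized battery state, kWh
T = len(pv)
cost = np.full((T + 1, len(levels)), np.inf)
cost[T, :] = 0.0                       # no value assigned to leftover charge

# Backward recursion: cost[t, i] = cheapest cost-to-go from state i at time t.
for t in range(T - 1, -1, -1):
    for i, e in enumerate(levels):
        for j, e_next in enumerate(levels):
            grid = load[t] - pv[t] + (e_next - e)  # > 0 means grid import
            step = max(grid, 0.0) * price[t]       # exported surplus earns 0
            cost[t, i] = min(cost[t, i], step + cost[t + 1, j])

print(round(cost[0, 0], 2))  # minimum daily cost from an empty battery
```

    For these illustrative profiles the optimal policy stores the free midday PV surplus and discharges it against the evening price peak, cutting the daily cost from 3.9 (no battery) to 1.9. Multi-objective variants, as in the paper, would add further terms (e.g. peak grid load) to the stage cost.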

  6. Energy Performance and Optimal Control of Air-conditioned Buildings Integrated with Phase Change Materials

    NASA Astrophysics Data System (ADS)

    Zhu, Na

    This thesis presents an overview of previous research on the dynamic characteristics and energy performance of buildings integrated with phase change materials (PCMs), covering buildings both with and without air-conditioning. Because of the particular interest in using PCMs for free cooling and peak-load shifting, the research efforts on these two subjects are reviewed separately. A simplified physical dynamic model of building structures integrated with SSPCM (shape-stabilized phase change material) is developed and validated in this study. The simplified model represents the wall by 3 resistances and 2 capacitances and the PCM layer by 4 resistances and 2 capacitances, the key issue being the identification of the model parameters. This thesis also presents studies on the thermodynamic characteristics of buildings enhanced by PCM, on the impacts of PCM on building cooling load and peak cooling demand in different climates and seasons, and on the optimal operation and control strategies that reduce energy consumption and energy cost by lowering air-conditioning energy consumption and peak load. An office building floor with a typical variable air volume (VAV) air-conditioning system is simulated as the reference building in the comparison study. The envelopes of the studied building are further enhanced by integrating PCM layers. The building system is tested in two cities with typical climates in China: Hong Kong and Beijing. The cold charge and discharge processes, the operation and control strategies of night ventilation, and the air-temperature set-point reset strategy for minimizing energy consumption and electricity cost are studied.
This thesis presents the simulation test platform, the test results on the cold storage and discharge processes, the air-conditioning energy consumption and demand reduction potentials in typical air-conditioning seasons in typical China cites as well as the impacts of operation and control strategies.

  7. Long range personalized cancer treatment strategies incorporating evolutionary dynamics.

    PubMed

    Yeang, Chen-Hsiang; Beckman, Robert A

    2016-10-22

    Current cancer precision medicine strategies match therapies to static consensus molecular properties of an individual's cancer, thus determining the next therapeutic maneuver. These strategies typically maintain a constant treatment while the cancer is not worsening. However, cancers feature complicated sub-clonal structure and dynamic evolution. We have recently shown, in a comprehensive simulation of two non-cross resistant therapies across a broad parameter space representing realistic tumors, that substantial improvement in cure rates and median survival can be obtained utilizing dynamic precision medicine strategies. These dynamic strategies explicitly consider intratumoral heterogeneity and evolutionary dynamics, including predicted future drug resistance states, and reevaluate optimal therapy every 45 days. However, the optimization is performed in single 45 day steps ("single-step optimization"). Herein we evaluate analogous strategies that think multiple therapeutic maneuvers ahead, considering potential outcomes at 5 steps ahead ("multi-step optimization") or 40 steps ahead ("adaptive long term optimization (ALTO)") when recommending the optimal therapy in each 45 day block, in simulations involving both 2 and 3 non-cross resistant therapies. We also evaluate an ALTO approach for situations where simultaneous combination therapy is not feasible ("Adaptive long term optimization: serial monotherapy only (ALTO-SMO)"). Simulations utilize populations of 764,000 and 1,700,000 virtual patients for 2 and 3 drug cases, respectively. Each virtual patient represents a unique clinical presentation including sizes of major and minor tumor subclones, growth rates, evolution rates, and drug sensitivities. While multi-step optimization and ALTO provide no significant average survival benefit, cure rates are significantly increased by ALTO. 
Furthermore, in the subset of individual virtual patients demonstrating clinically significant difference in outcome between approaches, by far the majority show an advantage of multi-step or ALTO over single-step optimization. ALTO-SMO delivers cure rates superior or equal to those of single- or multi-step optimization, in 2 and 3 drug cases respectively. In selected virtual patients incurable by dynamic precision medicine using single-step optimization, analogous strategies that "think ahead" can deliver long-term survival and cure without any disadvantage for non-responders. When therapies require dose reduction in combination (due to toxicity), optimal strategies feature complex patterns involving rapidly interleaved pulses of combinations and high dose monotherapy. This article was reviewed by Wendy Cornell, Marek Kimmel, and Andrzej Swierniak. Wendy Cornell and Andrzej Swierniak are external reviewers (not members of the Biology Direct editorial board). Andrzej Swierniak was nominated by Marek Kimmel.
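The single-step versus multi-step distinction can be sketched with a deliberately toy model. Here two subclones are each sensitive to one of two non-cross-resistant therapies; the growth multipliers per treatment block, the exhaustive enumeration, and all parameter values are invented for illustration and are not the paper's simulation:

```python
import itertools
from functools import reduce

def step(state, therapy):
    """Advance subclone sizes (A-sensitive, B-sensitive) one treatment block.
    Multipliers are illustrative assumptions: the targeted clone shrinks,
    the resistant clone grows."""
    a, b = state
    return (a * 0.5, b * 1.4) if therapy == "A" else (a * 1.4, b * 0.5)

def burden(state):
    """Total tumor burden is the sum of the subclone sizes."""
    return sum(state)

def first_move(state, horizon):
    """Choose the next therapy by exhaustively scoring every sequence of
    `horizon` blocks (horizon=1 is single-step, horizon=5 is multi-step)."""
    best = min(itertools.product("AB", repeat=horizon),
               key=lambda seq: burden(reduce(step, seq, state)))
    return best[0]
```

In a richer model with evolution between compartments, the horizon-5 plan can diverge from the greedy one; this sketch only shows the mechanics of scoring sequences rather than single blocks.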

  8. Experimental validation of a new heterogeneous mechanical test design

    NASA Astrophysics Data System (ADS)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    Standard material parameter identification strategies generally use an extensive number of classical tests to collect the required experimental data. However, a great effort has recently been made by the scientific and industrial communities to base this experimental database on heterogeneous tests. Such tests can provide richer information on the material behavior, allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurement techniques, such as digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure based on an indicator that evaluates the heterogeneity and richness of the strain information. However, no experimental validation had yet been performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test in the parameter identification framework. To this end, the DIC technique and a Finite Element Model Updating inverse strategy are used together for the parameter identification of a DC04 steel, as well as for the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system to provide the strain states on the specimen surface. The identification strategy is applied to the data obtained from the experimental tests, and the results are compared with a reference numerical solution.

  9. Stochastic optimization algorithms for barrier dividend strategies

    NASA Astrophysics Data System (ADS)

    Yin, G.; Song, Q. S.; Yang, H.

    2009-01-01

    This work focuses on finding the optimal barrier policy for an insurance risk model when dividends are paid to the shareholders according to a barrier strategy. A new approach based on stochastic optimization methods is developed. Compared with the existing results in the literature, more general surplus processes are considered. Precise models of the surplus need not be known; only noise-corrupted observations of the dividends are used. Using barrier-type strategies, a class of stochastic optimization algorithms is developed. Convergence of the algorithm is analyzed, and the rate of convergence is also provided. Numerical results are reported to demonstrate the performance of the algorithm.
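The ingredients the abstract describes, noisy dividend observations, no surplus model, and a recursively adjusted barrier, have the shape of a Kiefer-Wolfowitz stochastic approximation. The sketch below optimizes a stand-in objective whose true maximizer is known; the objective, gain sequences, and all constants are illustrative assumptions, not the paper's algorithm:

```python
import random

def noisy_dividend_value(b, rng):
    """Stand-in for a noise-corrupted observation of expected discounted
    dividends under barrier level b; the optimum (unknown to the
    algorithm) is b* = 3."""
    return -(b - 3.0) ** 2 + rng.gauss(0.0, 0.1)

def kiefer_wolfowitz(J, b0, n_iter=2000, seed=0):
    """Barrier update driven only by finite differences of noisy
    observations of J, with decreasing gains."""
    rng = random.Random(seed)
    b = b0
    for n in range(1, n_iter + 1):
        a_n = 1.0 / n            # decreasing step size
        c_n = 1.0 / n ** 0.25    # decreasing finite-difference width
        grad = (J(b + c_n, rng) - J(b - c_n, rng)) / (2.0 * c_n)
        b += a_n * grad          # ascend the estimated gradient
    return b
```

With these gains the iterate drifts from any starting barrier toward the maximizer despite the observation noise, which is the behavior the convergence analysis in such schemes formalizes.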

  10. Speed and convergence properties of gradient algorithms for optimization of IMRT.

    PubMed

    Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe

    2004-05-01

    Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate the strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and the steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of the use of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error, and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions which produced IMRT plans that satisfied the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented. 
For both dose-volume- and EUD-based objective functions, Newton's method far outperforms other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but for different objective function parameters, the solution frequently gets trapped in local minima. We found that the initial intensity distribution obtained from IMRT optimization utilizing objective function parameters, which favor a specific anatomic structure, would lead to a local minimum corresponding to that structure. Our results indicate that from among the gradient algorithms tested, Newton's method appears to be the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting with the results of previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
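The speed gap the authors report between the diagonal-Hessian Newton step and steepest descent is easy to reproduce on a toy ill-conditioned quadratic, where dividing each gradient component by its diagonal curvature solves the problem in one step while a fixed-step descent crawls. This is a generic illustration of the scaling idea, not IMRT code:

```python
def minimize(x, hess_diag, scaled=True, iters=50, lr=0.005):
    """Minimize f(x) = 0.5 * sum(h_i * x_i^2).
    scaled=True mimics a Newton step with the Hessian approximated by
    its diagonal; scaled=False is fixed-step steepest descent."""
    for _ in range(iters):
        grad = [h * xi for h, xi in zip(hess_diag, x)]
        if scaled:
            # Newton-like: rescale each component by its own curvature.
            x = [xi - gi / h for xi, gi, h in zip(x, grad, hess_diag)]
        else:
            # Steepest descent: one global step size for all components.
            x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x
```

On a quadratic with curvatures (1, 100) the scaled step lands on the minimum in a single iteration, while the fixed step must stay small for the stiff direction and so converges slowly along the shallow one; curvature scaling is the essence of why the diagonal-Newton variant wins on speed.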

  11. An improved grey wolf optimizer algorithm for the inversion of geoelectrical data

    NASA Astrophysics Data System (ADS)

    Li, Si-Yu; Wang, Shu-Ming; Wang, Peng-Fei; Su, Xiao-Lu; Zhang, Xin-Song; Dong, Zhi-Hui

    2018-05-01

    The grey wolf optimizer (GWO) is a novel bionics algorithm inspired by the social rank and prey-seeking behaviors of grey wolves. The GWO algorithm is easy to implement because of its basic concept, simple formula, and small number of parameters. This paper develops a GWO algorithm with a nonlinear convergence factor and an adaptive location updating strategy, and applies this improved grey wolf optimizer (IGWO) algorithm to geophysical inversion problems using magnetotelluric (MT), DC resistivity and induced polarization (IP) methods. Numerical tests in MATLAB 2010b on both forward modeling data and observed data show that the IGWO algorithm can find the global minimum and rarely sinks to local minima. For further study, inversion results using the IGWO are contrasted with the particle swarm optimization (PSO) and simulated annealing (SA) algorithms. The comparison reveals that the IGWO and PSO both perform better than SA in balancing exploration and exploitation within a given number of iterations.
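A minimal version of the GWO update, with a nonlinear convergence factor substituted for the standard linear decay, can be sketched as follows; the quadratic decay and all parameter values here are plausible illustrations, not necessarily the formula used in this paper:

```python
import random

def igwo(f, dim=2, n_wolves=10, iters=100, seed=1):
    """Minimal grey wolf optimizer sketch (minimization). The factor
    a(t) decays nonlinearly from 2 to 0, shifting from exploration to
    exploitation; a quadratic decay is assumed for illustration."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(-5, 5) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)                       # best three lead the pack
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1.0 - (t / iters) ** 2)       # nonlinear convergence factor
        for i in range(n_wolves):
            new = []
            for d in range(dim):
                pulls = []
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - wolves[i][d])
                    pulls.append(leader[d] - A * D)
                new.append(sum(pulls) / 3.0)     # average pull toward leaders
            wolves[i] = new
    return min(wolves, key=f)

def sphere(x):
    """Simple convex test objective."""
    return sum(v * v for v in x)
```

On the sphere function the pack collapses onto the minimum as a(t) shrinks; in the inversion setting, f would instead be the data-misfit functional of the MT, DC resistivity, or IP forward model.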

  12. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    PubMed Central

    Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano

    2015-01-01

    As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. How to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246
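The nondominated-sorting selection mentioned above rests on the Pareto dominance relation; a minimal sketch of extracting the first front (for minimization) is:

```python
def dominates(u, v):
    """u dominates v if it is no worse in every objective
    and strictly better in at least one (minimization)."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def nondominated(points):
    """First front of a nondominated sort: the points that
    no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

A full nondominated sort then removes this front and repeats on the remainder, ranking the population front by front, which is how selection pressure toward the Pareto set is created in algorithms of this family.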

  13. Novel identification strategy for ground coffee adulteration based on UPLC-HRMS oligosaccharide profiling.

    PubMed

    Cai, Tie; Ting, Hu; Jin-Lan, Zhang

    2016-01-01

    Coffee is one of the most common and most valuable beverages. According to International Coffee Organization (ICO) reports, the adulteration of coffee for financial reasons is regarded as the most serious threat to the sustainable development of the coffee market. In this work, a novel strategy for adulteration identification in ground coffee was developed based on UPLC-HRMS oligosaccharide profiling. Along with integrated statistical analysis, 17 oligosaccharide compositions were identified as markers for the detection of soybean and rice in ground coffee. This strategy, validated with manual mixtures, ensured both the reliability and authority of adulteration identification. Rice and soybean adulterants present in ground coffee in amounts as low as 5% were identified and evaluated. Some commercial ground coffees were also successfully tested using this strategy. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Evaluation of an optimal fill strategy to biodegrade inhibitory wastewater using an industrial prototype discontinuous reactor.

    PubMed

    Buitrón, G; Moreno-Andrade, I; Linares-García, J A; Pérez, J; Betancur, M J; Moreno, J A

    2007-01-01

    This work presents the results and discussion of the application of an optimally controlled influent flow rate strategy to biodegrade, in a discontinuous reactor, a synthetic wastewater containing 4-chlorophenol. An aerobic automated discontinuous reactor system of 1.3 m3, with a useful volume of 0.75 m3 and an exchange volume of 60%, was used. As part of the control strategy, the influent is fed into the reactor in such a way as to obtain the maximal degradation rate while avoiding inhibition of the microorganisms. This optimal strategy was able to manage increases in the 4-chlorophenol concentration of the influent between 250 and 1000 mg/L. It was shown that the optimally controlled influent flow rate strategy brings savings in reaction time and flexibility in treating high concentrations of an influent with toxic characteristics.

  15. Revenue Share between Layers and Investment Incentive for ISP in the Internet Market

    NASA Astrophysics Data System (ADS)

    Unno, Masaru; Xu, Hua

    In this paper, we consider a revenue-sharing and network investment problem between an Internet service provider (ISP) and a content provider (CP) by applying the dynamic agency theory. We formulate the problem as the principal-agent problem where the ISP is the principal and the CP is the agent. The principal-agent problem is transformed to a stochastic optimal control problem in which the objectives of ISP are to find an optimal revenue-sharing strategy and a network investment strategy, and to advise an incentive compatible effort level to the CP. The sufficient conditions for the existence of the optimal revenue-sharing strategy, the optimal investment strategy and the incentive compatible effort to the CP are obtained. A numerical example is solved to show the existence of such strategies. The practical implications of the results obtained in the paper will also be discussed.

  16. A simple and fast heuristic for protein structure comparison.

    PubMed

    Pelta, David A; González, Juan R; Moreno Vega, Marcos

    2008-03-25

    Protein structure comparison is a key problem in bioinformatics. Several methods exist for protein comparison, the solution of the Maximum Contact Map Overlap problem (MAX-CMO) being one of the available alternatives. Although this problem may be solved using exact algorithms, researchers require approximate algorithms that obtain good quality solutions using fewer computational resources than the former. We propose a variable neighborhood search metaheuristic for solving MAX-CMO. We analyze this strategy in two aspects: 1) from an optimization point of view, the strategy is tested on two different datasets, obtaining errors of 3.5% (over 2702 pairs) and 1.7% (over 161 pairs) with respect to optimal values, thus leading to highly accurate solutions in a simpler and less expensive way than exact algorithms; 2) in terms of protein structure classification, we conduct experiments on three datasets and show that it is feasible to detect structural similarities at SCOP's family and CATH's architecture levels using normalized overlap values. Some limitations, and the role of normalization, are outlined for classification at SCOP's fold level. We designed, implemented and tested a new tool for solving MAX-CMO, based on a well-known metaheuristic technique. The good balance between solution quality and computational effort makes it a valuable tool. Moreover, to the best of our knowledge, this is the first time the MAX-CMO measure is tested at SCOP's fold and CATH's architecture levels, with encouraging results.
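The variable neighborhood search idea, shaking in progressively larger neighborhoods and restarting from the smallest whenever an improvement is found, can be sketched generically; this skeleton and the demo objective are illustrative, not the authors' MAX-CMO implementation:

```python
import random

def vns(f, x0, neighborhoods, iters=200, seed=0):
    """Basic VNS skeleton (minimization). `neighborhoods` is a list of
    functions, ordered small to large, each mapping (x, rng) to a
    random neighbor of x."""
    rng = random.Random(seed)
    best = x0
    for _ in range(iters):
        k = 0
        while k < len(neighborhoods):
            candidate = neighborhoods[k](best, rng)   # shake in N_k
            if f(candidate) < f(best):
                best, k = candidate, 0                # improved: back to N_0
            else:
                k += 1                                # widen the neighborhood
    return best

# Demo: minimize a 1-D quadratic with a small and a large random move.
quad = lambda x: (x - 2.0) ** 2
moves = [lambda x, rng: x + rng.uniform(-0.1, 0.1),
         lambda x, rng: x + rng.uniform(-1.0, 1.0)]
```

For MAX-CMO, x would be a candidate residue alignment, f the (negated) contact map overlap, and the neighborhoods small perturbations of the alignment of increasing size.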

  17. Common aero vehicle autonomous reentry trajectory optimization satisfying waypoint and no-fly zone constraints

    NASA Astrophysics Data System (ADS)

    Jorris, Timothy R.

    2007-12-01

    To support the Air Force's Global Reach concept, a Common Aero Vehicle is being designed to support the Global Strike mission. "Waypoints" are specified for reconnaissance or multiple payload deployments and "no-fly zones" are specified for geopolitical restrictions or threat avoidance. Due to time critical targets and multiple scenario analysis, an autonomous solution is preferred over a time-intensive, manually iterative one. Thus, a real-time or near real-time autonomous trajectory optimization technique is presented to minimize the flight time, satisfy terminal and intermediate constraints, and remain within the specified vehicle heating and control limitations. This research uses the Hypersonic Cruise Vehicle (HCV) as a simplified two-dimensional platform to compare multiple solution techniques. The solution techniques include a unique geometric approach developed herein, a derived analytical dynamic optimization technique, and a rapidly emerging collocation numerical approach. This up-and-coming numerical technique is a direct solution method involving discretization then dualization, with pseudospectral methods and nonlinear programming used to converge to the optimal solution. This numerical approach is applied to the Common Aero Vehicle (CAV) as the test platform for the full three-dimensional reentry trajectory optimization problem. The culmination of this research is the verification of the optimality of this proposed numerical technique, as shown for both the two-dimensional and three-dimensional models. Additionally, user implementation strategies are presented to improve accuracy and enhance solution convergence. 
Thus, the contributions of this research are the geometric approach, the user implementation strategies, and the determination and verification of a numerical solution technique for the optimal reentry trajectory problem that minimizes time to target while satisfying vehicle dynamics and control limitation, and heating, waypoint, and no-fly zone constraints.

  18. Optimizing social participation in community-dwelling older adults through the use of behavioral coping strategies.

    PubMed

    Provencher, Véronique; Desrosiers, Johanne; Demers, Louise; Carmichael, Pierre-Hugues

    2016-01-01

    This study aimed to (1) determine the categories of behavioral coping strategies most strongly correlated with optimal seniors' social participation in different activity and role domains and (2) identify the demographic, health and environmental factors associated with the use of these coping strategies optimizing social participation. The sample consisted of 350 randomly recruited community-dwelling older adults (≥65 years). Coping strategies and social participation were measured, respectively, using the Inventory of Coping Strategies Used by the Elderly and Assessment of Life Habits questionnaires. Information about demographic, health and environmental factors was also collected during the interview. Regression analyses showed a strong relationship between the use of cooking- and transportation-related coping strategies and optimal participation in the domains of nutrition and community life, respectively. Older age and living alone were associated with increased use of cooking-related strategies, while good self-rated health and not living in a seniors' residence were correlated with greater use of transportation-related strategies. Our study helped to identify useful behavioral coping strategies that should be incorporated in disability prevention programs designed to promote community-dwelling seniors' social participation. However, the appropriateness of these strategies depends on whether they are used in relevant contexts and tailored to specific needs. Our results support the relevance of including behavioral coping strategies related to cooking and transportation in disability prevention programs designed to promote community-dwelling seniors' social participation in the domains of nutrition and community life, respectively. 
These factors should be considered in order to optimize the implementation of these useful strategies in disability prevention programs.

  19. Cross-sectoral optimization and visualization of transformation processes in urban water infrastructures in rural areas.

    PubMed

    Baron, S; Kaufmann Alves, I; Schmitt, T G; Schöffel, S; Schwank, J

    2015-01-01

    Predicted demographic, climatic and socio-economic changes will require adaptations of existing water supply and wastewater disposal systems. Especially in rural areas, these new challenges will affect the functionality of the present systems. This paper presents a joint interdisciplinary research project with the objective of developing an innovative software-based optimization and decision support system for the implementation of long-term transformations of existing infrastructures of water supply, wastewater and energy. The concept of the decision support and optimization tool is described and visualization methods for the presentation of results are illustrated. The model is tested in a rural case study region in the Southwest of Germany. A transformation strategy for a decentralized wastewater treatment concept and its visualization are presented for a model village.

  20. Characterization of pulse amplitude and pulse rate modulation for a human vestibular implant during acute electrical stimulation

    NASA Astrophysics Data System (ADS)

    Nguyen, T. A. K.; DiGiovanna, J.; Cavuscens, S.; Ranieri, M.; Guinand, N.; van de Berg, R.; Carpaneto, J.; Kingma, H.; Guyot, J.-P.; Micera, S.; Perez Fornos, A.

    2016-08-01

    Objective. The vestibular system provides essential information about balance and spatial orientation via the brain to other sensory and motor systems. Bilateral vestibular loss significantly reduces quality of life, but vestibular implants (VIs) have demonstrated potential to restore lost function. However, optimal electrical stimulation strategies have not yet been identified in patients. In this study, we compared the two most common strategies, pulse amplitude modulation (PAM) and pulse rate modulation (PRM), in patients. Approach. Four subjects with a modified cochlear implant including electrodes targeting the peripheral vestibular nerve branches were tested. Charge-equivalent PAM and PRM were applied after adaptation to baseline stimulation. Vestibulo-ocular reflex eye movement responses were recorded to evaluate stimulation efficacy during acute clinical testing sessions. Main results. PAM evoked larger amplitude eye movement responses than PRM. Eye movement response axes for lateral canal stimulation were marginally better aligned with PRM than with PAM. A neural network model was developed for the tested stimulation strategies to provide insights on possible neural mechanisms. This model suggested that PAM would consistently cause a larger ensemble firing rate of neurons and thus larger responses than PRM. Significance. Due to the larger magnitude of eye movement responses, our findings strongly suggest PAM as the preferred strategy for initial VI modulation.

  1. On scheduling task systems with variable service times

    NASA Astrophysics Data System (ADS)

    Maset, Richard G.; Banawan, Sayed A.

    1993-08-01

    Several strategies have been proposed for developing optimal and near-optimal schedules for task systems (jobs consisting of multiple tasks that can be executed in parallel). Most such strategies, however, implicitly assume deterministic task service times. We show that these strategies are much less effective when service times are highly variable. We then evaluate two strategies—one adaptive, one static—that have been proposed for retaining high performance despite such variability. Both strategies are extensions of critical path scheduling, which has been found to be efficient at producing near-optimal schedules. We found the adaptive approach to be quite effective.
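Critical path scheduling, the baseline both evaluated strategies extend, is driven by the longest chain of dependent task times; a minimal computation of that chain length under the deterministic-service-time assumption (exactly the assumption the abstract questions under high variability) is:

```python
def critical_path(durations, deps):
    """Critical-path length of a task DAG: the longest chain of
    dependent task times, a lower bound on the makespan with
    unlimited processors.
    durations: {task: time}; deps: {task: [prerequisite tasks]}."""
    finish = {}  # memoized earliest finish time per task

    def earliest_finish(t):
        if t not in finish:
            finish[t] = durations[t] + max(
                (earliest_finish(p) for p in deps.get(t, [])),
                default=0.0)
        return finish[t]

    return max(earliest_finish(t) for t in durations)
```

When service times are random, the realized critical path can differ from the one computed from expected durations, which is why the static schedule degrades and the adaptive extension evaluated in the paper helps.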

  2. Numerical study and ex vivo assessment of HIFU treatment time reduction through optimization of focal point trajectory

    NASA Astrophysics Data System (ADS)

    Grisey, A.; Yon, S.; Pechoux, T.; Letort, V.; Lafitte, P.

    2017-03-01

    Treatment time reduction is a key issue to expand the use of high intensity focused ultrasound (HIFU) surgery, especially for benign pathologies. This study aims at quantitatively assessing the potential reduction of the treatment time arising from moving the focal point during long pulses. In this context, the optimization of the focal point trajectory is crucial to achieve a uniform thermal dose repartition and avoid boiling. At first, a numerical optimization algorithm was used to generate efficient trajectories. Thermal conduction was simulated in 3D with a finite difference code, and damage to the tissue was modeled using the thermal dose formula. Given an initial trajectory, the thermal dose field was first computed; then, making use of Pontryagin's maximum principle, the trajectory was iteratively refined. Several initial trajectories were tested. An ex vivo study was then conducted in order to validate the efficiency of the resulting optimized strategies. Single pulses were performed at 3 MHz on fresh veal liver samples with an Echopulse, and the size of each unitary lesion was assessed by cutting each sample along three orthogonal planes and measuring the dimensions of the whitened area on photographs. We propose a promising approach to significantly shorten HIFU treatment time: the numerical optimization algorithm was shown to provide reliable insight into trajectories that can improve treatment strategies. The model must now be improved to take in vivo conditions into account and be extensively validated.
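The "thermal dose formula" referred to above is, in standard usage, the Sapareto-Dewey cumulative-equivalent-minutes measure; a direct transcription for a sampled temperature history is shown below. The 240 CEM43 lesioning threshold in the comment is a commonly cited value, used here only as an illustration:

```python
def cem43(temps_celsius, dt_seconds):
    """Cumulative equivalent minutes at 43 degC (Sapareto-Dewey
    thermal dose) for a sampled temperature history. Thermal ablation
    is often associated with doses on the order of 240 CEM43 in soft
    tissue (an illustrative threshold, not from this paper)."""
    total = 0.0
    for t in temps_celsius:
        r = 0.5 if t >= 43.0 else 0.25   # standard piecewise base
        total += (dt_seconds / 60.0) * r ** (43.0 - t)
    return total
```

The steep exponential dependence on temperature is what makes the focal trajectory matter: a uniform dose field requires the optimizer to avoid dwelling anywhere long enough to overshoot while still covering the whole target.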

  3. Evaluating data worth for ground-water management under uncertainty

    USGS Publications Warehouse

    Wagner, B.J.

    1999-01-01

    A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
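Step 4 of the framework reduces to comparing, per candidate budget, the projected reduction in management cost with the cost of data collection; a schematic of that comparison, with invented numbers, is:

```python
def best_monitoring_budget(alternatives):
    """Pick the monitoring alternative with the greatest net benefit.
    alternatives: list of (budget, management_cost_without_sampling,
    projected_management_cost_with_sampling); all values invented
    for illustration."""
    def net_benefit(alt):
        budget, before, after = alt
        # Value of sample information minus the cost of collecting it.
        return (before - after) - budget
    return max(alternatives, key=net_benefit)
```

In the actual framework the two cost figures come from re-solving the chance-constrained management model at the pre- and post-sampling uncertainty levels, rather than being given directly.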

  4. Application of the advanced engineering environment for optimization energy consumption in designed vehicles

    NASA Astrophysics Data System (ADS)

    Monica, Z.; Sękala, A.; Gwiazda, A.; Banaś, W.

    2016-08-01

    Nowadays a key issue is to reduce the energy consumption of road vehicles, and several strategies for energy optimization can be distinguished. The most popular, though not sophisticated, is so-called eco-driving, which emphasizes particular driver behavior. In a more sophisticated variant, driver behavior is supported by a control system that measures driving parameters and suggests proper actions to the driver. Another strategy applies engineering solutions that aid optimization of the energy consumption process: such systems take into account different parameters measured in real time and then take proper action according to procedures loaded into the vehicle's control computer. The third strategy is based on optimizing the designed vehicle itself, taking its main sub-systems into account. In this approach, the optimal level of energy consumption is obtained through the synergetic result of individually optimizing particular constructional sub-systems of the vehicle. Three main sub-systems can be distinguished: the structural one, the drive one, and the control one. For the structural sub-system, optimization of the energy consumption level involves optimizing the weight and aerodynamic parameters, resulting in an optimized vehicle body. For the drive sub-system, it concerns fuel or power consumption, using previously elaborated physical models. Finally, optimization of the control sub-system consists in determining optimal control parameters.

  5. Multiobjective optimization of low impact development stormwater controls

    NASA Astrophysics Data System (ADS)

    Eckart, Kyle; McPhee, Zach; Bolisetti, Tirupati

    2018-07-01

    Green infrastructure such as Low Impact Development (LID) controls is being employed to manage urban stormwater and restore predevelopment hydrological conditions, besides improving stormwater runoff quality. Since runoff generation and infiltration processes are nonlinear, there is a need to identify the optimal combination of LID controls. A coupled optimization-simulation model was developed by linking the U.S. EPA Stormwater Management Model (SWMM) to the Borg Multiobjective Evolutionary Algorithm (Borg MOEA). The coupled model is capable of performing multiobjective optimization, using SWMM simulations to evaluate potential solutions to the optimization problem. The optimization-simulation tool was used to evaluate LID stormwater controls. A SWMM model was developed, calibrated, and validated for a sewershed in Windsor, Ontario, and LID stormwater controls were tested for three different return periods. LID implementation strategies were optimized using the optimization-simulation model for five different implementation scenarios for each of the three storm events, with the objectives of minimizing peak flow in the storm sewers, reducing total runoff, and minimizing cost. For the sewershed in Windsor, Ontario, the peak runoff and the total runoff volume were found to be reduced by 13% and 29%, respectively.

  6. Optimal Trajectories and Control Strategies for the Helicopter in One-Engine-Inoperative Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Chen, Robert T. N.; Zhao, Yi-Yuan; Aiken, Edwin W. (Technical Monitor)

    1995-01-01

    Engine failure represents a major safety concern for helicopter operations, especially in the critical flight phases of takeoff and landing from/to small, confined areas. As a result, the JAA and FAA both certificate a transport helicopter as either Category-A or Category-B according to its ability to continue operations following engine failure. A Category-B helicopter must be able to land safely in the event of one or all engines failing; there is no requirement, however, for continued flight capability. In contrast, Category-A certification, which applies to multi-engine transport helicopters with independent engine systems, requires that they continue the flight with one engine inoperative (OEI). These stringent requirements, while permitting operations from rooftops and oil rigs and flight to areas where no emergency landing sites are available, restrict the payload of a Category-A transport helicopter to a value safe for continued flight as well as for landing with one engine inoperative. The current certification process involves extensive flight tests, which are potentially dangerous, costly, and time consuming. These tests require the pilot to simulate engine failures at increasingly critical conditions. Flight manuals based on these tests tend to provide very conservative recommendations with regard to maximum takeoff weight or required runway length. There are very few theoretical studies on this subject that identify the fundamental parameters and tradeoff factors involved. Furthermore, a capability for real-time generation of OEI optimal trajectories is very desirable for providing timely cockpit display guidance to assist the pilot, reducing workload and increasing safety in a consistent and reliable manner.
A joint research program involving NASA Ames Research Center, the FAA, and the University of Minnesota is being conducted to determine OEI optimal control strategies and the associated optimal trajectories for continued takeoff (CTO), rejected takeoff (RTO), balked landing (BL), and continued landing (CL) for a twin-engine helicopter in both VTOL and STOL terminal-area operations. This paper presents the problem formulation, the optimal control solution methods, and the key results of the trajectory optimization studies for both STOL and VTOL OEI operations. In addition, new results concerning the recently developed methodology, which enables real-time generation of optimal OEI trajectories, are presented. This new real-time capability was developed to support the second piloted simulator investigation of cockpit displays for Category-A operations, scheduled for the NASA Ames Vertical Motion Simulator in June-August of 1995. The first VMS simulation was conducted in 1994 and reported.

  7. Optimal environmental management strategy and implementation for groundwater contamination prevention and restoration.

    PubMed

    Wang, Mingyu

    2006-04-01

    An innovative management strategy is proposed for optimized and integrated environmental management of regional or national groundwater contamination prevention and restoration, allied with consideration of sustainable development. This management strategy accounts for the availability of limited resources, human health and ecological risks from groundwater contamination, costs of groundwater protection measures, beneficial uses and values from groundwater protection, and sustainable development. Six different categories of costs are identified with regard to groundwater prevention and restoration. In addition, different environmental impacts from groundwater contamination, including human health and ecological risks, are individually taken into account. System optimization principles are implemented to support decision-making on the optimal allocation of available resources or budgets to different existing contaminated sites and projected contamination sites for maximal risk reduction. Established management constraints, such as budget limitations under the different categories of costs, are satisfied at the optimal solution. A stepwise optimization process is proposed in which the first step is to optimally select a limited number of sites where remediation or prevention measures will be taken, from all the existing contaminated and projected contamination sites, based on a total regionally or nationally available budget over a certain time frame such as 10 years. Several subsequent optimization steps then determine year-by-year optimal distributions of the available yearly budgets among the selected sites. A hypothetical case study is presented to demonstrate a practical implementation of the management strategy. Several issues pertaining to groundwater contamination exposure and risk assessments and remediation cost evaluations are briefly discussed to aid understanding of the management strategy's implementation.
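The first, site-selection step of the stepwise process above resembles a budget-constrained knapsack: choose sites so that total risk reduction is maximized within the available budget. A minimal greedy sketch follows; the site data are hypothetical and the paper's formulation is an exact optimization, not this ratio heuristic.

```python
def select_sites(sites, total_budget):
    """Greedy site selection: take sites in decreasing order of risk
    reduction per dollar until the budget is exhausted. Each site is a
    (name, cost, risk_reduction) tuple."""
    chosen, spent = [], 0.0
    for name, cost, risk_reduction in sorted(
            sites, key=lambda s: s[2] / s[1], reverse=True):
        if spent + cost <= total_budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

# Hypothetical sites: (name, cost in $M, risk reduction in arbitrary units)
sites = [("A", 4.0, 10.0), ("B", 3.0, 9.0), ("C", 5.0, 7.0), ("D", 2.0, 2.0)]
```

With a budget of 9, the ratios (A=2.5, B=3.0, C=1.4, D=1.0) give the order B, A, C, D; C no longer fits after B and A, so D is taken instead.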

  8. Validation of optimization strategies using the linear structured production chains

    NASA Astrophysics Data System (ADS)

    Kusiak, Jan; Morkisz, Paweł; Oprocha, Piotr; Pietrucha, Wojciech; Sztangret, Łukasz

    2017-06-01

    Different optimization strategies applied to a sequence of several stages of production chains were validated in this paper. Two benchmark problems described by ordinary differential equations (ODEs) were considered: a water tank and a passive CR-RC filter were used as exemplary objects described by first- and second-order differential equations, respectively. The optimization problems considered in this work serve to validate the strategies elaborated by the authors. The main goal of the research, however, is selection of the best strategy for optimization of two real metallurgical processes, which will be investigated in on-going projects. The first problem is the oxidizing roasting process of zinc sulphide concentrate, in which the sulphur from the input concentrate should be eliminated and a minimal concentration of sulphide sulphur in the roasted products has to be achieved. The second problem is the lead refining process, consisting of three stages: roasting to the oxide, reduction of the oxide to metal, and oxidizing refining. The strategies that prove most effective on the considered benchmark problems will be candidates for optimization of the above-mentioned industrial processes.

  9. Modeling joint restoration strategies for interdependent infrastructure systems

    PubMed Central

    Simonovic, Slobodan P.

    2018-01-01

    Life in the modern world depends on multiple critical services provided by infrastructure systems, which are interdependent at multiple levels. To respond effectively to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider failure types, infrastructure operating rules, and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from the infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems. PMID:29649300
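At the component level, minimizing economic loss during restoration is a scheduling question. Under strong simplifying assumptions (one repair crew, independent components, each component's loss accruing until its repair completes) the classical weighted-completion-time rule is optimal: repair in decreasing order of loss rate per unit repair time. This is only a sketch of that idea, not the paper's interdependency-aware model, and the component data are hypothetical.

```python
def joint_restoration_order(components):
    """Order failed components by loss rate per unit repair time
    (weighted-completion-time rule). Each component is a dict with
    'name', 'loss_per_day', and 'repair_days' keys."""
    return sorted(components,
                  key=lambda c: c["loss_per_day"] / c["repair_days"],
                  reverse=True)

# Hypothetical failed components of an electric-water system.
failed = [
    {"name": "valve",  "loss_per_day": 2.0, "repair_days": 1.0},  # ratio 2
    {"name": "pump",   "loss_per_day": 8.0, "repair_days": 2.0},  # ratio 4
    {"name": "feeder", "loss_per_day": 9.0, "repair_days": 3.0},  # ratio 3
]
```

Interdependencies break this simple rule (repairing a feeder may be a precondition for a pump), which is why the paper formulates restoration as an explicit optimization over the coupled systems.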

  10. Two-phase strategy of neural control for planar reaching movements: II--relation to spatiotemporal characteristics of movement trajectory.

    PubMed

    Rand, Miya K; Shimansky, Yury P

    2013-09-01

    In the companion paper utilizing a quantitative model of optimal motor coordination (Part I, Rand and Shimansky, in Exp Brain Res 225:55-73, 2013), we examined coordination between X and Y movement directions (XYC) during reaching movements performed under three prescribed speeds, two movement amplitudes, and two target sizes. The obtained results indicated that the central nervous system (CNS) utilizes a two-phase strategy, where the initial and final phases correspond to lower and higher precision of information processing, respectively, for controlling goal-directed reach-type movements to optimize the total cost of task performance, including the cost of neural computations. The present study investigates how two different well-known concepts used for describing movement performance relate to the concepts of optimal XYC and the two-phase control strategy. First, it is examined to what extent XYC is equivalent to movement trajectory straightness. The data analysis results show that the variability of the movement trajectory's deviation from a straight line increases with an increase in prescribed movement speed. In contrast, the dependence of XYC strength on movement speed is opposite (in total agreement with an assumption of task performance optimality), suggesting that XYC is a feature of a much higher level of generality than trajectory straightness. Second, it is tested how well the ballistic and corrective components described in the traditional two-component model of movement performance match the initial and final phases of the two-phase control strategy, respectively. In fast reaching movements, the percentage of trials with a secondary corrective submovement was smaller under larger-target, shorter-distance conditions. In slower reaching movements, meaningful parsing was impossible due to massive fluctuations in the kinematic profile throughout the movement.
Thus, the parsing points determined by the conventional submovement analysis did not consistently reflect separation between the ballistic and error-corrective components. In contrast to the traditional concept of two-component movement performance, the concept of two-phase control strategy is applicable to a wide variety of experimental conditions.

  11. Optimizing the Nutritional Support of Adult Patients in the Setting of Cirrhosis.

    PubMed

    Perumpail, Brandon J; Li, Andrew A; Cholankeril, George; Kumari, Radhika; Ahmed, Aijaz

    2017-10-13

    The aim of this work is to develop a pragmatic approach in the assessment and management strategies of patients with cirrhosis in order to optimize the outcomes in this patient population. A systematic review of literature was conducted through 8 July 2017 on the PubMed Database looking for key terms, such as malnutrition, nutrition, assessment, treatment, and cirrhosis. Articles and studies looking at associations between nutrition and cirrhosis were reviewed. An assessment of malnutrition should be conducted in two stages: the first, to identify patients at risk for malnutrition based on the severity of liver disease, and the second, to perform a complete multidisciplinary nutritional evaluation of these patients. Optimal management of malnutrition should focus on meeting recommended daily goals for caloric intake and inclusion of various nutrients in the diet. The nutritional goals should be pursued by encouraging and increasing oral intake or using other measures, such as oral supplementation, enteral nutrition, or parenteral nutrition. Although these strategies to improve nutritional support have been well established, current literature on the topic is limited in scope. Further research should be implemented to test if this enhanced approach is effective.

  12. The LHCb Grid Simulation: Proof of Concept

    NASA Astrophysics Data System (ADS)

    Hushchyn, M.; Ustyuzhanin, A.; Arzymatov, K.; Roiser, S.; Baranov, A.

    2017-10-01

    The Worldwide LHC Computing Grid provides researchers in different geographical locations with access to data and the computational resources to analyze them. The grid has a hierarchical topology with multiple sites distributed over the world, with varying numbers of CPUs, amounts of disk storage, and connection bandwidths. Job scheduling and data distribution strategy are key elements of grid performance. Optimization of algorithms for those tasks requires testing them on the real grid, which is hard to achieve. Having a grid simulator could simplify this task and therefore lead to better scheduling and data placement algorithms. In this paper we demonstrate a grid simulator for the LHCb distributed computing software.

  13. A performance-oriented power transformer design methodology using multi-objective evolutionary optimization

    PubMed Central

    Adly, Amr A.; Abd-El-Hafiz, Salwa K.

    2014-01-01

    Transformers are regarded as crucial components in power systems. Due to market globalization, power transformer manufacturers are facing an increasingly competitive environment that mandates the adoption of design strategies yielding better performance at lower costs. In this paper, a power transformer design methodology using multi-objective evolutionary optimization is proposed. Using this methodology, which is tailored to be target-performance design-oriented, a quick rough estimate of transformer design specifics may be obtained. Testing of the suggested approach revealed significant qualitative and quantitative agreement with measured design and performance values. Details of the proposed methodology as well as sample design results are reported in the paper. PMID:26257939

  14. A performance-oriented power transformer design methodology using multi-objective evolutionary optimization.

    PubMed

    Adly, Amr A; Abd-El-Hafiz, Salwa K

    2015-05-01

    Transformers are regarded as crucial components in power systems. Due to market globalization, power transformer manufacturers are facing an increasingly competitive environment that mandates the adoption of design strategies yielding better performance at lower costs. In this paper, a power transformer design methodology using multi-objective evolutionary optimization is proposed. Using this methodology, which is tailored to be target-performance design-oriented, a quick rough estimate of transformer design specifics may be obtained. Testing of the suggested approach revealed significant qualitative and quantitative agreement with measured design and performance values. Details of the proposed methodology as well as sample design results are reported in the paper.

  15. Cost-Effectiveness of C-Reactive Protein, Procalcitonin, and the Rochester Criteria: Three Diagnostic Strategies for the Identification of Serious Bacterial Infection in Febrile Infants without a Source.

    PubMed

    Antonio Buendía, Jefferson; Colantonio, Lisandro

    2013-12-01

    The optimal management of highly febrile 1- to 3-month-old children without a focal source has been controversial. The release of a conjugate pneumococcal vaccine may reduce the rate of occult bacteremia and alter the utility of empiric testing. The objective of this study was to determine the cost-effectiveness of 3 different screening strategies for serious bacterial infections (SBI) in children presenting with fever without source in Argentina. Cost-effectiveness (CE) analysis was performed to compare the procalcitonin, C-reactive protein, and Rochester criteria strategies. A hypothetical cohort of 10 000 children who were 1 to 3 months of age and had a fever of >39°C and no source of infection was modeled for each strategy. Our main outcome measure was incremental CE ratios. C-reactive protein resulted in US$937 per correctly diagnosed case of SBI. The additional cost per additional correct diagnosis using procalcitonin versus C-reactive protein was US$6127, while the Rochester criteria were dominated. C-reactive protein is the most cost-effective strategy to detect SBI in children with fever without source in Argentina. However, due to the low proportion of correctly diagnosed cases (<80%) for the three tests in the literature and in our study, an individualized approach for children with fever is still necessary to optimize diagnostic investigations and treatment in different emergency care settings. © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by International Society for Pharmacoeconomics and Outcomes Research (ISPOR). All rights reserved.

  16. A dynamic multiarmed bandit-gene expression programming hyper-heuristic for combinatorial optimization problems.

    PubMed

    Sabar, Nasser R; Ayob, Masri; Kendall, Graham; Qu, Rong

    2015-02-01

    Hyper-heuristics are search methodologies that aim to provide high-quality solutions across a wide variety of problem domains, rather than developing tailor-made methodologies for each problem instance/domain. A traditional hyper-heuristic framework has two levels, namely, the high-level strategy (heuristic selection mechanism and acceptance criterion) and the low-level heuristics (a set of problem-specific heuristics). Due to the different landscape structures of different problem instances, the high-level strategy plays an important role in the design of a hyper-heuristic framework. In this paper, we propose a new high-level strategy for a hyper-heuristic framework. The proposed high-level strategy utilizes a dynamic multiarmed bandit-extreme value-based reward as an online heuristic selection mechanism to select the appropriate heuristic to be applied at each iteration. In addition, we propose a gene expression programming framework to automatically generate the acceptance criterion for each problem instance, instead of using human-designed criteria. Two well-known, and very different, combinatorial optimization problems, one static (exam timetabling) and one dynamic (dynamic vehicle routing), are used to demonstrate the generality of the proposed framework. Compared with state-of-the-art hyper-heuristics and other bespoke methods, empirical results demonstrate that the proposed framework is able to generalize well across both domains. We obtain competitive, if not better, results compared to the best known results obtained by other methods presented in the scientific literature. We also compare our approach against the recently released hyper-heuristic competition test suite, and again demonstrate the generality of our approach against other methods that have utilized the same six benchmark datasets from this test suite.
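The heuristic-selection layer described above can be illustrated with a standard multiarmed bandit rule. The sketch below uses plain UCB1 with mean rewards as a stand-in for the paper's dynamic multiarmed bandit with extreme-value-based rewards; it is an assumption-laden simplification, not the authors' mechanism.

```python
import math

def ucb_select(counts, rewards, t):
    """UCB1 rule for picking the next low-level heuristic to apply.
    counts[i]  - how many times heuristic i has been applied,
    rewards[i] - its cumulative reward,
    t          - total number of applications so far.
    Returns the index of the heuristic to apply next."""
    for i, n in enumerate(counts):
        if n == 0:
            return i  # apply each heuristic at least once first
    return max(range(len(counts)),
               key=lambda i: rewards[i] / counts[i]
                             + math.sqrt(2.0 * math.log(t) / counts[i]))
```

The exploration term `sqrt(2 ln t / n_i)` shrinks as a heuristic is tried more often, so the selector balances exploiting heuristics with high observed reward against exploring rarely tried ones.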

  17. A Comparison of Two Sampling Strategies to Assess Discomycete Diversity in Wet Tropical Forests

    Treesearch

    SHARON A. CANTRELL

    2004-01-01

    Most fungal diversity studies that have used a systematic collecting scheme have not included the discomycetes, so optimal sampling methods are not available for this group. In this study, I tested two sampling methods at sites in the Caribbean National Forest, Puerto Rico, and the Ebano Verde Reserve, Dominican Republic. For a plot-based sampling method, 10 ×...

  18. Syllabic Strategy as Opposed to Coda Optimization in the Segmentation of Spanish Letter-Strings Using Word Spotting

    ERIC Educational Resources Information Center

    Álvarez, Carlos J.; Taft, Marcus; Hernández-Cabrera, Juan A.

    2017-01-01

    A word-spotting task is used in Spanish to test the way in which polysyllabic letter-strings are parsed in this language. Monosyllabic words (e.g., "bar") embedded at the beginning of a pseudoword were immediately followed by either a coda-forming consonant (e.g., "barto") or a vowel (e.g., "baros"). In the former…

  19. Costs, effectiveness, and workload impact of management strategies for women with an adnexal mass.

    PubMed

    Havrilesky, Laura J; Dinan, Michaela; Sfakianos, Gregory P; Curtis, Lesley H; Barnett, Jason C; Van Gorp, Toon; Myers, Evan R

    2015-01-01

    We compared the estimated clinical outcomes, costs, and physician workload resulting from available strategies for deciding which women with an adnexal mass should be referred to a gynecologic oncologist. We used a microsimulation model to compare five referral strategies: 1) American Congress of Obstetricians and Gynecologists (ACOG) guidelines, 2) Multivariate Index Assay (MIA) algorithm, 3) Risk of Malignancy Algorithm (ROMA), 4) CA125 alone with lowered cutoff values to prioritize test sensitivity over specificity, and 5) referral of all women (Refer All). Test characteristics and relative survival were obtained from the literature and data from a biomarker validation study. Medical costs were estimated using Medicare reimbursements. Travel costs were estimated using discharge data from Surveillance, Epidemiology and End Results-Medicare and State Inpatient Databases. Analyses were performed separately for pre- and postmenopausal women (60 000 "subjects" in each), repeated 10 000 times. Refer All was cost-effective compared with less expensive strategies in both postmenopausal women (incremental cost-effectiveness ratio [ICER] $9423 per life-year saved [LYS] compared with CA125) and premenopausal women (ICER $10 644/LYS compared with CA125), but would result in an additional 73 cases/year/subspecialist. MIA was more expensive and less effective than Refer All in pre- and postmenopausal women. If Refer All is not a viable option, CA125 is an optimal strategy in postmenopausal women. Referral of all women to a subspecialist is an efficient strategy for managing women with adnexal masses requiring surgery, assuming sufficient capacity for the additional surgical volume. If a test-based triage strategy is needed, CA125 with lowered cutoff values is a cost-effective strategy. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
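The comparisons above rest on the incremental cost-effectiveness ratio, which is simple arithmetic: extra cost divided by extra effectiveness when moving from a reference strategy to a more effective one. A minimal sketch with hypothetical numbers (not figures from the study):

```python
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost-effectiveness ratio: additional dollars per
    additional unit of effectiveness (e.g., life-year saved) of the new
    strategy relative to the reference strategy."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Hypothetical example: a strategy costing $500 more per patient that
# yields 0.05 more life-years gives an ICER of $10,000 per life-year saved.
example = icer(1500.0, 0.30, 1000.0, 0.25)
```

A strategy is "dominated", as the MIA algorithm is here, when another strategy is both cheaper and more effective, so no ICER in its favor exists.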

  20. Meta-Analysis and Cost Comparison of Empirical versus Pre-Emptive Antifungal Strategies in Hematologic Malignancy Patients with High-Risk Febrile Neutropenia.

    PubMed

    Fung, Monica; Kim, Jane; Marty, Francisco M; Schwarzinger, Michaël; Koo, Sophia

    2015-01-01

    Invasive fungal disease (IFD) causes significant morbidity and mortality in hematologic malignancy patients with high-risk febrile neutropenia (FN). These patients therefore often receive empirical antifungal therapy. Diagnostic-test-guided pre-emptive antifungal therapy has been evaluated as an alternative treatment strategy in these patients. We conducted an electronic search for literature comparing empirical versus pre-emptive antifungal strategies in FN among adult hematologic malignancy patients. We systematically reviewed 9 studies, including randomized controlled trials, cohort studies, and feasibility studies. Random- and fixed-effect models were used to generate pooled relative risk estimates of IFD detection, IFD-related mortality, overall mortality, and rates and duration of antifungal therapy. Heterogeneity was measured via Cochran's Q test, the I2 statistic, and between-study τ2. Incorporating these parameters and the direct costs of drugs and diagnostic testing, we constructed a comparative costing model for the two strategies. We conducted probabilistic sensitivity analysis on pooled estimates and one-way sensitivity analyses on other key parameters with uncertain estimates. Nine published studies met inclusion criteria. Compared to empirical antifungal therapy, pre-emptive strategies were associated with significantly lower antifungal exposure (RR 0.48, 95% CI 0.27-0.85) and duration, without an increase in IFD-related mortality (RR 0.82, 95% CI 0.36-1.87) or overall mortality (RR 0.95, 95% CI 0.46-1.99). The pre-emptive strategy cost $324 less per FN episode (95% credible interval: -$291.88 to $418.65, pre-emptive compared to empirical) than the empirical approach. However, the cost difference was influenced by relatively small changes in the costs of antifungal therapy and diagnostic testing. Compared to empirical antifungal therapy, pre-emptive antifungal therapy in patients with high-risk FN may decrease antifungal use without increasing mortality.
We demonstrate a state of economic equipoise between empirical and diagnostic-directed pre-emptive antifungal treatment strategies, influenced by small changes in cost of antifungal therapy and diagnostic testing, in the current literature. This work emphasizes the need for optimization of existing fungal diagnostic strategies, development of more efficient diagnostic strategies, and less toxic and more cost-effective antifungals.
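The pooled relative risks reported above come from standard inverse-variance meta-analysis. A minimal fixed-effect sketch, assuming each study reports a relative risk with a 95% confidence interval (the inputs below are hypothetical; the paper also fits random-effects models, which this sketch omits):

```python
import math

def pooled_rr(studies):
    """Fixed-effect (inverse-variance) pooled relative risk.
    Each study is a (rr, ci_lo, ci_hi) tuple with a 95% CI; the standard
    error of log RR is recovered as (ln(hi) - ln(lo)) / (2 * 1.96).
    Pooling is done on the log scale with weights 1/SE^2."""
    num = den = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2.0 * 1.96)
        w = 1.0 / se ** 2
        num += w * math.log(rr)
        den += w
    return math.exp(num / den)
```

Heterogeneity statistics such as Cochran's Q and I2 would then be computed from the same weights and log-RR residuals before trusting the fixed-effect pooled value.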

  1. Acoustic and elastic waveform inversion best practices

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning, and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We instead identify best practices using two global, one regional, and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Besides these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lame parameters performing well with amplitude-based objective functions.
Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because the rotation angle parameters describing the fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing the experimental results are given.

  2. An EGO-like optimization framework for sensor placement optimization in modal analysis

    NASA Astrophysics Data System (ADS)

    Morlier, Joseph; Basile, Aniello; Chiplunkar, Ankit; Charlotte, Miguel

    2018-07-01

    In aircraft design, ground/flight vibration tests are conducted to extract the aircraft's modal parameters (natural frequencies, damping ratios, and mode shapes), also known as the modal basis. The main problem in aircraft modal identification is the large number of sensors needed, which increases operational time and costs. The goal of this paper is to minimize the number of sensors by optimizing their locations in order to reconstruct a truncated modal basis of N mode shapes with a high level of accuracy. There are several methods to solve sensor placement optimization (SPO) problems, but for this case an original approach has been established, based on an iterative process for mode-shape reconstruction through an adaptive Kriging metamodeling approach, called efficient global optimization (EGO)-SPO. The main idea in this publication is to solve an optimization problem in which the sensor locations are the variables and the objective function is defined by maximizing the trace of the so-called AutoMAC criterion. The results on a 2D wing demonstrate a 30% reduction in sensors using our EGO-SPO strategy.
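The AutoMAC objective mentioned above is built from the Modal Assurance Criterion (MAC), a normalized squared inner product between mode-shape vectors. A minimal real-valued sketch follows; summing diagonal MAC values between a reference and a reconstructed basis is one common accuracy measure, and the exact use inside EGO-SPO follows the paper rather than this sketch.

```python
def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two real mode-shape vectors:
    1.0 for collinear shapes, 0.0 for orthogonal ones."""
    num = sum(a * b for a, b in zip(phi_a, phi_b)) ** 2
    den = sum(a * a for a in phi_a) * sum(b * b for b in phi_b)
    return num / den

def mac_trace(modes_ref, modes_rec):
    """Sum of diagonal MAC values between a reference and a reconstructed
    modal basis; equals the number of modes when reconstruction is exact."""
    return sum(mac(a, b) for a, b in zip(modes_ref, modes_rec))
```

Restricting the mode-shape vectors to the rows picked by a candidate sensor set makes this quantity a function of sensor locations, which is what the SPO search varies.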

  3. Application of Multi-Objective Human Learning Optimization Method to Solve AC/DC Multi-Objective Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Cao, Jia; Yan, Zheng; He, Guangyu

    2016-06-01

    This paper introduces an efficient algorithm, the multi-objective human learning optimization method (MOHLO), to solve the AC/DC multi-objective optimal power flow problem (MOPF). First, the model of AC/DC MOPF including wind farms is constructed, which comprises three objective functions: operating cost, power loss, and pollutant emission. Combining the non-dominated sorting technique and the crowding distance index, the MOHLO method can be derived, which involves an individual learning operator, a social learning operator, a random exploration learning operator, and adaptive strategies. Both the proposed MOHLO method and the non-dominated sorting genetic algorithm II (NSGAII) are tested on an improved IEEE 30-bus AC/DC hybrid system. Simulation results show that the MOHLO method has excellent search efficiency and a powerful ability to find optimal solutions. Above all, the MOHLO method can obtain a more complete Pareto front than the NSGAII method. However, how to choose the optimal solution from the Pareto front depends mainly on whether the decision makers take an economic point of view or an energy-saving and emission-reduction point of view.
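The crowding-distance index mentioned above (shared by NSGAII and, per the paper, MOHLO) measures how isolated each solution is along the Pareto front, so that well-spread solutions are favored. A minimal sketch for a list of objective vectors:

```python
def crowding_distance(front):
    """Crowding distance for a set of objective vectors on one front.
    Boundary solutions in each objective get infinite distance; interior
    solutions accumulate the normalized span of their two neighbors."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k]
        if span == 0:
            continue  # all solutions identical in this objective
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k]
                               - front[order[j - 1]][k]) / span
    return dist
```

Selection then prefers solutions with larger crowding distance among those of equal non-domination rank, keeping the front from collapsing onto a few points.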

  4. Simplex optimization of headspace factors for headspace gas chromatography determination of residual solvents in pharmaceutical products.

    PubMed

    Grodowska, Katarzyna; Parczewski, Andrzej

    2013-01-01

    The purpose of the present work was to find optimum conditions for the headspace gas chromatography (HS-GC) determination of residual solvents which usually appear in pharmaceutical products. Two groups of solvents were taken into account in the present examination: group I consisted of isopropanol, n-propanol, isobutanol, n-butanol, and 1,4-dioxane, and group II included cyclohexane, n-hexane, and n-heptane. The members of the groups were selected in previous investigations in which experimental design and chemometric methods were applied. Four factors describing the HS conditions were taken into consideration in the optimization: sample volume, equilibration time, equilibrium temperature, and NaCl concentration in the sample. The relative GC peak area served as the optimization criterion, considered separately for each analyte. A sequential variable-size simplex optimization strategy was used, and the progress of the optimization was traced and visualized in various ways simultaneously. The optimum HS conditions proved to be different for the two groups of solvents tested, which shows that the influence of the experimental conditions (factors) depends on analyte properties. The optimization resulted in a significant signal increase (from seven to fifteen times).
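The sequential variable-size simplex strategy named above belongs to the Nelder-Mead family: a simplex of candidate factor settings is repeatedly reflected, expanded, contracted, or shrunk toward better responses. The sketch below is a generic minimizer, not the authors' exact implementation; in their setting one would minimize the negated relative peak area over the four HS factors.

```python
def nelder_mead(f, x0, step=0.5, iters=300):
    """Minimal variable-size simplex (Nelder-Mead) minimizer.
    f takes a list of coordinates; x0 is the starting point."""
    n = len(x0)
    # initial simplex: x0 plus one step along each coordinate axis
    pts = [list(x0)] + [[x0[j] + (step if j == i else 0.0) for j in range(n)]
                        for i in range(n)]
    for _ in range(iters):
        pts.sort(key=f)
        best, worst = pts[0], pts[-1]
        centroid = [sum(p[j] for p in pts[:-1]) / n for j in range(n)]
        refl = [2.0 * centroid[j] - worst[j] for j in range(n)]
        if f(refl) < f(best):
            # reflection is the new best: try expanding further
            exp = [3.0 * centroid[j] - 2.0 * worst[j] for j in range(n)]
            pts[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(pts[-2]):
            pts[-1] = refl
        else:
            # contract toward the centroid; shrink everything if that fails
            contr = [0.5 * (centroid[j] + worst[j]) for j in range(n)]
            if f(contr) < f(worst):
                pts[-1] = contr
            else:
                pts = [best] + [[0.5 * (best[j] + p[j]) for j in range(n)]
                                for p in pts[1:]]
    return min(pts, key=f)
```

Because the simplex grows and shrinks as it moves, the method needs no gradients, which suits experimental responses such as chromatographic peak areas.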

  5. Gravitational wave probes of parity violation in compact binary coalescences

    NASA Astrophysics Data System (ADS)

    Alexander, Stephon H.; Yunes, Nicolás

    2018-03-01

    Is gravity parity violating? Given the recent observations of gravitational waves from coalescing compact binaries, we develop a strategy to find an answer with current and future detectors. We identify the key signatures of parity violation in gravitational waves: amplitude birefringence in their propagation and a modified chirping rate in their generation. We then determine the optimal binaries to test the existence of parity violation in gravity, and prioritize the research in modeling that will be required to carry out such tests before detectors reach their design sensitivity.

  6. Optimal Trajectories Generation in Robotic Fiber Placement Systems

    NASA Astrophysics Data System (ADS)

    Gao, Jiuchun; Pashkevich, Anatol; Caro, Stéphane

    2017-06-01

    The paper proposes a methodology for optimal trajectory generation in robotic fiber placement systems, together with a strategy for tuning the parameters of the underlying optimization algorithm. The presented technique transforms the original continuous problem into a discrete one in which time-optimal motions are generated by dynamic programming. The developed tuning strategy substantially reduces computing time and yields trajectories that satisfy industrial constraints. The feasibility and advantages of the proposed methodology are confirmed by an application example.
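    The discretize-then-optimize idea in the abstract amounts to a shortest path through a layered graph, which dynamic programming solves exactly. A hedged sketch (the configuration sets and the timing function are hypothetical placeholders, not the paper's robot model):

```python
def min_time_path(layers, move_time):
    """layers[k] lists the candidate robot configurations at task point k;
    move_time(p, c) is the travel time between consecutive configurations.
    Returns the minimal total time over all configuration sequences."""
    cost = {c: 0.0 for c in layers[0]}          # best time to reach each start
    for prev, cur in zip(layers, layers[1:]):
        cost = {c: min(cost[p] + move_time(p, c) for p in prev) for c in cur}
    return min(cost.values())

# Toy example: one redundant configuration choice at the middle task point.
best = min_time_path([[0], [1, 5], [2]], lambda a, b: abs(a - b))
```

    In the toy example the cheaper middle configuration (1) is selected, giving a total time of 2 rather than 8.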

  7. Effects of Conjugate Gradient Methods and Step-Length Formulas on the Multiscale Full Waveform Inversion in Time Domain: Numerical Experiments

    NASA Astrophysics Data System (ADS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing

    2017-05-01

    We carry out full waveform inversion (FWI) in the time domain based on an alternative frequency-band selection strategy that allows us to implement the method successfully. This strategy decomposes the seismic data within partially overlapping frequency intervals by a concatenated treatment of the wavelet, largely avoiding redundant frequency information while adapting to the wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. Building on it, we comparatively analyze the effects of the update parameters of the nonlinear conjugate gradient (CG) method and of the step-length formulas on multiscale FWI through several numerical tests. Investigations of up to eight versions of the nonlinear CG method, with and without Gaussian white noise, make clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical methods of optimization, vol. 1: unconstrained optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Française Informat Recherche Opérationnelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are the most efficient of the eight, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version consistently yields inaccurate results because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, namely the direct method (Direct), the parabolic search method (Search), and the two-point quadratic interpolation method (Interp), shows that Interp is more efficient for noise-free data, while Direct is more efficient for data with Gaussian white noise. In contrast, Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust, or at least partly insensitive, to Gaussian white noise and to the complexity of the model. When the initial velocity model deviates far from the real model or the data are contaminated by noise, the objective function values of Direct and Interp oscillate at the beginning of the inversion, whereas that of Search decreases consistently.
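    For reference, the CG update parameters compared in the study have simple closed forms. A plain-Python sketch of four of them (g is the current gradient, g_prev the previous gradient, d_prev the previous search direction; no claim is made about the authors' implementation):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def beta_hs(g, g_prev, d_prev):     # Hestenes-Stiefel
    y = [gi - pi for gi, pi in zip(g, g_prev)]
    return dot(g, y) / dot(d_prev, y)

def beta_prp(g, g_prev, d_prev):    # Polak-Ribiere-Polyak
    y = [gi - pi for gi, pi in zip(g, g_prev)]
    return dot(g, y) / dot(g_prev, g_prev)

def beta_dy(g, g_prev, d_prev):     # Dai-Yuan
    y = [gi - pi for gi, pi in zip(g, g_prev)]
    return dot(g, g) / dot(d_prev, y)

def beta_cd(g, g_prev, d_prev):     # Conjugate Descent (Fletcher)
    return -dot(g, g) / dot(d_prev, g_prev)
```

    The new search direction is then d = -g + beta * d_prev, with beta computed from one of the formulas above.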

  8. Energy optimization for upstream data transfer in 802.15.4 beacon-enabled star formulation

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Krishnamachari, Bhaskar

    2008-08-01

    Energy saving is one of the major concerns in low-rate personal area networks. This paper models the energy consumption of beacon-enabled, time-slotted medium access control combined with sleep scheduling in a star network formulation under the IEEE 802.15.4 standard. We investigate two different upstream (device-to-coordinator data transfer) strategies: a) a tracking strategy, in which devices wake up and check status (track the beacon) in each time slot; and b) a non-tracking strategy, in which nodes wake up only upon data arrival and stay awake until the data are transmitted to the coordinator. We consider the tradeoff between energy cost and average data transmission delay for both strategies. Both scenarios are formulated as optimization problems and the optimal solutions are discussed. Our results show that different data arrival rates and system parameters (such as the contention access period interval, upstream speed, etc.) favor different strategies in terms of energy optimization under maximum delay constraints. Hence, depending on the application and system settings, each node might choose a different strategy to achieve energy optimization from both a self-interested and a system-wide viewpoint. We give the relations among the tunable parameters through formulas and plots that illustrate which strategy is better under the corresponding parameters. Two main points are emphasized in our results with delay constraints: on one hand, when the system settings are fixed by the coordinator, nodes in the network can intelligently change their strategies according to the data arrival rate of the corresponding application; on the other hand, when the nodes' applications are known by the coordinator, the coordinator can tune the system parameters to achieve optimal system energy consumption.
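    The tradeoff between the two strategies can be caricatured with a toy cost model (all parameters and the linear cost structure below are invented for illustration and are far simpler than the paper's formulation):

```python
def tracking_energy(n_slots, e_listen):
    """Device briefly wakes in every slot to track the beacon."""
    return n_slots * e_listen

def non_tracking_energy(n_arrivals, mean_wait_slots, e_awake):
    """Device sleeps until data arrive, then stays awake until served."""
    return n_arrivals * mean_wait_slots * e_awake

def better_strategy(n_slots, n_arrivals, mean_wait_slots, e_listen, e_awake):
    e_track = tracking_energy(n_slots, e_listen)
    e_non = non_tracking_energy(n_arrivals, mean_wait_slots, e_awake)
    return "tracking" if e_track < e_non else "non-tracking"
```

    Even this caricature reproduces the abstract's point: frequent data arrivals favor the tracking strategy, while rare arrivals favor the non-tracking strategy.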

  9. Integrated testing strategy (ITS) for bioaccumulation assessment under REACH.

    PubMed

    Lombardo, Anna; Roncaglioni, Alessandra; Benfenati, Emilio; Nendza, Monika; Segner, Helmut; Fernández, Alberto; Kühne, Ralph; Franco, Antonio; Pauné, Eduard; Schüürmann, Gerrit

    2014-08-01

    REACH (registration, evaluation, authorisation and restriction of chemicals) regulation requires that all chemicals produced or imported in Europe above 1 tonne/year be registered. To register a chemical, physicochemical, toxicological and ecotoxicological information needs to be reported in a dossier. REACH promotes the use of alternative methods to replace, refine and reduce animal (eco)toxicity testing. Within the EU OSIRIS project, integrated testing strategies (ITSs) have been developed for the rational use of non-animal testing approaches in chemical hazard assessment. Here we present an ITS for evaluating the bioaccumulation potential of organic chemicals. The scheme includes the use of all available data (including non-optimal data), waiving schemes, analysis of physicochemical properties related to the end point, and alternative methods (both in silico and in vitro). In vivo methods are used only as a last resort. Using the ITS, in vivo testing could be waived for about 67% of the examined compounds, whose bioaccumulation potential could instead be estimated on the basis of non-animal methods. The presented ITS is freely available through a web tool. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. One or two serological assay testing strategy for diagnosis of HBV and HCV infection? The use of predictive modelling.

    PubMed

    Parry, John V; Easterbrook, Philippa; Sands, Anita R

    2017-11-01

    Initial serological testing for chronic hepatitis B virus (HBV) and hepatitis C virus (HCV) infection is conducted using either rapid diagnostic tests (RDTs) or laboratory-based enzyme immunoassays (EIAs) for detection of hepatitis B surface antigen (HBsAg) or antibodies to HCV (anti-HCV), typically on serum or plasma specimens and, for certain RDTs, capillary whole blood. WHO recommends the use of standardized testing strategies - defined as a sequence of one or more assays - to maximize testing accuracy while simplifying the testing process and ideally minimizing cost. Our objective was to examine the diagnostic outcomes of a one-assay versus a two-assay serological testing strategy. These data were used to inform recommendations in the 2017 WHO Guidelines on hepatitis B and C testing. Few published studies have compared diagnostic outcomes for one-assay versus two-assay serological testing strategies for HBsAg and anti-HCV. Therefore, the principles of Bayesian statistics were used to conduct a modelling exercise examining the outcomes of a one-assay versus a two-assay testing strategy applied to a hypothetical population of 10,000 individuals. The resulting model examined the diagnostic outcomes (true and false positive diagnoses; true and false negative diagnoses; positive and negative predictive values as a function of prevalence; and total tests required) for both strategies. The performance characteristics assumed for the assays were informed by WHO prequalification assessment findings and systematic reviews of diagnostic accuracy studies. Each testing strategy (one-assay or two-assay) was modelled at varying prevalences of HBsAg (10%, 2% and 0.4%) and of anti-HCV (40%, 10%, 2% and 0.4%), representing the range of testing populations typically encountered in WHO Member States.
    When the two-assay testing strategy was considered, the model assumed the independence of the two assays. Modelling demonstrated that applying a single assay (HBsAg or anti-HCV), even one with high specificity (99%), may result in considerable numbers of false positive diagnoses and low positive predictive values (PPV), particularly in lower prevalence settings. Even at very low prevalences, shifting to a two-assay testing strategy would result in a PPV approaching 1.0. When test sensitivity is high (>99%), false negative reactions are rare at all but the highest prevalences, although a two-assay strategy might yield more false negative diagnoses. The order in which the tests are used has no impact on the overall accuracy of a two-assay strategy, though it may affect the total number of tests needed to complete the diagnostic strategy, incurring added cost and complexity. HBsAg assays may have low sensitivity (<90%) and result in large numbers of false negative diagnoses, particularly in high prevalence settings, an effect that would be exacerbated in the two-assay testing strategy. In contrast, most anti-HCV assays have high sensitivity and lead to fewer false negative results in both the one-assay and two-assay testing strategies. At prevalences ≤2%, the number of tests needed using a second assay was nearly always small, at <300 per 10,000 individuals tested, making sustainability of a second assay uncertain in such settings. A key public health objective of an effective testing strategy is to identify all individuals who would benefit from treatment. Therefore, a strategy that prioritizes a high NPV (minimal false negatives) may be acceptable even if the PPV is suboptimal (some false positives), as the implementation of such a public health programme must also take account of other factors such as costs, feasibility, impact on testing uptake and linkage to care, and the consequences of a false-positive test.
    This rationale informed the development of the WHO Viral Hepatitis Testing Guidelines, with a conditional recommendation for a one-assay serological testing strategy in most testing settings and populations (≥0.4% prevalence in the population tested). A one-assay strategy results in few failures to diagnose infection and, although it is associated under most assumptions with a suboptimal PPV, its benefits include greater simplicity, easier implementation, lower costs, and better feasibility, uptake and linkage to care. Furthermore, prior to antiviral therapy, all those diagnosed as either HBsAg or anti-HCV positive will require confirmation of viraemia, preventing unnecessary treatment of those who may be false positive on serology. For HBsAg in low-prevalence settings (≤0.4%), a second recommendation was made to consider a two-assay testing strategy, using either a confirmatory neutralization step or a second, different HBsAg assay.
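    The Bayesian bookkeeping behind such a model is straightforward to sketch (the prevalence and assay performance figures below are illustrative choices in the ranges mentioned above, not WHO-endorsed numbers):

```python
def diagnostic_outcomes(prev, sens, spec, n=10_000):
    """Expected outcomes of a single serological assay in a population of n."""
    infected = prev * n
    healthy = n - infected
    tp = sens * infected                 # true positives
    fn = infected - tp                   # false negatives
    fp = (1.0 - spec) * healthy          # false positives
    tn = healthy - fp                    # true negatives
    return tp, fp, fn, tn, tp / (tp + fp), tn / (tn + fn)   # ..., PPV, NPV

def two_assay_ppv(prev, sens1, spec1, sens2, spec2):
    """PPV when a second, independent assay must also be reactive."""
    p_pos_inf = sens1 * sens2
    p_pos_uninf = (1.0 - spec1) * (1.0 - spec2)
    return (p_pos_inf * prev) / (p_pos_inf * prev + p_pos_uninf * (1.0 - prev))
```

    At 0.4% prevalence with 99% sensitivity and specificity, the single-assay PPV falls below 0.3, while requiring a second independent reactive assay lifts it above 0.97, mirroring the modelled conclusions above.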

  11. Inconsistent Investment and Consumption Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kronborg, Morten Tolver, E-mail: mtk@atp.dk; Steffensen, Mogens, E-mail: mogens@math.ku.dk

    In a traditional Black–Scholes market we develop a verification theorem for a general class of investment and consumption problems where the standard dynamic programming principle does not hold. The theorem is an extension of the standard Hamilton–Jacobi–Bellman equation in the form of a system of non-linear differential equations. We derive the optimal investment and consumption strategy for a mean-variance investor without pre-commitment endowed with labor income. In the case of constant risk aversion it turns out that the optimal amount of money to invest in stocks is independent of wealth. The optimal consumption strategy is given as a deterministic bang-bang strategy. In order to have a more realistic model we allow the risk aversion to be time and state dependent. Of special interest is the case where the risk aversion is inversely proportional to present wealth plus the financial value of future labor income net of consumption. Using the verification theorem we give a detailed analysis of this problem. It turns out that the optimal amount of money to invest in stocks is given by a linear function of wealth plus the financial value of future labor income net of consumption. The optimal consumption strategy is again given as a deterministic bang-bang strategy. We also calculate, for a general time and state dependent risk aversion function, the optimal investment and consumption strategy for a mean-standard deviation investor without pre-commitment. In that case, it turns out that it is optimal to take no risk at all.

  12. A radial sampling strategy for uniform k-space coverage with retrospective respiratory gating in 3D ultrashort-echo-time lung imaging.

    PubMed

    Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon

    2016-05-01

    The purpose of this work was to develop a 3D radial-sampling strategy that maintains uniform k-space sample density after retrospective respiratory gating, and to demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived from an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with conventional retrospective respiratory gating while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across k-space. Copyright © 2016 John Wiley & Sons, Ltd.
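    The segmentation of a single-shot view ordering into interleaves can be sketched in a few lines (a deliberately simplified stand-in for the paper's scheme; the stride-based split below is an assumption, not the authors' exact ordering):

```python
def interleaves(n_views, seg_factor):
    """Split a single-shot ordering of n_views projection views into
    seg_factor interleaves; each interleaf strides through the full
    trajectory so that it covers k-space incoherently on its own."""
    return [list(range(k, n_views, seg_factor)) for k in range(seg_factor)]

ilvs = interleaves(12, 3)
```

    If respiratory gating then accepts or rejects views roughly evenly across interleaves, the retained views remain near-uniform in k-space, which is what the optimal segmentation factor is chosen to guarantee.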

  13. Order-Constrained Solutions in K-Means Clustering: Even Better than Being Globally Optimal

    ERIC Educational Resources Information Center

    Steinley, Douglas; Hubert, Lawrence

    2008-01-01

    This paper proposes an order-constrained K-means cluster analysis strategy, and implements that strategy through an auxiliary quadratic assignment optimization heuristic that identifies an initial object order. A subsequent dynamic programming recursion is applied to optimally subdivide the object set subject to the order constraint. We show that…

  14. Artificial Intelligence-Based Models for the Optimal and Sustainable Use of Groundwater in Coastal Aquifers

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Datta, Bithin

    2011-07-01

    Overexploitation of coastal aquifers results in saltwater intrusion. Once saltwater intrusion occurs, remediating the contaminated aquifers involves huge costs and long-term measures. Hence, it is important to have strategies for the sustainable use of coastal aquifers. This study develops a methodology for the optimal management of aquifers prone to saltwater intrusion. A linked simulation-optimization management strategy is developed. The methodology uses genetic-programming-based models to simulate the aquifer processes, linked to a multi-objective genetic algorithm that derives optimal management strategies in terms of groundwater extraction from potential well locations in the aquifer.

  15. Multiswarm comprehensive learning particle swarm optimization for solving multiobjective optimization problems.

    PubMed

    Yu, Xiang; Zhang, Xueqing

    2017-01-01

    Comprehensive learning particle swarm optimization (CLPSO) is a powerful state-of-the-art single-objective metaheuristic. Extending from CLPSO, this paper proposes multiswarm CLPSO (MSCLPSO) for multiobjective optimization. MSCLPSO involves multiple swarms, with each swarm associated with a separate original objective. Each particle's personal best position is determined just according to the corresponding single objective. Elitists are stored externally. MSCLPSO differs from existing multiobjective particle swarm optimizers in three aspects. First, each swarm focuses on optimizing the associated objective using CLPSO, without learning from the elitists or any other swarm. Second, mutation is applied to the elitists and the mutation strategy appropriately exploits the personal best positions and elitists. Third, a modified differential evolution (DE) strategy is applied to some extreme and least crowded elitists. The DE strategy updates an elitist based on the differences of the elitists. The personal best positions carry useful information about the Pareto set, and the mutation and DE strategies help MSCLPSO discover the true Pareto front. Experiments conducted on various benchmark problems demonstrate that MSCLPSO can find nondominated solutions distributed reasonably over the true Pareto front in a single run.
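    The differential-evolution step applied to selected elitists can be sketched as a DE/rand/1-style perturbation (the scale factor, bounds and clamping rule below are illustrative assumptions, not MSCLPSO's exact operator):

```python
import random

def de_mutate(elitists, idx, f_scale=0.5, lo=0.0, hi=1.0, rng=random):
    """Perturb elitist `idx` by the scaled difference of two other
    randomly chosen elitists, clamped to the search box [lo, hi]."""
    others = [e for i, e in enumerate(elitists) if i != idx]
    a, b = rng.sample(others, 2)
    trial = [t + f_scale * (x - y) for t, x, y in zip(elitists[idx], a, b)]
    return [min(hi, max(lo, v)) for v in trial]
```

    Because the perturbation is built from differences of elitists, trial points tend to stay near the current nondominated set, which is how the DE step helps spread solutions along the Pareto front.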

  16. Testing the optimal defence hypothesis for two indirect defences: extrafloral nectar and volatile organic compounds

    PubMed Central

    Radhika, Venkatesan; Kost, Christian; Bartram, Stefan; Heil, Martin

    2008-01-01

    Many plants respond to herbivory with an increased production of extrafloral nectar (EFN) and/or volatile organic compounds (VOCs) to attract predatory arthropods as an indirect defensive strategy. In this study, we tested whether these two indirect defences fit the optimal defence hypothesis (ODH), which predicts the within-plant allocation of anti-herbivore defences according to trade-offs between growth and defence. Using jasmonic acid-induced plants of Phaseolus lunatus and Ricinus communis, we tested whether the within-plant distribution pattern of these two indirect defences reflects the fitness value of the respective plant parts. Furthermore, we quantified photosynthetic rates and followed the within-plant transport of assimilates with 13C labelling experiments. EFN secretion and VOC emission were highest in younger leaves. Moreover, the photosynthetic rate increased with leaf age, and pulse-labelling experiments suggested transport of carbon to younger leaves. Our results demonstrate that the ODH can explain the within-plant allocation pattern of both indirect defences studied. PMID:18493790

  17. Evaluation of Different Normalization and Analysis Procedures for Illumina Gene Expression Microarray Data Involving Small Changes

    PubMed Central

    Johnstone, Daniel M.; Riveros, Carlos; Heidari, Moones; Graham, Ross M.; Trinder, Debbie; Berretta, Regina; Olynyk, John K.; Scott, Rodney J.; Moscato, Pablo; Milward, Elizabeth A.

    2013-01-01

    While Illumina microarrays can be used successfully for detecting small gene expression changes due to their high degree of technical replicability, there is little information on how different normalization and differential expression analysis strategies affect outcomes. To evaluate this, we assessed concordance across gene lists generated by applying different combinations of normalization strategy and analytical approach to two Illumina datasets with modest expression changes. In addition to using traditional statistical approaches, we also tested an approach based on combinatorial optimization. We found that the choice of both normalization strategy and analytical approach considerably affected outcomes, in some cases leading to substantial differences in gene lists and subsequent pathway analysis results. Our findings suggest that important biological phenomena may be overlooked when there is a routine practice of using only one approach to investigate all microarray datasets. Analytical artefacts of this kind are likely to be especially relevant for datasets involving small fold changes, where inherent technical variation—if not adequately minimized by effective normalization—may overshadow true biological variation. This report provides some basic guidelines for optimizing outcomes when working with Illumina datasets involving small expression changes. PMID:27605185
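    Concordance between gene lists produced by different normalization/analysis combinations is often summarized by simple set overlap; a minimal sketch (the gene symbols are placeholders, and set overlap is only one of several concordance measures):

```python
def jaccard(list_a, list_b):
    """Jaccard index between two differentially-expressed gene lists."""
    a, b = set(list_a), set(list_b)
    return len(a & b) / len(a | b)

overlap = jaccard(["GENE_A", "GENE_B", "GENE_C"],
                  ["GENE_B", "GENE_C", "GENE_D"])
```

    Low overlap between pipelines on the same data is exactly the kind of analytical artefact the study warns about for small fold changes.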

  18. Optimization of the genotyping-by-sequencing strategy for population genomic analysis in conifers.

    PubMed

    Pan, Jin; Wang, Baosheng; Pei, Zhi-Yong; Zhao, Wei; Gao, Jie; Mao, Jian-Feng; Wang, Xiao-Ru

    2015-07-01

    Flexibility and low cost make genotyping-by-sequencing (GBS) an ideal tool for population genomic studies of nonmodel species. However, to utilize the potential of the method fully, many parameters affecting library quality and single nucleotide polymorphism (SNP) discovery require optimization, especially for conifer genomes with a high repetitive DNA content. In this study, we explored strategies for effective GBS analysis in pine species. We constructed GBS libraries using HpaII, PstI and EcoRI-MseI digestions with different multiplexing levels and examined the effect of restriction enzymes on library complexity and the impact of sequencing depth and size selection of restriction fragments on sequence coverage bias. We tested and compared UNEAK, Stacks and GATK pipelines for the GBS data, and then developed a reference-free SNP calling strategy for haploid pine genomes. Our GBS procedure proved to be effective in SNP discovery, producing 7000-11 000 and 14 751 SNPs within and among three pine species, respectively, from a PstI library. This investigation provides guidance for the design and analysis of GBS experiments, particularly for organisms for which genomic information is lacking. © 2014 John Wiley & Sons Ltd.

  19. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.

  20. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. 
The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.

  2. Assessment of predictive performance in incomplete data by combining internal validation and multiple imputation.

    PubMed

    Wahl, Simone; Boulesteix, Anne-Laure; Zierer, Astrid; Thorand, Barbara; van de Wiel, Mark A

    2016-10-26

    Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately; MI-Val, MI on the full data set followed by internal validation; and MI(-y)-Val, MI on the full data set omitting the outcome, followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a strategy for confidence interval construction to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for it. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias with increasing true effect size, number of covariates and decreasing sample size.
In Val-MI, accuracy of the estimate is more strongly improved by increasing the number of bootstrap draws rather than the number of imputations. With a simple integrated approach, valid confidence intervals for performance estimates can be obtained. When prognostic models are developed on incomplete data, Val-MI represents a valid strategy to obtain estimates of predictive performance measures.
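    The 0.632+ bootstrap estimate mentioned above combines the apparent (training) error with the out-of-bag bootstrap error; a hedged sketch following Efron and Tibshirani's formula (here `gamma` denotes the no-information error rate, and the capping rule follows the original proposal):

```python
def err_632plus(err_train, err_oob, gamma):
    """0.632+ optimism-corrected error estimate."""
    err_oob = min(err_oob, gamma)            # cap the out-of-bag error
    if gamma > err_train and err_oob > err_train:
        r = (err_oob - err_train) / (gamma - err_train)   # relative overfitting
    else:
        r = 0.0
    w = 0.632 / (1.0 - 0.368 * r)            # weight grows with overfitting
    return (1.0 - w) * err_train + w * err_oob
```

    With no overfitting (out-of-bag error equal to training error) the estimate reduces to that common error; with heavy overfitting the weight on the out-of-bag error approaches 1.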

  3. LOW-ENGINE-FRICTION TECHNOLOGY FOR ADVANCED NATURAL-GAS RECIPROCATING ENGINES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Victor W. Wong; Tian Tian; Grant Smedley

    2004-09-30

    This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston/ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and emissions. An iterative process of simulation, experimentation, and analysis is being followed toward the goal of demonstrating a complete optimized low-friction engine system. To date, a detailed set of piston/ring dynamic and friction models has been developed and applied, illustrating the fundamental relationships between design parameters and friction losses. Various low-friction strategies and ring-design concepts have been explored, and engine experiments have been done on a full-scale Waukesha VGF F18 in-line 6-cylinder power generation engine rated at 370 kW at 1800 rpm. Current accomplishments include designing and testing ring-packs that use a subtle top-compression-ring profile (skewed barrel design), lowering the tension of the oil-control ring, and employing a negative twist on the scraper ring to control oil consumption. Initial test data indicate that piston ring-pack friction was reduced by 35% by lowering the oil-control ring tension alone, which corresponds to a 1.5% improvement in fuel efficiency. Although small in magnitude, this improvement represents a first step towards anticipated aggregate improvements from other strategies. Other ring-pack design strategies to lower friction have been identified, including a reduced axial distance between the top two rings and a tilted top-ring groove. Some of these configurations have been tested and some await further evaluation. Colorado State University performed the tests and Waukesha Engine Dresser, Inc. provided technical support.
    Key elements of the continuing work include optimizing the engine piston design, applying surface and material developments in conjunction with improved lubricant properties, system modeling and analysis, and continued technology demonstration in an actual full-sized reciprocating natural-gas engine.

  4. Quantum-state comparison and discrimination

    NASA Astrophysics Data System (ADS)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2018-05-01

    We investigate the performance of the discrimination strategy in the task of comparing known quantum states. In the discrimination strategy, one infers whether or not two quantum systems are in the same state on the basis of the outcomes of separate discrimination measurements on each system. In some cases with more than two possible states, the optimal strategy in minimum-error comparison is to infer that the two systems are in different states without any measurement, implying that the discrimination strategy performs worse than the trivial "no-measurement" strategy. We present a sufficient condition for this phenomenon to occur. For two pure states with equal prior probabilities, we determine the optimal comparison success probability with an error margin, which interpolates between minimum-error and unambiguous comparison. We find that the discrimination strategy is not optimal except in the minimum-error case.

  5. Process development for the mass production of Ehrlichia ruminantium.

    PubMed

    Marcelino, Isabel; Sousa, Marcos F Q; Veríssimo, Célia; Cunha, António E; Carrondo, Manuel J T; Alves, Paula M

    2006-03-06

    This work describes the optimization of a cost-effective process for the production of an inactivated bacterial vaccine against heartwater, and the first attempt to produce the causative agent of this disease, the rickettsia Ehrlichia ruminantium (ER), using stirred tanks. In vitro, it is possible to produce ER using cultures of ruminant endothelial cells. Herein, mass production of these cells was optimized for stirred conditions. The effects of inoculum size, microcarrier type, serum concentration at inoculation time, and agitation rate on maximum cell concentration were evaluated. Several strategies for the scale-up of the cell inoculum were also tested. Afterwards, using the optimized parameters for cell growth, ER production in stirred tanks was validated for two ER strains (Gardel and Welgevonden). Critical parameters related to the infection strategy, such as serum concentration at infection time, multiplicity and time of infection, and medium refeed strategy, were analyzed. The results indicate that it is possible to produce ER in stirred tank bioreactors under serum-free culture conditions, reaching a 6.5-fold increase in ER production yields. The suitability of this process was validated up to a 2-l scale, and a preliminary cost estimation has shown that stirred tanks are the least expensive culture method. Overall, these results are crucial to define a scalable and fully controlled process for the production of a heartwater vaccine and open "new avenues" for the production of vaccines against other ehrlichial species, with emerging impact on human and animal health.

  6. Optimization of personalized therapies for anticancer treatment.

    PubMed

    Vazquez, Alexei

    2013-04-12

    To date, there are hundreds of targeted therapies for the treatment of cancer, many of which have companion biomarkers that are in use to inform treatment decisions. If we were to consider this whole arsenal of targeted therapies as a treatment option for every patient, we would very soon reach a scenario where each patient is positive for several markers, suggesting treatment with several targeted therapies. Given the documented side effects of anticancer drugs, it is clear that such a strategy is unfeasible. Here, we propose a strategy that optimizes the design of combinatorial therapies to achieve the best response rates with minimal toxicity. In this methodology, markers are assigned to drugs such that we achieve a high overall response rate while using personalized combinations of minimal size. We tested this methodology in an in silico cancer patient cohort, constructed from in vitro data for 714 cell lines and 138 drugs reported by the Sanger Institute. Our analysis indicates that, even in the context of personalized medicine, combinations of three or more drugs are required to achieve high response rates. Furthermore, patient-to-patient variations in pharmacokinetics have a significant impact on the overall response rate: a 10-fold increase in pharmacokinetic variation resulted in a significant drop in the overall response rate. The design of optimal combinatorial therapies for anticancer treatment requires a transition from the one-drug/one-biomarker approach to global strategies that simultaneously assign markers to a catalog of drugs. The methodology reported here provides a framework to achieve this transition.
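    The core idea of assigning markers to a catalog of drugs so that small combinations cover most patients can be illustrated with a toy greedy sketch (the response matrix is simulated; the paper's actual method and the Sanger data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(7)
n_patients, n_drugs = 200, 10
# responds[i, j] is True if patient i's markers predict a response to drug j
responds = rng.random((n_patients, n_drugs)) < 0.3

target, chosen = 0.9, []
covered = np.zeros(n_patients, dtype=bool)
while covered.mean() < target and len(chosen) < n_drugs:
    # greedily add the drug that rescues the most still-uncovered patients
    gains = responds[~covered].sum(axis=0)
    gains[chosen] = -1                      # do not pick the same drug twice
    best = int(gains.argmax())
    chosen.append(best)
    covered |= responds[:, best]

response_rate = covered.mean()
```

    In this simulated cohort the greedy loop needs several drugs to pass the 90% response-rate target, echoing the observation that combinations of three or more drugs are required.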

  7. Modelling and Optimal Control of Typhoid Fever Disease with Cost-Effective Strategies.

    PubMed

    Tilahun, Getachew Teshome; Makinde, Oluwole Daniel; Malonza, David

    2017-01-01

    We propose and analyze a compartmental nonlinear deterministic mathematical model for typhoid fever outbreaks and optimal control strategies in a community with varying population. The model is studied qualitatively using the stability theory of differential equations, and the basic reproduction number, which serves as the epidemic indicator, is obtained from the largest eigenvalue of the next-generation matrix. Both local and global asymptotic stability conditions for the disease-free and endemic equilibria are determined. The model exhibits a forward transcritical bifurcation, and a sensitivity analysis is performed. The optimal control problem is designed by applying Pontryagin's maximum principle with three control strategies, namely, prevention through sanitation, proper hygiene, and vaccination; treatment through application of appropriate medicine; and screening of the carriers. The cost functional accounts for the costs involved in prevention, screening, and treatment, together with the total number of infected persons averted. Numerical results for the typhoid outbreak dynamics and its optimal control revealed that a combination of prevention and treatment is the most cost-effective strategy to eradicate the disease.
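    The qualitative conclusion can be illustrated with a toy discrete-time compartmental simulation (an assumed SIR-type simplification, not the paper's typhoid model): prevention scales down transmission, treatment speeds up removal, and combining both averts the most infections.

```python
def total_infections(u_prev=0.0, u_treat=0.0, days=365):
    # toy discrete-time SIR dynamics; beta, gamma and control effects are assumed
    beta, gamma = 0.3, 0.05
    S, I = 0.99, 0.01
    total = 0.0
    for _ in range(days):
        new_inf = beta * (1.0 - u_prev) * S * I      # prevention lowers transmission
        recov = (gamma + 0.1 * u_treat) * I          # treatment speeds up removal
        S, I = S - new_inf, I + new_inf - recov
        total += new_inf
    return total

base = total_infections()
prevention = total_infections(u_prev=0.5)
treatment = total_infections(u_treat=0.5)
combined = total_infections(u_prev=0.5, u_treat=0.5)
```

    Each control alone reduces the cumulative infections relative to no control, and the combination reduces them further, mirroring the paper's cost-effectiveness finding.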

  8. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures, including data validation, anomaly detection, fault isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore, systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information, including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information, including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. 
Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
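    The genetic-algorithm core of such a selection process can be sketched as follows; the fault signatures, merit function, and parameter values are invented stand-ins for the engine-model quantities described above:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_faults, pop_size = 12, 4, 30
# rows: expected sensor deviations for each targeted fault scenario (assumed values)
signatures = rng.normal(size=(n_faults, n_sensors))

def merit(mask):
    # quality = worst-case separation between fault signatures as seen through
    # the selected sensors, minus a per-sensor cost; empty suites are inadmissible
    if not mask.any():
        return -np.inf
    s = signatures[:, mask]
    sep = min(np.linalg.norm(s[i] - s[j])
              for i in range(n_faults) for j in range(i + 1, n_faults))
    return sep - 0.05 * mask.sum()

pop = rng.random((pop_size, n_sensors)) < 0.5        # population of sensor bitmasks
for _ in range(60):
    scores = np.array([merit(m) for m in pop])
    new = [pop[scores.argmax()].copy()]              # elitism: keep the best suite
    while len(new) < pop_size:
        i, j = rng.integers(0, pop_size, size=2)     # tournament selection, parent a
        a = pop[i] if scores[i] >= scores[j] else pop[j]
        i, j = rng.integers(0, pop_size, size=2)     # tournament selection, parent b
        b = pop[i] if scores[i] >= scores[j] else pop[j]
        cut = rng.integers(1, n_sensors)             # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(n_sensors) < 0.05        # bit-flip mutation
        new.append(child)
    pop = np.array(new)

best = pop[np.argmax([merit(m) for m in pop])]
```

    The real strategy evaluates suites through an inverse engine model and statistical detection/isolation probabilities; the separation-minus-cost merit above is only a placeholder for that quality measure.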

  9. The Daniel K. Inouye College of Pharmacy Scripts: Updates on Clostridium difficile Infection: Advances in Laboratory Testing to Aid Diagnosis and Treatment.

    PubMed

    Lteif, Louis

    2017-02-01

    Clostridium difficile remains a major source of nosocomial infections and associated diarrhea. More recently, community-acquired cases have been on the rise, raising concern about a serious public health threat. Appropriate infection control precautions, as well as prevention and optimal management, may help to avoid detrimental outbreaks. A key step is utilizing laboratory testing for quick and accurate diagnosis of potential cases. This overview article describes Clostridium difficile infection control and prevention methods and updates the most recent management strategies, with a focus on the utilization and interpretation of laboratory diagnostic testing and appropriate treatment.

  10. Bare-Bones Teaching-Learning-Based Optimization

    PubMed Central

    Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye

    2014-01-01

    The teaching-learning-based optimization (TLBO) algorithm, which simulates the teaching-learning process of a classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, a hybridization of the teacher-phase learning strategy of the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learner-phase strategy of the standard TLBO or the new neighborhood search strategy. To verify the performance of our approach, 20 benchmark functions and two real-world problems are utilized. The conducted experiments show that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms. PMID:25013844

  11. Bare-bones teaching-learning-based optimization.

    PubMed

    Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye

    2014-01-01

    The teaching-learning-based optimization (TLBO) algorithm, which simulates the teaching-learning process of a classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, a hybridization of the teacher-phase learning strategy of the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learner-phase strategy of the standard TLBO or the new neighborhood search strategy. To verify the performance of our approach, 20 benchmark functions and two real-world problems are utilized. The conducted experiments show that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms.
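    For reference, the standard TLBO phases that BBTLBO modifies can be sketched as follows (the sphere test function and all parameter values are assumptions for illustration, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: float((x ** 2).sum())          # sphere test function (to minimize)
n_learners, dim, iters = 20, 5, 100
X = rng.uniform(-5, 5, size=(n_learners, dim))
start_best = min(f(x) for x in X)

for _ in range(iters):
    # teacher phase: move every learner toward the best solution and away
    # from a teaching-factor multiple of the class mean
    teacher = X[np.argmin([f(x) for x in X])]
    mean = X.mean(axis=0)
    for i in range(n_learners):
        TF = rng.integers(1, 3)              # teaching factor, 1 or 2
        cand = X[i] + rng.random(dim) * (teacher - TF * mean)
        if f(cand) < f(X[i]):                # greedy acceptance
            X[i] = cand
    # learner phase: each learner moves toward a better random peer
    for i in range(n_learners):
        j = int(rng.integers(0, n_learners))
        if j == i:
            continue
        step = (X[i] - X[j]) if f(X[i]) < f(X[j]) else (X[j] - X[i])
        cand = X[i] + rng.random(dim) * step
        if f(cand) < f(X[i]):
            X[i] = cand

end_best = min(f(x) for x in X)
```

    BBTLBO replaces parts of both phases with Gaussian sampling around neighborhood statistics; the greedy acceptance step shown here guarantees the best objective value never worsens.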

  12. Establishment of an immortalized mouse dermal papilla cell strain with optimized culture strategy.

    PubMed

    Guo, Haiying; Xing, Yizhan; Zhang, Yiming; He, Long; Deng, Fang; Ma, Xiaogen; Li, Yuhong

    2018-01-01

    The dermal papilla (DP) plays important roles in hair follicle regeneration. Long-term culture of mouse DP cells can provide enough cells for research and application of DP cells. We optimized the culture strategy for DP cells along three dimensions: stepwise dissection, collagen I coating, and an optimized culture medium. Based on the optimized culture strategy, we immortalized primary DP cells with the SV40 large T antigen and established several immortalized DP cell strains. By comparing molecular expression and morphologic characteristics with primary DP cells, we found that one cell strain, named iDP6, was similar to primary DP cells. Further characterization showed that iDP6 expresses FGF7 and α-SMA and has alkaline phosphatase activity. During characterization of the immortalized DP cell strains, we also found that cells in the DP were heterogeneous. We successfully optimized the culture strategy for DP cells and established an immortalized DP cell strain suitable for research and application of DP cells.

  13. Establishment of an immortalized mouse dermal papilla cell strain with optimized culture strategy

    PubMed Central

    Zhang, Yiming; He, Long; Deng, Fang; Ma, Xiaogen

    2018-01-01

    The dermal papilla (DP) plays important roles in hair follicle regeneration. Long-term culture of mouse DP cells can provide enough cells for research and application of DP cells. We optimized the culture strategy for DP cells along three dimensions: stepwise dissection, collagen I coating, and an optimized culture medium. Based on the optimized culture strategy, we immortalized primary DP cells with the SV40 large T antigen and established several immortalized DP cell strains. By comparing molecular expression and morphologic characteristics with primary DP cells, we found that one cell strain, named iDP6, was similar to primary DP cells. Further characterization showed that iDP6 expresses FGF7 and α-SMA and has alkaline phosphatase activity. During characterization of the immortalized DP cell strains, we also found that cells in the DP were heterogeneous. We successfully optimized the culture strategy for DP cells and established an immortalized DP cell strain suitable for research and application of DP cells. PMID:29383288

  14. Treatment strategy for a multidrug-resistant Klebsiella UTI.

    PubMed

    Fleming, Erin; Heil, Emily L; Hynicka, Lauren M

    2014-01-01

    To describe the management strategy for a multidrug-resistant (MDR) Klebsiella urinary tract infection (UTI). A 69-year-old Caucasian woman with a past medical history of recurrent UTIs and a right-lung transplant presented with fever to 101.4°F, chills, malaise, and cloudy, foul-smelling urine for approximately 1 week. She was found to have an MDR Klebsiella UTI that was sensitive to tigecycline and cefepime. To further evaluate the degree of resistance, Etest minimum inhibitory concentrations were requested for cefepime, amikacin, meropenem, and ertapenem. The patient received a 14-day course of amikacin, which resulted in resolution of her symptoms. One month later, the patient's UTI symptoms returned. The urine culture again grew MDR Klebsiella, sensitive only to tigecycline. Fosfomycin was initiated and resulted in limited resolution of her symptoms. Colistin was started; however, therapy was discontinued on day 5 secondary to the development of acute kidney injury. Despite the short course of therapy, the patient's symptoms resolved. The case presented lends itself well to numerous discussion items that are important to consider when determining optimal treatment for MDR Gram-negative bacilli (GNBs). Susceptibility testing is an important tool for optimizing antibiotic therapy; however, automated systems may overestimate the susceptibility profile of an MDR GNB. Treatment strategies evaluated for MDR GNBs include combination therapy with a carbapenem and synergy using a polymyxin. We have described the management strategy for an MDR Klebsiella UTI, the consequences of the initial management strategy, and potential strategies to manage these types of infections in future patients.

  15. Research on a power management system for thermoelectric generators to drive wireless sensors on a spindle unit.

    PubMed

    Li, Sheng; Yao, Xinhua; Fu, Jianzhong

    2014-07-16

    Thermoelectric energy harvesting is emerging as a promising alternative energy source to drive wireless sensors in mechanical systems. Typically, the waste heat from spindle units in machine tools creates potential for thermoelectric generation. However, the problem of low and fluctuant ambient temperature differences in spindle units limits the application of thermoelectric generation to drive a wireless sensor. This study is devoted to presenting a transformer-based power management system and its associated control strategy to make the wireless sensor work stably at different speeds of the spindle. The charging/discharging time of capacitors is optimized through this energy-harvesting strategy. A rotating spindle platform is set up to test the performance of the power management system at different speeds. The experimental results show that a longer sampling cycle time will increase the stability of the wireless sensor. The experiments also prove that utilizing the optimal time can make the power management system work more effectively compared with other systems using the same sample cycle.

  16. Optimization of a yeast RNA interference system for controlling gene expression and enabling rapid metabolic engineering.

    PubMed

    Crook, Nathan C; Schmitz, Alexander C; Alper, Hal S

    2014-05-16

    Reduction of endogenous gene expression is a fundamental operation of metabolic engineering, yet current methods for gene knockdown (i.e., genome editing) remain laborious and slow, especially in yeast. In contrast, RNA interference allows facile and tunable gene knockdown via a simple plasmid transformation step, enabling metabolic engineers to rapidly prototype knockdown strategies in multiple strains before expending significant cost to undertake genome editing. Although RNAi is naturally present in a myriad of eukaryotes, it has only been recently implemented in Saccharomyces cerevisiae as a heterologous pathway and so has not yet been optimized as a metabolic engineering tool. In this study, we elucidate a set of design principles for the construction of hairpin RNA expression cassettes in yeast and implement RNA interference to quickly identify routes for improvement of itaconic acid production in this organism. The approach developed here enables rapid prototyping of knockdown strategies and thus accelerates and reduces the cost of the design-build-test cycle in yeast.

  17. Nonpharmacological, Blood Conservation Techniques for Preventing Neonatal Anemia—Effective and Promising Strategies for Reducing Transfusion

    PubMed Central

    Carroll, Patrick D.; Widness, John A.

    2012-01-01

    The development of anemia after birth in very premature, critically ill newborn infants is a universal well-described phenomenon. Although preventing anemia in this population, along with efforts to establish optimal red blood cell (RBC) transfusion and pharmacologic therapy continue to be actively investigated, the present review focuses exclusively on nonpharmacological approaches to the prevention and treatment of neonatal anemia. We begin with an overview of topics relevant to nonpharmacological techniques. These topics include neonatal and fetoplacental hemoglobin levels and blood volumes, clinical and laboratory practices applied in critically ill neonates, and current RBC transfusion practice guidelines. This is followed by a discussion of the most effective and promising nonpharmacological blood conservation strategies and techniques. Fortunately, many of these techniques are feasible in most neonatal intensive care units. When applied together, these techniques are more effective than existing pharmacotherapies in significantly decreasing neonatal RBC transfusions. They include increasing hemoglobin endowment and circulating blood volume at birth; removing less blood for laboratory testing; and optimizing nutrition. PMID:22818543

  18. Research on a Power Management System for Thermoelectric Generators to Drive Wireless Sensors on a Spindle Unit

    PubMed Central

    Li, Sheng; Yao, Xinhua; Fu, Jianzhong

    2014-01-01

    Thermoelectric energy harvesting is emerging as a promising alternative energy source to drive wireless sensors in mechanical systems. Typically, the waste heat from spindle units in machine tools creates potential for thermoelectric generation. However, the problem of low and fluctuant ambient temperature differences in spindle units limits the application of thermoelectric generation to drive a wireless sensor. This study is devoted to presenting a transformer-based power management system and its associated control strategy to make the wireless sensor work stably at different speeds of the spindle. The charging/discharging time of capacitors is optimized through this energy-harvesting strategy. A rotating spindle platform is set up to test the performance of the power management system at different speeds. The experimental results show that a longer sampling cycle time will increase the stability of the wireless sensor. The experiments also prove that utilizing the optimal time can make the power management system work more effectively compared with other systems using the same sample cycle. PMID:25033189

  19. Multiobjective optimization of temporal processes.

    PubMed

    Song, Zhe; Kusiak, Andrew

    2010-06-01

    This paper presents a dynamic predictive-optimization framework for a nonlinear temporal process. Data-mining (DM) and evolutionary strategy algorithms are integrated in the framework for solving the optimization model. DM algorithms learn dynamic equations from the process data. An evolutionary strategy algorithm is then applied to solve the optimization problem, guided by the knowledge extracted by the DM algorithm. The concept presented in this paper is illustrated with data from a power plant, where the goal is to maximize the boiler efficiency and minimize the limestone consumption. This multiobjective optimization problem can either be transformed into a single-objective optimization problem through preference aggregation approaches or treated as a Pareto-optimal optimization problem. The computational results have shown the effectiveness of the proposed optimization framework.
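    The two treatments of the multiobjective problem can be contrasted in a few lines (the candidate operating points below are made up for illustration): preference aggregation collapses the objectives into one score, while a Pareto filter keeps every non-dominated trade-off.

```python
# candidate operating points: (limestone consumption, 100 - boiler efficiency),
# both to be minimized; the numbers are invented for illustration
points = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]

def dominated(p, others):
    # p is dominated if some other point is no worse in every objective
    # and strictly better in at least one
    return any(q != p and all(qk <= pk for qk, pk in zip(q, p)) for q in others)

pareto = [p for p in points if not dominated(p, points)]

# single-objective alternative: preference aggregation with fixed weights
w = (0.5, 0.5)
best_aggregated = min(points, key=lambda p: w[0] * p[0] + w[1] * p[1])
```

    Any weighted-aggregation optimum is always one of the Pareto-optimal points, but the Pareto filter preserves the full set of trade-offs for the decision-maker.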

  20. Optimal control of an invasive species using a reaction-diffusion model and linear programming

    USGS Publications Warehouse

    Bonneau, Mathieu; Johnson, Fred A.; Smith, Brian J.; Romagosa, Christina M.; Martin, Julien; Mazzotti, Frank J.

    2017-01-01

    Managing an invasive species is particularly challenging as little is generally known about the species’ biological characteristics in its new habitat. In practice, removal of individuals often starts before the species is studied to provide the information that will later improve control. Therefore, the locations and the amount of control have to be determined in the face of great uncertainty about the species characteristics and with a limited amount of resources. We propose framing spatial control as a linear programming optimization problem. This formulation, paired with a discrete reaction-diffusion model, permits calculation of an optimal control strategy that minimizes the remaining number of invaders for a fixed cost or that minimizes the control cost for containment or protecting specific areas from invasion. We propose computing the optimal strategy for a range of possible model parameters, representing current uncertainty on the possible invasion scenarios. Then, a best strategy can be identified depending on the risk attitude of the decision-maker. We use this framework to study the spatial control of the Argentine black and white tegus (Salvator merianae) in South Florida. There is uncertainty about tegu demography and we considered several combinations of model parameters, exhibiting various dynamics of invasion. For a fixed one-year budget, we show that the risk-averse strategy, which optimizes the worst-case scenario of tegus’ dynamics, and the risk-neutral strategy, which optimizes the expected scenario, both concentrated control close to the point of introduction. A risk-seeking strategy, which optimizes the best-case scenario, focuses more on models where eradication of the species in a cell is possible and consists of spreading control as much as possible. 
For the establishment of a containment area, assuming exponential growth, we show that with current control methods it might not be possible to implement such a strategy for some of the models that we considered. Including different possible models allows an examination of how the strategy is expected to perform in different scenarios. Then, a strategy that accounts for the risk attitude of the decision-maker can be designed.
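    A minimal version of the linear-programming formulation can be written with scipy.optimize.linprog (a toy one-dimensional landscape with invented growth, diffusion, and cost numbers): because the next-step population is linear in the removals, minimizing the remaining invaders under a fixed budget is a linear program.

```python
import numpy as np
from scipy.optimize import linprog

# one-dimensional landscape of 5 cells; one step of growth followed by
# diffusion to neighbouring cells (all parameter values are assumed)
n, growth, diff = 5, 1.4, 0.2
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = growth * (1 - 2 * diff)
    if i > 0:
        A[i, i - 1] = growth * diff
    if i < n - 1:
        A[i, i + 1] = growth * diff

p0 = np.array([50.0, 10.0, 5.0, 0.0, 0.0])   # invaders near the introduction point
cost, budget = np.ones(n), 30.0              # unit removal cost, fixed budget

# next population is A @ (p0 - x); minimizing its total over removals x is the
# linear program: maximize (A^T 1) . x  s.t.  cost . x <= budget, 0 <= x <= p0
w = A.T @ np.ones(n)
res = linprog(-w, A_ub=[cost], b_ub=[budget],
              bounds=[(0.0, pi) for pi in p0])
x = res.x
remaining = float((A @ (p0 - x)).sum())
uncontrolled = float((A @ p0).sum())
```

    Re-solving this program over a range of growth and diffusion parameters, as the paper proposes, yields the risk-averse, risk-neutral, and risk-seeking strategies.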

  1. Meaning-making intervention during breast or colorectal cancer treatment improves self-esteem, optimism, and self-efficacy.

    PubMed

    Lee, Virginia; Robin Cohen, S; Edgar, Linda; Laizner, Andrea M; Gagnon, Anita J

    2006-06-01

    Existential issues often accompany a diagnosis of cancer and remain one aspect of psychosocial oncology care for which there is a need for focused, empirically tested interventions. This study examined the efficacy of a novel psychological intervention specifically designed to address existential issues through the use of meaning-making coping strategies on psychological adjustment to cancer. Eighty-two breast or colorectal cancer patients were randomly assigned to receive routine care (control group) or up to four sessions that explored the meaning of the emotional responses and cognitive appraisals of each individual's cancer experience within the context of past life events and future goals (experimental group). This paper reports the results from 74 patients who completed and returned pre- and post-test measures for self-esteem, optimism, and self-efficacy. After controlling for baseline scores, the experimental group participants demonstrated significantly higher levels of self-esteem, optimism, and self-efficacy compared to the control group. The results are discussed in light of the theoretical and clinical implications of meaning-making coping in the context of stress and illness.

  2. Postaudit of optimal conjunctive use policies

    USGS Publications Warehouse

    Nishikawa, Tracy; Martin, Peter

    1998-01-01

    A simulation-optimization model was developed for the optimal management of the city of Santa Barbara's water resources during a drought; however, this model addressed only groundwater flow and not the advective-dispersive, density-dependent transport of seawater. Zero-m freshwater head constraints at the coastal boundary were used as surrogates for the control of seawater intrusion. In this study, the strategies derived from the simulation-optimization model using two surface water supply scenarios are evaluated using a two-dimensional, density-dependent groundwater flow and transport model. Comparisons of simulated chloride mass fractions are made between maintaining the actual pumping policies of the 1987-91 drought and implementing the optimal pumping strategies for each scenario. The results indicate that using 0-m freshwater head constraints allowed no more seawater intrusion than under actual 1987-91 drought conditions and that the simulation-optimization model yields least-cost strategies that deliver more water than under actual drought conditions while controlling seawater intrusion.

  3. Quantitative learning strategies based on word networks

    NASA Astrophysics Data System (ADS)

    Zhao, Yue-Tian-Yi; Jia, Zi-Yang; Tang, Yong; Xiong, Jason Jie; Zhang, Yi-Cheng

    2018-02-01

    Learning English requires considerable effort, but the way vocabulary is introduced in textbooks is not optimized for learning efficiency. With the increasing population of English learners, optimizing the learning process will have a significant impact on English learning and teaching. Recent developments in big data analysis and complex network science provide additional opportunities to design and further investigate strategies in English learning. In this paper, quantitative English learning strategies based on a word network and word usage information are proposed. The strategies integrate word frequency with topological structural information. By analyzing the influence of connected learned words, the learning weights for the unlearned words and the dynamic updating of the network are studied and analyzed. The results suggest that the quantitative strategies significantly improve learning efficiency while maintaining effectiveness. In particular, the optimized-weight-first strategy and the segmented strategies outperform the other strategies. The results provide opportunities for researchers and practitioners to reconsider the way English is taught and to design vocabularies quantitatively by balancing efficiency and learning costs based on the word network.
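    The idea of scoring unlearned words by combining raw frequency with links to already-learned words can be sketched as follows (the tiny vocabulary, edges, and weighting are invented for illustration, not the paper's corpus or weights):

```python
# toy word network: usage frequency plus co-occurrence edges
freq = {"the": 100, "dog": 20, "run": 30, "quantum": 2}
edges = {("the", "dog"), ("dog", "run"), ("quantum", "run")}
learned = {"the"}

def priority(word, alpha=0.5):
    # blend raw frequency with the number of already-learned neighbours
    links = sum(1 for e in edges if word in e and (set(e) - {word}) & learned)
    return alpha * freq[word] + (1 - alpha) * links

# pick the next word to study: high frequency and well connected to known words
next_word = max((w for w in freq if w not in learned), key=priority)
```

    After each word is learned, it moves into the learned set and the remaining priorities are recomputed, which is the dynamic network updating the abstract describes.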

  4. Dynamic simulation of a reverse Brayton refrigerator

    NASA Astrophysics Data System (ADS)

    Peng, N.; Lei, L. L.; Xiong, L. Y.; Tang, J. C.; Dong, B.; Liu, L. Q.

    2014-01-01

    A test refrigerator based on a modified reverse Brayton cycle has recently been developed at the Chinese Academy of Sciences. To study the behavior of this test refrigerator, a dynamic simulation has been carried out. The numerical model comprises the typical components of the test refrigerator: compressor, valves, heat exchangers, expander, and heater. The simulator is based on an object-oriented approach, and each component is represented by a set of differential and algebraic equations. The control system of the test refrigerator is also simulated and can be used to optimize the control strategies. This paper describes all the models and shows the simulation results. Comparisons between simulation results and experimental data are also presented. Experimental validation on the test refrigerator gives satisfactory results.

  5. Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT

    PubMed Central

    2017-01-01

    Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed compared with the Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of “premature convergence,” that is, the possibility of becoming trapped in a local optimum when dealing with nonlinear optimization problems with many local extrema. To surmount the shortcomings of CSO, a Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. Firstly, a Quantum-behaved Cat Swarm Optimization (QCSO) algorithm improves the accuracy of CSO, which tends to fall into local optima in its later stages. CQCSO is then obtained by introducing a tent map that helps the algorithm jump out of local optima. Secondly, CQCSO has been applied in the simulation of five different test functions, showing higher accuracy and less time consumption than CSO and QCSO. Finally, a photovoltaic MPPT model and experimental platform are established, and a global maximum power point tracking control strategy is achieved by the CQCSO algorithm, the effectiveness and efficiency of which have been verified by both simulation and experiment. PMID:29181020

  6. Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT.

    PubMed

    Nie, Xiaohua; Wang, Wei; Nie, Haoyao

    2017-01-01

    Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed compared with the Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of "premature convergence," that is, the possibility of becoming trapped in a local optimum when dealing with nonlinear optimization problems with many local extrema. To surmount the shortcomings of CSO, a Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. Firstly, a Quantum-behaved Cat Swarm Optimization (QCSO) algorithm improves the accuracy of CSO, which tends to fall into local optima in its later stages. CQCSO is then obtained by introducing a tent map that helps the algorithm jump out of local optima. Secondly, CQCSO has been applied in the simulation of five different test functions, showing higher accuracy and less time consumption than CSO and QCSO. Finally, a photovoltaic MPPT model and experimental platform are established, and a global maximum power point tracking control strategy is achieved by the CQCSO algorithm, the effectiveness and efficiency of which have been verified by both simulation and experiment.
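
    The tent map mentioned in this abstract is simple to state. Below is a minimal sketch of how a chaotic perturbation might be used to kick a trapped candidate solution; the remapping of the chaotic variable into the search interval is an illustrative assumption, not the paper's exact scheme:

```python
import random

def tent_map(x, mu=2.0):
    # Classic tent map on [0, 1]; fully chaotic for mu = 2.
    return mu * x if x < 0.5 else mu * (1.0 - x)

def chaotic_perturb(lo, hi, steps=5, rng=random):
    """Hypothetical chaotic jump: iterate the tent map from a random seed,
    then remap the chaotic variable into the search interval [lo, hi]."""
    z = rng.random()
    for _ in range(steps):
        z = tent_map(z)
    return lo + z * (hi - lo)
```

    The point of the chaotic sequence, versus a plain uniform draw, is that its iterates sweep the interval ergodically, which is the property CQCSO exploits to escape local optima.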

  7. Control Strategies for Distributed Energy Resources to Maximize the Use of Wind Power in Rural Microgrids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Shuai; Elizondo, Marcelo A.; Samaan, Nader A.

    2011-10-10

    The focus of this paper is to design control strategies for distributed energy resources (DERs) to maximize the use of wind power in a rural microgrid. In such a system, it may be economical to harness wind power to reduce the consumption of fossil fuels for electricity production. In this work, we develop control strategies for DERs, including diesel generators, energy storage and demand response, to achieve high penetration of wind energy in a rural microgrid. Combinations of centralized (direct control) and decentralized (autonomous response) control strategies are investigated. Detailed dynamic models for a rural microgrid are built to conduct simulations. The system response to large disturbances and frequency regulation are tested. It is shown that optimal control coordination of DERs can be achieved to maintain system frequency while maximizing wind power usage and reducing the wear and tear on fossil-fueled generators.

  8. Constructing and Deconstructing Concepts.

    PubMed

    Doan, Charles A; Vigo, Ronaldo

    2016-09-01

    Several empirical investigations have explored whether observers prefer to sort sets of multidimensional stimuli into groups by employing one-dimensional or family-resemblance strategies. Although one-dimensional sorting strategies have been the prevalent finding for these unsupervised classification paradigms, several researchers have provided evidence that the choice of strategy may depend on the particular demands of the task. To account for this disparity, we propose that observers extract relational patterns from stimulus sets that facilitate the development of optimal classification strategies for determining category membership. We conducted a novel constrained categorization experiment to empirically test this hypothesis by instructing participants to either add or remove objects from presented categorical stimuli. We employed generalized representational information theory (GRIT; Vigo, 2011b, 2013a, 2014) and its associated formal models to predict and explain how human beings chose to modify these categorical stimuli. Additionally, we compared model performance to predictions made by a leading prototypicality measure in the literature.

  9. Role of the superior colliculus in choosing mixed-strategy saccades.

    PubMed

    Thevarajah, Dhushan; Mikulić, Areh; Dorris, Michael C

    2009-02-18

    Game theory outlines optimal response strategies during mixed-strategy competitions. The neural processes involved in choosing individual strategic actions, however, remain poorly understood. Here, we tested whether the superior colliculus (SC), a brain region critical for generating sensory-guided saccades, is also involved in choosing saccades under strategic conditions. Monkeys were free to choose either of two saccade targets as they competed against a computer opponent during the mixed-strategy game "matching pennies." The accuracy with which presaccadic SC activity predicted upcoming choice gradually increased in the time leading up to the saccade. Probing the SC with suprathreshold stimulation demonstrated that these evolving signals were functionally involved in preparing strategic saccades. Finally, subthreshold stimulation of the SC increased the likelihood that contralateral saccades were selected. Together, our results suggest that motor regions of the brain play an active role in choosing strategic actions rather than passively executing those prespecified by upstream executive regions.

  10. Current Grid Generation Strategies and Future Requirements in Hypersonic Vehicle Design, Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)

    1998-01-01

    Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of the grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. Factors considered in devising topological constructs and blocking structures include the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, the physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of grid quality is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Special grid generation strategies for modeling control-surface deflections and material mapping are also addressed.

  11. Strategy for Realizing High-Precision VUV Spectro-Polarimeter

    NASA Astrophysics Data System (ADS)

    Ishikawa, R.; Narukage, N.; Kubo, M.; Ishikawa, S.; Kano, R.; Tsuneta, S.

    2014-12-01

    Spectro-polarimetric observations in the vacuum ultraviolet (VUV) range are currently the only means to measure magnetic fields in the upper chromosphere and transition region of the solar atmosphere. The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) aims to measure linear polarization at the hydrogen Lyman-α line (121.6 nm). This measurement requires a polarization sensitivity better than 0.1%, which is unprecedented in the VUV range. Here we present a strategy with which to realize such high-precision spectro-polarimetry. This involves the optimization of the instrument design, testing of optical components, extensive analyses of polarization errors, polarization calibration of the instrument, and calibration with onboard data. We expect that this strategy will aid the development of other advanced high-precision polarimeters in the UV as well as in other wavelength ranges.

  12. Latency reversal and viral clearance to cure HIV-1

    PubMed Central

    Margolis, David M.; Garcia, J. Victor; Hazuda, Daria J.; Haynes, Barton F.

    2016-01-01

    Research toward a cure for human immunodeficiency virus type 1 (HIV-1) infection has joined prevention and treatment efforts in the global public health agenda. A major approach to HIV eradication envisions antiretroviral suppression, paired with targeted therapies to enforce the expression of viral antigen from quiescent HIV-1 genomes, and immunotherapies to clear latent infection. These strategies are targeted to lead to viral eradication—a cure for AIDS. Paired testing of latency reversal and clearance strategies has begun, but additional obstacles to HIV eradication may emerge. Nevertheless, there is reason for optimism that advances in long-acting antiretroviral therapy and HIV prevention strategies will contribute to efforts in HIV cure research and that the implementation of these efforts will synergize to markedly blunt the effect of the HIV pandemic on society. PMID:27463679

  13. Zika virus RNA polymerase chain reaction on the utility channel of a commercial nucleic acid testing system.

    PubMed

    Boujnan, Mohamed; Duits, Ashley J; Koppelman, Marco H G M

    2018-03-01

    Several countries have implemented safety strategies to reduce the risk of Zika virus (ZIKV) transmission through blood transfusion. These strategies have included nucleic acid amplification testing (NAT) of blood donations. In this study, a new real-time polymerase chain reaction (PCR) assay, including an internal control, for the detection of ZIKV on the cobas omni Utility Channel (UC) on the cobas 6800 system is presented. PCR conditions and primer/probe concentrations were optimized on the LightCycler 480 instrument. Optimized conditions were transferred to the cobas omni UC on the cobas 6800 system. Subsequently, the limit of detection (LOD) in plasma and urine, genotype inclusivity, specificity, cross-reactivity, and clinical sensitivity were determined. The 95% LOD of the ZIKV PCR assay on the cobas 6800 system was 23.0 IU/mL (95% confidence interval [CI], 16.5-37.5) in plasma and 24.5 IU/mL (95% CI, 13.4-92.9) in urine. The assay detected African and Asian lineages of ZIKV. The specificity was 100%. The clinical concordance between the newly developed ZIKV PCR assay and the investigational Roche cobas Zika NAT test was 83% (24/29). We developed a sensitive ZIKV PCR assay on the cobas omni UC on the cobas 6800 system. The assay can be used for large-scale screening of blood donations for ZIKV or for testing of blood donors returning from areas with ZIKV to avoid temporary deferral. This study also demonstrates that the cobas omni UC on the cobas 6800 system can be used for in-house-developed PCR assays. © 2018 AABB.

  14. Direct Fuel Injector Power Drive System Optimization

    DTIC Science & Technology

    2014-04-01

    solenoid coil to create magnetic field in the stator. Then, the stator pulls the pintle to open the injector nozzle. This pintle movement occurs when the...that typically deal with power strategies to the injector solenoid coil. Numerical simulation codes for diesel injection systems were developed by...Laboratory) for providing the JP-8 test fuel. REFERENCES 1. Digesu, P. and Laforgia D., "Diesel electro-injector: A numerical simulation code". Journal of

  15. Optimization of LC-Orbitrap-HRMS acquisition and MZmine 2 data processing for nontarget screening of environmental samples using design of experiments.

    PubMed

    Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias

    2016-11-01

    Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim to fill this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS2 experiments is suggested for nontarget screening. MZmine 2 detected 75-100% of the peaks compared to manual peak detection at an intensity level of 10^5 in a validation dataset on both spiked and real water samples under optimal parameter settings. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable to environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.
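
    The DoE idea of systematically varying parameter settings and scoring each combination can be sketched with a simple full-factorial screen. Real DoE work typically uses fractional designs and fitted response models; the parameter names and scoring function below are hypothetical stand-ins, not MZmine 2 settings:

```python
from itertools import product

def full_factorial(levels):
    """levels: dict mapping parameter name -> list of candidate settings.
    Yields every combination of settings (a full-factorial design)."""
    names = list(levels)
    for combo in product(*(levels[n] for n in names)):
        yield dict(zip(names, combo))

def optimize_settings(levels, score):
    """Return the parameter combination maximizing `score`
    (e.g., the number of spiked target peaks recovered)."""
    return max(full_factorial(levels), key=score)
```

    In practice `score` would run the peak-detection pipeline on the spiked extract and count recovered targets; here any callable over a settings dict works.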

  16. Coordinative Voltage Control Strategy with Multiple Resources for Distribution Systems of High PV Penetration: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xiangqi; Zhang, Yingchen

    This paper presents an optimal voltage control methodology with coordination among different voltage-regulating resources, including controllable loads, distributed energy resources such as energy storage and photovoltaics (PV), and utility voltage-regulating devices such as voltage regulators and capacitors. The proposed methodology could effectively tackle the overvoltage and voltage regulation device distortion problems brought by high penetrations of PV to improve grid operation reliability. A voltage-load sensitivity matrix and voltage-regulator sensitivity matrix are used to deploy the resources along the feeder to achieve the control objectives. Mixed-integer nonlinear programming is used to solve the formulated optimization control problem. The methodology has been tested on the IEEE 123-feeder test system, and the results demonstrate that the proposed approach could actively tackle the voltage problem brought about by high penetrations of PV and improve the reliability of distribution system operation.

  17. Optimizing the diagnostic testing of Clostridium difficile infection.

    PubMed

    Bouza, Emilio; Alcalá, Luis; Reigadas, Elena

    2016-09-01

    Clostridium difficile infection (CDI) is the leading cause of hospital-acquired diarrhea and is associated with a considerable health and cost burden. However, there is still no clear consensus on the best laboratory diagnosis approach, and a wide variation of testing methods and strategies can be encountered. We aim to review the most practical aspects of CDI diagnosis, providing our own view on how to optimize CDI diagnosis. Expert commentary: Laboratory diagnosis in search of C. difficile toxins should be applied to all fecal diarrheic samples reaching the microbiology laboratory from patients > 2 years old, with or without classic risk factors for CDI. Detection of toxins, either directly in the fecal sample or in the bacteria isolated in culture, confirms CDI in the proper clinical setting. Nucleic acid amplification techniques (NAAT) allow the process to be sped up, with epidemiological and therapeutic consequences.

  18. Energy-landscape paving for prediction of face-centered-cubic hydrophobic-hydrophilic lattice model proteins

    NASA Astrophysics Data System (ADS)

    Liu, Jingfa; Song, Beibei; Liu, Zhaoxia; Huang, Weibo; Sun, Yuanyuan; Liu, Wenjie

    2013-11-01

    Protein structure prediction (PSP) is a classical NP-hard problem in computational biology. The energy-landscape paving (ELP) method is a heuristic global optimization algorithm that has been successfully applied to many optimization problems with complex energy landscapes in continuous space. By introducing a new update mechanism for the histogram function in ELP and incorporating both greedy generation of the initial conformation and a pull-move-based neighborhood search strategy, an improved energy-landscape paving (ELP+) method is proposed. Twelve general benchmark instances are first tested on both two-dimensional and three-dimensional (3D) face-centered-cubic (fcc) hydrophobic-hydrophilic (HP) lattice models. The lowest energies found by ELP+ are as good as or better than those of other methods in the literature for all instances. Then, five sets of larger-scale instances, denoted by S, R, F90, F180, and CASP target instances, are tested on the 3D fcc HP lattice model. The proposed algorithm finds lower energies than the five other methods in the literature. Not unexpectedly, this is particularly pronounced for the longer sequences considered. Computational results show that ELP+ is an effective method for PSP on the fcc HP lattice model.
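
    The core ELP idea, penalizing already-visited energies so the walker is pushed over barriers, can be sketched on a toy one-dimensional discrete landscape. The update rule, constants, and landscape below are illustrative only; they are not the ELP+ mechanism or the lattice HP model of the paper:

```python
import math
import random

def elp_minimize(energy, neighbor, x0, steps=2000, k=0.5, temp=1.0, seed=1):
    """Minimal energy-landscape paving sketch on a generic discrete landscape.
    A histogram over visited energy bins adds a growing penalty to re-visited
    energies, which eventually forces the walker out of any local minimum."""
    rng = random.Random(seed)
    hist = {}                      # energy bin -> visit count
    x = x0
    best, best_e = x0, energy(x0)
    for _ in range(steps):
        e_x = energy(x) + k * hist.get(round(energy(x), 3), 0)
        y = neighbor(x, rng)
        e_y = energy(y) + k * hist.get(round(energy(y), 3), 0)
        # Metropolis acceptance on the *penalized* energies.
        if e_y <= e_x or rng.random() < math.exp((e_x - e_y) / temp):
            x = y
        bin_now = round(energy(x), 3)
        hist[bin_now] = hist.get(bin_now, 0) + 1
        if energy(x) < best_e:      # track the best *raw* energy seen
            best, best_e = x, energy(x)
    return best, best_e
```

    On a seven-state chain with a local minimum at state 1 and the global minimum at state 5 behind a barrier, the accumulating penalty lifts the walker out of state 1 even though plain downhill search would stay trapped.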

  19. Optimization Strategies for Bruch's Membrane Opening Minimum Rim Area Calculation: Sequential versus Simultaneous Minimization.

    PubMed

    Enders, Philip; Adler, Werner; Schaub, Friederike; Hermann, Manuel M; Diestelhorst, Michael; Dietlein, Thomas; Cursiefen, Claus; Heindl, Ludwig M

    2017-10-24

    To compare a simultaneously optimized continuous minimum rim surface parameter between Bruch's membrane opening (BMO) and the internal limiting membrane to the standard sequential minimization used for calculating the BMO minimum rim area in spectral domain optical coherence tomography (SD-OCT). In this case-control, cross-sectional study, 704 eyes of 445 participants underwent SD-OCT of the optic nerve head (ONH), visual field testing, and clinical examination. Globally and clock-hour sector-wise optimized BMO-based minimum rim area was calculated independently. Outcome parameters included BMO-globally optimized minimum rim area (BMO-gMRA) and sector-wise optimized BMO minimum rim area (BMO-MRA). BMO area was 1.89 ± 0.05 mm². Mean global BMO-MRA was 0.97 ± 0.34 mm²; mean global BMO-gMRA was 1.01 ± 0.36 mm². Both parameters correlated with r = 0.995 (P < 0.001); the mean difference was 0.04 mm² (P < 0.001). In all sectors, the parameters differed by 3.0-4.2%. In receiver operating characteristics, the calculated area under the curve (AUC) to differentiate glaucoma was 0.873 for BMO-MRA, compared to 0.866 for BMO-gMRA (P = 0.004). Among ONH sectors, the temporal inferior location showed the highest AUC. Optimization strategies to calculate BMO-based minimum rim area led to significantly different results. Imposing an additional adjacency constraint within the calculation of BMO-MRA does not improve diagnostic power. Global and temporal inferior BMO-MRA performed best in differentiating glaucoma patients.

  20. RTDS implementation of an improved sliding mode based inverter controller for PV system.

    PubMed

    Islam, Gazi; Muyeen, S M; Al-Durra, Ahmed; Hasanien, Hany M

    2016-05-01

    This paper proposes a novel approach for testing the dynamics and control aspects of a large-scale photovoltaic (PV) system in real time, along with resolving design hindrances of controller parameters, using a Real Time Digital Simulator (RTDS). In general, the harmonic profile of a fast controller is widely distributed due to the large bandwidth of the controller. The major contribution of this paper is that the proposed control strategy gives an improved voltage harmonic profile, distributing it more closely around the switching frequency along with a fast transient response; filter design thus becomes easier. The implementation of a control strategy with high bandwidth in the small time steps of an RTDS is not straightforward. This paper presents a methodology for practitioners to implement such a control scheme in RTDS. As part of the industrial process, the controller parameters are optimized using the particle swarm optimization (PSO) technique to improve low voltage ride through (LVRT) performance under network disturbance. The response surface methodology (RSM) is well adapted to build analytical models for recovery time (Rt), maximum percentage overshoot (MPOS), settling time (Ts), and steady-state error (Ess) of the voltage profile immediately after the inverter under disturbance. A systematic approach to controller parameter optimization is detailed. The transient performance of the PSO-based optimization method applied to the proposed sliding mode controlled PV inverter is compared with results from a genetic algorithm (GA) based optimization technique. The reported real-time implementation challenges and controller optimization procedure are applicable to other control applications in the field of renewable and distributed generation systems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
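
    A minimal global-best PSO of the kind used for such controller tuning, applied here to a stand-in quadratic cost. In the paper the objective combines RSM models of Rt, MPOS, Ts, and Ess; everything below (cost surface, bounds, coefficients) is an illustrative sketch:

```python
import random

def pso(cost, bounds, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology).
    bounds is a list of (lo, hi) pairs, one per controller parameter."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # Inertia + cognitive pull to pbest + social pull to gbest.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost
```

    Swapping the toy cost for a simulation-driven evaluation (run the inverter model, measure overshoot and settling time, combine into a scalar) reproduces the tuning loop the abstract describes.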

  1. Universal Versus Targeted Screening for Lynch Syndrome: Comparing Ascertainment and Costs Based on Clinical Experience.

    PubMed

    Erten, Mujde Z; Fernandez, Luca P; Ng, Hank K; McKinnon, Wendy C; Heald, Brandie; Koliba, Christopher J; Greenblatt, Marc S

    2016-10-01

    Strategies to screen colorectal cancers (CRCs) for Lynch syndrome are evolving rapidly; the optimal strategy remains uncertain. We compared targeted versus universal screening of CRCs for Lynch syndrome. In 2010-2011, we employed targeted screening (age < 60 and/or Bethesda criteria). From 2012 to 2014, we screened all CRCs. Immunohistochemistry for the four mismatch repair proteins was done in all cases, followed by other diagnostic studies as indicated. We modeled the diagnostic costs of detecting Lynch syndrome and estimated the 5-year costs of preventing CRC by colonoscopy screening, using a system dynamics model. Using targeted screening, 51/175 (29%) cancers fit the criteria and were tested by immunohistochemistry; 15/51 (29%, or 8.6% of all CRCs) showed suspicious loss of ≥1 mismatch repair protein. Germline mismatch repair gene mutations were found in 4/4 cases sequenced (11 suspected cases did not have germline testing). Using universal screening, 17/292 (5.8%) screened cancers had abnormal immunohistochemistry suspicious for Lynch syndrome. Germline mismatch repair mutations were found in only 3/10 cases sequenced (7 suspected cases did not have germline testing). The mean cost to identify Lynch syndrome probands was ~$23,333/case for targeted screening and ~$175,916/case for universal screening at our institution. Estimated costs to identify and screen probands and relatives were: targeted, $9798/case and universal, $38,452/case. In real-world Lynch syndrome management, incomplete clinical follow-up was the major barrier to genetic testing. Targeted screening costs 2- to 7.5-fold less than universal screening and rarely misses Lynch syndrome cases. Future changes in testing costs will likely change the optimal algorithm.

  2. An optimizing start-up strategy for a bio-methanator.

    PubMed

    Sbarciog, Mihaela; Loccufier, Mia; Vande Wouwer, Alain

    2012-05-01

    This paper presents an optimizing start-up strategy for a bio-methanator. The goal of the control strategy is to maximize the outflow rate of methane in anaerobic digestion processes, which can be described by a two-population model. The methodology relies on a thorough analysis of the system dynamics and involves the solution of two optimization problems: a steady-state optimization to determine the optimal operating point, and a transient optimization. The latter is a classical optimal control problem, which can be solved using the maximum principle of Pontryagin. The proposed control law is of the bang-bang type. The process is driven from an initial state to a small neighborhood of the optimal steady state by switching the manipulated variable (dilution rate) from the minimum to the maximum value at a certain time instant. Then the dilution rate is set to the optimal value and the system settles down in the optimal steady state. This control law ensures the convergence of the system to the optimal steady state and substantially increases its stability region: the region of attraction of the steady state corresponding to maximum methane production is considerably enlarged. In some cases, related to the possibility of selecting the minimum dilution rate below a certain level, the stability region of the optimal steady state equals the interior of the state space. Aside from its efficiency, which is evaluated not only in terms of biogas production but also from the perspective of treatment of the organic load, the strategy is characterized by simplicity, being thus appropriate for implementation in real-life systems. Another important advantage is its generality: this technique may be applied to any anaerobic digestion process for which the acidogenesis and methanogenesis are, respectively, characterized by Monod and Haldane kinetics.
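
    The bang-bang profile can be illustrated on a toy one-population Monod chemostat integrated with Euler's method. The paper's model has two populations and Haldane kinetics; the kinetics, yield, parameters, and switching times below are illustrative assumptions only:

```python
def bang_bang_dilution(t, t_switch, t_settle, d_min, d_max, d_opt):
    """Bang-bang start-up profile: hold D at d_min, switch to d_max at
    t_switch, then fix D at the optimal steady-state value d_opt."""
    if t < t_switch:
        return d_min
    if t < t_settle:
        return d_max
    return d_opt

def simulate(mu, s_in, t_switch, t_settle, d_min, d_max, d_opt,
             x0=0.1, s0=5.0, dt=0.01, t_end=50.0):
    """Euler integration of a one-population chemostat with unit yield:
    dx/dt = (mu(s) - D) x,  ds/dt = D (s_in - s) - mu(s) x."""
    x, s, t = x0, s0, 0.0
    while t < t_end:
        d = bang_bang_dilution(t, t_switch, t_settle, d_min, d_max, d_opt)
        growth = mu(s)
        x += dt * (growth - d) * x
        s += dt * (d * (s_in - s) - growth * x)
        t += dt
    return x, s
```

    With Monod kinetics mu(s) = 0.5 s / (1 + s) and d_opt = 0.2, the state settles near the steady state mu(s*) = d_opt (s* = 2/3, x* = s_in - s*), illustrating how the final constant dilution rate parks the system at the chosen operating point.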

  3. Optimizing Telehealth Strategies for Subspecialty Care: Recommendations from Rural Pediatricians

    PubMed Central

    Demirci, Jill R.; Bogen, Debra L.; Mehrotra, Ateev; Miller, Elizabeth

    2015-01-01

    Background: Telehealth offers strategies to improve access to subspecialty care for children in rural communities. Rural pediatrician experiences and preferences regarding the use of these telehealth strategies for children's subspecialty care needs are not known. We elicited rural pediatrician experiences and preferences regarding different pediatric subspecialty telehealth strategies. Materials and Methods: Seventeen semistructured telephone interviews were conducted with rural pediatricians from 17 states within the United States. Interviewees were recruited by e-mails to a pediatric rural health listserv and to rural pediatricians identified through snowball sampling. Themes were identified through thematic analysis of interview transcripts. Institutional Review Board approval was obtained. Results: Rural pediatricians identified several telehealth strategies to improve access to subspecialty care, including physician access hotlines, remote electronic medical record access, electronic messaging systems, live video telemedicine, and telehealth triage systems. Rural pediatricians provided recommendations for optimizing the utility of each of these strategies based on their experiences with different systems. Rural pediatricians preferred specific telehealth strategies for specific clinical contexts, resulting in a proposed framework describing the complementary role of different telehealth strategies for pediatric subspecialty care. Finally, rural pediatricians identified additional benefits associated with the use of telehealth strategies and described a desire for telehealth systems that enhanced (rather than replaced) personal relationships between rural pediatricians and subspecialists. Conclusions: Rural pediatricians described complementary roles for different subspecialty care telehealth strategies. Additionally, rural pediatricians provided recommendations for optimizing individual telehealth strategies. Input from rural pediatricians will be crucial for optimizing specific telehealth strategies and designing effective telehealth systems. PMID:25919585

  4. A new strategy of glucose supply in a microbial fermentation model

    NASA Astrophysics Data System (ADS)

    Kasbawati, Gunawan, A. Y.; Sidarto, K. A.; Hertadi, R.

    2015-09-01

    The strategy of glucose supply used to achieve optimal ethanol productivity in a yeast cell is one of the main features of a microbial fermentation process. Besides the familiar continuous glucose supply, in this study we consider a new supply strategy, the so-called on-off supply. Optimal control theory is applied to the fermentation system to find the optimal rate of glucose supply and the time of supply. The optimization problem is solved numerically using a Differential Evolution algorithm. We find two alternative solutions that yield similar results: either a long process with a low glucose supply or a short process with a high glucose supply.
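
    A bare-bones DE/rand/1/bin, the class of algorithm this record references, can be sketched as follows. The toy quadratic objective standing in for (negative) ethanol productivity, and the bounds on supply rate and supply time, are purely illustrative:

```python
import random

def differential_evolution(cost, bounds, np_=15, f=0.8, cr=0.9,
                           gens=80, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    members, binomial crossover, greedy selection. bounds = [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [cost(p) for p in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)       # guarantee one mutated component
            trial = pop[i][:]
            for d in range(dim):
                if d == jrand or rng.random() < cr:
                    v = pop[a][d] + f * (pop[b][d] - pop[c][d])
                    trial[d] = min(max(v, bounds[d][0]), bounds[d][1])
            tf = cost(trial)
            if tf <= fit[i]:                 # greedy one-to-one selection
                pop[i], fit[i] = trial, tf
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]
```

    For the fermentation problem, `cost` would run the (discretized) fermentation model under a candidate (supply rate, supply time) pair and return the negative ethanol yield.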

  5. Numerical solution of a conspicuous consumption model with constant control delay

    PubMed Central

    Huschto, Tony; Feichtinger, Gustav; Hartl, Richard F.; Kort, Peter M.; Sager, Sebastian; Seidl, Andrea

    2011-01-01

    We derive optimal pricing strategies for conspicuous consumption products in periods of recession. To that end, we formulate and investigate a two-stage economic optimal control problem that takes uncertainty of the recession period length and delay effects of the pricing strategy into account. This non-standard optimal control problem is difficult to solve analytically, and solutions depend on the variable model parameters. Therefore, we use a numerical, result-driven approach. We propose a structure-exploiting direct method for optimal control to solve this challenging optimization problem. In particular, we discretize the uncertainties in the model formulation by using scenario trees and handle the control delays by introducing slack control functions. Numerical results illustrate the validity of our approach and show the impact of uncertainties and delay effects on optimal economic strategies. During the recession, delayed optimal prices are higher than the non-delayed ones. In the normal economic period, however, this effect is reversed, and optimal prices with a delayed impact are smaller compared to the non-delayed case. PMID:22267871

  6. Optimal dividends in the Brownian motion risk model with interest

    NASA Astrophysics Data System (ADS)

    Fang, Ying; Wu, Rong

    2009-07-01

    In this paper, we consider a Brownian motion risk model, and in addition, the surplus earns investment income at a constant force of interest. The objective is to find a dividend policy so as to maximize the expected discounted value of dividend payments. It is well known that optimality is achieved by using a barrier strategy for unrestricted dividend rate. However, ultimate ruin of the company is certain if a barrier strategy is applied. In many circumstances this is not desirable. This consideration leads us to impose a restriction on the dividend stream. We assume that dividends are paid to the shareholders according to admissible strategies whose dividend rate is bounded by a constant. Under this additional constraint, we show that the optimal dividend strategy is formed by a threshold strategy.

  7. Cost-effectiveness of angiographic imaging in isolated perimesencephalic subarachnoid hemorrhage.

    PubMed

    Kalra, Vivek B; Wu, Xiao; Forman, Howard P; Malhotra, Ajay

    2014-12-01

    The purpose of this study is to perform a comprehensive cost-effectiveness analysis of all possible permutations of computed tomographic angiography (CTA) and digital subtraction angiography imaging strategies for both initial diagnosis and follow-up imaging in patients with perimesencephalic subarachnoid hemorrhage on noncontrast CT. Each possible imaging strategy was evaluated in a decision tree created with TreeAge Pro Suite 2014, with parameters derived from a meta-analysis of 40 studies and literature values. Base case and sensitivity analyses were performed to assess the cost-effectiveness of each strategy. A Monte Carlo simulation was conducted with distributional variables to evaluate the robustness of the optimal strategy. The base case scenario showed performing initial CTA with no follow-up angiographic studies in patients with perimesencephalic subarachnoid hemorrhage to be the most cost-effective strategy ($5422/quality adjusted life year). Using a willingness-to-pay threshold of $50 000/quality adjusted life year, the most cost-effective strategy based on net monetary benefit is CTA with no follow-up when the sensitivity of initial CTA is >97.9%, and CTA with CTA follow-up otherwise. The Monte Carlo simulation reported CTA with no follow-up to be the optimal strategy at willingness-to-pay of $50 000 in 99.99% of the iterations. Digital subtraction angiography, whether at initial diagnosis or as part of follow-up imaging, is never the optimal strategy in our model. CTA without follow-up imaging is the optimal strategy for evaluation of patients with perimesencephalic subarachnoid hemorrhage when modern CT scanners and a strict definition of perimesencephalic subarachnoid hemorrhage are used. Digital subtraction angiography and follow-up imaging are not optimal as they carry complications and associated costs. © 2014 American Heart Association, Inc.

  8. Optimal reconfiguration strategy for a degradable multimodule computing system

    NASA Technical Reports Server (NTRS)

    Lee, Yann-Hang; Shin, Kang G.

    1987-01-01

The present quantitative approach to the problem of reconfiguring a degradable multimodule system assigns some modules to computation and arranges others for reliability. By using expected total reward as the optimality criterion, there emerges an active reconfiguration strategy based not only on the occurrence of failures but also on the progression of the given mission. This reconfiguration strategy requires specification of the times at which the system should undergo reconfiguration, and the configurations to which the system should change. The optimal reconfiguration problem is converted to integer nonlinear knapsack and fractional programming problems.

  9. Lose-Shift Responding in Humans Is Promoted by Increased Cognitive Load

    PubMed Central

    Ivan, Victorita E.; Banks, Parker J.; Goodfellow, Kris; Gruber, Aaron J.

    2018-01-01

    The propensity of animals to shift choices immediately after unexpectedly poor reinforcement outcomes is a pervasive strategy across species and tasks. We report here on the memory supporting such lose-shift responding in humans, assessed using a binary choice task in which random responding is the optimal strategy. Participants exhibited little lose-shift responding when fully attending to the task, but this increased by 30%–40% in participants that performed with additional cognitive load that is known to tax executive systems. Lose-shift responding in the cognitively loaded adults persisted throughout the testing session, despite being a sub-optimal strategy, but was less likely as the time increased between reinforcement and the subsequent choice. Furthermore, children (5–9 years old) without load performed similarly to the cognitively loaded adults. This effect disappeared in older children aged 11–13 years old. These data provide evidence supporting our hypothesis that lose-shift responding is a default and reflexive strategy in the mammalian brain, likely mediated by a decaying memory trace, and is normally suppressed by executive systems. Reducing the efficacy of executive control by cognitive load (adults) or underdevelopment (children) increases its prevalence. It may therefore be an important component to consider when interpreting choice data, and may serve as an objective behavioral assay of executive function in humans that is easy to measure. PMID:29568264

  10. Optimization as a Reasoning Strategy for Dealing with Socioscientific Decision-Making Situations

    ERIC Educational Resources Information Center

    Papadouris, Nicos

    2012-01-01

    This paper reports on an attempt to help 12-year-old students develop a specific optimization strategy for selecting among possible solutions in socioscientific decision-making situations. We have developed teaching and learning materials for elaborating this strategy, and we have implemented them in two intact classes (N = 48). Prior to and after…

  11. Optimal domain decomposition strategies

    NASA Technical Reports Server (NTRS)

    Yoon, Yonghyun; Soni, Bharat K.

    1995-01-01

    The primary interest of the authors is in the area of grid generation, in particular, optimal domain decomposition about realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain decomposition which optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of flow field. The progress realized in this study is summarized in this paper.

  12. The Linear Quadratic Gaussian Multistage Game with Nonclassical Information Pattern Using a Direct Solution Method

    NASA Astrophysics Data System (ADS)

    Clemens, Joshua William

Game theory has application across multiple fields, spanning from economic strategy to optimal control of an aircraft and missile on an intercept trajectory. The idea of game theory is fascinating in that we can actually mathematically model real-world scenarios and determine optimal decision making. It may not always be easy to mathematically model certain real-world scenarios; nonetheless, game theory gives us an appreciation for the complexity involved in decision making. This complexity is especially apparent when the players involved have access to different information upon which to base their decision making (a nonclassical information pattern). Here we will focus on the class of adversarial two-player games (sometimes referred to as pursuit-evasion games) with nonclassical information pattern. We present a two-sided (simultaneous) optimization solution method for the two-player linear quadratic Gaussian (LQG) multistage game. This direct solution method allows for further interpretation of each player's decision making (strategy) as compared to previously used formal solution methods. In addition to the optimal control strategies, we present a saddle point proof and we derive an expression for the optimal performance index value. We provide some numerical results in order to further interpret the optimal control strategies and to highlight real-world application of this game-theoretic optimal solution.

  13. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.

    PubMed

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-09-21

In order to utilize the distributed characteristic of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a reasonable parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids situations in which model training is disturbed by tasks unrelated to the sensors.

  14. An Action Research to Optimize the Well-Being of Older People in Nursing Homes: Challenges and Strategies for Implementing a Complex Intervention.

    PubMed

    Bourbonnais, Anne; Ducharme, Francine; Landreville, Philippe; Michaud, Cécile; Gauthier, Marie-Andrée; Lavallée, Marie-Hélène

    2018-03-01

Few studies have been conducted on strategies to promote the implementation of complex interventions in nursing homes (NHs). This article presents a pilot study intended to assess the strategies that would enable the optimal implementation of a complex intervention approach in NHs based on the meanings of screams of older people living with Alzheimer's disease. An action research approach was used with 19 formal and family caregivers from five NHs. Focus groups and individual interviews were held to assess different implementation strategies. A number of challenges were identified, as were strategies to overcome them. The latter included interactive training, intervention design, and external support. This study shows the feasibility of implementing a complex intervention to optimize older people's well-being. The article shares strategies that may promote the implementation of these types of interventions in NHs.

  15. Transaction fees and optimal rebalancing in the growth-optimal portfolio

    NASA Astrophysics Data System (ADS)

    Feng, Yu; Medo, Matúš; Zhang, Liang; Zhang, Yi-Cheng

    2011-05-01

    The growth-optimal portfolio optimization strategy pioneered by Kelly is based on constant portfolio rebalancing which makes it sensitive to transaction fees. We examine the effect of fees on an example of a risky asset with a binary return distribution and show that the fees may give rise to an optimal period of portfolio rebalancing. The optimal period is found analytically in the case of lognormal returns. This result is consequently generalized and numerically verified for broad return distributions and returns generated by a GARCH process. Finally we study the case when investment is rebalanced only partially and show that this strategy can improve the investment long-term growth rate more than optimization of the rebalancing period.
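The trade-off between rebalancing frequency and transaction fees can be sketched with a simple simulation (the binary-return parameters, fee level and proportional-fee model are illustrative assumptions, not the paper's setup):

```python
import math
import random

def growth_rate(k, p=0.6, a=0.5, b=0.5, fee=0.002, n_periods=20000, seed=0):
    """Estimated log-growth per period when the Kelly fraction is restored
    only every k periods, paying a proportional fee on the traded amount."""
    q = 1.0 - p
    f = p / b - q / a                 # Kelly fraction for the binary asset:
    rng = random.Random(seed)         # +a with prob p, -b with prob q
    risky, cash = f, 1.0 - f          # start with unit wealth, fraction f at risk
    for t in range(1, n_periods + 1):
        ret = a if rng.random() < p else -b
        risky *= 1.0 + ret            # only the risky holding moves
        if t % k == 0:                # rebalance back to fraction f
            total = risky + cash
            target = f * total
            total -= fee * abs(risky - target)   # fee on the rebalanced amount
            risky, cash = f * total, (1.0 - f) * total
    return math.log(risky + cash) / n_periods
```

Comparing `growth_rate(k)` across values of k reproduces the qualitative effect described above: with fees, rebalancing every period is no longer automatically best.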

  16. Associations of perceived social support and positive psychological resources with fatigue symptom in patients with rheumatoid arthritis

    PubMed Central

    Xu, NeiLi; Zhao, Shuai; Xue, HongXia; Fu, WenYi; Liu, Li; Zhang, TianQi; Huang, Rui; Zhang, Ning

    2017-01-01

Objective This study aimed to assess the association between perceived social support (PSS) and fatigue and the roles of hope, optimism, general self-efficacy and resilience as mediators or moderators of the PSS-fatigue association among Rheumatoid Arthritis (RA) patients in China. Methods A multi-center, cross-sectional study was conducted with inpatients diagnosed with RA in northeast China, in which 305 eligible inpatients were enrolled. The Multidimensional Fatigue Inventory, Multidimensional Scale of Perceived Social Support, Herth Hope Index, Life Orientation Test Revised, General Self-Efficacy Scale and Ego-Resiliency Scale were completed. The associations of PSS, hope, optimism, general self-efficacy and resilience with fatigue and the moderating roles of these positive psychological constructs were tested by hierarchical linear regression. Asymptotic and resampling strategies were utilized to assess the mediating roles of hope, optimism, general self-efficacy and resilience. Results The mean score of the MFI was 57.88 (SD = 9.50). PSS, hope, optimism and resilience were negatively associated with RA-related fatigue, whereas DAS28-CRP was positively associated. Only resilience positively moderated the PSS-fatigue association (B = 0.03, β = 0.13, P<0.01). Hope, optimism and resilience may act as partial mediators in the association between PSS and fatigue symptoms (hope: a*b = -0.16, BCa 95%CI: -0.27, -0.03; optimism: a*b = -0.20, BCa 95%CI: -0.30, -0.10; resilience: a*b = -0.12, BCa 95%CI: -0.21, -0.04). Conclusions Fatigue is a severe symptom among RA patients. Resilience may positively moderate the PSS-fatigue association. Hope, optimism and resilience may act as partial mediators in the association. PSS, hope, optimism and resilience may contribute as effective resources to alleviate fatigue, upon which PSS probably has the greatest effect. PMID:28291837

  17. Associations of perceived social support and positive psychological resources with fatigue symptom in patients with rheumatoid arthritis.

    PubMed

    Xu, NeiLi; Zhao, Shuai; Xue, HongXia; Fu, WenYi; Liu, Li; Zhang, TianQi; Huang, Rui; Zhang, Ning

    2017-01-01

This study aimed to assess the association between perceived social support (PSS) and fatigue and the roles of hope, optimism, general self-efficacy and resilience as mediators or moderators of the PSS-fatigue association among Rheumatoid Arthritis (RA) patients in China. A multi-center, cross-sectional study was conducted with inpatients diagnosed with RA in northeast China, in which 305 eligible inpatients were enrolled. The Multidimensional Fatigue Inventory, Multidimensional Scale of Perceived Social Support, Herth Hope Index, Life Orientation Test Revised, General Self-Efficacy Scale and Ego-Resiliency Scale were completed. The associations of PSS, hope, optimism, general self-efficacy and resilience with fatigue and the moderating roles of these positive psychological constructs were tested by hierarchical linear regression. Asymptotic and resampling strategies were utilized to assess the mediating roles of hope, optimism, general self-efficacy and resilience. The mean score of the MFI was 57.88 (SD = 9.50). PSS, hope, optimism and resilience were negatively associated with RA-related fatigue, whereas DAS28-CRP was positively associated. Only resilience positively moderated the PSS-fatigue association (B = 0.03, β = 0.13, P<0.01). Hope, optimism and resilience may act as partial mediators in the association between PSS and fatigue symptoms (hope: a*b = -0.16, BCa 95%CI: -0.27, -0.03; optimism: a*b = -0.20, BCa 95%CI: -0.30, -0.10; resilience: a*b = -0.12, BCa 95%CI: -0.21, -0.04). Fatigue is a severe symptom among RA patients. Resilience may positively moderate the PSS-fatigue association. Hope, optimism and resilience may act as partial mediators in the association. PSS, hope, optimism and resilience may contribute as effective resources to alleviate fatigue, upon which PSS probably has the greatest effect.

  18. Nonclinical safety testing of biopharmaceuticals--Addressing current challenges of these novel and emerging therapies.

    PubMed

    Brennan, Frank R; Baumann, Andreas; Blaich, Guenter; de Haan, Lolke; Fagg, Rajni; Kiessling, Andrea; Kronenberg, Sven; Locher, Mathias; Milton, Mark; Tibbitts, Jay; Ulrich, Peter; Weir, Lucinda

    2015-10-01

Non-clinical safety testing of biopharmaceuticals can present significant challenges to human risk assessment with these often innovative and complex drugs. Hot Topics in this field were discussed recently at the 4th Annual European Biosafe General Membership meeting. In this feature article, the presentations and subsequent discussions from the main sessions are summarized. The topics covered include: (i) wanted versus unwanted immune activation, (ii) bi-specific protein scaffolds, (iii) use of Pharmacokinetic (PK)/Pharmacodynamic (PD) data to impact/optimize toxicology study design, (iv) cytokine release and challenges to human translation, (v) safety testing of cell and gene therapies including chimeric antigen receptor T (CAR-T) cells and retroviral vectors, and (vi) biopharmaceutical development strategies encompassing a range of diverse topics including optimizing entry of monoclonal antibodies (mAbs) into the brain, safety testing of therapeutic vaccines, non-clinical testing of biosimilars, infection in toxicology studies with immunomodulators and challenges to human risk assessment, maternal and infant anti-drug antibody (ADA) development and impact in non-human primate (NHP) developmental toxicity studies, and a summary of an NC3Rs workshop on the future vision for non-clinical safety assessment of biopharmaceuticals. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    PubMed

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction as in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions and antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  20. Derivative Trade Optimizing Model Utilizing GP Based on Behavioral Finance Theory

    NASA Astrophysics Data System (ADS)

    Matsumura, Koki; Kawamoto, Masaru

This paper proposed a new technique that builds strategy trees for derivative (option) trading investment decisions based on behavioral finance theory and optimizes them using evolutionary computation, in order to achieve high profitability. The strategy tree uses technical analysis based on statistical, experience-based techniques for the investment decision. The trading model is represented by various technical indexes, and the strategy tree is optimized by genetic programming (GP), which is one of the evolutionary computation methods. Moreover, this paper proposed a method using the prospect theory of behavioral finance to set psychological biases for profit and loss, and attempted to select the appropriate strike price of the option for higher investment efficiency. As a result, this technique produced a good result, and the effectiveness of this trading model with the optimized dealing strategy was confirmed.

  1. Performance tradeoffs in static and dynamic load balancing strategies

    NASA Technical Reports Server (NTRS)

    Iqbal, M. A.; Saltz, J. H.; Bokhart, S. H.

    1986-01-01

    The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but sub-optimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the other three strategies.
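As a concrete illustration of static greedy assignment, a minimal sketch follows (this longest-processing-time-first heuristic is a stand-in for the static strategies compared, not the authors' exact greedy scheme):

```python
import heapq

def greedy_assign(task_costs, n_procs):
    """Assign each task (largest first) to the currently least-loaded
    processor; returns the assignment and the resulting makespan."""
    heap = [(0.0, p) for p in range(n_procs)]   # (load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_procs)}
    for cost in sorted(task_costs, reverse=True):
        load, p = heapq.heappop(heap)           # least-loaded processor
        assignment[p].append(cost)
        heapq.heappush(heap, (load + cost, p))
    makespan = max(sum(tasks) for tasks in assignment.values())
    return assignment, makespan
```

This heuristic is fast but, like the binary dissection method above, only sub-optimal; a dynamic scheme can refine the assignment it produces at run time.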

  2. Thermal and energy battery management optimization in electric vehicles using Pontryagin's maximum principle

    NASA Astrophysics Data System (ADS)

    Bauer, Sebastian; Suchaneck, Andre; Puente León, Fernando

    2014-01-01

Depending on the actual battery temperature, electrical power demands in general have a varying impact on the life span of a battery. Since electrical energy provided by the battery is needed to temper it, the question arises of how much energy should optimally be used for tempering at each temperature. Therefore, the objective function that has to be optimized combines the goal of maximizing life expectancy with that of minimizing the amount of energy used to achieve the first goal. In this paper, Pontryagin's maximum principle is used to derive a causal control strategy from such an objective function. The derivation of the causal strategy includes the determination of the major factors that govern the optimal solution calculated with the maximum principle. The optimization is calculated offline on a desktop computer for all possible vehicle parameters and major factors. For the practical implementation in the vehicle, it is sufficient to have the values of the major factors determined only roughly in advance and the offline calculation results available. This feature sidesteps the drawback of several optimization strategies that require exact knowledge of the future power demand. The resulting strategy's application is not limited to batteries in electric vehicles.

  3. Comparison of Two Multidisciplinary Optimization Strategies for Launch-Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Powell, R. W.; Lepsch, R. A.; Stanley, D. O.; Kroo, I. M.

    1995-01-01

The investigation focuses on development of a rapid multidisciplinary analysis and optimization capability for launch-vehicle design. Two multidisciplinary optimization strategies in which the analyses are integrated in different manners are implemented and evaluated for solution of a single-stage-to-orbit launch-vehicle design problem. Weights and sizing, propulsion, and trajectory issues are directly addressed in each optimization process. Additionally, the need to maintain a consistent vehicle model across the disciplines is discussed. Both solution strategies were shown to obtain similar solutions from two different starting points. These solutions suggest that a dual-fuel, single-stage-to-orbit vehicle with a dry weight of approximately 1.927 x 10(exp 5)lb, gross liftoff weight of 2.165 x 10(exp 6)lb, and length of 181 ft is attainable. A comparison of the two approaches demonstrates that treatment of disciplinary coupling has a direct effect on optimization convergence and the required computational effort. In comparison with the first solution strategy, which is of the general form typically used within the launch vehicle design community at present, the second optimization approach is shown to be 3-4 times more computationally efficient.

  4. A simple and fast heuristic for protein structure comparison

    PubMed Central

    Pelta, David A; González, Juan R; Moreno Vega, Marcos

    2008-01-01

Background Protein structure comparison is a key problem in bioinformatics. There exist several methods for doing protein comparison, the solution of the Maximum Contact Map Overlap problem (MAX-CMO) being one of the available alternatives. Although this problem may be solved using exact algorithms, researchers require approximate algorithms that obtain good quality solutions using fewer computational resources than the former. Results We propose a variable neighborhood search metaheuristic for solving MAX-CMO. We analyze this strategy in two aspects: 1) from an optimization point of view the strategy is tested on two different datasets, obtaining an error of 3.5% (over 2702 pairs) and 1.7% (over 161 pairs) with respect to optimal values, thus leading to highly accurate solutions in a simpler and less expensive way than exact algorithms; 2) in terms of protein structure classification, we conduct experiments on three datasets and show that it is feasible to detect structural similarities at SCOP's family and CATH's architecture levels using normalized overlap values. Some limitations and the role of normalization are outlined for doing classification at SCOP's fold level. Conclusion We designed, implemented and tested a new tool for solving MAX-CMO, based on a well-known metaheuristic technique. The good balance between solution quality and computational effort makes it a valuable tool. Moreover, to the best of our knowledge, this is the first time the MAX-CMO measure is tested at SCOP's fold and CATH's architecture levels with encouraging results. Software is available for download at . PMID:18366735

  5. Environmental context explains Lévy and Brownian movement patterns of marine predators.

    PubMed

    Humphries, Nicolas E; Queiroz, Nuno; Dyer, Jennifer R M; Pade, Nicolas G; Musyl, Michael K; Schaefer, Kurt M; Fuller, Daniel W; Brunnschweiler, Juerg M; Doyle, Thomas K; Houghton, Jonathan D R; Hays, Graeme C; Jones, Catherine S; Noble, Leslie R; Wearmouth, Victoria J; Southall, Emily J; Sims, David W

    2010-06-24

    An optimal search theory, the so-called Lévy-flight foraging hypothesis, predicts that predators should adopt search strategies known as Lévy flights where prey is sparse and distributed unpredictably, but that Brownian movement is sufficiently efficient for locating abundant prey. Empirical studies have generated controversy because the accuracy of statistical methods that have been used to identify Lévy behaviour has recently been questioned. Consequently, whether foragers exhibit Lévy flights in the wild remains unclear. Crucially, moreover, it has not been tested whether observed movement patterns across natural landscapes having different expected resource distributions conform to the theory's central predictions. Here we use maximum-likelihood methods to test for Lévy patterns in relation to environmental gradients in the largest animal movement data set assembled for this purpose. Strong support was found for Lévy search patterns across 14 species of open-ocean predatory fish (sharks, tuna, billfish and ocean sunfish), with some individuals switching between Lévy and Brownian movement as they traversed different habitat types. We tested the spatial occurrence of these two principal patterns and found Lévy behaviour to be associated with less productive waters (sparser prey) and Brownian movements to be associated with productive shelf or convergence-front habitats (abundant prey). These results are consistent with the Lévy-flight foraging hypothesis, supporting the contention that organism search strategies naturally evolved in such a way that they exploit optimal Lévy patterns.
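The two movement modes can be contrasted with a small sampling sketch (the power-law exponent, scale and inverse-transform sampler are illustrative assumptions; the paper fits real trajectories with maximum-likelihood methods):

```python
import random

def levy_steps(n, mu=2.0, l_min=1.0, seed=0):
    """Power-law step lengths P(l) ~ l**(-mu), sampled by inverse
    transform; mu in (1, 3] gives the Levy-flight regime."""
    rng = random.Random(seed)
    # u uniform in (0, 1]; l = l_min * u**(-1/(mu-1)) has a Pareto tail
    return [l_min * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0)) for _ in range(n)]

def brownian_steps(n, sigma=1.0, seed=0):
    """Gaussian step lengths (folded normal), the Brownian alternative."""
    rng = random.Random(seed)
    return [abs(rng.gauss(0.0, sigma)) for _ in range(n)]
```

Plotting the two samples makes the contrast obvious: the Gaussian steps cluster near sigma, while the power-law sample contains rare, very long relocations of the kind associated with sparse prey.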

  6. The optimal strategy of percutaneous coronary intervention for ST-elevation myocardial infarction patients with multivessel disease: an updated meta-analysis of 9 randomized controlled trials.

    PubMed

    Fan, Zhong G; Gao, Xiao F; Li, Xiao B; Mao, Wen X; Chen, Li W; Tian, Nai L

    2017-04-01

The optimal strategy of percutaneous coronary intervention (PCI) for patients with ST-elevation myocardial infarction (STEMI) and multivessel disease (MVD) still remains controversial. This study sought to explore the optimal PCI strategy for those patients. Medline, EMBASE and the Cochrane Controlled Trials Registry were searched for relevant studies. We analyzed the comparison of major adverse cardiac events (MACEs) as the primary end point between the preventive PCI strategy and the culprit-only PCI strategy (CV-PCI). A further analysis of two subgroups, described as the complete multivessel PCI strategy during the primary procedure (CMV-PCI) and the staged PCI strategy (S-PCI), was also performed. Nine randomized trials were identified. The risk of MACEs was significantly reduced with the preventive PCI strategy (OR=0.41, 95% CI: 0.31-0.53, P<0.001) compared to the CV-PCI strategy. There were lower risks of long-term mortality, reinfarction and repeat revascularization in the preventive PCI group compared to the CV-PCI group (OR=0.41, 95% CI: 0.27-0.62, P<0.001; OR=0.54, 95% CI: 0.32-0.91, P=0.021; OR=0.37, 95% CI: 0.26-0.51, P<0.001). Subgroup analysis showed that the staged PCI strategy reduced the incidence of long-term mortality versus the CMV-PCI strategy. Preventive PCI is associated with a lower risk of MACEs in STEMI patients with MVD compared to the CV-PCI strategy, and the S-PCI strategy seems to be an optimal choice for these patients rather than CMV-PCI.

  7. Search Algorithms as a Framework for the Optimization of Drug Combinations

    PubMed Central

    Coquin, Laurence; Schofield, Jennifer; Feala, Jacob D.; Reed, John C.; McCulloch, Andrew D.; Paternostro, Giovanni

    2008-01-01

    Combination therapies are often needed for effective clinical outcomes in the management of complex diseases, but presently they are generally based on empirical clinical experience. Here we suggest a novel application of search algorithms—originally developed for digital communication—modified to optimize combinations of therapeutic interventions. In biological experiments measuring the restoration of the decline with age in heart function and exercise capacity in Drosophila melanogaster, we found that search algorithms correctly identified optimal combinations of four drugs using only one-third of the tests performed in a fully factorial search. In experiments identifying combinations of three doses of up to six drugs for selective killing of human cancer cells, search algorithms resulted in a highly significant enrichment of selective combinations compared with random searches. In simulations using a network model of cell death, we found that the search algorithms identified the optimal combinations of 6–9 interventions in 80–90% of tests, compared with 15–30% for an equivalent random search. These findings suggest that modified search algorithms from information theory have the potential to enhance the discovery of novel therapeutic drug combinations. This report also helps to frame a biomedical problem that will benefit from an interdisciplinary effort and suggests a general strategy for its solution. PMID:19112483
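The idea of replacing a fully factorial search with a guided one can be sketched on a toy response surface (the surface, its weights, the antagonistic pair and the one-bit-flip hill climber are all invented for illustration; the paper's algorithms and biological read-outs differ):

```python
import random

WEIGHTS = [3, -1, 2, 5, -2, 1]    # synthetic per-drug effects (illustrative)

def response(combo):
    """Synthetic response over six binary drug on/off choices, with one
    antagonistic pair (purely illustrative, not biological data)."""
    interaction = -4 if combo[0] and combo[3] else 0
    return sum(w * c for w, c in zip(WEIGHTS, combo)) + interaction

def hill_climb(n_drugs=6, seed=0):
    """Greedy one-bit-flip search from a random start; returns the best
    combination found, its response, and the number of evaluations used."""
    rng = random.Random(seed)
    combo = [rng.randint(0, 1) for _ in range(n_drugs)]
    best, evals = response(combo), 1
    improved = True
    while improved:
        improved = False
        for i in range(n_drugs):
            cand = combo[:]
            cand[i] ^= 1              # toggle one drug
            evals += 1
            if (val := response(cand)) > best:
                combo, best, improved = cand, val, True
    return combo, best, evals
```

On this landscape the climber finds the optimum while evaluating far fewer than the 2^6 = 64 combinations a fully factorial search would require, mirroring the paper's one-third result in spirit.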

  8. A comparison between metaheuristics as strategies for minimizing cyclic instability in Ambient Intelligence.

    PubMed

    Romero, Leoncio A; Zamudio, Victor; Baltazar, Rosario; Mezura, Efren; Sotelo, Marco; Callaghan, Vic

    2012-01-01

    In this paper we present a comparison between six novel approaches to the fundamental problem of cyclic instability in Ambient Intelligence. These approaches are based on different optimization algorithms, Particle Swarm Optimization (PSO), Bee Swarm Optimization (BSO), micro Particle Swarm Optimization (μ-PSO), Artificial Immune System (AIS), Genetic Algorithm (GA) and Mutual Information Maximization for Input Clustering (MIMIC). In order to be able to use these algorithms, we introduced the concept of Average Cumulative Oscillation (ACO), which enabled us to measure the average behavior of the system. This approach has the advantage that it does not need to analyze the topological properties of the system, in particular the loops, which can be computationally expensive. In order to test these algorithms we used the well-known discrete system called the Game of Life for 9, 25, 49 and 289 agents. It was found that PSO and μ-PSO have the best performance in terms of the number of agents locked. These results were confirmed using the Wilcoxon Signed Rank Test. This novel and successful approach is very promising and can be used to remove instabilities in real scenarios with a large number of agents (including nomadic agents) and complex interactions and dependencies among them.

  10. Convex Optimization over Classes of Multiparticle Entanglement

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Gühne, Otfried

    2018-02-01

    A well-known strategy to characterize multiparticle entanglement utilizes the notion of stochastic local operations and classical communication (SLOCC), but characterizing the resulting entanglement classes is difficult. Given a multiparticle quantum state, we first show that Gilbert's algorithm can be adapted to prove separability or membership in a certain entanglement class. We then present two algorithms for convex optimization over SLOCC classes. The first algorithm uses a simple gradient approach, while the other one employs the accelerated projected-gradient method. For demonstration, the algorithms are applied to the likelihood-ratio test using experimental data on bound entanglement of a noisy four-photon Smolin state [Phys. Rev. Lett. 105, 130501 (2010), 10.1103/PhysRevLett.105.130501].
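
    A generic projected-gradient iteration of the kind the abstract mentions can be sketched on a simple convex set. The example below projects onto the probability simplex and minimises a quadratic; it illustrates the method only and contains nothing SLOCC-specific.

```python
def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

def projected_gradient(grad, x0, step=0.1, iters=200):
    """Plain (unaccelerated) projected-gradient descent."""
    x = x0[:]
    for _ in range(iters):
        g = grad(x)
        x = project_simplex([xi - step * gi for xi, gi in zip(x, g)])
    return x

# Minimise ||x - c||^2 over the simplex for a hypothetical target c.
c = [0.6, 0.3, -0.2]
x = projected_gradient(lambda v: [2 * (vi - ci) for vi, ci in zip(v, c)],
                       [1 / 3, 1 / 3, 1 / 3])
```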

  11. Optimal PID gain schedule for hydrogenerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orelind, G.; Wozniak, L.; Medanic, J.

    1989-09-01

    This paper describes the development and testing of a digital gain-switching governor for hydrogenerators. Optimal gains were found at different load points by minimizing a quadratic performance criterion prior to controller operation. During operation, the gain sets are switched in depending on the gate position and speed-error magnitude. With gain switching operating, the digital governor was shown to have a substantial reduction of noise on the command signal and up to 42% faster responses to power requests. Non-linear control strategies enabled the digital governor to reduce speed overshoot on startups from 2.5% to 2%, and undershoot on load rejections from 8% to 1%, as compared to the analog governor.
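
    The switching logic described (gain sets indexed by gate position and speed-error magnitude) amounts to a table lookup. The sketch below is hypothetical; the bands, thresholds, and gain values are invented and are not the governor's.

```python
# Hypothetical gain schedule: PID gain sets indexed by gate-position
# band and speed-error magnitude (values are illustrative only).
GAIN_TABLE = {
    # (gate_band, large_error): (Kp, Ki, Kd)
    ("low", False):  (2.0, 0.5, 0.1),
    ("low", True):   (3.0, 0.2, 0.05),
    ("high", False): (1.2, 0.8, 0.2),
    ("high", True):  (2.5, 0.3, 0.1),
}

def select_gains(gate_position, speed_error, gate_split=0.5, error_threshold=0.02):
    """Return the PID gain set for the current operating point."""
    band = "low" if gate_position < gate_split else "high"
    large = abs(speed_error) >= error_threshold
    return GAIN_TABLE[(band, large)]
```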

  12. An effective rumor-containing strategy

    NASA Astrophysics Data System (ADS)

    Pan, Cheng; Yang, Lu-Xing; Yang, Xiaofan; Wu, Yingbo; Tang, Yuan Yan

    2018-06-01

    False rumors can lead to huge economic losses and/or social instability. Hence, mitigating the impact of bogus rumors is of primary importance. This paper focuses on the problem of how to suppress a false rumor by use of the truth. Based on a set of rational hypotheses and a novel rumor-truth mixed spreading model, the effectiveness and cost of a rumor-containing strategy are quantified, respectively. On this basis, the original problem is modeled as a constrained optimization problem (the RC model), in which the independent variable and the objective function represent a rumor-containing strategy and its effectiveness, respectively. The goal of the optimization problem is to find the most effective rumor-containing strategy subject to a limited rumor-containing budget. Some optimal rumor-containing strategies are given by solving their respective RC models. The influence of different factors on the highest cost effectiveness of an RC model is illuminated through computer experiments. The results obtained are instructive for developing effective rumor-containing strategies.

  13. A Novel Paradigm for Computer-Aided Design: TRIZ-Based Hybridization of Topologically Optimized Density Distributions

    NASA Astrophysics Data System (ADS)

    Cardillo, A.; Cascini, G.; Frillici, F. S.; Rotini, F.

    In a recent project the authors have proposed the adoption of Optimization Systems [1] as a bridging element between Computer-Aided Innovation (CAI) and PLM to identify geometrical contradictions [2], a particular case of the TRIZ physical contradiction [3]. A further development of the research [4] has revealed that the solutions obtained from several topological optimizations can be considered as elementary customized modeling features for a specific design task. The topology overcoming the arising geometrical contradiction can be obtained through a manipulation of the density distributions constituting the conflicting pair. Two strategies of density combination have already been identified as capable of solving geometrical contradictions, and several others are under extended testing. The paper illustrates the most recent results of the ongoing research, mainly related to the extension of the algorithms from 2D to 3D design spaces. The whole approach is clarified by means of two detailed examples, where the proposed technique is compared with classical multi-goal optimization.

  14. Biogeography-based particle swarm optimization with fuzzy elitism and its applications to constrained engineering problems

    NASA Astrophysics Data System (ADS)

    Guo, Weian; Li, Wuzhao; Zhang, Qun; Wang, Lei; Wu, Qidi; Ren, Hongliang

    2014-11-01

    In evolutionary algorithms, elites are crucial to maintain good features in solutions. However, too many elites can make the evolutionary process stagnate and cannot enhance the performance. This article employs particle swarm optimization (PSO) and biogeography-based optimization (BBO) to propose a hybrid algorithm termed biogeography-based particle swarm optimization (BPSO) which could make a large number of elites effective in searching optima. In this algorithm, the whole population is split into several subgroups; BBO is employed to search within each subgroup and PSO for the global search. Since not all the population is used in PSO, this structure overcomes the premature convergence in the original PSO. Time complexity analysis shows that the novel algorithm does not increase the time consumption. Fourteen numerical benchmarks and four engineering problems with constraints are used to test the BPSO. To better deal with constraints, a fuzzy strategy for the number of elites is investigated. The simulation results validate the feasibility and effectiveness of the proposed algorithm.

  15. Production optimization of invertase by Lactobacillus brevis Mm-6 and its immobilization on alginate beads.

    PubMed

    Awad, Ghada E A; Amer, Hassan; El-Gammal, Eman W; Helmy, Wafaa A; Esawy, Mona A; Elnashar, Magdy M M

    2013-04-02

    A sequential optimization strategy, based on statistical experimental designs, was employed to enhance the production of invertase by Lactobacillus brevis Mm-6 isolated from breast milk. First, a two-level Plackett-Burman design was applied to screen the bioprocess parameters that significantly influence invertase production. The second optimization step used a fractional factorial design to optimize the levels of the variables that had the highest positive significant effect on invertase production. The maximal enzyme activity of 1399 U/ml was more than fivefold the activity obtained using the basal medium. Invertase was immobilized onto grafted alginate beads to improve the enzyme's stability. Immobilization increased the operational temperature from 30 to 60°C compared with the free enzyme. The reusability test proved the durability of the grafted alginate beads for 15 cycles with retention of 100% of the immobilized enzyme activity, making them convenient for industrial uses. Copyright © 2013 Elsevier Ltd. All rights reserved.
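
    The screening step can be illustrated with main-effect estimation from a two-level design. For brevity the sketch below uses a full 2^3 factorial rather than a Plackett-Burman matrix, and the yield model is invented; it only shows how the screening contrasts are computed.

```python
from itertools import product

def main_effects(factors, response):
    """Estimate main effects from a full two-level factorial design.

    A Plackett-Burman screen estimates the same contrasts with far
    fewer runs; the full factorial is used here for simplicity.
    """
    runs = list(product([-1, 1], repeat=len(factors)))
    y = [response(r) for r in runs]
    effects = {}
    for j, name in enumerate(factors):
        hi = [yi for r, yi in zip(runs, y) if r[j] == 1]
        lo = [yi for r, yi in zip(runs, y) if r[j] == -1]
        effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

# Invented yield surface: sucrose helps strongly, temperature mildly,
# pH is inert, so the screen should rank them in that order.
def yield_model(r):
    sucrose, temp, ph = r
    return 100 + 25 * sucrose + 5 * temp + 0 * ph

effects = main_effects(["sucrose", "temp", "pH"], yield_model)
```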

  16. Optimal use of resources structures home ranges and spatial distribution of black bears

    USGS Publications Warehouse

    Mitchell, M.S.; Powell, R.A.

    2007-01-01

    Research has shown that territories of animals are economical. Home ranges should be similarly efficient with respect to spatially distributed resources and this should structure their distribution on a landscape, although neither has been demonstrated empirically. To test these hypotheses, we used home range models that optimize resource use according to resource-maximizing and area-minimizing strategies to evaluate the home ranges of female black bears, Ursus americanus, living in the southern Appalachian Mountains. We tested general predictions of our models using 104 home ranges of adult female bears studied in the Pisgah Bear Sanctuary, North Carolina, U.S.A., from 1981 to 2001. We also used our models to estimate home ranges for each real home range under a variety of strategies and constraints and compared similarity of simulated to real home ranges. We found that home ranges of female bears were efficient with respect to the spatial distribution of resources and were best explained by an area-minimizing strategy with moderate resource thresholds and low levels of resource depression. Although resource depression probably influenced the spatial distribution of home ranges on the landscape, levels of resource depression were too low to quantify accurately. Home ranges of lactating females had higher resource thresholds and were more susceptible to resource depression than those of breeding females. We conclude that home ranges of animals, like territories, are economical with respect to resources, and that resource depression may be the mechanism behind ideal free or ideal preemptive distributions on complex, heterogeneous landscapes. © 2007 The Association for the Study of Animal Behaviour.
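
    An area-minimising strategy of the kind these models formalise can be caricatured as a greedy selection of the richest cells until a resource threshold is met. The sketch below ignores contiguity and resource depression, both of which matter in the actual models; the grid and threshold are invented.

```python
def area_minimizing_range(resource_grid, threshold):
    """Greedy sketch of an area-minimising home range: add the richest
    cells until accumulated resources reach `threshold`."""
    cells = sorted(
        ((val, (i, j)) for i, row in enumerate(resource_grid)
         for j, val in enumerate(row)),
        reverse=True)
    chosen, total = [], 0.0
    for val, cell in cells:
        if total >= threshold:
            break
        chosen.append(cell)
        total += val
    return chosen, total

# Invented resource landscape; cell (1, 1) is the richest patch.
grid = [[1, 4, 2],
        [3, 9, 5],
        [2, 6, 1]]
cells, total = area_minimizing_range(grid, threshold=20)
```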

  17. Comparison of optimization algorithms for the slow shot phase in HPDC

    NASA Astrophysics Data System (ADS)

    Frings, Markus; Berkels, Benjamin; Behr, Marek; Elgeti, Stefanie

    2018-05-01

    High-pressure die casting (HPDC) is a popular manufacturing process for aluminum processing. The slow shot phase in HPDC is the first phase of this process. During this phase, the molten metal is pushed towards the cavity under moderate plunger movement. The so-called shot curve describes this plunger movement. A good design of the shot curve is important to produce high-quality cast parts. Three partially competing process goals characterize the slow shot phase: (1) reducing air entrapment, (2) avoiding temperature loss, and (3) minimizing oxide caused by the air-aluminum contact. Due to the rough process conditions with high pressure and temperature, it is hard to design the shot curve experimentally. There exist a few design rules that are based on theoretical considerations. Nevertheless, the quality of the shot curve design still depends on the experience of the machine operator. To improve the shot curve it therefore seems natural to use numerical optimization. This work compares different optimization strategies for the slow shot phase. The aim is to find the best optimization approach on a simple test problem.

  18. A guided search genetic algorithm using mined rules for optimal affective product design

    NASA Astrophysics Data System (ADS)

    Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.

    2014-08-01

    Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.

  19. Multiple local feature representations and their fusion based on an SVR model for iris recognition using optimized Gabor filters

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing

    2014-12-01

    Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.

  20. [Decision tree and cost-benefit analysis on strategies related to preventing maternal-infantile transmission of hepatitis B virus infection].

    PubMed

    Shi, Guo; Zhang, Shun-xiang

    2013-03-01

    To synthesize relevant data, to analyze the benefit-cost ratio of strategies for preventing maternal-infantile transmission of hepatitis B virus infection, and to explore the optimal strategy. A decision tree model was constructed according to the strategies of hepatitis B immunization and a Markov model was constructed to simulate the complex disease progress after HBV infection. Parameters in the models were drawn from meta-analysis and information was collected from field study and review of literature. Economic evaluation was performed to calculate costs, benefit, and the benefit-cost ratio. Sensitivity analysis was also conducted and a tornado graph was drawn. In view of the current six possible strategies in preventing maternal-infantile transmission of hepatitis B virus infection, a multi-stage decision tree model was constructed to screen for hepatitis B surface antigen (HBsAg) alone, or for HBsAg followed by hepatitis B e antigen (HBeAg). Dose and the number of injections of HBIG and hepatitis B vaccine were taken into consideration in the model. All the strategies were considered to be cost-saving, while the strategy of screening for HBsAg and then offering hepatitis B vaccine of 10 µg×3 for all neonates with hepatitis B immunoglobulin (HBIG) of 100 IU×1 for the neonates born to mothers who tested positive for HBsAg appeared the most cost-saving. In the strategies, the benefit-cost ratio of using 100 IU HBIG was similar to that of 200 IU HBIG, and one shot of HBIG was superior to two shots. Results from sensitivity analysis suggested that the rates of immunization and the efficacy of the strategy in preventing maternal-infantile transmission were the main sensitive variables in the model.
The passive-active immune-prophylaxis strategy using 10 µg hepatitis B vaccine combined with 100 IU HBIG appeared to be the optimal strategy for preventing maternal-infantile transmission, while the rates of immunization and the efficacy of the strategy played the key roles in choosing the ideal strategy.
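
    The cost-benefit comparison rests on expected-value arithmetic over the decision tree. The toy below compares two hypothetical strategies with invented probabilities and costs; the paper's Markov disease-progression model and actual parameter values are not reproduced.

```python
# Toy expected-cost comparison of two immunisation strategies via a
# two-branch decision tree. All probabilities and costs are invented.
def expected_cost(p_carrier, p_transmit_given_strategy, cost_strategy,
                  cost_chronic_case):
    """Strategy cost plus probability-weighted downstream case cost."""
    p_case = p_carrier * p_transmit_given_strategy
    return cost_strategy + p_case * cost_chronic_case

# Hypothetical strategy A: vaccine only; B: screen + vaccine + HBIG.
cost_a = expected_cost(0.08, 0.30, 15.0, 2000.0)
cost_b = expected_cost(0.08, 0.05, 40.0, 2000.0)
```

    Even though strategy B has the higher up-front cost, its lower transmission probability gives it the lower expected total cost in this invented example, which is the shape of argument the benefit-cost analysis makes.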

  1. Dynamic optimal strategies in transboundary pollution game under learning by doing

    NASA Astrophysics Data System (ADS)

    Chang, Shuhua; Qin, Weihua; Wang, Xinyu

    2018-01-01

    In this paper, we present a transboundary pollution game, in which emission permits trading and pollution abatement costs under learning by doing are considered. In this model, the abatement cost mainly depends on the level of pollution abatement and the experience of using pollution abatement technology. We use optimal control theory to investigate the optimal emission paths and the optimal pollution abatement strategies under cooperative and noncooperative games, respectively. Additionally, the effects of parameters on the results have been examined.

  2. Strategies for sustainable management of renewable resources during environmental change.

    PubMed

    Lindkvist, Emilie; Ekeberg, Örjan; Norberg, Jon

    2017-03-15

    As a consequence of global environmental change, management strategies that can deal with unexpected change in resource dynamics are becoming increasingly important. In this paper we undertake a novel approach to studying resource growth problems using a computational form of adaptive management to find optimal strategies for prevalent natural resource management dilemmas. We scrutinize adaptive management, or learning-by-doing, to better understand how to simultaneously manage and learn about a system when its dynamics are unknown. We study important trade-offs in decision-making with respect to choosing optimal actions (harvest efforts) for sustainable management during change. This is operationalized through an artificially intelligent model where we analyze how different trends and fluctuations in growth rates of a renewable resource affect the performance of different management strategies. Our results show that the optimal strategy for managing resources with declining growth is capable of managing resources with fluctuating or increasing growth at a negligible cost, resulting in a management strategy that is both efficient and robust towards future unknown changes. To obtain this strategy, adaptive management should strive for: high learning rates for new knowledge, high valuation of future outcomes and modest exploration around what is perceived as the optimal action. © 2017 The Author(s).
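
    The trade-offs described (learning rate, valuation of the future, exploration around the perceived optimum) are the ingredients of a bandit-style learner. The sketch below runs epsilon-greedy selection of a constant harvest effort on a logistic stock; it is a cartoon of adaptive management with all dynamics and parameters invented, not the paper's model.

```python
import random

def adaptive_harvest(growth_rate, years=200, efforts=(0.1, 0.2, 0.3, 0.4),
                     epsilon=0.1, seed=2):
    """Epsilon-greedy learning-by-doing over discrete harvest efforts
    on a logistic stock (illustrative only)."""
    rng = random.Random(seed)
    stock, k = 0.5, 1.0
    value = {e: 0.0 for e in efforts}   # running mean yield per effort
    counts = {e: 0 for e in efforts}
    total_yield = 0.0
    for _ in range(years):
        if rng.random() < epsilon:
            e = rng.choice(efforts)                    # explore
        else:
            e = max(efforts, key=lambda a: value[a])   # exploit
        harvest = e * stock
        stock = max(stock + growth_rate * stock * (1 - stock / k) - harvest,
                    1e-6)
        counts[e] += 1
        value[e] += (harvest - value[e]) / counts[e]
        total_yield += harvest
    return total_yield, value

total, value = adaptive_harvest(growth_rate=0.8)
```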

  3. Cyber War Game in Temporal Networks

    PubMed Central

    Cho, Jin-Hee; Gao, Jianxi

    2016-01-01

    In a cyber war game where a network is fully distributed and characterized by resource constraints and high dynamics, attackers or defenders often face a situation that may require optimal strategies to win the game with minimum effort. Given the system goal states of attackers and defenders, we study what strategies attackers or defenders can take to reach their respective system goal state (i.e., winning system state) with minimum resource consumption. However, due to the dynamics of a network caused by a node’s mobility, failure or its resource depletion over time or action(s), this optimization problem becomes NP-complete. We propose two heuristic strategies in a greedy manner based on a node’s two characteristics: resource level and influence based on k-hop reachability. We analyze complexity and optimality of each algorithm compared to optimal solutions for a small-scale static network. Further, we conduct a comprehensive experimental study for a large-scale temporal network to investigate best strategies, given a different environmental setting of network temporality and density. We demonstrate the performance of each strategy under various scenarios of attacker/defender strategies in terms of win probability, resource consumption, and system vulnerability. PMID:26859840
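
    The influence heuristic based on k-hop reachability can be sketched directly: score each node by a depth-limited BFS and pick the top-scoring nodes greedily. The toy graph below is invented, and the sketch covers only the reachability scoring, not the paper's full game model.

```python
from collections import deque

def k_hop_reach(adj, node, k):
    """Number of nodes reachable from `node` within k hops (BFS)."""
    seen, frontier = {node}, deque([(node, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == k:
            continue
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))
    return len(seen) - 1  # exclude the node itself

def greedy_targets(adj, budget, k=2):
    """Pick `budget` nodes by descending k-hop reachability, a greedy
    influence heuristic in the spirit of the second strategy above."""
    scores = {n: k_hop_reach(adj, n, k) for n in adj}
    return sorted(scores, key=lambda n: (-scores[n], n))[:budget]

# Invented directed graph.
adj = {1: [2, 3], 2: [4], 3: [4], 4: [5], 5: []}
targets = greedy_targets(adj, budget=2)
```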

  4. Economic optimization of natural hazard protection - conceptual study of existing approaches

    NASA Astrophysics Data System (ADS)

    Spackova, Olga; Straub, Daniel

    2013-04-01

    Risk-based planning of protection measures against natural hazards has become a common practice in many countries. The selection procedure aims at identifying an economically efficient strategy with regard to the estimated costs and risk (i.e. expected damage). A correct setting of the evaluation methodology and decision criteria should ensure an optimal selection of the portfolio of risk protection measures under a limited state budget. To demonstrate the efficiency of investments, indicators such as Benefit-Cost Ratio (BCR), Marginal Costs (MC) or Net Present Value (NPV) are commonly used. However, the methodologies for efficiency evaluation differ amongst different countries and different hazard types (floods, earthquakes etc.). Additionally, several inconsistencies can be found in the applications of the indicators in practice. This is likely to lead to a suboptimal selection of the protection strategies. This study provides a general formulation for optimization of the natural hazard protection measures from a socio-economic perspective. It assumes that all costs and risks can be expressed in monetary values. The study regards the problem as a discrete hierarchical optimization, where the state level sets the criteria and constraints, while the actual optimization is made on the regional level (towns, catchments) when designing particular protection measures and selecting the optimal protection level. The study shows that in case of an unlimited budget, the task is quite trivial, as it is sufficient to optimize the protection measures in individual regions independently (by minimizing the sum of risk and cost). However, if the budget is limited, the need for an optimal allocation of resources amongst the regions arises. To ensure this, the state can require minimum values of BCR or MC that must be achieved in each region. 
The study investigates the meaning of these indicators in the optimization task at the conceptual level and compares their suitability. To illustrate the theoretical findings, the indicators are tested on a hypothetical example of five regions with different risk levels. Last but not least, political and societal aspects and limitations in the use of the risk-based optimization framework are discussed.
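
    The indicators named above (BCR, MC, NPV) are simple discounted-cash-flow quantities. The sketch below computes them for two hypothetical protection measures, plus the marginal BCR of upgrading between them; all figures are invented for illustration.

```python
def npv(cash, rate=0.03):
    """Net present value of a cash-flow list (year 0 first)."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash))

def indicators(cost, annual_benefit, years=10, rate=0.03):
    """Benefit-Cost Ratio and Net Present Value of one measure."""
    benefit = npv([annual_benefit] * years, rate)
    return {"BCR": benefit / cost, "NPV": benefit - cost}

# Invented measures: a 1 m dike and a 2 m upgrade, with annual
# risk-reduction benefits over a 10-year horizon.
a = indicators(cost=100.0, annual_benefit=18.0)
b = indicators(cost=180.0, annual_benefit=26.0)

# Marginal BCR of upgrading from measure a to measure b.
marginal_bcr = (npv([26.0] * 10) - npv([18.0] * 10)) / (180.0 - 100.0)
```

    In this invented example both measures pass a BCR > 1 test, yet the upgrade's marginal BCR falls below 1 and its NPV is lower, which is exactly the kind of inconsistency between indicators the study examines.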

  5. Rational risk-based decision support for drinking water well managers by optimized monitoring designs

    NASA Astrophysics Data System (ADS)

    Enzenhöfer, R.; Geiges, A.; Nowak, W.

    2011-12-01

    Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Considering the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and perform rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate between uncertainty of contaminant location and actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte-Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation. Only in post-processing, we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is optimal positioning of sampling locations and optimal choice of data types to reduce best the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative safety margins. 
In order to allow an optimal choice in sampling strategies, we compare the trade-off in monitoring versus the delineation costs by accounting for ill-delineated fractions of protection zones. Within an illustrative simplified 2D synthetic test case, we demonstrate our concept, involving synthetic transmissivity and head measurements for conditioning. We demonstrate the worth of optimally collected data in the context of protection zone delineation by assessing the reduced areal demand of delineated area at user-specified risk acceptance level. Results indicate that, thanks to optimally collected data, risk-aware delineation can be made at low to moderate additional costs compared to conventional delineation strategies.

  6. Electric vehicle energy management system

    NASA Astrophysics Data System (ADS)

    Alaoui, Chakib

    This thesis investigates and analyzes novel strategies for the optimum energy management of electric vehicles (EVs). These are aimed to maximize the useful life of the EV batteries and make the EV more practical in order to increase its acceptability to market. The first strategy concerns the right choice of the batteries for the EV according to the user's driving habits, which may vary. Tests conducted at the University of Massachusetts Lowell battery lab show that the batteries perform differently from one manufacturer to the other. The second strategy was to investigate the fast chargeability of different batteries, which leads to reduced time needed to recharge the EV battery pack. Tests were conducted again to prove that only a few battery types could be fast charged. Test data were used to design a fast battery charger that could be installed in an EV charging station. The third strategy was the design, fabrication and application of an Electric Vehicle Diagnostic and Rejuvenation System (EVDRS). This system is based on Mosfet Controlled Thyristors (MCTs). It is capable of quickly identifying any failing battery(s) within the EV pack and rejuvenating the whole battery pack without dismantling and unloading them. A novel algorithm to rejuvenate Electric Vehicle Sealed Lead Acid Batteries is described. This rejuvenation extends the useful life of the batteries and makes the EV more competitive. The fourth strategy was to design a thermal management system for EVs, which is crucial to the safe operation and normal/optimal performance of electric vehicle (EV) batteries. A novel approach for EV thermal management, based on Peltier-effect heat pumps, was designed, fabricated and tested in an EV. It shows the application of this type of technology for thermal management of EVs.

  7. Optimization-based power management of hybrid power systems with applications in advanced hybrid electric vehicles and wind farms with battery storage

    NASA Astrophysics Data System (ADS)

    Borhan, Hoseinali

    Modern hybrid electric vehicles and many stationary renewable power generation systems combine multiple power generating and energy storage devices to achieve an overall system-level efficiency and flexibility higher than that of their individual components. The power or energy management control, the "brain" of these "hybrid" systems, adaptively determines the power split between multiple subsystems based on the power demand, and plays a critical role in overall system-level efficiency. This dissertation proposes that a receding horizon optimal control (aka Model Predictive Control) approach can be a natural and systematic framework for formulating this type of power management control. More importantly, the dissertation develops new results based on the classical theory of optimal control that allow solving the resulting optimal control problem in real time, in spite of the complexities that arise due to several system nonlinearities and constraints. The dissertation focuses on two classes of hybrid systems: hybrid electric vehicles in the first part and wind farms with battery storage in the second part. The first part of the dissertation proposes and fully develops a real-time optimization-based power management strategy for hybrid electric vehicles. Current industry practice uses rule-based control techniques with "if-then-else" logic and look-up maps and tables in the power management of production hybrid vehicles. These algorithms are not guaranteed to result in the best possible fuel economy, and there exists a gap between their performance and a minimum possible fuel economy benchmark. Furthermore, considerable time and effort are spent calibrating the control system in the vehicle development phase, and there is little flexibility in real-time handling of constraints and re-optimization of the system operation in the event of changing operating conditions and varying parameters. 
In addition, a proliferation of different powertrain configurations may result in the need for repeated control system redesign. To address these shortcomings, we formulate the power management problem as a nonlinear and constrained optimal control problem. Solution of this optimal control problem in real-time on chronometric- and memory-constrained automotive microcontrollers is quite challenging; this computational complexity is due to the highly nonlinear dynamics of the powertrain subsystems, mixed-integer switching modes of their operation, and time-varying and nonlinear hard constraints that system variables should satisfy. The main contribution of the first part of the dissertation is that it establishes methods for systematic and step-by step improvements in fuel economy while maintaining the algorithmic computational requirements in a real-time implementable framework. More specifically a linear time-varying model predictive control approach is employed first which uses sequential quadratic programming to find sub-optimal solutions to the power management problem. Next the objective function is further refined and broken into a short and a long horizon segments; the latter approximated as a function of the state using the connection between the Pontryagin minimum principle and Hamilton-Jacobi-Bellman equations. The power management problem is then solved using a nonlinear MPC framework with a dynamic programming solver and the fuel economy is further improved. Typical simplifying academic assumptions are minimal throughout this work, thanks to close collaboration with research scientists at Ford research labs and their stringent requirement that the proposed solutions be tested on high-fidelity production models. Simulation results on a high-fidelity model of a hybrid electric vehicle over multiple standard driving cycles reveal the potential for substantial fuel economy gains. 
To address the control calibration challenges, we also present a novel and fast calibration technique utilizing parallel computing. The second part of this dissertation presents an optimization-based control strategy for the power management of a wind farm with battery storage. The strategy seeks to minimize the error between the power delivered by the wind farm with battery storage and the power demand from an operator. In addition, the strategy attempts to maximize battery life. The control strategy has two main stages. The first stage produces a family of control solutions that minimize the power error subject to the battery constraints over an optimization horizon. These solutions are parameterized by a given value for the state of charge at the end of the optimization horizon. The second stage screens the family of control solutions to select one attaining an optimal balance between power error and battery life. The battery life model used in this stage is a weighted Amp-hour (Ah) throughput model. The control strategy is modular, allowing for more sophisticated optimization models in the first stage, or more elaborate battery life models in the second stage. The strategy is implemented in real-time in the framework of Model Predictive Control (MPC).
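    The receding-horizon idea described above can be caricatured in a few lines: solve a short-horizon power-split problem, apply only the first decision, then re-solve at the next step. The sketch below is a hypothetical illustration; the demand profile, toy fuel map, battery model, and discretized power choices are all invented stand-ins for the high-fidelity powertrain models the dissertation uses.

```python
import itertools

def fuel_rate(p_engine):
    """Toy convex fuel-consumption map (g/s) for engine power in kW."""
    return 0.02 * p_engine + 0.001 * p_engine ** 2

def plan_split(demand, soc, horizon=3, soc_min=0.3, soc_max=0.8):
    """Enumerate battery-power choices over a short horizon and return
    the first battery-power decision of the cheapest feasible plan."""
    choices = [-10, -5, 0, 5, 10]            # battery power, kW (+ = discharge)
    best_cost, best_first = float("inf"), None
    for plan in itertools.product(choices, repeat=min(horizon, len(demand))):
        s, cost, feasible = soc, 0.0, True
        for d, p_batt in zip(demand, plan):
            p_eng = d - p_batt               # engine covers the rest
            if p_eng < 0:
                feasible = False
                break
            s -= p_batt * 0.002              # toy state-of-charge dynamics
            if not (soc_min <= s <= soc_max):
                feasible = False
                break
            cost += fuel_rate(p_eng)
        if feasible and cost < best_cost:
            best_cost, best_first = cost, plan[0]
    return best_first

demand = [20.0, 35.0, 15.0, 40.0]            # kW demand over coming steps
print("battery power for first step:", plan_split(demand, soc=0.6), "kW")
```

A dynamic-programming or QP solver would replace the brute-force enumeration in any real implementation; the enumeration merely keeps the example self-contained.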

  8. Optimal management of non-Markovian biological populations

    USGS Publications Warehouse

    Williams, B.K.

    2007-01-01

    Wildlife populations typically are described by Markovian models, with population dynamics influenced at each point in time by current but not previous population levels. Considerable work has been done on identifying optimal management strategies under the Markovian assumption. In this paper we generalize this work to non-Markovian systems, for which population responses to management are influenced by lagged as well as current status and/or controls. We use the maximum principle of optimal control theory to derive conditions for the optimal management of such a system, and illustrate the effects of lags on the structure of optimal habitat strategies for a predator-prey system.

  9. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  10. Feature extraction and selection strategies for automated target recognition

    NASA Astrophysics Data System (ADS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-04-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
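    As a rough illustration of the PCA-based feature-extraction stage both records above describe, the NumPy sketch below reduces toy "ROI" image patches to a handful of principal components and classifies them in the reduced space. The data, dimensions, and the nearest-centroid rule standing in for the SVM/NN classification stage are all invented assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "ROI chips": 100 flattened 8x8 patches from two well-separated classes.
targets = rng.normal(loc=1.0, scale=0.5, size=(50, 64))
clutter = rng.normal(loc=-1.0, scale=0.5, size=(50, 64))
X = np.vstack([targets, clutter])
y = np.array([1] * 50 + [0] * 50)

# PCA: center the data, take the leading eigenvectors of the covariance.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
components = eigvecs[:, ::-1][:, :5]         # 5 leading components
features = Xc @ components                   # reduced feature vectors

# Nearest-centroid classification in the reduced space (SVM/NN stand-in).
c1 = features[y == 1].mean(axis=0)
c0 = features[y == 0].mean(axis=0)
pred = (np.linalg.norm(features - c1, axis=1)
        < np.linalg.norm(features - c0, axis=1)).astype(int)
print("training accuracy:", (pred == y).mean())
```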

  11. A cognitive prosthesis for complex decision-making.

    PubMed

    Tremblay, Sébastien; Gagnon, Jean-François; Lafond, Daniel; Hodgetts, Helen M; Doiron, Maxime; Jeuniaux, Patrick P J M H

    2017-01-01

    While simple heuristics can be ecologically rational and effective in naturalistic decision-making contexts, complex situations require analytical decision-making strategies, hypothesis testing, and learning. Sub-optimal decision strategies (using simplified as opposed to analytic decision rules) have been reported in domains such as healthcare, military operational planning, and government policy making. We investigate the potential of a computational toolkit called "IMAGE" to improve decision-making by developing structural knowledge and increasing understanding of complex situations. IMAGE is tested within the context of a complex military convoy management task through (a) interactive simulations and (b) visualization and knowledge representation capabilities. We assess the usefulness of two versions of IMAGE (desktop and immersive) compared to a baseline. Results suggest that the prosthesis helped analysts make better decisions but failed to increase their structural knowledge about the situation once the cognitive prosthesis was removed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. The Changing Landscape of Genetic Testing for Inherited Breast Cancer Predisposition.

    PubMed

    Afghahi, Anosheh; Kurian, Allison W

    2017-05-01

    The advent of multiple-gene germline panel testing has led to significant advances in hereditary breast and ovarian cancer risk assessment. These include guideline-specific cancer risk management recommendations for patients and their families, such as screening with breast magnetic resonance imaging and risk-reducing surgeries, which have the potential to reduce substantially the morbidity and mortality associated with a hereditary cancer predisposition. However, controversy remains about the clinical validity and actionability of genetic testing in a broader patient population. We discuss events leading to the wider availability of commercialized multiple-gene germline panel testing, the recent data that support using this powerful tool to improve cancer risk assessment and reduction strategies, and remaining challenges to clinical optimization of this new genetic technology.

  13. A biologically inspired meta-control navigation system for the Psikharpax rat robot.

    PubMed

    Caluwaerts, K; Staffa, M; N'Guyen, S; Grand, C; Dollé, L; Favre-Félix, A; Girard, B; Khamassi, M

    2012-06-01

    A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks has been validated before in simulations, but the capacity of the model to work on a real robotic platform had not been tested. This paper presents our work on the implementation on the Psikharpax robot of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which strategy was optimal in each situation by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment (recognized as new contexts) and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposition for the role of the rat prefrontal cortex in strategy shifting. Such a brain-inspired meta-controller may also provide an advance for learning architectures in robotics.
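    The strategy-selection meta-controller described above can be caricatured as a per-context bandit: the agent keeps a value estimate for each navigation strategy in each context, updates it from reward, and restores the stored preferences whenever a known context is re-detected. The contexts, reward probabilities, and learning parameters below are invented for illustration, not the paper's model.

```python
import random

random.seed(1)
STRATEGIES = ["planning", "taxon"]
# Hypothetical ground truth: which strategy pays off in each context.
reward_prob = {"lit": {"planning": 0.2, "taxon": 0.8},
               "dark": {"planning": 0.9, "taxon": 0.1}}

Q = {ctx: {s: 0.0 for s in STRATEGIES} for ctx in reward_prob}
alpha, epsilon = 0.1, 0.1

for trial in range(2000):
    ctx = random.choice(list(reward_prob))      # context detector output
    if random.random() < epsilon:               # explore
        s = random.choice(STRATEGIES)
    else:                                       # exploit stored preference
        s = max(Q[ctx], key=Q[ctx].get)
    r = 1.0 if random.random() < reward_prob[ctx][s] else 0.0
    Q[ctx][s] += alpha * (r - Q[ctx][s])        # bandit-style value update

# Preferences are stored per context, so a context switch restores them.
for ctx in Q:
    print(ctx, "->", max(Q[ctx], key=Q[ctx].get))
```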

  14. Optimal control of anthracnose using mixed strategies.

    PubMed

    Fotsa Mbogne, David Jaures; Thron, Christopher

    2015-11-01

    In this paper we propose and study a spatial diffusion model for the control of anthracnose disease in a bounded domain. The model is a generalization of the one previously developed in [15]. We use the model to simulate two different types of control strategies against anthracnose disease. Strategies that employ chemical fungicides are modeled using a continuous control function; while strategies that rely on cultivational practices (such as pruning and removal of mummified fruits) are modeled with a control function which is discrete in time (though not in space). For comparative purposes, we perform our analyses for a spatially-averaged model as well as the space-dependent diffusion model. Under weak smoothness conditions on parameters we demonstrate the well-posedness of both models by verifying existence and uniqueness of the solution for the growth inhibition rate for given initial conditions. We also show that the set [0, 1] is positively invariant. We first study control by impulsive strategies, then analyze the simultaneous use of mixed continuous and pulse strategies. In each case we specify a cost functional to be minimized, and we demonstrate the existence of optimal control strategies. In the case of pulse-only strategies, we provide explicit algorithms for finding the optimal control strategies for both the spatially-averaged model and the space-dependent model. We verify the algorithms for both models via simulation, and discuss properties of the optimal solutions. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Research on optimal path planning algorithm of task-oriented optical remote sensing satellites

    NASA Astrophysics Data System (ADS)

    Liu, Yunhe; Xu, Shengli; Liu, Fengjing; Yuan, Jingpeng

    2015-08-01

    A GEO task-oriented optical remote sensing satellite is well suited to long-term continuous monitoring and quick-access imaging. With the development of high-resolution optical payload technology and satellite attitude control technology, GEO optical remote sensing satellites will become an important development trend for aerospace remote sensing in the near future. In this paper, we focus on the plane-array staring imaging characteristics of GEO optical remote sensing satellites and on real-time Earth observation tasking. Aiming to satisfy user needs with the minimum maneuver cost, we put forward an optimal path planning algorithm centered on the transformation from geographic coordinate space to the payload's field-of-view plane, thereby reducing the burden on the control system. In this algorithm, a bounded irregular closed area on the ground is transformed, via coordinate transformation relations, into the reference plane of the satellite payload's field of view; the branch-and-bound method then searches for feasible solutions, cutting off infeasible branches of the solution space with a pruning strategy; finally, suboptimal feasible solutions are trimmed according to the optimization index until the globally optimal feasible solution remains. Simulation and visualization software tests verified the feasibility and effectiveness of the strategy.
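    The branch-and-bound search with pruning that the abstract describes can be illustrated on a toy stand-in for the problem: ordering a set of pointing targets in a field-of-view plane so that the total slew distance is minimized. The coordinates and the straight-line cost model are assumptions made purely for this sketch.

```python
import math

# Invented targets in the field-of-view plane (arbitrary units).
targets = [(0.0, 0.0), (2.0, 1.0), (1.0, 3.0), (4.0, 0.5)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

best = {"cost": float("inf"), "order": None}

def branch(order, cost):
    # Prune: a partial path already at least as costly as the best
    # complete one cannot improve, because cost only grows.
    if cost >= best["cost"]:
        return
    if len(order) == len(targets):
        best["cost"], best["order"] = cost, order
        return
    for i in range(len(targets)):
        if i not in order:
            step = 0.0 if not order else dist(targets[order[-1]], targets[i])
            branch(order + (i,), cost + step)

branch((), 0.0)
print("best order:", best["order"], "cost: %.3f" % best["cost"])
```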

  16. Design of limited-stop service based on the degree of unbalance of passenger demand

    PubMed Central

    2018-01-01

    This paper presents a limited-stop service for a bus fleet to meet the unbalanced passenger demand on a bus route and to improve the route's transit service. The strategy includes two parts: an assessment of the degree of unbalance in passenger demand and an optimization of the limited-stop service. The degree assessment, based on differences in passenger demand between stations and the unbalance of passengers within a station, is used to judge whether implementing the limited-stop service is necessary for a bus route. The optimization of the limited-stop service considers the influence of stop-skipping actions and bus capacity on left-over passengers to determine the proper stations to skip for the bus fleet serving the entire route, minimizing both the waiting and in-vehicle time of passengers and the running time of vehicles. A solution algorithm based on a genetic algorithm is also presented to evaluate the degree of unbalanced passenger demand and optimize the limited-stop scheme. The strategy is tested on a bus route in Changchun, China. The threshold of the degree assessment can be calibrated and adapted to different passenger demands. PMID:29505585

  17. Design of limited-stop service based on the degree of unbalance of passenger demand.

    PubMed

    Zhang, Hu; Zhao, Shuzhi; Liu, Huasheng; Liang, Shidong

    2018-01-01

    This paper presents a limited-stop service for a bus fleet to meet the unbalanced passenger demand on a bus route and to improve the route's transit service. The strategy includes two parts: an assessment of the degree of unbalance in passenger demand and an optimization of the limited-stop service. The degree assessment, based on differences in passenger demand between stations and the unbalance of passengers within a station, is used to judge whether implementing the limited-stop service is necessary for a bus route. The optimization of the limited-stop service considers the influence of stop-skipping actions and bus capacity on left-over passengers to determine the proper stations to skip for the bus fleet serving the entire route, minimizing both the waiting and in-vehicle time of passengers and the running time of vehicles. A solution algorithm based on a genetic algorithm is also presented to evaluate the degree of unbalanced passenger demand and optimize the limited-stop scheme. The strategy is tested on a bus route in Changchun, China. The threshold of the degree assessment can be calibrated and adapted to different passenger demands.
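    A minimal genetic-algorithm sketch of the optimization stage described in the two records above: a chromosome is a 0/1 vector over stations (1 = the limited-stop service skips that station), and the fitness trades invented passenger-waiting penalties against saved dwell time. The demand figures, weights, and GA parameters are all hypothetical, not calibrated to the Changchun case study.

```python
import random

random.seed(42)
N_STATIONS = 10
demand = [random.randint(1, 50) for _ in range(N_STATIONS)]  # boardings/station

def cost(chromosome):
    # Skipping a station saves dwell time for the vehicle but penalizes
    # that station's passengers, who must wait for the next all-stop bus.
    running = sum(30 for skip in chromosome if not skip)     # dwell, seconds
    waiting = sum(d * 60 for d, skip in zip(demand, chromosome) if skip)
    return running + 0.1 * waiting

def evolve(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(N_STATIONS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]                      # elitist truncation
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_STATIONS)            # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(N_STATIONS)] ^= 1         # point mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print("skip pattern:", best, "cost:", cost(best))
```

With these invented weights the problem is separable per station, so the GA merely rediscovers "skip only very-low-demand stations"; the paper's formulation couples stations through capacity and left-over passengers.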

  18. Optimal estimation of two-qubit pure-state entanglement

    NASA Astrophysics Data System (ADS)

    Acín, Antonio; Tarrach, Rolf; Vidal, Guifré

    2000-06-01

    We present optimal measuring strategies for an estimation of the entanglement of unknown two-qubit pure states and of the degree of mixing of unknown single-qubit mixed states, of which N identical copies are available. The most general measuring strategies are considered in both situations, to conclude in the first case that a local, although collective, measurement suffices to estimate entanglement, a nonlocal property, optimally.

  19. Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming

    2017-02-01

    The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical and severely degrades overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead of parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate communication redundancy. Then, we utilize shared memory to reduce the memory-copy overhead of intra-node communication. Finally, we optimize the communication scheduling using neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (640 cores in total), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
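    The communication-aggregation idea described above can be sketched without MPI: rather than sending one message per boundary event, events destined for the same neighbor rank are buffered and shipped as a single deduplicated message per step. The event list, payload format, and rank layout below are invented for illustration.

```python
from collections import defaultdict

# Toy boundary events: (destination rank, payload). Duplicates model the
# communication redundancy the aggregation algorithm eliminates.
events = [(1, "site 17 flipped"), (2, "site 3 flipped"),
          (1, "site 18 flipped"), (1, "site 17 flipped"),  # duplicate
          (2, "site 9 flipped")]

def aggregate(events):
    """Group events by destination rank and drop duplicates, so each
    neighbor receives exactly one combined message per step."""
    buffers = defaultdict(list)
    for rank, payload in events:
        if payload not in buffers[rank]:     # eliminate redundancy
            buffers[rank].append(payload)
    return dict(buffers)

messages = aggregate(events)
print("messages per step:", len(messages), "(was", len(events), "sends)")
for rank, payload in sorted(messages.items()):
    print("to rank", rank, "->", payload)
```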

  20. Prevalence of gestational diabetes mellitus based on various screening strategies in western Kenya: a prospective comparison of point of care diagnostic methods.

    PubMed

    Pastakia, Sonak D; Njuguna, Benson; Onyango, Beryl Ajwang'; Washington, Sierra; Christoffersen-Deb, Astrid; Kosgei, Wycliffe K; Saravanan, Ponnusamy

    2017-07-14

    Early diagnosis of gestational diabetes mellitus (GDM) is crucial to prevent short-term delivery risks and long-term effects such as cardiovascular and metabolic diseases in the mother and infant. Diagnosing GDM in Sub-Saharan Africa (SSA), however, remains sub-optimal due to the associated logistical and cost barriers for resource-constrained populations. Cost-effective strategies to screen for GDM in such settings are therefore urgently required. We conducted this study to determine the prevalence of GDM and assess the utility of various GDM point-of-care (POC) screening strategies in a resource-constrained setting. Eligible women aged ≥18 years, between 24 and 32 weeks of a singleton pregnancy, prospectively underwent testing over two days. On day 1, a POC 1-h 50 g glucose challenge test (GCT) and POC glycated hemoglobin (HbA1c) were assessed. On day 2, fasting blood glucose and the 1-h and 2-h 75 g oral glucose tolerance test (OGTT) values were determined using both venous and POC tests, along with a venous HbA1c. The International Association of Diabetes in Pregnancy Study Group (IADPSG) criteria were used to diagnose GDM. GDM prevalence was reported with a 95% confidence interval (CI). Specificity, sensitivity, positive predictive value, and negative predictive value of the various POC testing strategies were determined using IADPSG testing as the standard reference. Six hundred sixteen eligible women completed the testing procedures. GDM was diagnosed in 18 women, a prevalence of 2.9% (95% CI, 1.57%-4.23%). Compared to IADPSG testing, POC IADPSG had a sensitivity and specificity of 55.6% and 90.6% respectively, while that of the POC 1-h 50 g GCT (using a diagnostic cut-off of ≥7.2 mmol/L [129.6 mg/dL]) was 55.6% and 63.9%. All other POC tests assessed showed poor sensitivity. POC screening strategies, though feasible, showed poor sensitivity for GDM detection in our resource-constrained population of low GDM prevalence.
Studies to identify sensitive and specific POC GDM screening strategies using adverse pregnancy outcomes as end points are required. ClinicalTrials.gov: NCT02978807, registered 29 November 2016.
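    The screening-test metrics reported above follow directly from a 2x2 confusion table. The counts below are invented, but chosen to be consistent with the reported POC IADPSG figures (sensitivity 55.6%, specificity 90.6%) given 18 GDM cases among 616 women; they are not the study's actual cell counts.

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV from confusion-table counts."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}

# Hypothetical counts: 10 of the 18 GDM cases detected, 56 false positives
# among the 598 non-cases.
m = screening_metrics(tp=10, fp=56, fn=8, tn=542)
print({k: round(v, 3) for k, v in m.items()})
```

Note how the low prevalence drives the PPV down even at 90.6% specificity, which is why the abstract emphasizes sensitivity for ruling out GDM.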

  1. A Unique Opportunity to Test Whether Cell Fusion is a Mechanism of Breast Cancer Metastasis

    DTIC Science & Technology

    2013-07-01

    populations. Last cycle we optimized electroporation conditions for T47D and human mesenchymal stem cell populations and this cycle we have improved our...specific receptor-ligand interactions necessary for cell fusion, to produce a target for drug therapy. Post-fusion events might also be investigated...new tools for the study of the complex processes of cell fusion. The inducible bipartite nature of these strategies assures the accurate

  2. Optimism, Positive and Negative Affect, and Goal Adjustment Strategies: Their Relationship to Activity Patterns in Patients with Chronic Musculoskeletal Pain.

    PubMed

    Esteve, Rosa; López-Martínez, Alicia E; Peters, Madelon L; Serrano-Ibáñez, Elena R; Ruiz-Párraga, Gema T; Ramírez-Maestre, Carmen

    2018-01-01

    Activity patterns are the product of pain and of the self-regulation of current goals in the context of pain. The aim of this study was to investigate the association between goal management strategies and activity patterns while taking into account the role of optimism/pessimism and positive/negative affect. Two hundred and thirty-seven patients with chronic musculoskeletal pain filled out questionnaires on optimism, positive and negative affect, pain intensity, and the activity patterns they employed in dealing with their pain. Questionnaires were also administered to assess their general goal management strategies: goal persistence, flexible goal adjustment, and disengagement and reengagement with goals. Structural equation modelling showed that higher levels of optimism were related to persistence, flexible goal management, and commitment to new goals. These strategies were associated with higher positive affect, persistence in finishing tasks despite pain, and infrequent avoidance behaviour in the presence or anticipation of pain. The strategies used by the patients with chronic musculoskeletal pain to manage their life goals are related to their activity patterns.

  3. On optimal strategies in event-constrained differential games

    NASA Technical Reports Server (NTRS)

    Heymann, M.; Rajan, N.; Ardema, M.

    1985-01-01

    Combat games are formulated as zero-sum differential games with unilateral event constraints. An interior penalty function approach is employed to approximate optimal strategies for the players. The method is very attractive computationally and possesses suitable approximation and convergence properties.

  4. Strategies for the Optimization of Natural Leads to Anticancer Drugs or Drug Candidates

    PubMed Central

    Xiao, Zhiyan; Morris-Natschke, Susan L.; Lee, Kuo-Hsiung

    2015-01-01

    Natural products have made significant contribution to cancer chemotherapy over the past decades and remain an indispensable source of molecular and mechanistic diversity for anticancer drug discovery. More often than not, natural products may serve as leads for further drug development rather than as effective anticancer drugs by themselves. Generally, optimization of natural leads into anticancer drugs or drug candidates should not only address drug efficacy, but also improve ADMET profiles and chemical accessibility associated with the natural leads. Optimization strategies involve direct chemical manipulation of functional groups, structure-activity relationship-directed optimization and pharmacophore-oriented molecular design based on the natural templates. Both fundamental medicinal chemistry principles (e.g., bio-isosterism) and state-of-the-art computer-aided drug design techniques (e.g., structure-based design) can be applied to facilitate optimization efforts. In this review, the strategies to optimize natural leads to anticancer drugs or drug candidates are illustrated with examples and described according to their purposes. Furthermore, successful case studies on lead optimization of bioactive compounds performed in the Natural Products Research Laboratories at UNC are highlighted. PMID:26359649

  5. Optimal race strategy for a 200-m flying sprint in a human-powered vehicle: A case study of a world-record attempt.

    PubMed

    de Koning, Jos J; van der Zweep, Cees-Jan; Cornelissen, Jesper; Kuiper, Bouke

    2013-03-01

    Optimal pacing strategy was determined for breaking the world speed record on a human-powered vehicle (HPV) using an energy-flow model in which the rider's physical capacities, the vehicle's properties, and the environmental conditions were included. Power data from world-record attempts were compared with data from the model, and race protocols were adjusted to the results from the model. HPV performance can be improved by using an energy-flow model for optimizing race strategy. A biphasic in-run followed by a sprint gave the best results.

  6. Convexity of Ruin Probability and Optimal Dividend Strategies for a General Lévy Process

    PubMed Central

    Yuen, Kam Chuen; Shen, Ying

    2015-01-01

    We consider the optimal dividends problem for a company whose cash reserves follow a general Lévy process with certain positive jumps and arbitrary negative jumps. The objective is to find a policy which maximizes the expected discounted dividends until the time of ruin. Under appropriate conditions, we use some recent results in the theory of potential analysis of subordinators to obtain the convexity properties of probability of ruin. We present conditions under which the optimal dividend strategy, among all admissible ones, takes the form of a barrier strategy. PMID:26351655
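    The "barrier strategy" the abstract above refers to can be illustrated by simulation: all reserves above a fixed barrier b are immediately paid out as dividends, and payments stop at ruin. The discrete drifted random walk below is a crude invented stand-in for the Lévy reserve process studied in the paper, and the parameters are arbitrary.

```python
import random

random.seed(7)

def simulate(barrier, x0=5.0, steps=1000, discount=0.99):
    """Discounted dividends paid by a barrier strategy on one sample path."""
    x, paid, factor = x0, 0.0, 1.0
    for _ in range(steps):
        x += random.gauss(0.1, 1.0)          # drifted reserve increments
        if x <= 0:                            # ruin: dividends stop
            break
        if x > barrier:                       # barrier strategy: skim excess
            paid += factor * (x - barrier)
            x = barrier
        factor *= discount
    return paid

# Compare average discounted dividends across candidate barrier levels.
for b in (2.0, 5.0, 10.0):
    avg = sum(simulate(b) for _ in range(200)) / 200
    print("barrier %.1f -> avg discounted dividends %.2f" % (b, avg))
```

A low barrier pays out quickly but courts early ruin; a high barrier delays (and discounts) payments, which is the trade-off the optimal barrier balances.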

  7. The Evolution of Generosity in the Ultimatum Game.

    PubMed

    Hintze, Arend; Hertwig, Ralph

    2016-09-28

    When humans fail to make optimal decisions in strategic games and economic gambles, researchers typically try to explain why that behaviour is biased. To this end, they search for mechanisms that cause human behaviour to deviate from what seems to be the rational optimum. But perhaps human behaviour is not biased; perhaps research assumptions about the optimality of strategies are incomplete. In the one-shot anonymous symmetric ultimatum game (UG), humans fail to play optimally as defined by the Nash equilibrium. However, the distinction between kin and non-kin (with kin detection being a key evolutionary adaptation) is often neglected when deriving the "optimal" strategy. We computationally evolved strategies in the UG that were equipped with an evolvable probability to discern kin from non-kin. When an opponent was not kin, agents evolved strategies that were similar to those used by humans. We therefore conclude that the strategy humans play is not irrational. The deviation between behaviour and the Nash equilibrium may rather be attributable to key evolutionary adaptations, such as kin detection. Our findings further suggest that social preference models are likely to capture mechanisms that permit people to play optimally in an evolutionary context. Once this context is taken into account, human behaviour no longer appears irrational.

  8. Two-locus disease models with two marker loci: The power of affected-sib-pair tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knapp, M.; Seuchter, S.A.; Bauer, M.P.

    1994-11-01

    Recently, Schork et al. found that two-trait-locus, two-marker-locus (parametric) linkage analysis can provide substantially more linkage information than can standard one-trait-locus, one-marker-locus methods. However, because of the increased burden of computation, Schork et al. do not expect that their approach will be applied in an initial genome scan. Further, the specification of a suitable two-locus segregation model can be crucial. Affected-sib-pair tests are computationally simple and do not require an explicit specification of the disease model. In the past, however, these tests have mainly been applied to data with a single marker locus. Here, we consider sib-pair tests that make it possible to analyze two marker loci simultaneously. The power of these tests is investigated for different (epistatic and heterogeneous) two-trait-locus models, each trait locus being linked to one of the marker loci. We compare these tests both with the test that is optimal for a certain model and with the strategy that analyzes each marker locus separately. The results indicate that a straightforward extension of the well-known mean test to two marker loci can be much more powerful than single-marker-locus analysis and that its power is only slightly inferior to the power of the optimal test. 21 refs., 5 figs., 2 tabs.

  9. Difficulty of distinguishing product states locally

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.

    2017-01-01

    Nonlocality without entanglement is a rather counterintuitive phenomenon in which information may be encoded entirely in product (unentangled) states of composite quantum systems in such a way that local measurement of the subsystems is not enough for optimal decoding. For simple examples of pure product states, the gap in performance is known to be rather small when arbitrary local strategies are allowed. Here we restrict to local strategies readily achievable with current technology: those requiring neither a quantum memory nor joint operations. We show that even for measurements on pure product states, there can be a large gap between such strategies and theoretically optimal performance. Thus, even in the absence of entanglement, physically realizable local strategies can be far from optimal for extracting quantum information.

  10. Optimal Platform Strategies in the Smartphone Market

    NASA Astrophysics Data System (ADS)

    Unno, Masaru; Xu, Hua

    In the smartphone market, smartphone makers encourage application providers (APs) to create more popular smartphone applications by making revenue-sharing contracts with the APs and by providing application-purchasing support to end users. In this paper, we study the revenue-sharing and application-purchasing support problem between a risk-averse smartphone maker and a smartphone application provider. The problem is formulated as the smartphone maker's risk-sensitive stochastic control problem. Sufficient conditions for the existence of the optimal revenue-sharing strategy, the optimal application-purchasing support strategy, and the incentive-compatible effort recommended to the AP are obtained. The effects of the smartphone maker's risk sensitivity on the optimal strategies are also discussed. A numerical example is solved to show the computational aspects of the problem.

  11. Aerodynamic Shape Optimization Using Hybridized Differential Evolution

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2003-01-01

    An aerodynamic shape optimization method that uses an evolutionary algorithm known as Differential Evolution (DE) in conjunction with various hybridization strategies is described. DE is a simple and robust evolutionary strategy that has proven effective in determining the global optimum for several difficult optimization problems. Various hybridization strategies for DE are explored, including the use of neural networks as well as traditional local search methods. A Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the hybrid DE optimizer. The method is implemented on distributed parallel computers so that new designs can be obtained within reasonable turnaround times. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. (The final paper will include at least one other aerodynamic design application.) The capability of the method to search large design spaces and obtain optimal airfoils in an automated fashion is demonstrated.
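    A minimal sketch of the Differential Evolution scheme named above, using the classic DE/rand/1/bin variant on a toy 2-D test function (the sphere function). In the paper's setting the objective would instead come from a Navier-Stokes evaluation of a candidate design; the population size, F, and CR values here are conventional textbook choices, not the paper's settings.

```python
import random

random.seed(0)

def sphere(x):
    """Toy objective standing in for an expensive CFD evaluation."""
    return sum(v * v for v in x)

def differential_evolution(f, dim=2, pop_size=20, F=0.8, CR=0.9, gens=100):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1: mutate with three distinct other population members.
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)   # force at least one mutated gene
            trial = [a[k] + F * (b[k] - c[k])
                     if (random.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            if f(trial) <= f(pop[i]):        # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(sphere)
print("best point:", best, "value:", sphere(best))
```

A hybrid variant, as the abstract suggests, would periodically refine the best members with a local search or a neural-network surrogate of the objective.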

  12. Optimal subhourly electricity resource dispatch under multiple price signals with high renewable generation availability

    DOE PAGES

    Chassin, David P.; Behboodi, Sahand; Djilali, Ned

    2018-01-28

    This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach to a strategy using simultaneous price signals for energy, power, and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system in which the prices of energy and ramping are both significant. Optimal control functions are obtained in both the time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to the North American Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.

  13. Synthesizing epidemiological and economic optima for control of immunizing infections.

    PubMed

    Klepac, Petra; Laxminarayan, Ramanan; Grenfell, Bryan T

    2011-08-23

    Epidemic theory predicts that the vaccination threshold required to interrupt local transmission of an immunizing infection like measles depends only on the basic reproductive number and hence transmission rates. When the search for optimal strategies is expanded to incorporate economic constraints, the optimum for disease control in a single population is determined by relative costs of infection and control, rather than transmission rates. Adding a spatial dimension, which precludes local elimination unless it can be achieved globally, can reduce or increase optimal vaccination levels depending on the balance of costs and benefits. For weakly coupled populations, local optimal strategies agree with the global cost-effective strategy; however, asymmetries in costs can lead to divergent control optima in more strongly coupled systems--in particular, strong regional differences in costs of vaccination can preclude local elimination even when elimination is locally optimal. Under certain conditions, it is locally optimal to share vaccination resources with other populations.
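    The epidemiological threshold mentioned above is the classic herd-immunity result: coverage of 1 - 1/R0 interrupts local transmission of a perfectly immunizing vaccine. A minimal sketch (the R0 values below are illustrative, not taken from the paper):

```python
def critical_vaccination_coverage(r0):
    """Herd-immunity threshold p_c = 1 - 1/R0 for an immunizing infection.
    Below R0 = 1 the infection cannot sustain itself, so no coverage is needed."""
    if r0 <= 1:
        return 0.0
    return 1.0 - 1.0 / r0

# A measles-like R0 of 15 implies ~93% coverage; a flu-like R0 of 1.5 needs ~33%.
print(round(critical_vaccination_coverage(15), 3))   # 0.933
print(round(critical_vaccination_coverage(1.5), 3))  # 0.333
```

    The paper's point is that this purely epidemiological optimum shifts once relative costs of infection and control, and coupling between populations, enter the objective function.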

  15. CON4EI: CONsortium for in vitro Eye Irritation testing strategy - EpiOcular™ time-to-toxicity (EpiOcular ET-50) protocols for hazard identification and labelling of eye irritating chemicals.

    PubMed

    Kandarova, Helena; Letasiova, Silvia; Adriaens, Els; Guest, Robert; Willoughby, Jamin A; Drzewiecka, Agnieszka; Gruszka, Katarzyna; Alépée, Nathalie; Verstraelen, Sandra; Van Rompay, An R

    2018-06-01

    Assessment of acute eye irritation potential is part of the international regulatory requirements for testing of chemicals. The objective of the CON4EI (CONsortium for in vitro Eye Irritation testing strategy) project was to develop tiered testing strategies for eye irritation assessment for all drivers of classification. A set of 80 reference chemicals (38 liquids and 42 solids) was tested with eight different alternative methods. Here, the results obtained with the reconstructed human cornea-like epithelium (RhCE) EpiOcular™ in the EpiOcular time-to-toxicity tests (Neat and Dilution ET-50 protocols) are presented. The primary aim of this study was to evaluate whether the test methods can discriminate chemicals not requiring classification for serious eye damage/eye irritancy (No Category) from chemicals requiring classification and labelling for Category 1 and Category 2. In addition, the predictive capacity in terms of in vivo drivers of classification was investigated. The chemicals were tested in two independent runs by MatTek In Vitro Life Science Laboratories. The results demonstrate very high specificity for both test protocols: with the existing prediction models described in the SOPs, the specificity of the Neat and Dilution methods was 87% and 100%, respectively. The Dilution method correctly predicted 66% of GHS Cat 2 chemicals; however, only 47%-55% of GHS Cat 1 chemicals were correctly predicted with the current protocols. To achieve optimal prediction for all three classes, a testing strategy was developed that combines the most predictive time-points of both protocols and tests liquids and solids separately. Using this new testing strategy, the sensitivity for predicting GHS Cat 1 and GHS Cat 2 chemicals was 73% and 64%, respectively, and the very high specificity of 97% was maintained. None of the Cat 1 chemicals was underpredicted as GHS No Category. Further combination of the EpiOcular time-to-toxicity protocols with the other validated in vitro systems evaluated in this project should enable significant reduction, and possibly replacement, of animal tests for the final assessment of irritation potential across all GHS classes. Copyright © 2017 Elsevier Ltd. All rights reserved.
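    The sensitivity and specificity figures quoted above follow from standard confusion-matrix arithmetic. A minimal sketch, with counts invented to yield similar percentages (these are not the actual CON4EI tallies):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts: 11 of 15 hazardous chemicals correctly flagged,
# 29 of 30 non-hazardous chemicals correctly cleared.
sens, spec = sensitivity_specificity(tp=11, fn=4, tn=29, fp=1)
print(round(sens, 2), round(spec, 2))  # 0.73 0.97
```

    A tiered strategy like the one described trades a small sensitivity loss at one tier for specificity gains overall, which is why the combined protocol is evaluated on both metrics per GHS class.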

  16. Optimizing the well pumping rate and its distance from a stream

    NASA Astrophysics Data System (ADS)

    Abdel-Hafez, M. H.; Ogden, F. L.

    2008-12-01

    Both ground water and surface water are very important components of water resources. Since they are coupled systems in riparian areas, management strategies that neglect interactions between them penalize senior surface water rights holders to the benefit of junior ground water rights holders in the prior appropriation system. Water rights managers face the problem of deciding which wells need to be shut down, and when, in the case of depleted stream flow. A simulation model representing a combined hypothetical aquifer and stream has been developed using MODFLOW 2000 to capture parameter sensitivity, test management strategies and guide field data collection campaigns to support modeling. An optimization approach has been applied to optimize both the well distance from the stream and the maximum pumping rate that does not affect the stream discharge downstream of the pumping wells. Conjunctive management can be modeled by coupling the numerical simulation model with optimization techniques using the response matrix technique. The response matrix can be obtained by calculating the response coefficient for each well and stream reach. The main assumption of the response matrix technique is that the amount of water drawn from the stream into the aquifer is linearly proportional to the well pumping rate (Barlow et al. 2003). The results are presented in dimensionless form, which can be used by water managers to resolve conflicts between surface water and ground water rights holders by deciding which wells need to be shut down first.
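    Because the response matrix technique assumes stream depletion is linear in pumping rates, evaluating a management scenario reduces to a matrix-vector product. A sketch with hypothetical response coefficients (the real coefficients would come from MODFLOW runs):

```python
def stream_depletion(response, pumping):
    """Response-matrix evaluation: depletion at each stream reach is the
    sum of response coefficient times pumping rate over all wells."""
    return [sum(r * q for r, q in zip(row, pumping)) for row in response]

# Hypothetical coefficients: the well nearer the stream (column 0) depletes
# it more per unit pumping than the distant well (column 1).
R = [[0.5, 0.25]]                 # one stream reach, two wells
depletion = stream_depletion(R, [100.0, 100.0])   # [75.0] units of stream flow

# Which well to shut down first? The one removing the most stream water
# per unit pumped, i.e. the one with the largest response coefficient.
shut_first = max(range(2), key=lambda j: R[0][j])  # well 0, nearest the stream
```

    In a full conjunctive-management model, these linear responses become constraints in an optimization that maximizes total pumping subject to a cap on downstream depletion.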

  17. Latency reversal and viral clearance to cure HIV-1.

    PubMed

    Margolis, David M; Garcia, J Victor; Hazuda, Daria J; Haynes, Barton F

    2016-07-22

    Research toward a cure for human immunodeficiency virus type 1 (HIV-1) infection has joined prevention and treatment efforts in the global public health agenda. A major approach to HIV eradication envisions antiretroviral suppression, paired with targeted therapies to enforce the expression of viral antigen from quiescent HIV-1 genomes, and immunotherapies to clear latent infection. These strategies are targeted to lead to viral eradication--a cure for AIDS. Paired testing of latency reversal and clearance strategies has begun, but additional obstacles to HIV eradication may emerge. Nevertheless, there is reason for optimism that advances in long-acting antiretroviral therapy and HIV prevention strategies will contribute to efforts in HIV cure research and that the implementation of these efforts will synergize to markedly blunt the effect of the HIV pandemic on society. Copyright © 2016, American Association for the Advancement of Science.

  18. Task-driven optimization of CT tube current modulation and regularization in model-based iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2017-06-01

    Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM; it includes both the regularization strength, which controls overall smoothness, and directional weights, which permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d′) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Both conventional and task-driven approaches were evaluated in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP under a minimum-variance criterion yielded worse task-based performance than an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR showed the opposite behavior from conventional designs for FBP, assigning greater fluence to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS that results from the data weighting in MBIR, and directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with a maximum improvement in d′ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies designed for FBP reconstruction, which are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR, improving imaging performance and/or dose utilization beyond conventional imaging strategies.
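    The detectability index used as the objective function can be approximated on a discrete frequency grid. The sketch below uses a prewhitening-observer form with made-up one-dimensional MTF, NPS and task functions, not the paper's full Fourier models:

```python
import math

def detectability_index(freqs, mtf, nps, task):
    """Prewhitening-observer sketch: d'^2 is approximated as the discrete
    integral of MTF(f)^2 * |W_task(f)|^2 / NPS(f) over the frequency grid."""
    df = freqs[1] - freqs[0]
    total = sum(m * m * w * w / n for m, w, n in zip(mtf, task, nps))
    return math.sqrt(total * df)

# Made-up MTF (low-pass), NPS, and a mid-frequency discrimination task.
freqs = [0.1 * k for k in range(1, 11)]
mtf = [math.exp(-f * f) for f in freqs]
nps = [0.01 + 0.02 * f for f in freqs]
task = [math.exp(-((f - 0.5) ** 2) / 0.02) for f in freqs]

d_base = detectability_index(freqs, mtf, nps, task)
d_low_noise = detectability_index(freqs, mtf, [n / 2 for n in nps], task)
# Halving the NPS everywhere raises d' by a factor of sqrt(2).
```

    In the task-driven framework, both MTF and NPS depend on the TCM pattern and regularization, so an outer optimizer such as CMA-ES searches those parameters to maximize this scalar.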

  19. Optimal robust control strategy of a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

    Optimal control can ensure safe system operation at high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems, and existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as load current, vary with operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During model building, boundaries of the uncertain parameters are extracted using a Monte Carlo algorithm. To achieve maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller then control the fuel utilization ratio, air excess ratio and stack temperature. The results show that the proposed optimal robust control method maintains safe SOFC system operation at maximum efficiency under load and uncertainty variations.
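    A plain single-swarm PSO conveys the idea behind the set-point search described above (the paper's two-space variant partitions the search differently). The objective and set-point values below are hypothetical stand-ins for the SOFC efficiency map:

```python
import random

def particle_swarm(f, bounds, n=15, iters=150, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Minimal PSO: velocities are pulled toward each particle's personal
    best and the global best; positions are clamped to the variable bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [f(p) for p in pos]
    gi = min(range(n), key=pcost.__getitem__)
    gbest, gcost = pbest[gi][:], pcost[gi]
    for _ in range(iters):
        for i in range(n):
            for j in range(dim):
                vel[i][j] = (w * vel[i][j]
                             + c1 * rng.random() * (pbest[i][j] - pos[i][j])
                             + c2 * rng.random() * (gbest[j] - pos[i][j]))
                lo, hi = bounds[j]
                pos[i][j] = min(max(pos[i][j] + vel[i][j], lo), hi)
            c = f(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

# Hypothetical stand-in for negative SOFC efficiency: a smooth bowl with its
# optimum at fuel utilization ratio 0.85 and air excess ratio 8.0.
objective = lambda x: (x[0] - 0.85) ** 2 + 0.01 * (x[1] - 8.0) ** 2
set_point, cost = particle_swarm(objective, [(0.5, 1.0), (2.0, 12.0)])
```

    The optimizer's output plays the role of the operating set points handed to the feed-forward and sliding mode controllers.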

  20. On the Effectiveness of Nature-Inspired Metaheuristic Algorithms for Performing Phase Equilibrium Thermodynamic Calculations

    PubMed Central

    Fateen, Seif-Eddeen K.; Bonilla-Petriciolet, Adrian

    2014-01-01

    The search for reliable and efficient global optimization algorithms for solving phase stability and phase equilibrium problems in applied thermodynamics is an ongoing area of research. In this study, we evaluated and compared the reliability and efficiency of eight selected nature-inspired metaheuristic algorithms for solving difficult phase stability and phase equilibrium problems. These algorithms are the cuckoo search (CS), intelligent firefly (IFA), bat (BA), artificial bee colony (ABC), MAKHA (a hybrid of the monkey algorithm and the krill herd algorithm), covariance matrix adaptation evolution strategy (CMA-ES), magnetic charged system search (MCSS), and bare bones particle swarm optimization (BBPSO) algorithms. The results clearly showed that CS is the most reliable of all the methods, as it successfully solved all thermodynamic problems tested in this study. CS proved to be a promising nature-inspired optimization method to perform applied thermodynamic calculations for process design. PMID:24967430
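    Of the methods compared, the winning cuckoo search pairs Lévy-flight moves with random nest abandonment. A minimal sketch on a toy two-variable objective that stands in for a Gibbs energy surface (all parameters and the objective are illustrative):

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a heavy-tailed, Levy-distributed step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, bounds, n=15, iters=300, pa=0.25, alpha=0.05, seed=3):
    """Minimal cuckoo search: Levy-flight moves biased toward the best
    solution found so far, plus random abandonment of uncompetitive nests."""
    rng = random.Random(seed)
    clamp = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    cost = [f(x) for x in nests]
    gi = min(range(n), key=cost.__getitem__)
    gbest, gcost = nests[gi][:], cost[gi]
    for _ in range(iters):
        for i in range(n):
            new = clamp([x + alpha * levy_step(rng) * (x - b)
                         for x, b in zip(nests[i], gbest)])
            c = f(new)
            if c < cost[i]:
                nests[i], cost[i] = new, c
            elif rng.random() < pa:      # abandon an uncompetitive nest
                nests[i] = [rng.uniform(lo, hi) for lo, hi in bounds]
                cost[i] = f(nests[i])
            if cost[i] < gcost:
                gbest, gcost = nests[i][:], cost[i]
    return gbest, gcost

# Toy stand-in for a Gibbs energy surface over two composition variables.
gibbs = lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2
xmin, gmin = cuckoo_search(gibbs, [(0.0, 1.0), (0.0, 1.0)])
```

    The heavy-tailed Lévy steps are what give CS its reported reliability on multimodal surfaces: most moves refine locally, while occasional long jumps escape poor basins.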
