Sample records for optimization approach case

  1. Vector-model-supported approach in prostate plan optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung

    Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model that retrieves similar radiotherapy cases was developed based on the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 S&S IMRT, 30 1-arc VMAT, and 30 2-arc VMAT cases, each including first optimization and final optimization with and without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, vector-model-supported optimization reduced the planning time and iteration count by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer a much shortened planning time and iteration count without compromising plan quality.
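    The core retrieval step, selecting the stored plan whose extracted features most closely match the new patient, can be sketched as a nearest-neighbour lookup. This is a minimal illustration, not the authors' actual model: the feature names, values, and distance metric below are all hypothetical.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve_reference_case(test_features, reference_db):
    """Return the stored case whose feature vector is closest to the test case.

    reference_db: list of (case_id, feature_vector, planning_parameters).
    """
    return min(reference_db, key=lambda case: euclidean(test_features, case[1]))

# Hypothetical miniature database: (id, [PTV volume, rectum overlap, ...], params).
db = [
    ("case_A", [62.0, 0.15, 9.5], {"rectum_weight": 40}),
    ("case_B", [80.5, 0.30, 11.0], {"rectum_weight": 55}),
    ("case_C", [64.0, 0.18, 9.8], {"rectum_weight": 42}),
]

reference = retrieve_reference_case([63.8, 0.17, 9.7], db)
```

    The planning parameters of the retrieved reference case would then seed the optimizer in place of trial-and-error starting values.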

  2. Comparison of linear and nonlinear programming approaches for "worst case dose" and "minmax" robust optimization of intensity-modulated proton therapy dose distributions.

    PubMed

    Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino

    2017-03-01

    Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance in terms of plan quality and robustness of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull-based cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull-based and head and neck cancer patients. 
Overall, LP-based methods were suitable for the less challenging cancer cases, in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in the more difficult cases, in which most uncertainty scenarios could not meet tight dose limits. For robust optimization, the worst case dose approach was less sensitive to uncertainties than was the minmax approach for the prostate and skull-based cancer patients, whereas the minmax approach was superior for the head and neck cancer patients. The robustness of the IMPT plans was remarkably better after robust optimization than after PTV-based optimization, and the NLP-PTV-based optimization outperformed the LP-PTV-based optimization regarding robustness of clinical target volume coverage. In addition, plans generated using LP-based methods had notably fewer scanning spots than did those generated using NLP-based methods. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
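    The two robustness formulations compared above can be illustrated with a toy objective. This sketch is illustrative only: it uses a simple quadratic underdose penalty and assumes each uncertainty scenario is represented by its own small dose-influence matrix, a drastic simplification of real IMPT models.

```python
def dose(D, w):
    """Dose per voxel for influence matrix D (voxels x spots) and spot weights w."""
    return [sum(d_ij * w_j for d_ij, w_j in zip(row, w)) for row in D]

def penalty(d, prescription):
    """Quadratic underdose penalty against a per-voxel prescription."""
    return sum(max(p - di, 0.0) ** 2 for di, p in zip(d, prescription))

def minmax_objective(scenarios, w, prescription):
    """'minmax' robustness: the penalty of the single worst scenario."""
    return max(penalty(dose(D, w), prescription) for D in scenarios)

def worst_case_dose_objective(scenarios, w, prescription):
    """'worst case dose': penalize a composite voxel-wise worst dose (for
    target voxels, the lowest dose any scenario delivers to that voxel)."""
    doses = [dose(D, w) for D in scenarios]
    composite = [min(ds) for ds in zip(*doses)]
    return penalty(composite, prescription)

# Two hypothetical scenarios (nominal and range-shifted), two voxels, two spots.
scenarios = [[[1.0, 0.2], [0.3, 1.0]],
             [[0.8, 0.3], [0.2, 1.1]]]
w = [1.0, 1.0]
rx = [1.2, 1.2]
```

    Because the composite distribution takes the worst value voxel by voxel, the worst-case-dose objective is always at least as pessimistic as the minmax objective for a pure underdose penalty.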

  3. Machining fixture layout optimization using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Dou, Jianping; Wang, Xingsong; Wang, Lei

    2011-05-01

    Optimization of the fixture layout (locator and clamp locations) is critical to reducing geometric error of the workpiece during the machining process. In this paper, the application of the particle swarm optimization (PSO) algorithm is presented to minimize workpiece deformation in the machining region. A PSO-based approach is developed to optimize the fixture layout, integrating the ANSYS parametric design language (APDL) of finite element analysis to compute the objective function for a given fixture layout. A particle library approach is used to decrease the total computation time. A computational experiment on a 2D case shows that the number of function evaluations is decreased by about 96%. A case study illustrates the effectiveness and efficiency of the PSO-based optimization approach.
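    A minimal, self-contained version of the PSO loop described above might look as follows. The FEA/APDL objective is replaced by a stand-in quadratic "deformation" function, and the coefficients are typical textbook values rather than the paper's settings.

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimization: each particle's velocity is pulled
    toward its personal best and the swarm's global best position."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and attraction coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: in the paper this value would come from an APDL/FEA run.
deformation = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.6) ** 2
layout, value = pso(deformation, [(-1, 1), (-1, 1)])
```

    The particle-library idea in the paper amounts to caching `objective` evaluations for previously seen layouts, which is what cuts the FEA call count so sharply.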

  4. A Homogenization Approach for Design and Simulation of Blast Resistant Composites

    NASA Astrophysics Data System (ADS)

    Sheyka, Michael

    Structural composites have been used in aerospace and structural engineering due to their high strength-to-weight ratio. Composite laminates have been successfully and extensively used in blast mitigation. This dissertation examines the use of the homogenization approach to design and simulate blast-resistant composites. Three case studies are performed to examine the usefulness of different methods that may be used in designing and optimizing composite plates for blast resistance. The first case study utilizes a single-degree-of-freedom system to simulate the blast and a reliability-based approach; it examines homogeneous plates, and the optimal stacking sequence and plate thicknesses are determined. The second and third case studies use the homogenization method to calculate the properties of a composite unit cell made of two different materials. The methods are integrated with dynamic simulation environments and advanced optimization algorithms. The second case study is 2-D and uses an implicit blast simulation, while the third is 3-D and simulates blast using the explicit method. Case studies 2 and 3 both rely on multi-objective genetic algorithms for the optimization process, and Pareto optimal solutions are determined for each. Case study 3 is an integrative method for determining the optimal stacking sequence, microstructure, and plate thicknesses. The validity of the different methods, such as homogenization, reliability, explicit blast modeling, and multi-objective genetic algorithms, is discussed. Possible extension of the methods to include strain-rate effects and parallel computation is also examined.

  5. Interactive optimization approach for optimal impulsive rendezvous using primer vector and evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Luo, Ya-Zhong; Zhang, Jin; Li, Hai-yang; Tang, Guo-Jin

    2010-08-01

    In this paper, a new optimization approach combining primer vector theory and evolutionary algorithms for fuel-optimal non-linear impulsive rendezvous is proposed. The approach is designed to seek the optimal number of impulses as well as the optimal impulse vectors. Whether to add a midcourse impulse is determined by an interactive method, i.e. observing the primer-magnitude time history. An improved version of simulated annealing is employed to optimize the rendezvous trajectory with a fixed number of impulses. This interactive approach is evaluated on three test cases: coplanar circle-to-circle rendezvous, same-circle rendezvous, and non-coplanar rendezvous. The results show that the interactive approach is effective and efficient in fuel-optimal non-linear rendezvous design, and it can guarantee solutions that satisfy Lawden's necessary optimality conditions.

  6. Post Pareto optimization-A case

    NASA Astrophysics Data System (ADS)

    Popov, Stoyan; Baeva, Silvia; Marinova, Daniela

    2017-12-01

    Simulation performance may be evaluated according to multiple quality measures that are in competition and their simultaneous consideration poses a conflict. In the current study we propose a practical framework for investigating such simulation performance criteria, exploring the inherent conflicts amongst them and identifying the best available tradeoffs, based upon multi-objective Pareto optimization. This approach necessitates the rigorous derivation of performance criteria to serve as objective functions and undergo vector optimization. We demonstrate the effectiveness of our proposed approach by applying it with multiple stochastic quality measures. We formulate performance criteria of this use-case, pose an optimization problem, and solve it by means of a simulation-based Pareto approach. Upon attainment of the underlying Pareto Frontier, we analyze it and prescribe preference-dependent configurations for the optimal simulation training.
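    The core of any post-Pareto analysis is first extracting the non-dominated set from the evaluated configurations. A generic sketch, assuming all objectives are minimized (the point values are illustrative, not from the study):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points: the best available tradeoffs."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical evaluations of five configurations on two competing criteria.
points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(points)
```

    The preference-dependent step described in the abstract then amounts to choosing one configuration from `front` according to the analyst's priorities.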

  7. A fast optimization approach for treatment planning of volumetric modulated arc therapy.

    PubMed

    Yan, Hui; Dai, Jian-Rong; Li, Ye-Xiong

    2018-05-30

    Volumetric modulated arc therapy (VMAT) is widely used in clinical practice. It not only significantly reduces treatment time, but also produces high-quality treatment plans. Current optimization approaches rely heavily on stochastic algorithms, which are time-consuming and less repeatable. In this study, a novel approach is proposed to provide a highly efficient optimization algorithm for VMAT treatment planning. A progressive sampling strategy is employed for the beam arrangement of VMAT planning. The initial, equally spaced beams are added to the plan at a coarse sampling resolution. Fluence-map optimization and leaf-sequencing are performed for these beams. Then, the coefficients of the fluence-map optimization algorithm are adjusted according to the known fluence maps of these beams. In the next round the sampling resolution is doubled and more beams are added. This process continues until the total number of beams is reached. The performance of the VMAT optimization algorithm was evaluated using three clinical cases and compared to that of a commercial planning system. The dosimetric quality of the VMAT plans is equal to or better than that of the corresponding IMRT plans for the three clinical cases. The maximum dose to critical organs is reduced considerably for VMAT plans compared to IMRT plans, especially in the head and neck case. The total number of segments and monitor units is also reduced for VMAT plans. For the three clinical cases, VMAT optimization takes less than 5 min with the proposed approach, 3-4 times less than the commercial system. The proposed VMAT optimization algorithm is able to produce high-quality VMAT plans efficiently and consistently. It presents a new way to accelerate the current optimization process of VMAT planning.
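    The progressive sampling schedule described above, coarse equally spaced beams first, then doubling the resolution by inserting beams midway between existing ones, can be sketched as follows. The beam counts and angles are illustrative, not the paper's clinical settings.

```python
def progressive_beam_angles(total_beams=16, initial=4):
    """Return successive rounds of new beam angles: start with `initial`
    equally spaced beams over 360 degrees, then double the sampling
    resolution each round (new beams fall midway between existing ones)
    until `total_beams` have been placed."""
    rounds, n = [], initial
    placed = set()
    while len(placed) < total_beams:
        step = 360.0 / n
        new = [i * step for i in range(n) if i * step not in placed]
        placed.update(new)
        rounds.append(new)
        n *= 2
    return rounds

rounds = progressive_beam_angles()
```

    In the paper's scheme, fluence-map optimization and leaf-sequencing would run after each round, with the known fluence maps steering the coefficients for the next, finer round.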

  8. Evolving binary classifiers through parallel computation of multiple fitness cases.

    PubMed

    Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni

    2005-06-01

    This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. The approach achieves high computational efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized explicitly, using parallel computation, in the case of cellular programming, or implicitly, by taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures, in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.
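    The "intrinsic parallelism of bitwise operators" mentioned above means one machine word can hold one fitness case per bit, so a single bitwise expression evaluates a candidate classifier on all cases at once. A sketch on a toy 3-input majority problem (not the paper's digit recognition task):

```python
def pack(bits):
    """Pack a list of 0/1 values into one integer, one fitness case per bit."""
    v = 0
    for i, b in enumerate(bits):
        v |= b << i
    return v

# All 8 fitness cases of a 3-input problem (here: majority vote).
cases = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
A = pack([a for a, _, _ in cases])
B = pack([b for _, b, _ in cases])
C = pack([c for _, _, c in cases])
target = pack([1 if a + b + c >= 2 else 0 for a, b, c in cases])

# One candidate expression, evaluated on all fitness cases simultaneously.
candidate = (A & B) | (A & C) | (B & C)

# Fitness = number of matching cases = popcount of the agreement mask.
agreement = ~(candidate ^ target) & ((1 << len(cases)) - 1)
fitness = bin(agreement).count("1")
```

    One `&`/`|` pass replaces a loop over the 8 cases; with 64-bit words the same trick evaluates 64 fitness cases per operation.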

  9. A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.

    2002-01-01

    In this paper we present a comparison of optimization approaches to the minimum fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), Quasi-Newton, Simplex, Genetic Algorithms, and Simulated Annealing. Each method is applied to a variety of test cases including circular-to-circular coplanar orbits, LEO to GEO, and orbit phasing in highly elliptic orbits. We also compare different constrained optimization routines on complex orbit rendezvous problems with complicated, highly nonlinear constraints.

  10. An extension of the directed search domain algorithm to bilevel optimization

    NASA Astrophysics Data System (ADS)

    Wang, Kaiqiang; Utyuzhnikov, Sergey V.

    2017-08-01

    A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.

  11. Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.

    1992-01-01

    This paper describes a fully integrated aerodynamic/dynamic optimization procedure for helicopter rotor blades. The procedure combines performance and dynamics analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuver; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case the objective function involves power required (in hover, forward flight, and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.

  12. Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.

    1992-01-01

    A fully integrated aerodynamic/dynamic optimization procedure is described for helicopter rotor blades. The procedure combines performance and dynamic analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuvers; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case, the objective function involves power required (in hover, forward flight and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.

  13. A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.

    2003-01-01

    In this paper we present a comparison of trajectory optimization approaches for the minimum fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), Quasi-Newton, and Nelder-Mead Simplex. Several cost function parameterizations are considered for the direct approach, and we choose the direct approach that appears to be the most flexible. Both the direct and indirect methods are applied to a variety of test cases chosen to demonstrate the performance of each method in different flight regimes. The first test case is a simple circular-to-circular coplanar rendezvous. The second is an elliptic-to-elliptic line-of-apsides rotation. The final test case is an orbit phasing maneuver sequence in a highly elliptic orbit. For each test case we present a comparison of the performance of all the methods considered in this paper.

  14. Optimal control of underactuated mechanical systems: A geometric approach

    NASA Astrophysics Data System (ADS)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  15. Design of Quiet Rotorcraft Approach Trajectories

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Burley, Casey L.; Boyd, D. Douglas, Jr.; Marcolini, Michael A.

    2009-01-01

    An optimization procedure for identifying quiet rotorcraft approach trajectories is proposed and demonstrated. The procedure employs a multi-objective genetic algorithm in order to reduce noise and create approach paths that will be acceptable to pilots and passengers. The concept is demonstrated by application to two different helicopters. The optimized paths are compared with one another and with a standard 6-deg approach path. The two demonstration cases validate the optimization procedure but highlight the need for improved noise prediction techniques and for additional rotorcraft acoustic data sets.

  16. A Hamiltonian approach to the planar optimization of mid-course corrections

    NASA Astrophysics Data System (ADS)

    Iorfida, E.; Palmer, P. L.; Roberts, M.

    2016-04-01

    Lawden's primer vector theory gives a set of necessary conditions that characterize the optimality of a transfer orbit, defined according to the possibility of adding mid-course corrections. In this paper a novel approach is proposed in which, through a polar coordinate transformation, the primer vector components decouple. Furthermore, the case in which the transfer, departure, and arrival orbits are coplanar is analyzed using a Hamiltonian approach. This procedure leads to approximate analytic solutions for the in-plane components of the primer vector, and the solution for the circular transfer case is proven to be the Hill's solution. The novel procedure reduces the mathematical and computational complexity of the original case study. It is shown that the primer vector is independent of the semi-major axis of the transfer orbit. The case with a fixed transfer trajectory and variable initial and final thrust impulses is studied; the resulting optimality maps, which express the likelihood that a set of trajectories is optimal, are presented and analyzed. Finally, the requirements that a set of departure and arrival orbits must fulfil in order to share the same primer vector profile are presented.

  17. Optimal Analyses for 3×n AB Games in the Worst Case

    NASA Astrophysics Data System (ADS)

    Huang, Li-Te; Lin, Shun-Shii

    The past decades have witnessed a growing interest in research on deductive games such as Mastermind and the AB game. Because of the complicated behavior of deductive games, tree-search approaches are often adopted to find their optimal strategies. In this paper, a generalized version of deductive games, called 3×n AB games, is introduced. Traditional tree-search approaches are not appropriate for this problem, however, since they can only solve instances with small n. For larger values of n, a systematic approach is necessary. Therefore, intensive analyses of playing 3×n AB games optimally in the worst case are conducted, and a sophisticated method, called structural reduction, which aims at characterizing the worst situation in this game, is developed in this study. Furthermore, a formula for calculating the optimal number of guesses required for arbitrary values of n is derived and proven correct.

  18. Case study: Optimizing fault model input parameters using bio-inspired algorithms

    NASA Astrophysics Data System (ADS)

    Plucar, Jan; Grunt, Ondřej; Zelinka, Ivan

    2017-07-01

    We present a case study that demonstrates a bio-inspired approach to finding optimal parameters for a GSM fault model. This model, constructed using a Petri net approach, represents a dynamic model of the GSM network environment in the suburban areas of the city of Ostrava (Czech Republic). We were faced with the task of finding optimal parameters for an application that requires a high volume of data transfers between the application itself and secure servers located in a datacenter. In order to find the optimal set of parameters we employ bio-inspired algorithms such as Differential Evolution (DE) and the Self-Organizing Migrating Algorithm (SOMA). In this paper we present the use of these algorithms, compare their results, and judge their performance in fault probability mitigation.

  19. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    NASA Astrophysics Data System (ADS)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.

  20. Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem

    NASA Astrophysics Data System (ADS)

    Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.

    2018-03-01

    The Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. The two algorithms have different advantages and disadvantages when applied to the optimization of the Model Integer Programming for the Bus Timetabling Problem (MIPBTP), in which the optimal number of trips must be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration count, and program simplicity in finding the optimal solution.

  1. Automated Surgical Approach Planning for Complex Skull Base Targets: Development and Validation of a Cost Function and Semantic Atlas.

    PubMed

    Aghdasi, Nava; Whipple, Mark; Humphreys, Ian M; Moe, Kris S; Hannaford, Blake; Bly, Randall A

    2018-06-01

    Successful multidisciplinary treatment of skull base pathology requires precise preoperative planning. Current surgical approach (pathway) selection for these complex procedures depends on an individual surgeon's experience and background training. Because of anatomical variation in both normal tissue and pathology (eg, tumor), a surgical pathway used successfully on one patient is not necessarily the best approach for another. The question is how to define and obtain optimized, patient-specific surgical approach pathways. In this article, we demonstrate that the surgeon's knowledge and decision making in preoperative planning can be modeled by a multiobjective cost function in a retrospective analysis of actual complex skull base cases. Two different approaches, a weighted-sum approach and Pareto optimality, were used with a defined cost function to derive optimized surgical pathways based on preoperative computed tomography (CT) scans and manually designated pathology. With the first method, the surgeon's preferences were input as a set of weights for each objective before the search. In the second approach, the surgeon's preferences were used to select a surgical pathway from the computed Pareto optimal set. Using preoperative CT and magnetic resonance imaging, the patient-specific surgical pathways derived by these methods were similar (85% agreement) to the actual approaches performed on patients. In the one case where the actual surgical approach differed, revision surgery was required and was performed utilizing the computationally derived approach pathway.
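    The two selection mechanisms, weighted-sum scalarization versus choosing from a Pareto optimal set, can be sketched generically. The pathway names, objectives, and cost values below are entirely hypothetical and not from the study:

```python
def weighted_sum_choice(candidates, weights):
    """Method 1: scalarize the objectives (all minimized) with user-supplied
    weights and return the lowest-cost candidate."""
    return min(candidates,
               key=lambda c: sum(w * x for w, x in zip(weights, c["costs"])))

def pareto_set(candidates):
    """Method 2: return the non-dominated candidates; the surgeon then
    picks among these according to preference."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [c for c in candidates
            if not any(dominates(o["costs"], c["costs"]) for o in candidates)]

# Hypothetical pathways scored on (tissue disruption, path length, angle penalty).
paths = [
    {"name": "transnasal",   "costs": (2.0, 8.0, 1.0)},
    {"name": "transorbital", "costs": (3.5, 5.0, 0.5)},
    {"name": "open",         "costs": (6.0, 6.0, 2.0)},
]

choice = weighted_sum_choice(paths, (1.0, 1.0, 1.0))
frontier = pareto_set(paths)
```

    The practical difference is where the surgeon's preferences enter: before the search as weights, or after the search as a choice among the non-dominated pathways.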

  2. Focusing light through random photonic layers by four-element division algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin

    2018-02-01

    The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase optimization approaches for focusing light through turbid media using phase control algorithms have been widely studied in recent years due to the rapid development of spatial light modulators. The existing approaches include element-based algorithms (the stepwise sequential algorithm and the continuous sequential algorithm) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach, and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of each element to the focal point is small, especially for a large number of elements, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is weak, and the optimization may become trapped in local maxima. Whole-element optimization approaches employ all elements, which improves the signal-to-noise ratio during the optimization; however, because more randomness is introduced into the process, these optimizations take longer to converge than the element-based approaches. Building on the advantages of both families, we propose the four-element division algorithm (FEDA). Comparisons with the existing approaches show that FEDA takes only one third of the measurement time to reach the optimum, which means that FEDA is promising for practical applications such as deep tissue imaging.
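    The element-based baseline that FEDA improves on can be sketched with a toy transmission model: each SLM element contributes a unit-amplitude field with a random medium-induced phase, and the stepwise sequential algorithm picks, element by element, the trial phase that maximizes the focal intensity. This is an idealized noise-free simulation, not the authors' experiment:

```python
import cmath
import random

rng = random.Random(0)
N = 64  # number of SLM elements
# Unknown turbid medium: a random phase delay per element (unit amplitude).
t = [cmath.exp(1j * rng.uniform(0, 2 * cmath.pi)) for _ in range(N)]

def intensity(phases):
    """Focal intensity |sum of element fields|^2 for a given SLM phase pattern."""
    return abs(sum(tj * cmath.exp(1j * p) for tj, p in zip(t, phases))) ** 2

def stepwise_sequential(phases, n_trials=8, passes=2):
    """Element-based optimization: for each element in turn, keep the trial
    phase that maximizes the 'measured' focal intensity."""
    trials = [2 * cmath.pi * k / n_trials for k in range(n_trials)]
    for _ in range(passes):
        for j in range(N):
            phases[j] = max(
                trials,
                key=lambda p: intensity(phases[:j] + [p] + phases[j + 1:]))
    return phases

before = intensity([0.0] * N)
after = intensity(stepwise_sequential([0.0] * N))
```

    Each element's phase update here rests on an intensity change of order 1/N of the total signal, which is exactly the weak signal-to-noise problem the abstract describes for element-based methods.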

  3. Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett

    2004-01-01

    Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities being developed to generate accurate approximation models for the BLISS procedure are described. The benefits of using flexible approximation models such as Kriging are demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases where the subsystem optimization cannot find a feasible design is investigated, using the new flexible approximation models for the violated local constraints.

  4. Different approaches for centralized and decentralized water system management in multiple decision makers' problems

    NASA Astrophysics Data System (ADS)

    Anghileri, D.; Giuliani, M.; Castelletti, A.

    2012-04-01

There is general agreement that one of the most challenging issues in water system management is the presence of many, often conflicting, interests as well as several independent decision makers. The traditional approach to multi-objective water system management is centralized management, in which an ideal central regulator coordinates the operation of the whole system, exploiting all available information and balancing all operating objectives. Although this approach allows one to obtain Pareto-optimal solutions representing the maximum achievable benefit, it is based on assumptions that strongly limit its application in real-world contexts: 1) top-down management, 2) existence of a central regulation institution, 3) complete information exchange within the system, and 4) perfect economic efficiency. A bottom-up decentralized approach therefore seems more suitable for real-world applications, since different reservoir operators may maintain their independence. In this work we tested the consequences of moving from a centralized toward a decentralized water management approach. In particular, we compared three different cases: the centralized management approach; the independent management approach, where each reservoir operator takes the daily release decision maximizing (or minimizing) its own operating objective independently of the others; and an intermediate approach, leading to the Nash equilibrium of the associated game, where each reservoir operator tries to model the behaviours of the other operators. The three approaches are demonstrated on a test case study composed of two reservoirs regulated to minimize flooding in different locations. The operating policies are computed by solving a single multi-objective optimal control problem in the centralized approach; multiple single-objective optimization problems, one for each operator, in the independent case; and, in the last approach, by using game-theoretic techniques to describe the interaction between the two operators. Computational results show that the Pareto-optimal control policies obtained in the centralized approach dominate the control policies of both decentralized management cases, and that the so-called price of anarchy increases moving toward the independent management approach. However, the Nash equilibrium solution seems to be the most promising alternative, because it represents a good compromise: it maximizes management efficiency without constraining the behaviours of the reservoir operators.
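The intermediate, game-theoretic approach can be illustrated with a toy best-response iteration. The two quadratic cost functions below are purely hypothetical stand-ins for the operators' flood objectives (the abstract does not give the actual model); the sketch only shows how iterating each operator's best response converges to a Nash equilibrium.

```python
def nash_best_response(tol=1e-10, max_iter=1000):
    # Toy coupled costs for two reservoir operators (hypothetical example):
    #   J1(x1, x2) = (x1 - 0.5)**2 + 0.1 * x1 * x2
    #   J2(x1, x2) = (x2 - 0.3)**2 + 0.1 * x1 * x2
    # Each operator's best response sets dJi/dxi = 0 given the other's release:
    #   x1 = 0.5 - 0.05 * x2,   x2 = 0.3 - 0.05 * x1
    x1, x2 = 0.0, 0.0
    for _ in range(max_iter):
        new_x1 = 0.5 - 0.05 * x2       # operator 1's best response
        new_x2 = 0.3 - 0.05 * new_x1   # operator 2's best response
        if abs(new_x1 - x1) < tol and abs(new_x2 - x2) < tol:
            return new_x1, new_x2
        x1, x2 = new_x1, new_x2
    return x1, x2
```

Starting from any initial releases, the iteration settles at the point where neither operator can improve by deviating unilaterally, i.e. the Nash equilibrium of this toy game.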

  5. Field-based optimal-design of an electric motor: a new sensitivity formulation

    NASA Astrophysics Data System (ADS)

    Barba, Paolo Di; Mognaschi, Maria Evelina; Lowther, David Alister; Wiak, Sławomir

    2017-12-01

    In this paper, a new approach to robust optimal design is proposed. The idea is to consider the sensitivity by means of two auxiliary criteria A and D, related to the magnitude and isotropy of the sensitivity, respectively. The optimal design of a switched-reluctance motor is considered as a case study: since the case study exhibits two design criteria, the relevant Pareto front is approximated by means of evolutionary computing.

  6. Short-term solar flare prediction using image-case-based reasoning

    NASA Astrophysics Data System (ADS)

    Liu, Jin-Fu; Li, Fei; Zhang, Huai-Peng; Yu, Da-Ren

    2017-10-01

Solar flares strongly influence space weather and human activities, and their prediction is highly complex. Existing solutions, such as data-based and model-based approaches, share a common shortcoming: the lack of human engagement in the forecasting process. An image-case-based reasoning method is introduced to address this gap. The image case library is composed of SOHO/MDI longitudinal magnetograms, from which the maximum horizontal gradient, the length of the neutral line, and the number of singular points are extracted for retrieving similar image cases. Genetic optimization algorithms are employed to optimize the weight assignment for image features and the number of similar image cases retrieved. Similar image cases, together with prediction results derived by majority voting over them, are output and shown to the forecaster so that his/her experience can be integrated into the final prediction. Experimental results demonstrate that the case-based reasoning approach performs slightly better than other methods and is more efficient, with forecasts further improved by human input.

  7. An approach for multi-objective optimization of vehicle suspension system

    NASA Astrophysics Data System (ADS)

    Koulocheris, D.; Papaioannou, G.; Christodoulou, D.

    2017-10-01

In this paper, a half-car model with nonlinear suspension systems is selected in order to study the vertical vibrations and optimize the suspension system with respect to ride comfort and road holding. A road bump is used as the road profile. First, the optimization problem is solved using genetic algorithms with respect to 6 optimization targets. Then the k - ɛ optimality method is implemented to locate one optimum solution. Furthermore, an alternative approach is presented in this work: the previous optimization targets are separated into main and supplementary ones, depending on their importance in the analysis. The supplementary targets are not crucial to the optimization, but they can enhance the main objectives. Thus, the problem is solved again using genetic algorithms with respect to the 3 main targets. Having obtained the Pareto set of solutions, the k - ɛ optimality method is implemented for the 3 main targets and the supplementary ones, evaluated by simulation of the vehicle model. The results of both cases are presented and discussed in terms of convergence of the optimization and computational time, and the optimum solutions acquired from both cases are compared based on performance metrics as well.

  8. Population Modeling Approach to Optimize Crop Harvest Strategy. The Case of Field Tomato.

    PubMed

    Tran, Dinh T; Hertog, Maarten L A T M; Tran, Thi L H; Quyen, Nguyen T; Van de Poel, Bram; Mata, Clara I; Nicolaï, Bart M

    2017-01-01

In this study, the aim is to develop a population model-based approach to optimize fruit harvesting strategies with regard to fruit quality and its derived economic value. This approach was applied to the case of tomato fruit harvesting under Vietnamese conditions. Fruit growth and development of tomato (cv. "Savior") was monitored in terms of fruit size and color during both the Vietnamese winter and summer growing seasons. A kinetic tomato fruit growth model was applied to quantify biological fruit-to-fruit variation in terms of physiological maturation, and was successfully calibrated. Finally, the model was extended to translate the fruit-to-fruit variation at harvest into the economic value of the harvested crop. It can be concluded that a model-based approach to optimizing harvest date and harvest frequency with regard to the economic value of the crop is feasible. This approach allows growers to optimize their harvesting strategy by harvesting the crop at more uniform maturity stages, meeting the stringent retail demands for homogeneous, high-quality product. The total farm profit would still depend on the impact a change in harvesting strategy might have on related expenditures. This model-based harvest optimization approach can easily be transferred to other fruit and vegetable crops, improving the homogeneity of postharvest product streams.

  9. Robust optimization of front members in a full frontal car impact

    NASA Astrophysics Data System (ADS)

    Aspenberg (né Lönn), David; Jergeus, Johan; Nilsson, Larsgunnar

    2013-03-01

In the search for lightweight automobile designs, it is necessary to ensure that robust crashworthiness performance is achieved. Structures that are optimized to handle a finite number of load cases may perform poorly when subjected to various dispersions. Thus, uncertainties must be accounted for in the optimization process. This article presents an approach to optimization in which all design evaluations include an evaluation of robustness. Metamodel approximations are applied both to the design space and to the robustness evaluations, using artificial neural networks and polynomials, respectively. The features of the robust optimization approach are displayed in an analytical example, and further demonstrated in a large-scale design example of the front side members of a car. Different optimization formulations are applied, and it is shown that the proposed approach works well. It is also concluded that robust optimization puts higher demands on finite element model performance than usual.

  10. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    NASA Astrophysics Data System (ADS)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. HS mimics a jazz improvisation process to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with an improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performance of candidate solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the results obtained in terms of pressure reduction are comparable with those of competitive metaheuristic algorithms (e.g., genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
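As a sketch of the harmony search mechanics (memory consideration, pitch adjustment, random improvisation), the minimal implementation below minimizes a generic objective. The parameter names (hms, hmcr, par) follow common HS terminology; the objective and bounds are placeholders, not the paper's EPANET-coupled leakage model.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iters=2000, seed=0):
    """Minimal harmony search sketch over box-constrained variables."""
    rng = random.Random(seed)
    # Harmony memory: hms random candidate solutions (e.g. valve settings)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                    # memory consideration
                val = memory[rng.randrange(hms)][d]
                if rng.random() < par:                 # pitch adjustment
                    val += rng.uniform(-bandwidth, bandwidth) * (hi - lo)
            else:                                      # random improvisation
                val = rng.uniform(lo, hi)
            new.append(min(max(val, lo), hi))
        score = objective(new)
        worst = max(range(hms), key=scores.__getitem__)
        if score < scores[worst]:                      # replace worst harmony
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

In the paper's setting, `objective` would wrap a hydraulic simulation and add penalty terms for violated hydraulic constraints.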

  11. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    PubMed Central

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor

    2014-01-01

Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a prior understanding of the problem is needed to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios in which each approach is suitable are defined. Finally, an iterative approach combining the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352

  12. An iterative approach for the optimization of pavement maintenance management at the network level.

    PubMed

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor

    2014-01-01

Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a prior understanding of the problem is needed to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios in which each approach is suitable are defined. Finally, an iterative approach combining the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  13. Customer demand prediction of service-oriented manufacturing using the least square support vector machine optimized by particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Jin; Jiang, Zhibin; Wang, Kangzhou

    2017-07-01

Many nonlinear customer satisfaction-related factors significantly influence future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance prediction accuracy, this article develops a novel customer demand prediction approach for SOM. The approach combines the phase space reconstruction (PSR) technique with an optimized least square support vector machine (LSSVM). First, the prediction sample space is reconstructed by PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by a hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, customer demand prediction for an air conditioner compressor is implemented. The effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.

  14. Uncertainty-Based Multi-Objective Optimization of Groundwater Remediation Design

    NASA Astrophysics Data System (ADS)

    Singh, A.; Minsker, B.

    2003-12-01

Management of groundwater contamination is a cost-intensive undertaking filled with conflicting objectives and substantial uncertainty. A critical source of this uncertainty in groundwater remediation design problems is the hydraulic conductivity of the aquifer, upon which the predictions of contaminant flow and transport depend. For a remediation solution to be reliable in practice, it must be robust to potential error in the model predictions. This work focuses on incorporating such uncertainty within a multi-objective optimization framework, to obtain solutions that are both reliable and Pareto optimal. Previous research has shown that small amounts of sampling within a single-objective genetic algorithm can produce highly reliable solutions. With multiple objectives, however, the noise can interfere with the basic operations of a multi-objective solver, such as determining non-domination of individuals, diversity preservation, and elitism. This work proposes several approaches to improve the performance of noisy multi-objective solvers: a simple averaging approach, taking samples across the population (which we call extended averaging), and a stochastic optimization approach. All the approaches are tested on standard multi-objective benchmark problems and a hypothetical groundwater remediation case study; the best-performing approach is then tested on a field-scale case at Umatilla Army Depot.
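The simple averaging approach mentioned above can be sketched in a few lines. The cost function below is a hypothetical stand-in for a remediation objective whose evaluation is noisy because of uncertain hydraulic conductivity; averaging repeated samples per evaluation reduces the noise seen by the solver at the price of extra model runs.

```python
import random

def noisy_cost(x, rng):
    # Hypothetical remediation cost: a true cost plus zero-mean prediction
    # noise standing in for hydraulic-conductivity uncertainty
    return (x - 2.0) ** 2 + rng.gauss(0.0, 0.5)

def averaged_cost(x, n_samples, rng):
    # Simple averaging: evaluate the noisy model n_samples times per candidate
    return sum(noisy_cost(x, rng) for _ in range(n_samples)) / n_samples
```

The standard deviation of the averaged estimate shrinks roughly as 1/sqrt(n_samples), which is what lets a genetic algorithm's selection and non-domination checks behave as if the objective were nearly deterministic.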

  15. The Improvement of Particle Swarm Optimization: a Case Study of Optimal Operation in Goupitan Reservoir

    NASA Astrophysics Data System (ADS)

    Li, Haichen; Qin, Tao; Wang, Weiping; Lei, Xiaohui; Wu, Wenhui

    2018-02-01

Because of its weakness in maintaining diversity and reaching the global optimum, standard particle swarm optimization has not performed well in reservoir operation optimization. To solve this problem, this paper introduces the downhill simplex method to work together with standard particle swarm optimization. The application of this approach to the optimal operation of the Goupitan reservoir shows that the improved method has better accuracy and higher reliability at small additional cost.
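A minimal sketch of this hybrid idea: run a standard particle swarm for global exploration, then refine its best solution with a downhill simplex (Nelder-Mead) local search. All coefficients and the test function are illustrative defaults, not the paper's reservoir model.

```python
import random

def pso(f, bounds, n_particles=20, iters=100, seed=1):
    """Plain PSO: good at global exploration, weaker at fine convergence."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest

def nelder_mead(f, x0, step=0.1, iters=200):
    """Downhill simplex refinement of a starting point (reflect/expand/
    contract/shrink, the standard moves)."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        worst = simplex[-1]
        centroid = [sum(p[d] for p in simplex[:-1]) / n for d in range(n)]
        refl = [2 * centroid[d] - worst[d] for d in range(n)]
        if f(refl) < f(simplex[0]):            # try expansion
            exp = [3 * centroid[d] - 2 * worst[d] for d in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):         # accept reflection
            simplex[-1] = refl
        else:                                  # contract, else shrink
            contr = [0.5 * (centroid[d] + worst[d]) for d in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:
                best = simplex[0]
                simplex = [best] + [[0.5 * (best[d] + p[d]) for d in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

def hybrid_pso_simplex(f, bounds):
    # Global search first, then local polish of the swarm's best point
    return nelder_mead(f, pso(f, bounds))
```

The division of labor is the point: PSO keeps diversity early on, while the simplex supplies the precise local convergence that PSO alone lacks.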

  16. Aerodynamic Shape Optimization Design of Wing-Body Configuration Using a Hybrid FFD-RBF Parameterization Approach

    NASA Astrophysics Data System (ADS)

    Liu, Yuefeng; Duan, Zhuoyi; Chen, Song

    2017-10-01

Aerodynamic shape optimization aiming at improving the efficiency of an aircraft has always been a challenging task, especially when the configuration is complex. In this paper, a hybrid FFD-RBF surface parameterization approach is proposed for designing a civil transport wing-body configuration. The approach is simple and efficient: the FFD technique is used to parameterize the wing shape, while the RBF interpolation approach handles the updating of the wing-body junction. Combined with the Cuckoo Search algorithm and a Kriging surrogate model with an expected-improvement adaptive sampling criterion, an aerodynamic shape optimization design system has been established. Finally, aerodynamic shape optimization of the DLR F4 wing-body configuration is carried out as a case study, and the results show that the proposed approach is effective.

  17. Design of optimal groundwater remediation systems under flexible environmental-standard constraints.

    PubMed

    Fan, Xing; He, Li; Lu, Hong-Wei; Li, Jing

    2015-01-01

In developing optimal groundwater remediation strategies, limited effort has been devoted to addressing uncertainty in environmental quality standards. When such uncertainty is not considered, either over-optimistic or over-pessimistic optimization strategies may be developed, probably leading to the formulation of rigid remediation strategies. This study advances a mathematical programming modeling approach for optimizing groundwater remediation design. The approach not only prevents the formulation of over-optimistic and over-pessimistic strategies but also provides a satisfaction level that indicates the degree to which the environmental quality standard is satisfied. It can therefore be expected to be significantly more acceptable to decision makers than approaches that do not consider standard uncertainty. The proposed approach is applied to a petroleum-contaminated site in western Canada. Results from the case study show that (1) the peak benzene concentrations always satisfy the environmental standard under the optimal strategy, (2) the pumping rates of all wells decrease under a relaxed standard or a long-term remediation approach, (3) the pumping rates are less affected by environmental quality constraints under short-term remediation, and (4) increasingly flexible environmental standards have a diminishing effect on the optimal remediation strategy.

  18. Robust stochastic optimization for reservoir operation

    NASA Astrophysics Data System (ADS)

    Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin

    2015-01-01

Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited by computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid, undermining the performance of the algorithms. In this study, we introduce a robust optimization (RO) approach, the Iterative Linear Decision Rule (ILDR), to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also gives users the flexibility to choose the accuracy of the ILDR approximation by assigning a desired number of piecewise-linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies, including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both single- and multireservoir systems efficiently. The single-reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows, and it outperforms the SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single-reservoir case in terms of optimal value and distributional robustness.

  19. Use of the Occupational Therapy Task-Oriented Approach to optimize the motor performance of a client with cognitive limitations.

    PubMed

    Preissner, Katharine

    2010-01-01

    This case report describes the use of the Occupational Therapy Task-Oriented Approach with a client with occupational performance limitations after a cerebral vascular accident. The Occupational Therapy Task-Oriented Approach is often suggested as a preferred neurorehabilitation intervention to improve occupational performance by optimizing motor behavior. One common critique of this approach, however, is that it may seem inappropriate or have limited application for clients with cognitive deficits. This case report demonstrates how an occupational therapist working in an inpatient rehabilitation setting used the occupational therapy task-oriented evaluation framework and treatment principles described by Mathiowetz (2004) with a person with significant cognitive limitations. This approach was effective in assisting the client in meeting her long-term goals, maximizing her participation in meaningful occupations, and successfully transitioning to home with her daughter.

  20. Monte Carlo simulations within avalanche rescue

    NASA Astrophysics Data System (ADS)

    Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg

    2016-04-01

    Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
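The probing-depth calculation can be sketched as a direct Monte Carlo estimate: sample burial depths from an assumed distribution, then read off the depth that reaches a chosen fraction of simulated burials. The lognormal distribution and its parameters below are hypothetical placeholders, not the values used in the study.

```python
import random

def probe_depth_for_coverage(quantile=0.95, n_sims=100_000, seed=42):
    """MC sketch: sample burial depths (assumed lognormal with hypothetical
    parameters mu=0, sigma=0.5) and return the probing depth that would
    reach the requested fraction of simulated burials."""
    rng = random.Random(seed)
    depths = sorted(rng.lognormvariate(0.0, 0.5) for _ in range(n_sims))
    # Empirical quantile of the simulated burial-depth distribution
    return depths[min(int(quantile * n_sims), n_sims - 1)]
```

The same pattern (sample the unknown parameters, aggregate an empirical statistic) applies to the resuscitation-time question, with survival-time distributions in place of burial depths.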

  1. Evaluating and minimizing noise impact due to aircraft flyover

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.; Cook, G.

    1979-01-01

    Existing techniques were used to assess the noise impact on a community due to aircraft operation and to optimize the flight paths of an approaching aircraft with respect to the annoyance produced. Major achievements are: (1) the development of a population model suitable for determining the noise impact, (2) generation of a numerical computer code which uses this population model along with the steepest descent algorithm to optimize approach/landing trajectories, (3) implementation of this optimization code in several fictitious cases as well as for the community surrounding Patrick Henry International Airport, Virginia.

  2. Shape Optimization of Rubber Bushing Using Differential Evolution Algorithm

    PubMed Central

    2014-01-01

The objective of this study is to design a rubber bushing with the desired stiffness characteristics in order to achieve the desired ride quality of the vehicle. A differential evolution algorithm-based approach is developed to optimize the rubber bushing by integrating a finite element code running in batch mode to compute the objective function values for each generation. Two case studies are given to illustrate the application of the proposed approach. Optimum shape parameters of the 2D bushing model were determined by shape optimization using the differential evolution algorithm. PMID:25276848
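The differential evolution loop itself is compact; a minimal DE/rand/1/bin sketch is shown below with a generic objective standing in for the paper's batch-mode finite element evaluation. Population size and the F/CR constants are illustrative defaults.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           iters=200, seed=0):
    """Minimal DE/rand/1/bin sketch over box-constrained shape parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [f(p) for p in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < CR or d == j_rand:          # binomial crossover
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])  # mutation
                else:
                    v = pop[i][d]
                trial.append(min(max(v, lo), hi))
            t_score = f(trial)
            if t_score <= scores[i]:                          # greedy selection
                pop[i], scores[i] = trial, t_score
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]
```

In the study's setup, `f` would launch the finite element run in batch mode and return the stiffness-error measure for a candidate bushing shape.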

  3. A case study on topology optimized design for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Gebisa, A. W.; Lemu, H. G.

    2017-12-01

Topology optimization is an optimization method that employs mathematical tools to optimize the material distribution in a part to be designed. Earlier developments of topology optimization assumed conventional manufacturing techniques, which are limited in their ability to produce complex geometries; this has prevented the full potential of topology optimization from being realized. With the emergence of additive manufacturing (AM) technologies, which build a part layer by layer directly from three-dimensional (3D) model data, producing complex geometry is no longer an issue, and realizing designs through AM gives design engineers full design freedom. This article focuses on a topology-optimized design approach for additive manufacturing, with a case study on the lightweight design of a jet engine bracket. The results show that, when additive manufacturing is considered, topology optimization is a powerful design technique for reducing the weight of a product while maintaining its design requirements.

  4. A Case Study on the Application of a Structured Experimental Method for Optimal Parameter Design of a Complex Control System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.

  5. Fast approximate delivery of fluence maps for IMRT and VMAT

    NASA Astrophysics Data System (ADS)

    Balvert, Marleen; Craft, David

    2017-02-01

    In this article we provide a method to generate the trade-off between delivery time and fluence map matching quality for dynamically delivered fluence maps. At the heart of our method lies a mathematical programming model that, for a given duration of delivery, optimizes leaf trajectories and dose rates such that the desired fluence map is reproduced as well as possible. We begin with the single fluence map case and then generalize the model and the solution technique to the delivery of sequential fluence maps. The resulting large-scale, non-convex optimization problem was solved using a heuristic approach. We test our method using a prostate case and a head and neck case, and present the resulting trade-off curves. Analysis of the leaf trajectories reveals that short time plans have larger leaf openings in general than longer delivery time plans. Our method allows one to explore the continuum of possibilities between coarse, large segment plans characteristic of direct aperture approaches and narrow field plans produced by sliding window approaches. Exposing this trade-off will allow for an informed choice between plan quality and solution time. Further research is required to speed up the optimization process to make this method clinically implementable.

  6. A guided search genetic algorithm using mined rules for optimal affective product design

    NASA Astrophysics Data System (ADS)

    Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.

    2014-08-01

    Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.

  7. Accounting for range uncertainties in the optimization of intensity modulated proton therapy.

    PubMed

    Unkelbach, Jan; Chan, Timothy C Y; Bortfeld, Thomas

    2007-05-21

Treatment plans optimized for intensity modulated proton therapy (IMPT) may be sensitive to range variations: the dose distribution may deteriorate substantially when the actual range of a pencil beam does not match the assumed range. We present two treatment planning concepts for IMPT that incorporate range uncertainties into the optimization. The first method is a probabilistic approach. The range of a pencil beam is assumed to be a random variable, which makes the delivered dose and the value of the objective function random variables as well. We then propose to optimize the expectation value of the objective function. The second approach is a robust formulation that applies methods developed in the field of robust linear programming. This approach optimizes the worst-case dose distribution that may occur, assuming that the ranges of the pencil beams may vary within some interval. Both methods yield treatment plans that are considerably less sensitive to range variations than conventional treatment plans optimized without accounting for range uncertainties. In addition, both approaches, although conceptually different, yield very similar results on a qualitative level.
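The distinction between the two formulations can be shown on a toy problem. The 1D surrogate objective below (squared dose-placement error under a range shift r) is purely hypothetical, standing in for the full IMPT dose objective; the sketch only contrasts minimizing the expected objective over range scenarios with minimizing the worst-case objective.

```python
def scenario_objectives(x, shifts):
    # Hypothetical 1D surrogate: penalty for dose misplacement when the
    # pencil-beam range is shifted by r (target nominally at 1.0)
    return [(x + r - 1.0) ** 2 for r in shifts]

def optimize(candidates, shifts, mode="expected"):
    """Pick the candidate minimizing either the expected-value objective
    (probabilistic approach) or the worst-case objective (robust approach)
    over the range-error scenarios."""
    def score(x):
        vals = scenario_objectives(x, shifts)
        return sum(vals) / len(vals) if mode == "expected" else max(vals)
    return min(candidates, key=score)
```

With an asymmetric scenario set the two criteria pick different plans: the expected-value optimum shifts toward the mean range error, while the worst-case optimum balances the two extreme scenarios.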

  8. Global Optimization of Emergency Evacuation Assignments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Lee; Yuan, Fang; Chin, Shih-Miao

    2006-01-01

Conventional emergency evacuation plans often assign evacuees to fixed routes or destinations based mainly on geographic proximity. Such approaches can be inefficient if the roads are congested, blocked, or otherwise dangerous because of the emergency. By not constraining evacuees to prespecified destinations, a one-destination evacuation approach provides flexibility in the optimization process. We present a framework for the simultaneous optimization of evacuation-traffic distribution and assignment. Based on the one-destination evacuation concept, we can obtain the optimal destination and route assignment by solving a one-destination traffic-assignment problem on a modified network representation. In a county-wide, large-scale evacuation case study, the one-destination model yields substantial improvement over the conventional approach, with the overall evacuation time reduced by more than 60 percent. More importantly, emergency planners can easily implement this framework by instructing evacuees to go to destinations that the one-destination optimization process selects.

  9. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning

    NASA Astrophysics Data System (ADS)

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-08-01

ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution for the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using ℓ2,1-minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.
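For readers unfamiliar with the penalty being replaced: the ℓ2,1 norm groups the beamlet weights by candidate beam and sums the per-group ℓ2 norms, so penalizing it drives entire groups (i.e. whole beams) to zero, which is how the sparse formulation selects a few active angles. A minimal illustration (with made-up weights and groupings, not actual plan data):

```python
import math

def group_l21_norm(weights, groups):
    """ℓ2,1 norm: sum over groups of the ℓ2 norm of that group's entries.
    Here each group would index the beamlet weights of one candidate beam."""
    return sum(math.sqrt(sum(weights[i] ** 2 for i in g)) for g in groups)
```

The gFCP proposed in the paper keeps this group structure but replaces the ℓ2 norm's linear growth with a folded concave penalty, reducing the bias toward shrinking the beams that survive.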

  10. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning.

    PubMed

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-07-20

    ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) problem in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we reduce the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution for the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using the ℓ2,1-minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.

  11. Investigation of Low-Reynolds-Number Rocket Nozzle Design Using PNS-Based Optimization Procedure

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Moin; Korte, John J.

    1996-01-01

    An optimization approach to rocket nozzle design, based on computational fluid dynamics (CFD) methodology, is investigated for low-Reynolds-number cases. This study is undertaken to determine the benefits of this approach over those of classical design processes such as Rao's method. A CFD-based optimization procedure, using the parabolized Navier-Stokes (PNS) equations, is used to design conical and contoured axisymmetric nozzles. The advantage of this procedure is that it accounts for viscosity during the design process; other processes make an approximated boundary-layer correction after an inviscid design is created. Results showed significant improvement in the nozzle thrust coefficient over that of the baseline case; however, the unusual nozzle design necessitates further investigation of the accuracy of the PNS equations for modeling expanding flows with thick laminar boundary layers.

  12. Optimization of wind plant layouts using an adjoint approach

    DOE PAGES

    King, Ryan N.; Dykes, Katherine; Graf, Peter; ...

    2017-03-10

    Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS, and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.
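    The claim that adjoint gradients cost the same regardless of the number of control variables can be sketched on a toy linear model: if the state satisfies A u = B p and the objective is J = c·u, one adjoint solve Aᵀλ = c yields the entire gradient dJ/dp = Bᵀλ. A minimal 2×2 illustration (names and the linear model are hypothetical; this is not the FEniCS/dolfin-adjoint machinery itself):

    ```python
    def solve2(A, b):
        """Solve a 2x2 linear system by Cramer's rule."""
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
        x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
        return [x0, x1]

    def adjoint_gradient(A, B, c):
        """For the linear model A u = B p and objective J = c.u, the
        single adjoint solve A^T lam = c gives the full gradient
        dJ/dp = B^T lam -- its cost does not grow with len(p)."""
        At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
        lam = solve2(At, c)
        # dJ/dp_j = sum_i lam_i * B[i][j]
        return [lam[0] * B[0][j] + lam[1] * B[1][j]
                for j in range(len(B[0]))]
    ```

    Three design parameters here cost exactly one extra linear solve, the same as one parameter would; a finite-difference gradient would instead need one forward solve per parameter.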

  13. Optimization of wind plant layouts using an adjoint approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Ryan N.; Dykes, Katherine; Graf, Peter

    Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS, and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.

  14. Exact algorithms for haplotype assembly from whole-genome sequence data.

    PubMed

    Chen, Zhi-Zhong; Deng, Fei; Wang, Lusheng

    2013-08-15

    Haplotypes play a crucial role in genetic analysis and have many applications, such as gene-disease diagnosis, association studies, and ancestry inference. The development of DNA sequencing technologies makes it possible to obtain haplotypes from a set of aligned reads originating from both copies of a chromosome of a single individual. This approach is often known as haplotype assembly. Exact algorithms that can give optimal solutions to the haplotype assembly problem are in high demand. Unfortunately, previous algorithms for this problem either fail to output optimal solutions or take too long, even when executed on a PC cluster. We develop an approach to finding optimal solutions for the haplotype assembly problem under the minimum-error-correction (MEC) model. Most of the previous approaches assume that the columns in the input matrix correspond to (putative) heterozygous sites. This all-heterozygous assumption is correct for most columns, but it may be incorrect for a small number of columns. In this article, we consider the MEC model with and without the all-heterozygous assumption. In our approach, we first use new methods to decompose the input read matrix into small independent blocks and then model the problem for each block as an integer linear programming problem, which is then solved by an integer linear programming solver. We have tested our program on a single PC [a Linux (x64) desktop PC with an i7-3960X CPU], using the filtered HuRef and NA12878 datasets (after applying some variant-calling methods). With the all-heterozygous assumption, our approach can optimally solve the whole HuRef dataset within a total time of 31 h (26 h for the most difficult block of the 15th chromosome and only 5 h for the other blocks). To our knowledge, this is the first time that MEC optimal solutions have been completely obtained for the filtered HuRef dataset.
    Moreover, in the general case (without the all-heterozygous assumption), our approach can optimally solve all the chromosomes of the HuRef dataset except the most difficult block in chromosome 15 within a total time of 12 days. For both the HuRef and NA12878 datasets, the optimal costs in the general case are sometimes much smaller than those in the all-heterozygous case. This implies that some columns in the input matrix (after applying certain variant-calling methods) still correspond to false heterozygous sites. Our program and the optimal solutions found for the HuRef dataset are available at http://rnc.r.dendai.ac.jp/hapAssembly.html.
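    The MEC objective being optimized can be stated compactly: charge each read its mismatches against the nearer of the two complementary haplotypes, skipping uncovered sites. A brute-force sketch over all 2^m haplotypes (feasible only for tiny blocks; the paper solves each block with integer linear programming instead, and the read encoding here is an illustrative assumption):

    ```python
    from itertools import product

    def mec_cost(reads, h):
        """MEC cost of the haplotype pair (h, complement of h): each
        read (a string over '0', '1', '-') is charged its mismatches
        against the nearer haplotype; '-' marks uncovered sites."""
        comp = ''.join('1' if c == '0' else '0' for c in h)
        cost = 0
        for r in reads:
            m0 = sum(1 for a, b in zip(r, h) if a != '-' and a != b)
            m1 = sum(1 for a, b in zip(r, comp) if a != '-' and a != b)
            cost += min(m0, m1)
        return cost

    def mec_brute_force(reads):
        """Exhaustive minimum over all 2^m candidate haplotypes --
        exponential, so viable only for very small blocks."""
        m = len(reads[0])
        return min(mec_cost(reads, ''.join(bits))
                   for bits in product('01', repeat=m))
    ```

    Three mutually consistent reads cost 0; flipping one allele in one read raises the optimum to 1, the single correction needed.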

  15. Launch Vehicle Propulsion Design with Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.

    2005-01-01

    The approach and techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors, and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters (chamber pressure, area ratio, and oxidizer/fuel ratio) is optimized in the seven design cases, and the results are plotted to show impacts on engine mass and overall vehicle mass.

  16. An intelligent case-adjustment algorithm for the automated design of population-based quality auditing protocols.

    PubMed

    Advani, Aneel; Jones, Neil; Shahar, Yuval; Goldstein, Mary K; Musen, Mark A

    2004-01-01

    We develop a method and algorithm for deciding the optimal approach to creating quality-auditing protocols for guideline-based clinical performance measures. An important element of the audit protocol design problem is deciding which guideline elements to audit. Specifically, the problem is how and when to aggregate individual patient case-specific guideline elements into population-based quality measures. The key statistical issue involved is the trade-off between the increased reliability of more general population-based quality measures and the increased validity of individually case-adjusted but more restricted measures obtained at a greater audit cost. Our intelligent algorithm for auditing protocol design is based on hierarchically modeling incrementally case-adjusted quality constraints. We select quality constraints to measure using an optimization criterion based on statistical generalizability coefficients. We present results of the approach from a deployed decision support system for a hypertension guideline.

  17. Optimizing the function of upstanding activities in adult patients with acquired lesions of the central nervous system by using the Bobath concept approach - A case report.

    PubMed

    Jelica, Stjepan; Seper, Vesna; Davidović, Erna; Bujisić, Gordana

    2011-01-01

    Nonspecific medical gymnastic therapy may help patients after stroke achieve certain results in terms of efficiency but not in terms of quality of movement. The goal of treatment by the Bobath concept is the development of movement (effectiveness) and the optimization of movement (efficiency). This article presents the case of a 62-year-old patient who had experienced a stroke and had difficulties with standing-up activities. It underscores the importance of not only the recovery of function but also the optimization of that function in such patients.

  18. Complexity Science Applications to Dynamic Trajectory Management: Research Strategies

    NASA Technical Reports Server (NTRS)

    Sawhill, Bruce; Herriot, James; Holmes, Bruce J.; Alexandrov, Natalia

    2009-01-01

    The promise of the Next Generation Air Transportation System (NextGen) is strongly tied to the concept of trajectory-based operations in the national airspace system. Existing efforts to develop trajectory management concepts are largely focused on individual trajectories, optimized independently, de-conflicted with one another, and then individually re-optimized where possible. The benefits in capacity, fuel, and time are valuable, though they could perhaps be greater under alternative strategies. The concept of agent-based trajectories offers a strategy for the automation of simultaneous multiple trajectory management. The anticipated result of the strategy would be dynamic management of multiple trajectories with interacting and interdependent outcomes that satisfy multiple, conflicting constraints. These constraints include the business case for operators, the capacity case for the Air Navigation Service Provider (ANSP), and the environmental case for noise and emissions. The benefits in capacity, fuel, and time might be improved over those possible under individual trajectory management approaches. The proposed approach relies on computational agent-based modeling (ABM), combinatorial mathematics, the application of "traffic physics" concepts to the challenge, and modeling and simulation capabilities. The proposed strategy could support transforming air traffic control from managing individual aircraft behaviors to managing the systemic behavior of air traffic in the NAS. A system built on this approach could provide the ability to know in advance when regions of airspace approach being "full," that is, when there is no viable local solution space for optimizing trajectories.

  19. Optimization of composite box-beam structures including effects of subcomponent interactions

    NASA Technical Reports Server (NTRS)

    Ragon, Scott A.; Guerdal, Zafer; Starnes, James H., Jr.

    1995-01-01

    Minimum mass designs are obtained for a simple box beam structure subject to bending, torque, and combined bending/torque load cases. These designs are obtained subject to point strain and linear buckling constraints. The present work differs from previous efforts in that special attention is paid to including the effects of subcomponent panel interaction in the optimal design process. Two different approaches are used to impose the buckling constraints. When the global approach is used, buckling constraints are imposed on the global structure via a linear eigenvalue analysis. This approach allows the subcomponent panels to interact in a realistic manner. The results obtained using this approach are compared to results obtained using a traditional, less expensive approach, called the local approach. When the local approach is used, in-plane loads are extracted from the global model and used to impose buckling constraints on each subcomponent panel individually. In the global cases, it is found that there can be significant interaction between skin, spar, and rib design variables. This coupling is weak or nonexistent in the local designs. It is determined that weight savings of up to 7% may be obtained by using the global approach instead of the local approach to design these structures. Several of the designs obtained using the linear buckling analysis are subjected to a geometrically nonlinear analysis. For the designs subjected to bending loads, the innermost rib panel begins to collapse at less than half the intended design load and in a mode different from that predicted by the linear analysis. The discrepancy between the predicted linear and nonlinear responses is attributed to the effects of the nonlinear rib crushing load, and the parameter which controls this rib collapse failure mode is shown to be the rib thickness.
The rib collapse failure mode may be avoided by increasing the rib thickness above the value obtained from the (linear analysis based) optimizer. It is concluded that it would be necessary to include geometric nonlinearities in the design optimization process if the true optimum in this case were to be found.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Chiara, Gabriele; Fazio, Rosario; Montangero, Simone

    In this paper we present an approach to quantum cloning with unmodulated spin networks. The cloner is realized by a proper design of the network and a choice of the coupling between the qubits. We show that in the case of the phase-covariant cloner the XY coupling gives the best results. In 1→2 cloning we find that the fidelity of the optimal cloner is achieved, and values comparable to the optimal ones can be attained in the general N→M case. If a suitable set of network symmetries is satisfied, the output fidelity of the clones does not depend on the specific choice of the graph. We show that spin network cloning is robust against the presence of static imperfections. Moreover, in the presence of noise, it outperforms the conventional approach. In this case the fidelity exceeds the corresponding value obtained by quantum gates even for a very small amount of noise. Furthermore, we show how to use this method to clone qutrits and qudits. By means of the Heisenberg coupling it is also possible to implement the universal cloner, although in this case the fidelity is 10% off that of the optimal cloner.

  1. Microgravity vibration isolation: An optimal control law for the one-dimensional case

    NASA Technical Reports Server (NTRS)

    Hampton, Richard D.; Grodsinsky, Carlos M.; Allaire, Paul E.; Lewis, David W.; Knospe, Carl R.

    1991-01-01

    Certain experiments contemplated for space platforms must be isolated from the accelerations of the platform. An optimal active control is developed for microgravity vibration isolation, using constant state feedback gains (identical to those obtained from the Linear Quadratic Regulator (LQR) approach) along with constant feedforward gains. The quadratic cost function for this control algorithm effectively weights external accelerations of the platform disturbances by a factor proportional to (1/ω)^4. Low-frequency accelerations are attenuated by more than two orders of magnitude. The control relies on absolute position and velocity feedback of the experiment and absolute position and velocity feedforward of the platform, and generally retains the stability robustness characteristics guaranteed by the LQR approach to optimality. The method as derived is extendable to the case in which only the relative positions and velocities and the absolute accelerations of the experiment and space platform are available.

  2. Microgravity vibration isolation: An optimal control law for the one-dimensional case

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Grodsinsky, C. M.; Allaire, P. E.; Lewis, D. W.; Knospe, C. R.

    1991-01-01

    Certain experiments contemplated for space platforms must be isolated from the accelerations of the platforms. An optimal active control is developed for microgravity vibration isolation, using constant state feedback gains (identical to those obtained from the Linear Quadratic Regulator (LQR) approach) along with constant feedforward (preview) gains. The quadratic cost function for this control algorithm effectively weights external accelerations of the platform disturbances by a factor proportional to (1/ω)^4. Low-frequency accelerations (less than 50 Hz) are attenuated by more than two orders of magnitude. The control relies on absolute position and velocity feedback of the experiment and absolute position and velocity feedforward of the platform, and generally retains the stability robustness characteristics guaranteed by the LQR approach to optimality. The method as derived is extendable to the case in which only the relative positions and velocities and the absolute accelerations of the experiment and space platform are available.
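    For the one-dimensional case named in the title, the LQR state-feedback gain has a closed form: for the scalar plant ẋ = ax + bu with cost ∫(qx² + ru²) dt, the algebraic Riccati equation reduces to a scalar quadratic. A minimal sketch of only the feedback part (the controllers in these abstracts additionally use feedforward of platform motion, which is not shown here):

    ```python
    import math

    def lqr_gain_1d(a, b, q, r):
        """Closed-form scalar LQR: solve the algebraic Riccati equation
        2*a*p - (b*p)**2/r + q = 0 for the positive root p, then the
        optimal state-feedback gain is k = b*p/r, with u = -k*x."""
        p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
        return b * p / r

    # The closed loop xdot = (a - b*k)*x is always stable, since
    # a - b*k = -sqrt(a**2 + q*b**2/r) < 0.
    ```

    For a = 0, b = q = r = 1 the gain is exactly 1; for an unstable plant (a > 0) the gain grows just enough to place the closed-loop pole at -sqrt(a² + qb²/r).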

  3. Rapid Preliminary Design of Interplanetary Trajectories Using the Evolutionary Mission Trajectory Generator

    NASA Technical Reports Server (NTRS)

    Englander, Jacob

    2016-01-01

    Preliminary design of interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys, the bodies at which those flybys are performed, and in some cases the final destination. In addition, a time-history of control variables must be chosen that defines the trajectory. There are often many thousands, if not millions, of possible trajectories to be evaluated. This can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the mission design problem as a hybrid optimal control problem. The method is demonstrated on notional high-thrust chemical and low-thrust electric propulsion missions. In the low-thrust case, the hybrid optimal control problem is augmented to include systems design optimization.

  4. An efficient hybrid approach for multiobjective optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2014-05-01

    An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.
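    The two-objective comparison behind these Pareto fronts rests on a simple dominance test: a design is kept only if no other design is at least as good in both objectives and strictly better in at least one. A minimal front-extraction sketch with both objectives minimized, e.g. network cost and negated resilience (names are illustrative; SAMODE itself is a full self-adaptive evolutionary algorithm, not shown):

    ```python
    def pareto_front(points):
        """Return the non-dominated points among 2-tuples of
        minimization objectives. A point p is dominated if some other
        point q is <= p in both coordinates and differs from p."""
        return [p for p in points
                if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                           for q in points)]
    ```

    This quadratic-time filter is fine for illustration; production NSGA-II-style implementations use fast non-dominated sorting to rank whole populations.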

  5. A graph decomposition-based approach for water distribution network optimization

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.; Deuerlein, Jochen W.

    2013-04-01

    A novel optimization approach for water distribution network design is proposed in this paper. Using graph theory algorithms, a full water network is first decomposed into different subnetworks based on the connectivity of the network's components. The original whole network is simplified to a directed augmented tree, in which the subnetworks are substituted by augmented nodes and directed links are created to connect them. Differential evolution (DE) is then employed to optimize each subnetwork based on the sequence specified by the assigned directed links in the augmented tree. Rather than optimizing the original network as a whole, the subnetworks are sequentially optimized by the DE algorithm. A solution choice table is established for each subnetwork (except for the subnetwork that includes a supply node) and the optimal solution of the original whole network is finally obtained by use of the solution choice tables. Furthermore, a preconditioning algorithm is applied to the subnetworks to produce an approximately optimal solution for the original whole network. This solution specifies promising regions for the final optimization algorithm to further optimize the subnetworks. Five water network case studies are used to demonstrate the effectiveness of the proposed optimization method. A standard DE algorithm (SDE) and a genetic algorithm (GA) are applied to each case study without network decomposition to enable a comparison with the proposed method. The results show that the proposed method consistently outperforms the SDE and GA (both with tuned parameters) in terms of both the solution quality and efficiency.
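    The differential evolution used to optimize each subnetwork follows the classic DE/rand/1/bin pattern: mutate with a scaled difference of two population members, apply binomial crossover, and keep the trial only if it is no worse. A generic sketch on a toy objective (parameter values are common defaults chosen for illustration, not the paper's tuned settings):

    ```python
    import random

    def differential_evolution(f, bounds, pop_size=20, F=0.6, CR=0.9,
                               generations=200, seed=1):
        """Minimal DE/rand/1/bin: for each target vector, build a trial
        from three distinct others, cross over per coordinate, clip to
        bounds, and select greedily."""
        rng = random.Random(seed)
        dim = len(bounds)
        pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
               for _ in range(pop_size)]
        cost = [f(x) for x in pop]
        for _ in range(generations):
            for i in range(pop_size):
                a, b, c = rng.sample(
                    [j for j in range(pop_size) if j != i], 3)
                jrand = rng.randrange(dim)  # force one mutated coord
                trial = []
                for j in range(dim):
                    if rng.random() < CR or j == jrand:
                        v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    else:
                        v = pop[i][j]
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))
                fc = f(trial)
                if fc <= cost[i]:
                    pop[i], cost[i] = trial, fc
        best = min(range(pop_size), key=cost.__getitem__)
        return pop[best], cost[best]
    ```

    On a 2-D sphere function the population collapses to the optimum within a few hundred generations; real WDS objectives would instead evaluate pipe costs under hydraulic constraints.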

  6. Vector-model-supported optimization in volumetric-modulated arc stereotactic radiotherapy planning for brain metastasis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung

    Long planning times in volumetric-modulated arc stereotactic radiotherapy (VMA-SRT) cases can limit its clinical efficiency and use. A vector model could retrieve previously successful radiotherapy cases that share various common anatomic features with the current case. The present study aimed to develop a vector model that could reduce planning time by applying the optimization parameters from those retrieved reference cases. Thirty-six VMA-SRT cases of brain metastasis (23 male, 13 female; age range, 32 to 81 years) were collected and used as a reference database. Another 10 VMA-SRT cases were planned with both conventional optimization and vector-model-supported optimization, following the oncologists' clinical dose prescriptions. Planning time and plan quality measures were compared using the 2-sided paired Wilcoxon signed rank test with a significance level of 0.05 and a positive false discovery rate (pFDR) of less than 0.05. With vector-model-supported optimization, there was a significant reduction in the median planning time, a 40% reduction from 3.7 to 2.2 hours (p = 0.002, pFDR = 0.032), and in the number of iterations, a 30% reduction from 8.5 to 6.0 (p = 0.006, pFDR = 0.047). The quality of plans from both approaches was comparable. From these preliminary results, vector-model-supported optimization can expedite the optimization of VMA-SRT for brain metastasis while maintaining plan quality.

  7. Incorporating deliverable monitor unit constraints into spot intensity optimization in intensity modulated proton therapy treatment planning

    PubMed Central

    Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong

    2014-01-01

    The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. 
The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656

  8. A Scalable and Robust Multi-Agent Approach to Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan

    2005-01-01

    Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach on the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude compared with both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents failing midway through the simulation) the system remains coordinated and still outperforms a failure-free and centralized optimization algorithm.
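    The subset-selection benchmark can be stated in a few lines: given device distortions, choose the nonempty subset whose average distortion is closest to zero. An exhaustive baseline sketch (exponential in n, so only viable for tiny instances; the abstract's point is precisely that agent-based methods scale where enumeration cannot):

    ```python
    from itertools import combinations

    def best_subset(distortions):
        """Exhaustive baseline for imperfect device subset selection:
        return the indices of the nonempty subset whose mean distortion
        is closest to zero, along with that residual error."""
        n = len(distortions)
        best, best_err = None, float("inf")
        for k in range(1, n + 1):
            for idx in combinations(range(n), k):
                mean = sum(distortions[i] for i in idx) / k
                if abs(mean) < best_err:
                    best, best_err = idx, abs(mean)
        return best, best_err
    ```

    With distortions [0.5, -0.5, 0.3], the two canceling devices form the optimal subset with zero residual distortion.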

  9. Two-Step Optimization for Spatial Accessibility Improvement: A Case Study of Health Care Planning in Rural China

    PubMed Central

    Luo, Jing; Tian, Lingling; Luo, Lei; Yi, Hong

    2017-01-01

    A recent advancement in location-allocation modeling formulates a two-step approach to a new problem of minimizing disparity of spatial accessibility. Our field work in a health care planning project in a rural county in China indicated that residents valued distance or travel time from the nearest hospital foremost and then considered quality of care including less waiting time as a secondary desirability. Based on the case study, this paper further clarifies the sequential decision-making approach, termed “two-step optimization for spatial accessibility improvement (2SO4SAI).” The first step is to find the best locations to site new facilities by emphasizing accessibility as proximity to the nearest facilities with several alternative objectives under consideration. The second step adjusts the capacities of facilities for minimal inequality in accessibility, where the measure of accessibility accounts for the match ratio of supply and demand and complex spatial interaction between them. The case study illustrates how the two-step optimization method improves both aspects of spatial accessibility for health care access in rural China. PMID:28484707


  11. Real-time energy-saving metro train rescheduling with primary delay identification

    PubMed Central

    Li, Keping; Schonfeld, Paul

    2018-01-01

    This paper aims to reschedule metro trains online in delay scenarios. A graph representation and a mixed integer programming model are proposed to formulate the optimization problem. The solution approach is a two-stage optimization method. In the first stage, based on a proposed train state graph and system analysis, the primary and flow-on delays are specifically analyzed and identified with a critical path algorithm. In the second stage, a hybrid genetic algorithm (Hybrid-GA) is designed to optimize the schedule, with the delay identification results as input. Then, based on the infrastructure data of Beijing Subway Line 4 in China, case studies are presented to demonstrate the effectiveness and efficiency of the solution approach. The results show that the algorithm can quickly and accurately identify primary delays among different types of delays. The economic cost of energy consumption and total delay is considerably reduced (by more than 10% in each case). The computation time of the Hybrid-GA is low enough for online rescheduling. Sensitivity analyses further demonstrate that the proposed approach can be used as a decision-making support tool for operators. PMID:29474471
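
    The first-stage idea of isolating the dominant delay chain via a critical path can be sketched on a toy event graph (node names and durations here are invented; the paper's train state graph is far richer):

```python
import functools

def critical_path(graph, duration):
    """Longest-duration path through a DAG (sketch of critical-path
    delay identification).  graph maps node -> list of successors."""
    @functools.lru_cache(maxsize=None)
    def longest_from(node):
        succs = graph.get(node, [])
        if not succs:
            return duration[node], (node,)
        t, path = max(longest_from(s) for s in succs)
        return duration[node] + t, (node,) + path
    return max(longest_from(n) for n in graph)

# Toy event graph: the 7-unit path a -> c -> d dominates.
total, path = critical_path({'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []},
                            {'a': 1, 'b': 2, 'c': 5, 'd': 1})
```

    Events on the critical path are the ones whose delays propagate; identifying them is what feeds the second-stage rescheduling.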

  12. Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach

    NASA Astrophysics Data System (ADS)

    Pinto, Rafael S.; Saa, Alberto

    2015-12-01

    A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies that favor or suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic form ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically and in a simpler way from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
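
    The maximization condition is easy to state in code; a minimal sketch with a pure-Python Laplacian and a naive hill-climb rewiring loop (iteration count and seed are invented defaults):

```python
import random

def laplacian(n, edges):
    """Graph Laplacian L = D - A as nested lists."""
    L = [[0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1; L[j][j] += 1
        L[i][j] -= 1; L[j][i] -= 1
    return L

def quad_form(w, L):
    """The objective w^T L w; for a Laplacian this equals the sum of
    (w_i - w_j)^2 over all edges (i, j)."""
    n = len(w)
    return sum(w[i] * L[i][j] * w[j] for i in range(n) for j in range(n))

def hill_climb(n, edges, w, iters=200, seed=0):
    """Naive rewiring: replace a random edge with a random non-edge and
    keep the change whenever w^T L w increases."""
    rng = random.Random(seed)
    edges = list(edges)
    best = quad_form(w, laplacian(n, edges))
    for _ in range(iters):
        k = rng.randrange(len(edges))
        u, v = rng.sample(range(n), 2)
        if (u, v) in edges or (v, u) in edges:
            continue
        trial = edges[:k] + edges[k + 1:] + [(u, v)]
        val = quad_form(w, laplacian(n, trial))
        if val > best:
            edges, best = trial, val
    return edges, best
```

    Maximizing ω^T L ω pushes edges toward oscillator pairs with large frequency differences, which is the analytic content of the rewiring heuristic.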

  13. Trajectory optimization and guidance law development for national aerospace plane applications

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1988-01-01

    The work completed to date comprises the following: a simple vehicle model representative of the aerospace plane concept in the hypersonic flight regime; fuel-optimal climb profiles for the unconstrained and dynamic-pressure-constrained cases, generated using a reduced-order dynamic model; an analytic switching condition for transition to rocket-powered flight as orbital velocity is approached; simple feedback guidance laws for both the unconstrained and dynamic-pressure-constrained cases, derived via singular perturbation theory and a nonlinear transformation technique; and numerical simulation results for ascent to orbit in the dynamic-pressure-constrained case.

  14. Optimal Elevation and Configuration of Hanford's Double-Shell Tank Waste Mixer Pumps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onishi, Yasuo; Yokuda, Satoru T.; Majumder, Catherine A.

    The objective of this study was to compare the mixing performance of the Lawrence pump, which has injection nozzles at the top, with an alternative pump that has injection nozzles at the bottom, and to determine the optimal elevation for the alternative pump. Sixteen cases were evaluated: two sludge thicknesses at eight levels. A two-step evaluation approach was used: Step 1 to evaluate all 16 cases with the non-rotating mixer pump model and Step 2 to further evaluate four of those cases with the more realistic rotating mixer pump model. The TEMPEST code was used.

  15. A novel approach to identify optimal metabotypes of elongase and desaturase activities in prevention of acute coronary syndrome

    USDA-ARS's Scientific Manuscript database

    Both metabolomic and genomic approaches are valuable for risk analysis; however, typical approaches that evaluate differences in means do not model the changes well. Gene polymorphisms that alter function would appear as distinct populations, or metabotypes, from the predominant one, in which case risk...

  16. Analytic hierarchy process-based approach for selecting a Pareto-optimal solution of a multi-objective, multi-site supply-chain planning problem

    NASA Astrophysics Data System (ADS)

    Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi

    2017-07-01

    The current manufacturing environment has changed from the traditional single plant to a multi-site supply chain in which multiple plants serve customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer demand satisfaction level is developed. The proposed solution approach yields a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and the proposed approach are discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
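
    The analytic hierarchy process step can be sketched with the row geometric-mean approximation of the priority vector (a common stand-in for the principal eigenvector on small comparison matrices); the matrix values below are invented:

```python
import math

def ahp_weights(M):
    """Priority weights from a pairwise-comparison matrix via the row
    geometric-mean method (approximates the principal eigenvector)."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    s = sum(gm)
    return [g / s for g in gm]

def utility(solution, weights):
    """Weighted score of one Pareto-optimal solution's criterion values;
    the decision maker picks the front member with the highest score."""
    return sum(w * v for w, v in zip(weights, solution))
```

    With a 2x2 matrix stating that cost matters 3 times as much as quality, the weights come out 0.75/0.25, and ranking the Pareto front by `utility` selects the preferred compromise.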

  17. Optimal phase estimation with arbitrary a priori knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demkowicz-Dobrzanski, Rafal

    2011-06-15

    The optimal phase-estimation strategy is derived when partial a priori knowledge of the estimated phase is available. The solution is found with the help of the most famous result from entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject, which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.

  18. Spatial and temporal optimization in habitat placement for a threatened plant: the case of the western prairie fringed orchid

    Treesearch

    John Hof; Carolyn Hull Sieg; Michael Bevers

    1999-01-01

    This paper investigates an optimization approach to determining the placement and timing of habitat protection for the western prairie fringed orchid. This plant’s population dynamics are complex, creating a challenging optimization problem. The sensitivity of the orchid to random climate conditions is handled probabilistically. The plant’s seed, protocorm and above-...

  19. Optimal management of substrates in anaerobic co-digestion: An ant colony algorithm approach.

    PubMed

    Verdaguer, Marta; Molinos-Senante, María; Poch, Manel

    2016-04-01

    Sewage sludge (SWS) is inevitably produced in urban wastewater treatment plants (WWTPs). The treatment of SWS on site at small WWTPs is not economical; therefore, the SWS is typically transported to an alternative SWS treatment center. There is increased interest in the use of anaerobic digestion (AnD) with co-digestion as an SWS treatment alternative. Although the availability of different co-substrates has been ignored in most of the previous studies, it is an essential issue for the optimization of AnD co-digestion. In a pioneering approach, this paper applies an Ant-Colony-Optimization (ACO) algorithm that maximizes the generation of biogas through AnD co-digestion in order to optimize the discharge of organic waste from different waste sources in real-time. An empirical application is developed based on a virtual case study that involves organic waste from urban WWTPs and agrifood activities. The results illustrate the dominant role of toxicity levels in selecting contributions to the AnD input. The methodology and case study proposed in this paper demonstrate the usefulness of the ACO approach in supporting a decision process that contributes to improving the sustainability of organic waste and SWS management. Copyright © 2016 Elsevier Ltd. All rights reserved.
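
    A toy ant-colony sketch of the substrate-selection idea (the yields, toxicity scores, and cap below are invented, and the construction rule is greatly simplified; the pheromone-reinforcement loop is the core of any ACO):

```python
import random

def aco_select(gas_yield, toxicity, tox_limit, n_ants=30, n_iter=40, rho=0.1, seed=1):
    """Each ant builds a substrate subset under a toxicity cap, visiting
    substrates in a pheromone-biased random order; pheromone evaporates
    and is reinforced along the best subset found so far."""
    rng = random.Random(seed)
    n = len(gas_yield)
    tau = [1.0] * n                      # pheromone per substrate
    best, best_gas = [], -1.0
    for _ in range(n_iter):
        for _ in range(n_ants):
            order = sorted(range(n), key=lambda i: -tau[i] * rng.random())
            chosen, tox = [], 0.0
            for i in order:
                if tox + toxicity[i] <= tox_limit:
                    chosen.append(i)
                    tox += toxicity[i]
            gas = sum(gas_yield[i] for i in chosen)
            if gas > best_gas:
                best, best_gas = chosen, gas
        tau = [(1 - rho) * t for t in tau]   # evaporation
        for i in best:                        # reinforcement
            tau[i] += best_gas
    return sorted(best), best_gas
```

    The toxicity cap dominating which substrates can contribute mirrors the paper's finding about toxicity levels governing the AnD input.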

  20. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares QR decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
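
    To illustrate only the parameter-selection idea (not LSQR itself), consider a diagonal toy system where the Tikhonov solution has a closed form and the ground truth, as in the paper's phantom test case, is exactly known:

```python
def tikhonov_diag(a, b, lam):
    """Tikhonov solution for a diagonal system A x = b:
    x_i = a_i * b_i / (a_i^2 + lam).  Purely illustrative."""
    return [ai * bi / (ai * ai + lam) for ai, bi in zip(a, b)]

def pick_lambda(a, b, x_true, grid):
    """Choose the regularization parameter on a grid by minimizing the
    squared error against a known ground truth."""
    def err(lam):
        x = tikhonov_diag(a, b, lam)
        return sum((xi - ti) ** 2 for xi, ti in zip(x, x_true))
    return min(grid, key=err)
```

    In the noise-free case the best parameter is zero; with noisy data the same grid search selects a strictly positive value, which is the trade-off the LSQR-based method computes efficiently for full-size systems.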

  1. A condition-specific codon optimization approach for improved heterologous gene expression in Saccharomyces cerevisiae

    PubMed Central

    2014-01-01

    Background Heterologous gene expression is an important tool for synthetic biology that enables metabolic engineering and the production of non-natural biologics in a variety of host organisms. The translational efficiency of heterologous genes can often be improved by optimizing synonymous codon usage to better match the host organism. However, traditional approaches for optimization neglect to take into account many factors known to influence synonymous codon distributions. Results Here we define an alternative approach for codon optimization that utilizes systems level information and codon context for the condition under which heterologous genes are being expressed. Furthermore, we utilize a probabilistic algorithm to generate multiple variants of a given gene. We demonstrate improved translational efficiency using this condition-specific codon optimization approach with two heterologous genes, the fluorescent protein-encoding eGFP and the catechol 1,2-dioxygenase gene CatA, expressed in S. cerevisiae. For the latter case, optimization for stationary phase production resulted in nearly 2.9-fold improvements over commercial gene optimization algorithms. Conclusions Codon optimization is now often a standard tool for protein expression, and while a variety of tools and approaches have been developed, they do not guarantee improved performance for all hosts or applications. Here, we suggest an alternative method for condition-specific codon optimization and demonstrate its utility in Saccharomyces cerevisiae as a proof of concept. However, this technique should be applicable to any organism for which gene expression data can be generated and is thus of potential interest for a variety of applications in metabolic and cellular engineering. PMID:24636000
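
    The probabilistic variant-generation step can be sketched as weighted sampling over synonymous codons; the weight table below is a hypothetical stand-in for condition-specific usage frequencies derived from expression data:

```python
import random

def sample_codons(protein, codon_weights, seed=None):
    """Probabilistic back-translation: draw a synonymous codon for each
    amino acid with probability proportional to its usage weight, so
    repeated calls yield distinct variants of the same gene."""
    rng = random.Random(seed)
    seq = []
    for aa in protein:
        codons, weights = zip(*codon_weights[aa].items())
        seq.append(rng.choices(codons, weights=weights)[0])
    return "".join(seq)

# Hypothetical condition-specific weights for two amino acids.
table = {'M': {'ATG': 1.0}, 'K': {'AAA': 0.8, 'AAG': 0.2}}
```

    Because the draw is stochastic, calling `sample_codons` repeatedly yields the multiple gene variants the method exploits.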

  2. A heuristic approach to optimization of structural topology including self-weight

    NASA Astrophysics Data System (ADS)

    Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2018-01-01

    Topology optimization of structures under a design-dependent self-weight load is investigated in this paper. The problem deserves attention because of its significant importance in engineering practice, especially nowadays as topology optimization is more often applied when designing large engineering structures, for example, bridges or the carrying systems of tall buildings. It is worth noting that well-known approaches to topology optimization that have been successfully applied to structures under fixed loads cannot be directly adapted to the case of design-dependent loads, so topology generation can be a challenge even for numerical algorithms. The paper presents the application of a simple but efficient non-gradient method to topology optimization of elastic structures under self-weight loading. The algorithm is based on the Cellular Automata concept, the application of which can produce effective solutions at low computational cost.

  3. Major thoracic surgery in Jehovah's witness: A multidisciplinary approach case report

    PubMed Central

    Rispoli, Marco; Bergaminelli, Carlo; Nespoli, Moana Rossella; Esposito, Mariana; Mattiacci, Dario Maria; Corcione, Antonio; Buono, Salvatore

    2016-01-01

    Introduction A bloodless surgery can be desirable also for non-Jehovah's-witness patients, but requires a team approach from the very first assessment to ensure adequate planning. Presentation of the case Our patient, a Jehovah's witness, was scheduled for right lower lobectomy due to pulmonary adenocarcinoma. Her firm refusal to receive any kind of transfusion forced clinicians to a bloodless management of the case. Discussion Before surgery, meticulous coagulopathy screening and hemodynamic optimization are useful to prepare the patient for the operation. During surgery, controlled hypotension can help to obtain effective hemostasis. After surgery, clinicians monitored any possible active bleeding using continuous noninvasive hemoglobin monitoring, limiting the blood loss due to serial in vitro testing. The optimization of cardiac index and oxygen delivery was continued to grant a fast recovery. Conclusion Bloodless surgery is likely to gain popularity and become standard practice for all patients. The need for transfusion should be targeted on the individual case, avoiding a strictly fixed limit that often leads to unnecessary transfusion. PMID:27107502

  4. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to the balance between total supply and total demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem at any size of decision variable. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution obtained by PSO.
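
    For reference, the northwest-corner rule mentioned above is simple to state; a sketch for a balanced instance:

```python
def northwest_corner(supply, demand):
    """Northwest-corner starting solution for a balanced transportation
    problem: allocate greedily from the top-left cell, moving right when
    a supply row is exhausted is not yet the case, down when it is."""
    supply, demand = list(supply), list(demand)
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1
        else:
            j += 1
    return alloc
```

    Vogel's and Russell's approximation methods refine this idea by taking unit costs into account when choosing the next cell; heuristics like PSOGA search the same feasible region globally.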

  5. Robust Constrained Optimization Approach to Control Design for International Space Station Centrifuge Rotor Auto Balancing Control System

    NASA Technical Reports Server (NTRS)

    Postma, Barry Dirk

    2005-01-01

    This thesis discusses the application of a robust constrained optimization approach to control design to develop an Auto Balancing Controller (ABC) for a centrifuge rotor to be implemented on the International Space Station. The design goal is to minimize a performance objective of the system while guaranteeing stability and proper performance for a range of uncertain plants. The performance objective is to minimize the translational response of the centrifuge rotor due to a fixed worst-case rotor imbalance. The robustness constraints are posed with respect to parametric uncertainty in the plant. The proposed approach to control design allows both of these objectives to be handled within the framework of constrained optimization. The resulting controller achieves acceptable performance and robustness characteristics.

  6. Comparison of anatomy-based, fluence-based and aperture-based treatment planning approaches for VMAT

    NASA Astrophysics Data System (ADS)

    Rao, Min; Cao, Daliang; Chen, Fan; Ye, Jinsong; Mehta, Vivek; Wong, Tony; Shepard, David

    2010-11-01

    Volumetric modulated arc therapy (VMAT) has the potential to reduce treatment times while producing comparable or improved dose distributions relative to fixed-field intensity-modulated radiation therapy. In order to take full advantage of the VMAT delivery technique, one must select a robust inverse planning tool. The purpose of this study was to evaluate the effectiveness and efficiency of VMAT planning techniques of three categories: anatomy-based, fluence-based and aperture-based inverse planning. We have compared these techniques in terms of the plan quality, planning efficiency and delivery efficiency. Fourteen patients were selected for this study including six head-and-neck (HN) cases, and two cases each of prostate, pancreas, lung and partial brain. For each case, three VMAT plans were created. The first VMAT plan was generated based on the anatomical geometry. In the Elekta ERGO++ treatment planning system (TPS), segments were generated based on the beam's eye view (BEV) of the target and the organs at risk. The segment shapes were then exported to Pinnacle3 TPS followed by segment weight optimization and final dose calculation. The second VMAT plan was generated by converting optimized fluence maps (calculated by the Pinnacle3 TPS) into deliverable arcs using an in-house arc sequencer. The third VMAT plan was generated using the Pinnacle3 SmartArc IMRT module which is an aperture-based optimization method. All VMAT plans were delivered using an Elekta Synergy linear accelerator and the plan comparisons were made in terms of plan quality and delivery efficiency. The results show that for cases of little or modest complexity such as prostate, pancreas, lung and brain, the anatomy-based approach provides similar target coverage and critical structure sparing, but less conformal dose distributions as compared to the other two approaches. 
For more complex HN cases, the anatomy-based approach is not able to provide clinically acceptable VMAT plans, while highly conformal dose distributions were obtained using both aperture-based and fluence-based inverse planning techniques. The aperture-based approach provides better dose conformity than the fluence-based technique in complex cases.

  7. Optimal estimation and scheduling in aquifer management using the rapid feedback control method

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric

    2017-12-01

    Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.

  8. Parameter Optimization for Feature and Hit Generation in a General Unknown Screening Method-Proof of Concept Study Using a Design of Experiment Approach for a High Resolution Mass Spectrometry Procedure after Data Independent Acquisition.

    PubMed

    Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas

    2018-03-06

    High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because often, the untargeted acquisition is followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated (crucial or noncrucial). Second, crucial parameters are optimized. The aim of this study was to reduce the number of hits without missing analytes. The obtained parameter settings from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of workload (12.3 to 1.1% and 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (68.2 to 86.4% and 68.8 to 88.1%, respectively). This proof of concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.

  9. Automated geometric optimization for robotic HIFU treatment of liver tumors.

    PubMed

    Williamson, Tom; Everitt, Scott; Chauhan, Sunita

    2018-05-01

    High intensity focused ultrasound (HIFU) represents a non-invasive method for the destruction of cancerous tissue within the body. Heating of targeted tissue by focused ultrasound transducers results in the creation of ellipsoidal lesions at the target site, the locations of which can have a significant impact on treatment outcomes. Towards this end, this work describes a method for the optimization of lesion positions within arbitrary tumors, with specific anatomical constraints. A force-based optimization framework was extended to the case of arbitrary tumor position and constrained orientation. Analysis of the approximate reachable treatment volume for the specific case of treatment of liver tumors was performed based on four transducer configurations and constraint conditions derived. Evaluation was completed utilizing simplified spherical and ellipsoidal tumor models and randomly generated tumor volumes. The total volume treated, lesion overlap and healthy tissue ablated was evaluated. Two evaluation scenarios were defined and optimized treatment plans assessed. The optimization framework resulted in improvements of up to 10% in tumor volume treated, and reductions of up to 20% in healthy tissue ablated as compared to the standard lesion rastering approach. Generation of optimized plans proved feasible for both sub- and intercostally located tumors. This work describes an optimized method for the planning of lesion positions during HIFU treatment of liver tumors. The approach allows the determination of optimal lesion locations and orientations, and can be applied to arbitrary tumor shapes and sizes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Wang, J

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set containing many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
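
    The selection task can be sketched in two steps: filter the non-dominated (sensitivity, specificity) pairs, then rank them by a utility. The weighted-sum utility below is a simplified stand-in for the evidential-reasoning utility actually used by SMOLER:

```python
def pareto_front(solutions):
    """Keep (sensitivity, specificity) pairs not dominated by any other
    pair in the set."""
    return [s for s in solutions
            if not any(o[0] >= s[0] and o[1] >= s[1] and o != s
                       for o in solutions)]

def select_optimal(solutions, weights=(0.5, 0.5)):
    """Rank the Pareto front by a weighted utility and return the best;
    the weights encode the pre-set selection rules."""
    front = pareto_front(solutions)
    return max(front, key=lambda s: weights[0] * s[0] + weights[1] * s[1])
```

    With weights favoring sensitivity, a high-sensitivity front member wins even when another member has better specificity.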

  11. Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhao, Changhong; Zamzam, Admed S.

    This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.

  12. Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Weihong; Sun, Kai; Qi, Junjian

    2015-01-01

    Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimization of the sizes of dynamic var sources at candidate locations by a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation for the subspaces around the point, and then constructs a Voronoi diagram about cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in the searching space and converge to the global optimal solution.

  13. Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem

    PubMed Central

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system as the objective over a given life cycle time. Owing to the particularity of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the PSO algorithm. Moreover, the results also present the potential to provide useful information when making decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
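
    A minimal one-dimensional PSO loop for reference (generic textbook PSO, not the paper's improved IPSO; the swarm size, iteration count, and coefficients are conventional defaults):

```python
import random

def pso(f, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal 1-D particle swarm: each velocity blends inertia, a pull
    toward the particle's own best point, and a pull toward the
    swarm-wide best point; positions are clamped to [lo, hi]."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pbest = x[:]
    gbest = min(x, key=f)
    for _ in range(iters):
        for i in range(n):
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
            if f(x[i]) < f(gbest):
                gbest = x[i]
    return gbest
```

    Improvements like IPSO typically adjust the inertia weight or mutation scheme to escape local minima in multimodal cost landscapes such as LCC models.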

  14. Optimal Window and Lattice in Gabor Transform. Application to Audio Analysis.

    PubMed

    Lachambre, Helene; Ricaud, Benjamin; Stempfel, Guillaume; Torrésani, Bruno; Wiesmeyr, Christoph; Onchis-Moaca, Darian

    2015-01-01

    This article deals with the use of an optimal lattice and optimal window in Discrete Gabor Transform computation. In the case of a generalized Gaussian window, extending earlier contributions, we introduce an additional local window adaptation technique for non-stationary signals. We illustrate our approach and the earlier one by addressing three time-frequency analysis problems to show the improvements achieved by the use of the optimal lattice and window: distinction of close frequencies, frequency estimation, and SNR estimation. The results are presented, when possible, with real-world audio signals.

  15. Optimizing the Usability of Brain-Computer Interfaces.

    PubMed

    Zhang, Yin; Chase, Steve M

    2018-05-01

    Brain-computer interfaces are in the process of moving from the laboratory to the clinic. These devices act by reading neural activity and using it to directly control a device, such as a cursor on a computer screen. An open question in the field is how to map neural activity to device movement in order to achieve the most proficient control. This question is complicated by the fact that learning, especially the long-term skill learning that accompanies weeks of practice, can allow subjects to improve performance over time. Typical approaches to this problem attempt to maximize the biomimetic properties of the device in order to limit the need for extensive training. However, it is unclear if this approach would ultimately be superior to performance that might be achieved with a nonbiomimetic device once the subject has engaged in extended practice and learned how to use it. Here we approach this problem using ideas from optimal control theory. Under the assumption that the brain acts as an optimal controller, we present a formal definition of the usability of a device and show that the optimal postlearning mapping can be written as the solution of a constrained optimization problem. We then derive the optimal mappings for particular cases common to most brain-computer interfaces. Our results suggest that the common approach of creating biomimetic interfaces may not be optimal when learning is taken into account. More broadly, our method provides a blueprint for optimal device design in general control-theoretic contexts.

  16. Academic consortium for the evaluation of computer-aided diagnosis (CADx) in mammography

    NASA Astrophysics Data System (ADS)

    Mun, Seong K.; Freedman, Matthew T.; Wu, Chris Y.; Lo, Shih-Chung B.; Floyd, Carey E., Jr.; Lo, Joseph Y.; Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Wei, Datong; Chakraborty, Dev P.; Clarke, Laurence P.; Kallergi, Maria; Clark, Bob; Kim, Yongmin

    1995-04-01

    Computer aided diagnosis (CADx) is a promising technology for the detection of breast cancer in screening mammography. A number of different approaches have been developed for CADx research that have achieved significant levels of performance. Research teams now recognize the need for a careful and detailed evaluation study of approaches to accelerate the development of CADx, to make CADx more clinically relevant, and to optimize the CADx algorithms based on unbiased evaluations. The results of such a comparative study may provide each of the participating teams with new insights into the optimization of their individual CADx algorithms. This consortium of experienced CADx researchers is working as a group to compare results of the algorithms and to optimize the performance of CADx algorithms by learning from each other. Each institution will contribute an equal number of cases collected under a standard protocol for case selection, truth determination, and data acquisition to establish a common and unbiased database for the evaluation study. An evaluation procedure for the comparison studies is being developed to analyze the results of individual algorithms for each of the test cases in the common database. Optimization of individual CADx algorithms can be made based on the comparison studies. The consortium effort is expected to accelerate the eventual clinical implementation of CADx algorithms at participating institutions.

  17. A Fuzzy Approach of the Competition on the Air Transport Market

    NASA Technical Reports Server (NTRS)

    Charfeddine, Souhir; DeColigny, Marc; Camino, Felix Mora; Cosenza, Carlos Alberto Nunes

    2003-01-01

    The aim of this communication is to study, with a new scope, the conditions of equilibrium in an air transport market where two competitive airlines are operating. Each airline is supposed to adopt a strategy maximizing its profit while its estimation of the demand has a fuzzy nature. This leads each company to optimize a program of its proposed services (frequency of flights and ticket prices) characterized by some fuzzy parameters. The case of monopoly is taken as a benchmark. Classical convex optimization can be used to solve this decision problem. This approach provides the airline with a new decision tool where uncertainty can be taken into account explicitly. The confrontation of the strategies of the companies, in the case of duopoly, leads to the definition of a fuzzy equilibrium. This concept of fuzzy equilibrium is more general and can be applied to several other domains. The formulation of the optimization problem and the methodological considerations adopted for its resolution are presented in their general theoretical aspect. In the case of air transportation, where the conditions of management of operations are critical, this approach should offer the manager the elements needed to consolidate its decisions depending on the circumstances (ordinary, exceptional events, ...) and to be prepared to face all possibilities. Keywords: air transportation, competition equilibrium, convex optimization, fuzzy modeling.

  18. Numerical solution of a conspicuous consumption model with constant control delay

    PubMed Central

    Huschto, Tony; Feichtinger, Gustav; Hartl, Richard F.; Kort, Peter M.; Sager, Sebastian; Seidl, Andrea

    2011-01-01

    We derive optimal pricing strategies for conspicuous consumption products in periods of recession. To that end, we formulate and investigate a two-stage economic optimal control problem that takes uncertainty of the recession period length and delay effects of the pricing strategy into account. This non-standard optimal control problem is difficult to solve analytically, and solutions depend on the variable model parameters. Therefore, we use a numerical result-driven approach. We propose a structure-exploiting direct method for optimal control to solve this challenging optimization problem. In particular, we discretize the uncertainties in the model formulation by using scenario trees and target the control delays by introduction of slack control functions. Numerical results illustrate the validity of our approach and show the impact of uncertainties and delay effects on optimal economic strategies. During the recession, delayed optimal prices are higher than the non-delayed ones. In the normal economic period, however, this effect is reversed and optimal prices with a delayed impact are smaller compared to the non-delayed case. PMID:22267871

  19. Path synthesis of four-bar mechanisms using synergy of polynomial neural network and Stackelberg game theory

    NASA Astrophysics Data System (ADS)

    Ahmadi, Bahman; Nariman-zadeh, Nader; Jamali, Ali

    2017-06-01

    In this article, a novel approach based on game theory is presented for multi-objective optimal synthesis of four-bar mechanisms. The multi-objective optimization problem is modelled as a Stackelberg game. The more important objective function, tracking error, is considered as the leader, and the other objective function, deviation of the transmission angle from 90° (TA), is considered as the follower. In a new approach, a group method of data handling (GMDH)-type neural network is also utilized to construct an approximate model for the rational reaction set (RRS) of the follower. Using the proposed game-theoretic approach, the multi-objective optimal synthesis of a four-bar mechanism is then cast into a single-objective optimal synthesis using the leader variables and the obtained RRS of the follower. The superiority of using the synergy game-theoretic method of Stackelberg with a GMDH-type neural network is demonstrated for two case studies on the synthesis of four-bar mechanisms.
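
    The leader-follower reduction described above can be sketched with grid search standing in for the paper's GMDH surrogate of the follower's rational reaction set; the quadratic objectives in the example are placeholders, not the tracking-error and transmission-angle functions of the paper:

```python
def follower_reaction(x, y_grid, f_follower):
    """Exact rational reaction set (RRS) via grid search; the paper instead
    trains a GMDH-type neural network to approximate this map cheaply."""
    return min(y_grid, key=lambda y: f_follower(x, y))

def stackelberg(f_leader, f_follower, x_grid, y_grid):
    """Leader search: minimize the leader objective while anticipating the
    follower's best response, collapsing the bilevel problem to one level."""
    best = None
    for x in x_grid:
        y = follower_reaction(x, y_grid, f_follower)
        val = f_leader(x, y)
        if best is None or val < best[2]:
            best = (x, y, val)
    return best
```

    With f_leader = (x - 2)² + y² and f_follower = (y - x)², the follower's reaction is y ≈ x, so the leader effectively minimizes (x - 2)² + x², giving x ≈ 1.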

  20. Solar Plus: A Holistic Approach to Distributed Solar PV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    OShaughnessy, Eric J.; Ardani, Kristen B.; Cutler, Dylan S.

    Solar 'plus' refers to an emerging approach to distributed solar photovoltaic (PV) deployment that uses energy storage and controllable devices to optimize customer economics. The solar plus approach increases customer system value through technologies such as electric batteries, smart domestic water heaters, smart air-conditioner (AC) units, and electric vehicles. We use an NREL optimization model to explore the customer-side economics of solar plus under various utility rate structures and net metering rates. We explore optimal solar plus applications in five case studies with different net metering rates and rate structures. The model deploys different configurations of PV, batteries, smart domestic water heaters, and smart AC units in response to different rate structures and customer load profiles. The results indicate that solar plus improves the customer economics of PV and may mitigate some of the negative impacts of evolving rate structures on PV economics. Solar plus may become an increasingly viable model for optimizing PV customer economics in an evolving rate environment.

  1. Advanced optimal design concepts for composite material aircraft repair

    NASA Astrophysics Data System (ADS)

    Renaud, Guillaume

    The application of an automated optimization approach for bonded composite patch design is investigated. To do so, a finite element computer analysis tool to evaluate patch design quality was developed. This tool examines both the mechanical and the thermal issues of the problem. The optimized shape is obtained with a bi-quadratic B-spline surface that represents the top surface of the patch. Additional design variables corresponding to the ply angles are also used. Furthermore, a multi-objective optimization approach was developed to treat multiple and uncertain loads. This formulation aims at designing according to the most unfavorable mechanical and thermal loads. The problem of finding the optimal patch shape for several situations is addressed. The objective is to minimize a stress component at a specific point in the host structure (plate) while ensuring acceptable stress levels in the adhesive. A parametric study is performed in order to identify the effects of various shape parameters on the quality of the repair and its optimal configuration. The effects of mechanical loads and service temperature are also investigated. Two bonding methods are considered, as they imply different thermal histories. It is shown that the proposed techniques are effective and inexpensive for analyzing and optimizing composite patch repairs. It is also shown that thermal effects should not only be present in the analysis, but that they play a paramount role on the resulting quality of the optimized design. In all cases, the optimized configuration results in a significant reduction of the desired stress level by deflecting the loads away from rather than over the damage zone, as is the case with standard designs. Furthermore, the automated optimization ensures the safety of the patch design for all considered operating conditions.

  2. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dose distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a plan quality similar to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP approach was tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve a steeper dose gradient, better conformity, and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
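
    The SVD compression step lends itself to a compact sketch. Here ordinary least squares stands in for the paper's constrained LP, and the low-rank influence matrix is synthetic; the point is only the compress-then-back-project mechanism:

```python
import numpy as np

def svd_compressed_solve(A, d, rank):
    """Exploit degeneracy of the influence matrix A (voxels x beams):
    optimize in a rank-`rank` subspace, then back-project to beam weights.
    (Unconstrained least squares stands in here for the paper's LP.)"""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Ur, sr, Vr = U[:, :rank], s[:rank], Vt[:rank]
    # In reduced coordinates z = Vr @ w the objective is ||Ur diag(sr) z - d||,
    # which has the closed-form least-squares solution below.
    z = (Ur.T @ d) / sr
    return Vr.T @ z            # back-project to the full beam-weight vector
```

    When A is exactly rank-r and the target dose d lies in its column space, the reduced solve reproduces the full-dimensional solution while only ever working with r coordinates.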

  3. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dose distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a plan quality similar to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP approach was tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve a steeper dose gradient, better conformity, and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.

  4. Feasibility of employing model-based optimization of pulse amplitude and electrode distance for effective tumor electropermeabilization.

    PubMed

    Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan

    2007-05-01

    In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold and thus achieve effective ECT. In this paper, we present a model-based optimization approach toward the determination of optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and the electric field intensities computed by a finite element model at selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computed tomography images, representing brain tissue with a tumor. The continuous parameter subject to optimization was the pulse amplitude. The distance between electrode pairs was optimized as a discrete parameter. The optimization also considered the pulse generator constraints on voltage and current. During optimization the two constraints were reached, preventing the exposure of the entire volume of the tumor to electric field intensities above the permeabilizing threshold. However, despite the fact that with the particular needle array holder and pulse generator the entire volume of the tumor was not permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be obtained by adding constraints on the maximum electric field intensity in the optimization procedure.

  5. A two‐point scheme for optimal breast IMRT treatment planning

    PubMed Central

    2013-01-01

    We propose an approach to determining optimal beam weights in breast/chest wall IMRT treatment plans. The goal is to decrease breathing effect and to maximize skin dose if the skin is included in the target or, otherwise, to minimize the skin dose. Two points in the target are utilized to calculate the optimal weights. The optimal plan (i.e., the plan with optimal beam weights) consists of high energy unblocked beams, low energy unblocked beams, and IMRT beams. Six breast and five chest wall cases were retrospectively planned with this scheme in Eclipse, including one breast case where CTV was contoured by the physician. Compared with 3D CRT plans composed of unblocked and field‐in‐field beams, the optimal plans demonstrated comparable or better dose uniformity, homogeneity, and conformity to the target, especially at beam junction when supraclavicular nodes are involved. Compared with nonoptimal plans (i.e., plans with nonoptimized weights), the optimal plans had better dose distributions at shallow depths close to the skin, especially in cases where breathing effect was taken into account. This was verified with experiments using a MapCHECK device attached to a motion simulation table (to mimic motion caused by breathing). PACS number: 87.55 de PMID:24257291

  6. Solving the infeasible trust-region problem using approximations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott

    2004-07-01

    The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs present some noise, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations in both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for globalization of a local algorithm has been proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks, as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. Experience in the mathematical community has shown that more effective algorithms can be obtained by the explicit inclusion of the constraints (SQP-type algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. The local problem bounded by the trust region, however, may have no feasible solution in the presence of explicit constraints. To remedy this problem, the mathematical community has developed different versions of a composite-steps approach, consisting of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework using homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the normal-tangential two-step algorithms. In this paper, a description of the similarities is presented and an extension of the two-step algorithm is presented for the case of approximations.

  7. WE-AB-209-10: Optimizing the Delivery of Sequential Fluence Maps for Efficient VMAT Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, D; Balvert, M

    2016-06-15

    Purpose: To develop an optimization model and solution approach for computing MLC leaf trajectories and dose rates for high quality matching of a set of optimized fluence maps to be delivered sequentially around a patient in a VMAT treatment. Methods: We formulate the fluence map matching problem as a nonlinear optimization problem where time is discretized but dose rates and leaf positions are continuous variables. For a given allotted time, which is allocated across the fluence maps based on the complexity of each map, the optimization problem searches for the best leaf trajectories and dose rates such that the original fluence maps are closely recreated. Constraints include maximum leaf speed, maximum dose rate, and leaf collision avoidance, as well as the constraint that the ending leaf positions for one map are the starting leaf positions for the next map. The resulting model is non-convex but smooth, and therefore we solve it by local searches from a variety of starting positions. We improve solution time by a custom decomposition approach which allows us to decouple the rows of the fluence maps and solve each leaf pair individually. This decomposition also makes the problem easily parallelized. Results: We demonstrate the method on a prostate case and a head-and-neck case and show that one can recreate fluence maps to a high degree of fidelity in modest total delivery time (minutes). Conclusion: We present a VMAT sequencing method that reproduces optimal fluence maps by searching over a vast number of possible leaf trajectories. By varying the total allotted time, this approach is the first of its kind to allow users to produce VMAT solutions that span the range from wide-field coarse VMAT deliveries to narrow-field high-MU sliding-window-like approaches.

  8. Continuous intensity map optimization (CIMO): A novel approach to leaf sequencing in step and shoot IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao Daliang; Earl, Matthew A.; Luan, Shuang

    2006-04-15

    A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle³ treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
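
    A toy version of the idea, with a 1D fluence profile and simulated annealing over segment edges and weights simultaneously; the move set, cooling schedule, and profile below are illustrative inventions, not the CIMO implementation:

```python
import math
import random

def anneal_segments(target, n_segments=3, n_iters=20000, t0=1.0, seed=1):
    """Fit a 1D fluence profile `target` with n_segments weighted apertures.

    A segment is [left, right, weight]; delivered fluence at pixel i is the
    sum of the weights of all segments covering i. Simulated annealing
    perturbs one segment edge or weight at a time (shapes and weights are
    optimized together, as in CIMO) and accepts moves by the Metropolis rule.
    """
    rng = random.Random(seed)
    n = len(target)
    segs = [[0, n, max(target) / n_segments] for _ in range(n_segments)]

    def error(segs):
        f = [sum(w for l, r, w in segs if l <= i < r) for i in range(n)]
        return sum((fi - ti) ** 2 for fi, ti in zip(f, target))

    cur = best = error(segs)
    best_segs = [s[:] for s in segs]
    for k in range(n_iters):
        t = t0 * (1 - k / n_iters) + 1e-9          # linear cooling schedule
        s = rng.randrange(n_segments)
        old = segs[s][:]
        field = rng.randrange(3)
        if field == 2:                             # perturb the weight...
            segs[s][2] = max(0.0, segs[s][2] + rng.uniform(-0.1, 0.1))
        else:                                      # ...or move one edge
            segs[s][field] = min(n, max(0, segs[s][field] + rng.choice([-1, 1])))
        if segs[s][0] >= segs[s][1]:               # keep left < right
            segs[s] = old
            continue
        new = error(segs)
        if new < cur or rng.random() < math.exp((cur - new) / t):
            cur = new                              # accept (possibly uphill)
            if new < best:
                best, best_segs = new, [s[:] for s in segs]
        else:
            segs[s] = old                          # reject, restore
    return best_segs, best
```

    For a profile like [1, 1, 2, 2, 1, 1] an exact two-segment fit exists (a full-width aperture of weight 1 plus a (2, 4) aperture of weight 1), so the annealer's residual error is a direct measure of how well it explores shapes and weights jointly.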

  9. Optimized operation of dielectric laser accelerators: Multibunch

    NASA Astrophysics Data System (ADS)

    Hanuka, Adi; Schächter, Levi

    2018-06-01

    We present a self-consistent analysis to determine the optimal charge, gradient, and efficiency for laser driven accelerators operating with a train of microbunches. Specifically, we account for the beam loading reduction on the material occurring at the dielectric-vacuum interface. In the case of a train of microbunches, such a beam loading effect could be detrimental due to energy spread; however, this may be compensated by a tapered laser pulse. We ultimately propose an optimization procedure with an analytical solution for a group velocity equal to half the speed of light. This optimization results in a maximum efficiency 20% lower than in the single bunch case, and a total accelerated charge of 10⁶ electrons in the train. The approach holds promise for improving operations of dielectric laser accelerators and may have an impact on emerging laser accelerators driven by high-power optical lasers.

  10. A framework for using ant colony optimization to schedule environmental flow management alternatives for rivers, wetlands, and floodplains

    NASA Astrophysics Data System (ADS)

    Szemis, J. M.; Maier, H. R.; Dandy, G. C.

    2012-08-01

    Rivers, wetlands, and floodplains are in need of management as they have been altered from natural conditions and are at risk of vanishing because of river development. One method to mitigate these impacts involves the scheduling of environmental flow management alternatives (EFMA); however, this is a complex task as there are generally a large number of ecological assets (e.g., wetlands) that need to be considered, each with species with competing flow requirements. Hence, this problem evolves into an optimization problem to maximize an ecological benefit within constraints imposed by human needs and the physical layout of the system. This paper presents a novel optimization framework which uses ant colony optimization to enable optimal scheduling of EFMAs, given constraints on the environmental water that is available. This optimization algorithm is selected because, unlike other currently popular algorithms, it is able to account for all aspects of the problem. The approach is validated by comparing it to a heuristic approach, and its utility is demonstrated using a case study based on the Murray River in South Australia to investigate (1) the trade-off between plant recruitment (i.e., promoting germination) and maintenance (i.e., maintaining habitat) flow requirements, (2) the trade-off between flora and fauna flow requirements, and (3) a hydrograph inversion case. The results demonstrate the usefulness and flexibility of the proposed framework as it is able to determine EFMA schedules that provide optimal or near-optimal trade-offs between the competing needs of species under a range of operating conditions and valuable insight for managers.
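
    The construction-and-reinforcement loop at the heart of ACO can be sketched on a toy scheduling problem. The separable benefit table and parameter values below are invented for illustration; the real EFMA benefit is state-dependent and constrained by available environmental water:

```python
import random

def aco_schedule(benefit, n_ants=10, n_iters=50, rho=0.1, q=1.0, seed=0):
    """Choose one management option per time step to maximize total benefit.

    `benefit[t][o]` is the (non-negative) benefit of option o at step t -- a
    separable toy stand-in for a real, state-dependent ecological benefit.
    """
    rng = random.Random(seed)
    T, O = len(benefit), len(benefit[0])
    tau = [[1.0] * O for _ in range(T)]                 # pheromone trails
    best_sched, best_val = None, float("-inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            sched = []
            for t in range(T):
                # selection weight = pheromone x heuristic desirability
                w = [tau[t][o] * (1.0 + benefit[t][o]) for o in range(O)]
                r = rng.uniform(0.0, sum(w))
                acc, choice = 0.0, O - 1
                for o in range(O):                      # roulette-wheel pick
                    acc += w[o]
                    if r <= acc:
                        choice = o
                        break
                sched.append(choice)
            val = sum(benefit[t][o] for t, o in enumerate(sched))
            if val > best_val:
                best_sched, best_val = sched, val
        for t in range(T):                              # evaporate everywhere,
            for o in range(O):
                tau[t][o] *= (1.0 - rho)
            tau[t][best_sched[t]] += q                  # reinforce best-so-far
    return best_sched, best_val
```

    The evaporation rate rho and deposit q control the exploration/exploitation trade-off; a real EFMA scheduler would also enforce water-availability constraints during schedule construction.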

  11. A local segmentation parameter optimization approach for mapping heterogeneous urban environments using VHR imagery

    NASA Astrophysics Data System (ADS)

    Grippa, Tais; Georganos, Stefanos; Lennert, Moritz; Vanhuysse, Sabine; Wolff, Eléonore

    2017-10-01

    Mapping large heterogeneous urban areas using object-based image analysis (OBIA) remains challenging, especially with respect to the segmentation process. This can be explained both by the complex arrangement of heterogeneous land-cover classes and by the high diversity of urban patterns that can be encountered throughout the scene. In this context, obtaining satisfying segmentation results for the whole scene with a single segmentation parameter can be impossible. Nonetheless, it is possible to subdivide the whole city into smaller local zones that are rather homogeneous in their urban pattern. These zones can then be used to optimize the segmentation parameter locally, instead of using the whole image or a single representative spatial subset. This paper assesses the contribution of a local approach to segmentation parameter optimization compared to a global approach. Ouagadougou, located in sub-Saharan Africa, is used as a case study. First, the whole scene is segmented using a single globally optimized segmentation parameter. Second, the city is subdivided into 283 local zones, homogeneous in terms of building size and building density. Each local zone is then segmented using a locally optimized segmentation parameter. Unsupervised segmentation parameter optimization (USPO), relying on an optimization function that tends to maximize both intra-object homogeneity and inter-object heterogeneity, is used to select the segmentation parameter automatically for both approaches. Finally, a land-use/land-cover classification is performed using the Random Forest (RF) classifier. The results reveal that the local approach outperforms the global one, especially by limiting confusion between buildings and their bare-soil neighbors.

  12. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective on a general class of optimization problems characterized by large deterministic complexity. Many problems of real-world concern today lack analyzable structure and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models that simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process: the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success: empirical results indicate a saving of nearly 95% in computing cost.
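
    The core mechanism of Ordinal Optimization, comparing noisy evaluations only by rank and softening the goal from "find the single best design" to "find a good-enough subset", can be sketched in a few lines. This is an illustrative toy, not the thesis's turbine-blade model; the quadratic cost and the noise level are invented for the example.

    ```python
    import numpy as np

    def ordinal_select(designs, noisy_eval, s):
        """Rank designs by a cheap noisy evaluation and keep the top-s subset.

        Goal softening: instead of insisting on the single best design, we
        accept any member of the observed top-s, which is far more robust to
        white-noise-like evaluation error than cardinal comparison.
        """
        scores = np.array([noisy_eval(d) for d in designs])
        order = np.argsort(scores)            # ascending: lower cost is better
        return [designs[i] for i in order[:s]]

    # Toy example (not the turbine-blade model): true cost is d**2, and the
    # cheap model observes it through additive N(0, 4) noise.
    rng = np.random.default_rng(0)
    designs = list(range(-50, 51))
    selected = ordinal_select(designs, lambda d: d ** 2 + rng.normal(0.0, 4.0), s=10)
    ```

    Even with substantial noise, the selected subset aligns well with the truly good designs, which is the quantifiable alignment property the approach relies on.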

  13. Airway Management of Near-Complete Tracheal Transection by Through-the-Wound Intubation: A Case Report.

    PubMed

    Jean, Yuel-Kai; Potnuru, Paul; Diez, Christian

    2018-06-11

    We present an approach to airway management in a patient with machete injuries culminating in near-complete cricotracheal transection, in addition to a gunshot wound to the neck. Initial airway was established by direct intubation through the cricotracheal wound. Once the airway was secured, a bronchoscopy-guided orotracheal intubation was performed with simultaneous retraction of the cricotracheal airway to optimize the surgical field. This case offers insight into a rarely performed approach to airway management. Furthermore, our case report demonstrates that, in select airway injuries, performing through-the-wound intubation engenders a multitude of benefits.

  14. Web-GIS oriented systems viability for municipal solid waste selective collection optimization in developed and transient economies.

    PubMed

    Rada, E C; Ragazzi, M; Fedrizzi, P

    2013-04-01

    Municipal solid waste management is a multidisciplinary activity that includes generation, source separation, storage, collection, transfer and transport, processing and recovery, and, last but not least, disposal. The optimization of waste collection through source separation is compulsory wherever landfill-based management must be overcome. In this paper, a few aspects related to the implementation of a Web-GIS based system are analyzed. This approach is critically examined with reference to two Italian case studies and two additional extra-European case studies. The first case is one of the best examples of selective collection optimization in Italy, with very high efficiency: 80% of waste is source separated for recycling purposes. In the second case, the local administration faces the optimization of waste collection through Web-GIS oriented technologies for the first time, starting from a scenario far from optimized municipal solid waste management. The last two case studies concern pilot experiences in China and Malaysia. Each step of the Web-GIS oriented strategy is comparatively discussed with reference to typical scenarios of developed and transient economies. The main result is that transient economies are ready to move toward Web-oriented tools for MSW management, but this opportunity is not yet well exploited in the sector. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Management of unmanned moving sensors through human decision layers: a bi-level optimization process with calls to costly sub-processes

    NASA Astrophysics Data System (ADS)

    Dambreville, Frédéric

    2013-10-01

    While there is a variety of approaches and algorithms for optimizing the mission of an unmanned moving sensor, there are much less works which deal with the implementation of several sensors within a human organization. In this case, the management of the sensors is done through at least one human decision layer, and the sensors management as a whole arises as a bi-level optimization process. In this work, the following hypotheses are considered as realistic: Sensor handlers of first level plans their sensors by means of elaborated algorithmic tools based on accurate modelling of the environment; Higher level plans the handled sensors according to a global observation mission and on the basis of an approximated model of the environment and of the first level sub-processes. This problem is formalized very generally as the maximization of an unknown function, defined a priori by sampling a known random function (law of model error). In such case, each actual evaluation of the function increases the knowledge about the function, and subsequently the efficiency of the maximization. The issue is to optimize the sequence of value to be evaluated, in regards to the evaluation costs. There is here a fundamental link with the domain of experiment design. Jones, Schonlau and Welch proposed a general method, the Efficient Global Optimization (EGO), for solving this problem in the case of additive functional Gaussian law. In our work, a generalization of the EGO is proposed, based on a rare event simulation approach. It is applied to the aforementioned bi-level sensor planning.
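
    The Efficient Global Optimization (EGO) method of Jones, Schonlau and Welch, which the record builds on, chooses the next costly evaluation by maximizing expected improvement under the surrogate model's Gaussian posterior. A minimal sketch of that selection criterion follows; the `(mu, sigma)` candidate values are hypothetical surrogate predictions invented for the example, not values from the paper.

    ```python
    import math

    def expected_improvement(mu, sigma, best):
        """EI for maximization: E[max(f - best, 0)] with f ~ N(mu, sigma**2)."""
        if sigma <= 0.0:
            return max(mu - best, 0.0)       # no posterior uncertainty left
        z = (mu - best) / sigma
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        return (mu - best) * cdf + sigma * pdf

    # Hypothetical surrogate predictions (mu, sigma) for three candidate plans.
    candidates = [(0.9, 0.05), (0.7, 0.40), (1.0, 0.0)]
    best_so_far = 1.0
    next_i = max(range(len(candidates)),
                 key=lambda i: expected_improvement(*candidates[i], best_so_far))
    ```

    Note how EI favors the uncertain second candidate even though its predicted mean is worse: this is the exploration/exploitation balance that makes each costly evaluation informative, which the bi-level planner generalizes via rare-event simulation.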

  16. Optimizing Motion Planning for Hyper Dynamic Manipulator

    NASA Astrophysics Data System (ADS)

    Aboura, Souhila; Omari, Abdelhafid; Meguenni, Kadda Zemalache

    2012-01-01

    This paper investigates optimal motion planning for a hyper dynamic manipulator. As a case study, we consider a golf swing robot consisting of two actuated joints and mechanical stoppers. A genetic algorithm (GA) technique is proposed to solve for the optimal golf swing motion, which is generated by a Fourier series approximation. The objective function for the GA approach is to minimize the intermediate and final state errors, minimize the robot's energy consumption, and maximize the robot's speed. The obtained simulation results show the effectiveness of the proposed scheme.
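
    The scheme described above, a GA searching over Fourier-series coefficients of the swing trajectory, can be sketched as follows. The cost terms (a posture target at a hypothetical quarter-swing instant, a terminal-speed reward, and an energy proxy) are stand-ins for the paper's objective, whose exact form is not given in the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def trajectory(coeffs, t):
        # Joint angle as a truncated Fourier (sine) series on t in [0, 1].
        k = np.arange(1, len(coeffs) + 1)
        return np.sum(np.asarray(coeffs)[:, None] * np.sin(np.outer(k, 2.0 * np.pi * t)), axis=0)

    def cost(coeffs):
        t = np.linspace(0.0, 1.0, 200)
        q = trajectory(coeffs, t)
        qd = np.gradient(q, t)
        # Hypothetical objective: reach 1 rad at quarter-swing (t = 0.25),
        # reward terminal speed, and penalize an energy proxy (integral of qd^2).
        energy = np.sum(qd ** 2) * (t[1] - t[0])
        return (q[50] - 1.0) ** 2 - 0.01 * qd[-1] + 1e-4 * energy

    def ga(n_coeffs=3, pop=40, gens=60):
        P = rng.normal(0.0, 0.5, (pop, n_coeffs))
        for _ in range(gens):
            fit = np.array([cost(ind) for ind in P])
            elite = P[np.argsort(fit)[: pop // 2]]             # truncation selection
            parents = elite[rng.integers(0, len(elite), (pop, 2))]
            alpha = rng.random((pop, 1))
            P = alpha * parents[:, 0] + (1.0 - alpha) * parents[:, 1]  # blend crossover
            P += rng.normal(0.0, 0.05, P.shape)                # Gaussian mutation
        fit = np.array([cost(ind) for ind in P])
        return P[np.argmin(fit)]
    ```

    Parameterizing the motion by a handful of Fourier coefficients keeps the GA's search space small while guaranteeing smooth joint trajectories, which is the main appeal of the approach for hyper dynamic motions.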

  17. Including robustness in multi-criteria optimization for intensity-modulated proton therapy

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Unkelbach, Jan; Trofimov, Alexei; Madden, Thomas; Kooy, Hanne; Bortfeld, Thomas; Craft, David

    2012-02-01

    We present a method to include robustness in a multi-criteria optimization (MCO) framework for intensity-modulated proton therapy (IMPT). The approach allows one to simultaneously explore the trade-off between different objectives as well as the trade-off between robustness and nominal plan quality. In MCO, a database of plans, each emphasizing different treatment planning objectives, is pre-computed to approximate the Pareto surface. An IMPT treatment plan that strikes the best balance between the different objectives can be selected by navigating on the Pareto surface. In our approach, robustness is integrated into MCO by adding robustified objectives and constraints to the MCO problem. Uncertainties (or errors) of the robust problem are modeled by pre-calculated dose-influence matrices for a nominal scenario and a number of pre-defined error scenarios (shifted patient positions, proton beam undershoot and overshoot). Objectives and constraints can be defined for the nominal scenario, thus characterizing nominal plan quality. A robustified objective represents the worst objective function value that can be realized for any of the error scenarios and thus provides a measure of plan robustness. The optimization method is based on a linear projection solver and is capable of handling the large problem sizes resulting from a fine dose grid resolution, many scenarios, and a large number of proton pencil beams. A base-of-skull case is used to demonstrate the robust optimization method; it is shown that the method reduces the sensitivity of the treatment plan to setup and range errors to a degree that is not achieved by a safety-margin approach. A chordoma case is analyzed in more detail to demonstrate the trade-offs between target underdose and brainstem sparing as well as between robustness and nominal plan quality. The latter illustrates the advantage of MCO in the context of robust planning. For all cases examined, the robust optimization for each Pareto-optimal plan takes less than 5 min on a standard computer, making a computationally friendly interface to the planner possible. In conclusion, the uncertainty pertinent to the IMPT procedure can be reduced during treatment planning by optimizing plans that emphasize different treatment objectives, including robustness, and then interactively selecting the most-preferred one from the Pareto surface of solutions.

  18. ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA)*

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2017-01-01

    This paper provides a new way of developing the “Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)” [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound of the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound of the composite gradient mapping. The proof is based on the worst-case analysis called Performance Estimation Problem in [11]. PMID:29805242
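
    For concreteness, the FISTA iteration that the paper reinterprets, applied to the composite problem min ½||Ax − b||² + λ||x||₁ with the ℓ1 regularizer mentioned above, can be sketched as follows. This is a generic textbook implementation, not the authors' code or their proposed new algorithm.

    ```python
    import numpy as np

    def soft_threshold(x, t):
        # Proximal operator of t * ||x||_1 (the shrinkage/thresholding step).
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def fista(A, b, lam, n_iter=200):
        """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with Nesterov momentum."""
        L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth gradient
        x = np.zeros(A.shape[1])
        y = x.copy()
        t = 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ y - b)
            x_new = soft_threshold(y - grad / L, lam / L)   # proximal gradient step
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
            x, t = x_new, t_new
        return x
    ```

    The momentum coefficients t are exactly the "step coefficients" the paper optimizes; FISTA's choice is optimal with respect to a worst-case bound on the cost function, while the paper's new algorithm optimizes them against the composite gradient mapping instead.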

  19. Thermodynamic heuristics with case-based reasoning: combined insights for RNA pseudoknot secondary structure.

    PubMed

    Al-Khatib, Ra'ed M; Rashid, Nur'Aini Abdul; Abdullah, Rosni

    2011-08-01

    The secondary structure of RNA pseudoknots has been extensively inferred and scrutinized by computational approaches. Experimental methods for determining RNA structure are time consuming and tedious; therefore, predictive computational approaches are required. Predicting the most accurate and energy-stable pseudoknot RNA secondary structure has been proven to be an NP-hard problem. In this paper, a new RNA folding approach, termed MSeeker, is presented; it includes KnotSeeker (a heuristic method) and Mfold (a thermodynamic algorithm). The global optimization of this thermodynamic heuristic approach was further enhanced by using a case-based reasoning technique as a local optimization method. MSeeker is a proposed algorithm for predicting RNA pseudoknot structure from individual sequences, especially long ones. This research demonstrates that MSeeker improves the sensitivity and specificity of existing RNA pseudoknot structure predictions. The performance and structural results from this proposed method were evaluated against seven other state-of-the-art pseudoknot prediction methods. The MSeeker method had better sensitivity than the DotKnot, FlexStem, HotKnots, pknotsRG, ILM, NUPACK and pknotsRE methods, with 79% of the predicted pseudoknot base-pairs being correct.

  20. Variational Approach in the Theory of Liquid-Crystal State

    NASA Astrophysics Data System (ADS)

    Gevorkyan, E. V.

    2018-03-01

    The calculus of variations, founded by Leonhard Euler, is a basis for modern mathematics and theoretical physics. The efficiency of the variational approach in the statistical theory of the liquid-crystal state, and more generally in condensed-state theory, is shown. In particular, the developed approach allows us to correctly introduce effective pair interactions and to optimize simple models of liquid crystals with the help of realistic intermolecular potentials.

  1. Optimal design of studies of influenza transmission in households. I: case-ascertained studies.

    PubMed

    Klick, B; Leung, G M; Cowling, B J

    2012-01-01

    Case-ascertained household transmission studies, in which households including an 'index case' are recruited and followed up, are invaluable to understanding the epidemiology of influenza. We used a simulation approach parameterized with data from household transmission studies to evaluate alternative study designs. We compared studies that relied on self-reported illness in household contacts vs. studies that used home visits to collect swab specimens for virological confirmation of secondary infections, allowing for the trade-off between sample size vs. intensity of follow-up given a fixed budget. For studies estimating the secondary attack proportion, 2-3 follow-up visits with specimens collected from all members regardless of illness were optimal. However, for studies comparing secondary attack proportions between two or more groups, such as controlled intervention studies, designs with reactive home visits following illness reports in contacts were most powerful, while a design with one home visit optimally timed also performed well.

  2. On Distributed PV Hosting Capacity Estimation, Sensitivity Study, and Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Mather, Barry

    This paper first studies the estimated distributed PV hosting capacities of seventeen utility distribution feeders using Monte Carlo simulation-based stochastic analysis, and then analyzes the sensitivity of PV hosting capacity to both feeder and photovoltaic system characteristics. Furthermore, an active distribution network management approach is proposed to maximize PV hosting capacity by optimally switching capacitors, adjusting voltage regulator taps, managing controllable branch switches and controlling smart PV inverters. The approach is formulated as a mixed-integer nonlinear optimization problem and a genetic algorithm is developed to obtain the solution. Multiple simulation cases are studied and the effectiveness of the proposed approach in increasing PV hosting capacity is demonstrated.

  3. A robust multi-objective global supplier selection model under currency fluctuation and price discount

    NASA Astrophysics Data System (ADS)

    Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman

    2017-06-01

    A robust supplier selection problem is proposed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear program is developed; then, its robust counterpart is presented using recent extensions in robust optimization theory. We model the decision variables, respectively, with a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates the worst-case scenario into the modeling approach, and an equivalent deterministic planning model. An experimental study is carried out to compare the performance of the three models. The robust model results in remarkable cost savings, illustrating that to cope with such uncertainties we should account for them in advance in our planning. In our case study, different suppliers were selected because of these uncertainties, and since supplier selection is a strategic decision, it is crucial to consider them in the planning approach.

  4. An H(∞) control approach to robust learning of feedforward neural networks.

    PubMed

    Jing, Xingjian

    2011-09-01

    A novel H(∞) robust control approach is proposed in this study to deal with the learning problems of feedforward neural networks (FNNs). The analysis and design of a desired weight update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system in terms of the estimation error. The drawbacks of some existing learning algorithms can therefore be revealed, especially for the case where the output data changes rapidly with respect to the input or is corrupted by noise. Based on this approach, the optimal learning parameters can be found by utilizing linear matrix inequality (LMI) optimization techniques to achieve a predefined H(∞) "noise" attenuation level. Several existing BP-type algorithms are shown to be special cases of the new H(∞)-learning algorithm. Theoretical analysis and several examples are provided to show the advantages of the new method. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Application of genetic algorithm in integrated setup planning and operation sequencing

    NASA Astrophysics Data System (ADS)

    Kafashi, Sajad; Shakeri, Mohsen

    2011-01-01

    Process planning is an essential component for linking design and the manufacturing process. Setup planning and operation sequencing are two main tasks in process planning. Much research has solved these two problems separately. Considering that the two functions are complementary, it is necessary to integrate them more tightly so that the performance of a manufacturing system can be improved economically and competitively. This paper presents a generative system and a genetic algorithm (GA) approach to process planning for a given part. The proposed approach and optimization methodology analyze the TAD (tool approach direction), tolerance relations between features, and feature precedence relations to generate all possible setups and operations using a workshop resource database. Based on these technological constraints, the GA approach, which adopts a feature-based representation, optimizes the setup plan and sequence of operations using cost indices. A case study shows that the developed system can generate satisfactory results, optimizing setup planning and operation sequencing simultaneously under feasible conditions.

  6. An optimized data fusion method and its application to improve lateral boundary conditions in winter for Pearl River Delta regional PM2.5 modeling, China

    NASA Astrophysics Data System (ADS)

    Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran

    2018-05-01

    Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however, they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. The optimized approach was applied to correct LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of magnitude and temporal variance. Correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.

  7. Convergence analysis of evolutionary algorithms that are based on the paradigm of information geometry.

    PubMed

    Beyer, Hans-Georg

    2014-01-01

    The convergence behaviors of so-called natural evolution strategies (NES) and of the information-geometric optimization (IGO) approach are considered. After a review of the NES/IGO ideas, which are based on information geometry, the implications of this philosophy w.r.t. optimization dynamics are investigated considering the optimization performance on the class of positive quadratic objective functions (the ellipsoid model). Exact differential equations describing the approach to the optimizer are derived and solved. It is rigorously shown that the original NES philosophy of optimizing the expected value of the objective function leads to very slow (i.e., sublinear) convergence toward the optimizer. This is the real reason why state-of-the-art implementations of IGO algorithms optimize the expected value of transformed objective functions, for example, by utility functions based on ranking. It is shown that these utility functions are localized fitness functions that change during the IGO flow. The governing differential equations describing this flow are derived. In the case of convergence, the solutions to these equations exhibit an exponentially fast approach to the optimizer (i.e., linear convergence order). Furthermore, it is proven that the IGO philosophy leads to an adaptation of the covariance matrix that equals in the asymptotic limit, up to a scalar factor, the inverse of the Hessian of the objective function considered.
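
    A minimal mean-only NES sketch on the sphere function illustrates the rank-based utility (fitness shaping) mechanism the analysis highlights. The parameter values are illustrative, and covariance/step-size adaptation is omitted to keep the sketch short; this is not the paper's ellipsoid-model analysis.

    ```python
    import numpy as np

    def nes_sphere(dim=5, pop=20, lr=0.2, sigma=0.3, iters=300, seed=0):
        """Mean-only NES on f(x) = ||x||^2 with rank-based utilities.

        The utilities implement ranking-based fitness shaping: the update
        depends only on the ordering of the sampled fitness values, not on
        their raw magnitudes.
        """
        rng = np.random.default_rng(seed)
        mu = np.ones(dim)                                  # search-distribution mean
        # Log-rank utilities: best samples get the largest (positive) weights.
        u = np.maximum(0.0, np.log(pop / 2.0 + 1.0) - np.log(np.arange(1, pop + 1)))
        u = u / u.sum() - 1.0 / pop                        # zero-sum shaping
        for _ in range(iters):
            z = rng.standard_normal((pop, dim))
            f = np.sum((mu + sigma * z) ** 2, axis=1)      # fitness of samples
            grad = u @ z[np.argsort(f)]                    # utility-weighted direction
            mu = mu + lr * sigma * grad                    # natural-gradient mean step
        return mu
    ```

    Because only ranks enter the update, the algorithm behaves identically on any monotone transformation of the objective, which is precisely the invariance that distinguishes shaped-utility IGO flows from optimizing the raw expected objective.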

  8. On Per-Phase Topology Control and Switching in Emerging Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Mousavi, Mirrasoul J.

    This paper presents a new concept and approach for topology control and switching in distribution systems by extending traditional circuit switching to laterals and single-phase loads. Voltage unbalance and other key performance indicators including voltage magnitudes, line loading, and energy losses are used to characterize and demonstrate the technical value of optimizing system topology on a per-phase basis in response to feeder conditions. The near-optimal per-phase topology control is defined as a series of hierarchical optimization problems. The proposed approach is applied to the IEEE 13-bus and 123-bus test systems for demonstration, including the impact of integrating electric vehicles (EVs) in the test circuit. It is concluded that the proposed approach can be effectively leveraged to improve voltage profiles with electric vehicles, the extent of which depends upon the performance of the base case without EVs.

  9. Modeling Stationary Lithium-Ion Batteries for Optimization and Predictive Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri A; Shi, Ying; Christensen, Dane T

    Accurately modeling stationary battery storage behavior is crucial to understand and predict its limitations in demand-side management scenarios. In this paper, a lithium-ion battery model was derived to estimate lifetime and state-of-charge for building-integrated use cases. The proposed battery model aims to balance speed and accuracy when modeling battery behavior for real-time predictive control and optimization. In order to achieve these goals, a mixed modeling approach was taken, which incorporates regression fits to experimental data and an equivalent circuit to model battery behavior. A comparison of the proposed battery model output to actual data from the manufacturer validates the modeling approach taken in the paper. Additionally, a dynamic test case demonstrates the effects of using regression models to represent internal resistance and capacity fading.

  10. Modeling Stationary Lithium-Ion Batteries for Optimization and Predictive Control: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raszmann, Emma; Baker, Kyri; Shi, Ying

    Accurately modeling stationary battery storage behavior is crucial to understand and predict its limitations in demand-side management scenarios. In this paper, a lithium-ion battery model was derived to estimate lifetime and state-of-charge for building-integrated use cases. The proposed battery model aims to balance speed and accuracy when modeling battery behavior for real-time predictive control and optimization. In order to achieve these goals, a mixed modeling approach was taken, which incorporates regression fits to experimental data and an equivalent circuit to model battery behavior. A comparison of the proposed battery model output to actual data from the manufacturer validates the modeling approach taken in the paper. Additionally, a dynamic test case demonstrates the effects of using regression models to represent internal resistance and capacity fading.

  11. A design of experiments approach to validation sampling for logistic regression modeling with error-prone medical records.

    PubMed

    Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay

    2016-04-01

    Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time-consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
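
    The flavor of Fisher-information-based D-optimal selection can be illustrated with a greedy row-selection sketch. This is not the authors' DSCVR algorithm: it assumes the logistic Fisher information is evaluated at an initial guess of beta = 0, where every case contributes weight 0.25, so D-optimality reduces to maximizing det of the selected design's cross-product matrix.

    ```python
    import numpy as np

    def greedy_d_optimal(X, n_select, ridge=1e-6):
        """Greedily pick rows of X maximizing det(X_S^T X_S + ridge * I).

        At beta = 0 the logistic Fisher information is 0.25 * X_S^T X_S, so
        this greedy determinant maximization is D-optimal at that design point.
        """
        d = X.shape[1]
        M = ridge * np.eye(d)
        chosen, remaining = [], list(range(len(X)))
        for _ in range(n_select):
            Minv = np.linalg.inv(M)
            # Rank-one identity: det(M + x x^T) = det(M) * (1 + x^T M^{-1} x),
            # so the best next case is the one with maximum leverage x^T M^{-1} x.
            best = max(remaining, key=lambda i: X[i] @ Minv @ X[i])
            M = M + np.outer(X[best], X[best])
            chosen.append(best)
            remaining.remove(best)
        return chosen
    ```

    The greedy rule always prefers high-leverage, mutually complementary predictor rows, which is why an information-based validation sample beats a random one, especially at low event rates where random sampling wastes chart reviews on uninformative cases.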

  12. Evolutionary computing for the design search and optimization of space vehicle power subsystems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Klimeck, Gerhard; Hanks, David; Hua, Hook

    2004-01-01

    Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment. Our preliminary results demonstrate that this approach has the potential to improve the space system trade study process by allowing engineers to statistically weight subsystem goals of mass, cost and performance, then automatically size power elements based on anticipated performance of the subsystem rather than on worst-case estimates.

  13. Forecasting peaks of seasonal influenza epidemics.

    PubMed

    Nsoesie, Elaine; Mararthe, Madhav; Brownstein, John

    2013-06-21

    We present a framework for near real-time forecast of influenza epidemics using a simulation optimization approach. The method combines an individual-based model and a simple root finding optimization method for parameter estimation and forecasting. In this study, retrospective forecasts were generated for seasonal influenza epidemics using web-based estimates of influenza activity from Google Flu Trends for 2004-2005, 2007-2008 and 2012-2013 flu seasons. In some cases, the peak could be forecasted 5-6 weeks ahead. This study adds to existing resources for influenza forecasting and the proposed method can be used in conjunction with other approaches in an ensemble framework.

  14. Better Redd than Dead: Optimizing Reservoir Operations for Wild Fish Survival During Drought

    NASA Astrophysics Data System (ADS)

    Adams, L. E.; Lund, J. R.; Quiñones, R.

    2014-12-01

    Extreme droughts are difficult to predict and may incur large economic and ecological costs. Dam operations in drought usually consider minimizing economic costs. However, dam operations also offer an opportunity to increase wild fish survival under difficult conditions. Here, we develop a probabilistic optimization approach to developing reservoir release schedules to maximize fish survival in regulated rivers. A case study applies the approach to wild Fall-run Chinook Salmon below Folsom Dam on California's American River. Our results indicate that releasing more water early in the drought will, on average, save more wild fish over the long term.

  15. An optimized solution of multi-criteria evaluation analysis of landslide susceptibility using fuzzy sets and Kalman filter

    NASA Astrophysics Data System (ADS)

    Gorsevski, Pece V.; Jankowski, Piotr

    2010-08-01

    The Kalman recursive algorithm has been very widely used for integrating navigation sensor data to achieve optimal system performances. This paper explores the use of the Kalman filter to extend the aggregation of spatial multi-criteria evaluation (MCE) and to find optimal solutions with respect to a decision strategy space where a possible decision rule falls. The approach was tested in a case study in the Clearwater National Forest in central Idaho, using existing landslide datasets from roaded and roadless areas and terrain attributes. In this approach, fuzzy membership functions were used to standardize terrain attributes and develop criteria, while the aggregation of the criteria was achieved by the use of a Kalman filter. The approach presented here offers advantages over the classical MCE theory because the final solution includes both the aggregated solution and the areas of uncertainty expressed in terms of standard deviation. A comparison of this methodology with similar approaches suggested that this approach is promising for predicting landslide susceptibility and further application as a spatial decision support system.
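
    The paper's key point, that Kalman-filter aggregation yields an uncertainty estimate alongside the fused suitability score, can be illustrated with a scalar sequential update for a single map cell. This is a hedged sketch: the criterion values and their variances are hypothetical inputs, standing in for the paper's fuzzy-membership-standardized terrain attributes.

    ```python
    def kalman_aggregate(criteria, variances):
        """Sequentially fuse standardized criterion scores for one map cell.

        Each criterion is treated as a noisy measurement of the cell's
        underlying suitability; the recursion returns both the fused estimate
        and its remaining variance, which is the extra information (an
        uncertainty surface) that plain weighted-average MCE aggregation lacks.
        """
        est, var = criteria[0], variances[0]      # initialize with first criterion
        for z, r in zip(criteria[1:], variances[1:]):
            k = var / (var + r)                   # Kalman gain
            est = est + k * (z - est)             # measurement update
            var = (1.0 - k) * var                 # posterior variance shrinks
        return est, var
    ```

    With equal variances the recursion reproduces the simple mean, but criteria with larger variances are automatically down-weighted, and the returned variance can be mapped as the standard-deviation surface the paper reports.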

  16. Optimization of Geothermal Well Placement under Geological Uncertainty

    NASA Astrophysics Data System (ADS)

    Schulte, Daniel O.; Arnold, Dan; Demyanov, Vasily; Sass, Ingo; Geiger, Sebastian

    2017-04-01

    Well placement optimization is critical to commercial success of geothermal projects. However, uncertainties of geological parameters prohibit optimization based on a single scenario of the subsurface, particularly when few expensive wells are to be drilled. The optimization of borehole locations is usually based on numerical reservoir models to predict reservoir performance and entails the choice of objectives to optimize (total enthalpy, minimum enthalpy rate, production temperature) and the development options to adjust (well location, pump rate, difference in production and injection temperature). Optimization traditionally requires trying different development options on a single geological realization yet there are many possible different interpretations possible. Therefore, we aim to optimize across a range of representative geological models to account for geological uncertainty in geothermal optimization. We present an approach that uses a response surface methodology based on a large number of geological realizations selected by experimental design to optimize the placement of geothermal wells in a realistic field example. A large number of geological scenarios and design options were simulated and the response surfaces were constructed using polynomial proxy models, which consider both geological uncertainties and design parameters. The polynomial proxies were validated against additional simulation runs and shown to provide an adequate representation of the model response for the cases tested. The resulting proxy models allow for the identification of the optimal borehole locations given the mean response of the geological scenarios from the proxy (i.e. maximizing or minimizing the mean response). The approach is demonstrated on the realistic Watt field example by optimizing the borehole locations to maximize the mean heat extraction from the reservoir under geological uncertainty. 
The training simulations are based on a comprehensive semi-synthetic data set of a hierarchical benchmark case study for a hydrocarbon reservoir, which specifically considers the interpretational uncertainty in the modeling work flow. The optimal choice of boreholes prolongs the time to cold water breakthrough and allows for higher pump rates and increased water production temperatures.
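The proxy workflow described above (simulate scenario/design combinations, fit a polynomial response surface in both design and geological variables, then optimize the mean response over the geological scenarios) can be sketched in a few lines. Everything below is an illustrative assumption, not the Watt field setup: the "simulator" is a made-up quadratic, the uncertainty is a single permeability-like multiplier, and the proxy basis is second order.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the reservoir simulator: heat extraction as a
# function of well location x (design) and a geological multiplier g
# (uncertainty). All numbers are invented for illustration.
x = rng.uniform(0.0, 10.0, 200)        # sampled well locations
g = rng.normal(1.0, 0.2, 200)          # sampled geological scenarios
y = -(x - 6.0) ** 2 * g + rng.normal(0.0, 0.5, 200)

# Second-order polynomial proxy: y ~ c0 + c1*x + c2*g + c3*x^2 + c4*x*g
A = np.column_stack([np.ones_like(x), x, g, x ** 2, x * g])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Optimize the mean proxy response over the geological scenarios.
xs = np.linspace(0.0, 10.0, 101)
gs = rng.normal(1.0, 0.2, 500)
mean_resp = np.array([
    np.mean(coef[0] + coef[1] * xi + coef[2] * gs
            + coef[3] * xi ** 2 + coef[4] * xi * gs)
    for xi in xs
])
best_x = xs[np.argmax(mean_resp)]
```

The proxy is cheap to evaluate, so the mean over hundreds of geological scenarios costs almost nothing compared with rerunning the simulator.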

  17. SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Kenny S K; Lee, Louis K Y; Xing, L

    2015-06-15

Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least-squares method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA Geforce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. The root-mean-square error decreased monotonically and converged after 7 cycles of optimization. The computation took only about 10 seconds and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented in the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
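The analytical scheme described in this abstract (solve for the basis weights by least squares, keep only the positive-weight beamlets, repeat for up to 7 levels) can be sketched as follows. The beamlet matrix and target dose below are random stand-ins, not clinical data, and the problem size is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 60 beamlet dose distributions (basis vectors) over a
# 400-voxel target, and a target dose reachable by nonnegative weights.
n_vox, n_beam = 400, 60
B = rng.random((n_vox, n_beam))                     # pre-calculated beamlets
w_true = np.maximum(rng.normal(0.5, 0.5, n_beam), 0.0)
d = B @ w_true                                      # target dose vector

# Iterative analytical optimization: least-squares weights, then keep only
# the positive-weight beamlets for the next level (7 levels, as described).
active = np.arange(n_beam)
for _ in range(7):
    w, *_ = np.linalg.lstsq(B[:, active], d, rcond=None)
    pos = w > 0
    if pos.all():
        break
    active, w = active[pos], w[pos]

estimate = B[:, active] @ w
corr = np.corrcoef(d, estimate)[0, 1]
```

With exact synthetic data the correlation is essentially 1; on real dose distributions it would be limited by how well the beamlet basis spans the prescription.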

  18. Global Search Capabilities of Indirect Methods for Impulsive Transfers

    NASA Astrophysics Data System (ADS)

    Shen, Hong-Xin; Casalino, Lorenzo; Luo, Ya-Zhong

    2015-09-01

An optimization method which combines an indirect method with a homotopic approach is proposed and applied to impulsive trajectories. Minimum-fuel, multiple-impulse solutions, with either fixed or open time, are obtained. The homotopic approach at hand is relatively straightforward to implement and does not require an initial guess of the adjoints, unlike previous adjoint-estimation methods. A multiple-revolution Lambert solver is used to find multiple starting solutions for the homotopic procedure; this approach guarantees obtaining multiple local solutions without relying on the user's intuition, thus efficiently exploring the solution space to find the global optimum. The indirect/homotopic approach proves to be quite effective and efficient in finding optimal solutions, and outperforms the joint use of evolutionary algorithms and deterministic methods in the test cases.

  19. Automated IMRT planning with regional optimization using planning scripts

    PubMed Central

    Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff Z.

    2013-01-01

Intensity‐modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed to generate IMRT plans efficiently for simple cases (e.g., localized prostate, whole breast). However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time‐consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose‐limiting rings, adjust the beam geometry, select inverse planning objectives and their associated weights, and add further IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually in a repeated trial‐and‐error approach during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustment processes automatically in a planning script is designed, implemented, and validated. In particular, regional optimization is implemented iteratively to reduce hot or cold spots during the optimization process: it begins with the definition and automatic segmentation of hot and cold spots, introduces new objectives and their relative weights into the inverse planning, and repeats until termination criteria are met. The method has been applied to three clinical sites: prostate with pelvic nodes, head and neck, and anal canal cancers, and has been shown to reduce IMRT planning time significantly for clinical applications with improved plan quality. The IMRT planning scripts have been used for more than 500 clinical cases. PACS numbers: 87.55.D, 87.55.de PMID:23318393

  20. Simplex-based optimization of numerical and categorical inputs in early bioprocess development: Case studies in HT chromatography.

    PubMed

    Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy

    2017-08-01

Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid-compatible Simplex variant, which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and deployed in three case studies whose search spaces comprise both categorical and numerical inputs, a situation intractable by traditional Simplex methods. The first study employs in silico data and lays out the dummy-variable methodology. The latter two employ experimental data from chromatography-based studies performed with the filter-plate and miniature-column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody, whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the stranding of the Simplex method at local optima, caused by the arbitrary handling of the categorical inputs, and allowed for the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method, combined with dummy variables, was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. The latter approach failed, however, to capture trends and identify optima, and led to poor operating conditions. It is suggested that the Simplex variant is suited to development activities involving numerical and categorical inputs in early bioprocess development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
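As a loose illustration of why categorical inputs need special handling in simplex-type searches, the sketch below simply restarts a Nelder-Mead simplex once per categorical level and compares the optima. This is not the authors' grid-compatible Simplex variant with dummy variables; the "yield" objective, the pH optimum, and the resin effects are all invented:

```python
import numpy as np
from scipy.optimize import minimize

# Invented categorical effect of three hypothetical resin types.
resin_bonus = {"A": 0.0, "B": 1.5, "C": 0.7}

def negative_yield(ph, resin):
    # Made-up yield surface: quadratic in pH, shifted by the resin choice.
    return -(resin_bonus[resin] - (ph - 6.5) ** 2)

# Restart the simplex search once per categorical level, then take the
# best (fun, ph, resin) triple across levels.
best = None
for resin in resin_bonus:
    res = minimize(lambda z: negative_yield(z[0], resin),
                   x0=[5.0], method="Nelder-Mead")
    if best is None or res.fun < best[0]:
        best = (res.fun, float(res.x[0]), resin)
```

Handling the categorical input by enumeration avoids stranding the simplex at a level-dependent local optimum, which is the failure mode the abstract attributes to arbitrary categorical encodings.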

  1. Co-state initialization for the minimum-time low-thrust trajectory optimization

    NASA Astrophysics Data System (ADS)

    Taheri, Ehsan; Li, Nan I.; Kolmanovsky, Ilya

    2017-05-01

This paper presents an approach for co-state initialization, a critical step in solving minimum-time low-thrust trajectory optimization problems using indirect optimal control numerical methods. Indirect methods used in determining optimal space trajectories typically result in two-point boundary-value problems, which are solved by single- or multiple-shooting numerical methods. Accurate initialization of the co-state variables facilitates the numerical convergence of iterative boundary-value problem solvers. In this paper, we propose a method which exploits the trajectory generated by the so-called pseudo-equinoctial and three-dimensional finite Fourier series shape-based methods to estimate the initial values of the co-states. The performance of the approach for two interplanetary rendezvous missions, from Earth to Mars and from Earth to the asteroid Dionysus, is compared against three other approaches which, respectively, exploit random initialization of the co-states, the adjoint-control transformation, and a standard genetic algorithm. The results indicate that with the proposed approach the percentage of converged cases is higher for trajectories with a higher number of revolutions, while the computation time is lower. These features are advantageous for broad trajectory searches in the preliminary phase of mission design.

  2. A new approach to impulsive rendezvous near circular orbit

    NASA Astrophysics Data System (ADS)

    Carter, Thomas; Humi, Mayer

    2012-04-01

A new approach is presented for the problem of planar optimal impulsive rendezvous of a spacecraft in an inertial frame near a circular orbit in a Newtonian gravitational field. The total characteristic velocity to be minimized is replaced by a related characteristic-value function, and this related optimization problem can be solved in closed form. The solution of this problem is shown to approach the solution of the original problem in the limit as the boundary conditions approach those of a circular orbit. Using a form of primer-vector theory, the problem is formulated in a way that leads to relatively easy calculation of the optimal velocity increments. A certain vector that can easily be calculated from the boundary conditions determines the number of impulses required for solution of the optimization problem and is also useful in the computation of these velocity increments. Necessary and sufficient conditions for boundary conditions to require exactly three nonsingular non-degenerate impulses for solution of the related optimal rendezvous problem, and a means of calculating these velocity increments, are presented. A simple example of a three-impulse rendezvous problem is solved and the resulting trajectory is depicted. Optimal non-degenerate nonsingular two-impulse rendezvous for the related problem is found to consist of four categories of solutions, depending on the four ways the primer vector locus intersects the unit circle. Necessary and sufficient conditions for each category of solutions are presented. The regions of boundary values that admit each category of solutions of the related problem are found, and in each case a closed-form solution for the optimal velocity increments is presented. Similar results are presented for the simpler optimal rendezvous that requires only one impulse. For brevity, degenerate and singular solutions are not discussed in detail but are left for a following study.
Although this approach is thought to provide simpler computations than existing methods, its main contribution may be in establishing a new approach to the more general problem.

  3. Treatment of systematic errors in land data assimilation systems

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Yilmaz, M.

    2012-12-01

Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., so-called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented, with an emphasis on the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
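The rescaling options discussed in this abstract can be made concrete on synthetic data. The sketch below shows plain variance matching and a triple-collocation-based gain estimate (with a third product whose errors are independent, the error terms cancel in the cross-covariances). All series are artificial, not real soil-moisture or radiance data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic truth plus a model product and a biased, noisy observation.
truth = rng.normal(0.0, 1.0, 5000)
model = 0.8 * truth + rng.normal(0.0, 0.4, 5000)
obs = 2.0 + 3.0 * truth + rng.normal(0.0, 2.0, 5000)

# Variance matching: force obs to the model's mean and standard deviation.
obs_vm = (obs - obs.mean()) * model.std() / obs.std() + model.mean()

# Triple-collocation rescaling: a third independent product lets the
# obs -> model gain be estimated from cross-covariances, in which the
# (mutually independent) error terms cancel in expectation.
aux = 1.3 * truth + rng.normal(0.0, 0.8, 5000)
gain_tc = np.cov(model, aux)[0, 1] / np.cov(obs, aux)[0, 1]
obs_tc = model.mean() + gain_tc * (obs - obs.mean())
```

Here the triple-collocation gain converges to 0.8/3, matching the observation's truth component to the model's sensitivity, whereas variance matching also folds the noise variances into the gain and is therefore only an approximation.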

  4. TH-EF-BRB-05: 4pi Non-Coplanar IMRT Beam Angle Selection by Convex Optimization with Group Sparsity Penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connor, D; Nguyen, D; Voronenko, Y

Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning, but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state-of-the-art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large-scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was previously shown to be superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime compared with a state-of-the-art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected.
NIH R43CA183390, NIH R01CA188300, Varian Medical Systems; Part of this research took place while D. O’Connor was a summer intern at RefleXion Medical.
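The group-sparsity formulation in this abstract can be illustrated on a toy problem: FISTA on a least-squares data term plus a group-lasso penalty, where each group is one candidate beam's beamlets, so a whole group shrinking to exactly zero de-selects that beam. The matrix sizes and data below are arbitrary stand-ins (and, unlike a real fluence map, no nonnegativity constraint is imposed):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy problem: 8 candidate beams x 5 beamlets each, 120 "voxels".
n_vox, n_beams, n_per = 120, 8, 5
A = rng.normal(size=(n_vox, n_beams * n_per))
groups = [np.arange(b * n_per, (b + 1) * n_per) for b in range(n_beams)]

# A target reachable using only 3 active beams.
x_true = np.zeros(n_beams * n_per)
for b in (1, 4, 6):
    x_true[groups[b]] = rng.random(n_per)
d = A @ x_true

lam = 5.0
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient

def prox_group(v, t):
    """Group soft-thresholding: whole blocks hit exactly zero,
    which is what de-selects a candidate beam."""
    out = v.copy()
    for g in groups:
        nrm = np.linalg.norm(v[g])
        out[g] = 0.0 if nrm <= t else (1.0 - t / nrm) * v[g]
    return out

# FISTA on 0.5*||Ax - d||^2 + lam * sum_g ||x_g||_2
x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
for _ in range(500):
    x_new = prox_group(y - A.T @ (A @ y - d) / L, lam / L)
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x_new + (t - 1.0) / t_new * (x_new - x)
    x, t = x_new, t_new

active = [b for b, g in enumerate(groups) if np.linalg.norm(x[g]) > 1e-6]
rel_err = np.linalg.norm(A @ x - d) / np.linalg.norm(d)
```

The penalty weight trades sparsity (fewer beams) against data fit, mirroring the beam-count/quality trade-off in the abstract.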

  5. Full space device optimization for solar cells.

    PubMed

    Baloch, Ahmer A B; Aly, Shahzada P; Hossain, Mohammad I; El-Mellouhi, Fedwa; Tabet, Nouar; Alharbi, Fahhad H

    2017-09-20

Advances in computational materials science have paved the way to design efficient solar cells by identifying the optimal properties of the device layers. Conventionally, device optimization has been governed by single or double descriptors for an individual layer, mostly the absorbing layer. However, the performance of the device depends collectively on all the material properties and the geometry of each layer in the cell. To address this issue of multi-property optimization and to avoid the paradigm of recurring materials in the solar cell field, a full-space, material-independent optimization approach is developed and presented in this paper. The method is employed to obtain an optimized material data set for maximum efficiency and for targeted functionality for each layer. To ensure the robustness of the method, two cases are studied: perovskite solar cell device optimization and cadmium-free CIGS solar cells. The implementation determines the desirable optoelectronic properties of transport mediums and contacts that can maximize the efficiency in both cases. The resulting data sets of material properties can be matched with those in materials databases or by further microscopic material design. Moreover, the presented multi-property optimization framework can be extended to design any solid-state device.

  6. Metaheuristic optimization approaches to predict shear-wave velocity from conventional well logs in sandstone and carbonate case studies

    NASA Astrophysics Data System (ADS)

    Emami Niri, Mohammad; Amiri Kolajoobi, Rasool; Khodaiy Arbat, Mohammad; Shahbazi Raz, Mahdi

    2018-06-01

Seismic wave velocities, along with petrophysical data, provide valuable information during the exploration and development stages of oil and gas fields. The compressional-wave velocity (VP) is acquired using conventional acoustic logging tools in many drilled wells. But the shear-wave velocity (VS) is recorded using advanced logging tools only in a limited number of wells, mainly because of the high operational costs. In addition, laboratory measurements of seismic velocities on core samples are expensive and time consuming. So, alternative methods are often used to estimate VS. Heretofore, several empirical correlations that predict VS from well logging measurements and petrophysical data such as VP, porosity and density have been proposed. However, these empirical relations can only be used in limited cases. The use of intelligent systems and optimization algorithms is an inexpensive, fast and efficient approach for predicting VS. In this study, in addition to the widely used Greenberg–Castagna empirical method, we implement three relatively recently developed metaheuristic algorithms to construct linear and nonlinear models for predicting VS: teaching–learning based optimization, imperialist competitive and artificial bee colony algorithms. We demonstrate the applicability and performance of these algorithms for predicting VS using conventional well logs in two field data examples, a sandstone formation from an offshore oil field and a carbonate formation from an onshore oil field. We compared the VS estimated by each of the employed metaheuristic approaches with the observed VS and also with that predicted by the Greenberg–Castagna relations. The results indicate that, for both the sandstone and carbonate case studies, all three implemented metaheuristic algorithms are more efficient and reliable than the empirical correlation for predicting VS.
The results also demonstrate that, in both the sandstone and carbonate case studies, the performance of the artificial bee colony algorithm in VS prediction is slightly better than that of the two other employed approaches.

  7. Modern meta-heuristics based on nonlinear physics processes: A review of models and design procedures

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.

    2016-10-01

Meta-heuristic algorithms are problem-solving methods which try to find good-enough solutions to very hard optimization problems, at a reasonable computation time, where classical approaches fail or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as the evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on modeling nonlinear physics processes have been proposed and applied with success. Nonlinear physics processes, modeled as optimization algorithms, are able to produce completely new search procedures, with extremely effective exploration capabilities in many cases, which are able to outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from specific modeling of a real phenomenon, and also their novelty in terms of comparison with alternative existing algorithms for optimization. We first review important concepts on optimization problems, search spaces and problem difficulty. Then, the usefulness of heuristic and meta-heuristic approaches for tackling hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to reviewing in detail the most important meta-heuristics based on them. A discussion on the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation completes the review of these techniques.
We also describe some of the most important application areas of meta-heuristics, in a broad sense, and freely accessible software frameworks that ease the implementation of these algorithms.
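A physics-inspired meta-heuristic of the kind surveyed here can be sketched with plain simulated annealing, the canonical statistical-physics example, on the Rastrigin test function. The temperature schedule, step size, and iteration budget below are arbitrary choices for illustration:

```python
import math
import random

random.seed(42)

def rastrigin(x):
    """Standard multimodal test function; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def simulated_annealing(f, x0, t0=5.0, cooling=0.999, steps=20000):
    x, fx = list(x0), f(x0)
    best, fbest = list(x0), fx
    t = t0
    for _ in range(steps):
        cand = [xi + random.gauss(0.0, 0.3) for xi in x]
        fc = f(cand)
        # Metropolis acceptance: always take improvements; occasionally take
        # uphill moves with probability exp(-delta/T) to escape local minima.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = list(x), fx
        t *= cooling             # geometric cooling schedule
    return best, fbest

best, fbest = simulated_annealing(rastrigin, [4.0, -4.0])
```

The acceptance rule is exactly the thermodynamic analogy the review discusses: at high temperature the search explores freely; as the system "cools", it settles into a low-energy basin.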

  8. Image segmentation using local shape and gray-level appearance models

    NASA Astrophysics Data System (ADS)

    Seghers, Dieter; Loeckx, Dirk; Maes, Frederik; Suetens, Paul

    2006-03-01

A new generic model-based segmentation scheme is presented, which can be trained from examples, akin to the Active Shape Model (ASM) approach, in order to acquire knowledge about the shape to be segmented and about the gray-level appearance of the object in the image. In the ASM approach, the intensity and shape models are typically applied alternately during optimization: first, an optimal target location is selected for each landmark separately, based on local gray-level appearance information only, and the shape model is fitted to these locations subsequently; the ASM may therefore be misled by wrongly selected landmark locations. Instead, the proposed approach optimizes shape and intensity characteristics simultaneously. Local gray-level appearance information at the landmark points, extracted from feature images, is used to automatically detect a number of plausible candidate locations for each landmark. The shape information is described by multiple landmark-specific statistical models that capture local dependencies between adjacent landmarks on the shape. The shape and intensity models are combined in a single cost function that is optimized non-iteratively using dynamic programming, which allows finding the optimal landmark positions using combined shape and intensity information, without the need for initialization.
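The non-iterative combination of unary (appearance) and pairwise (shape) costs via dynamic programming can be sketched on a 1-D chain of landmarks. The candidate positions, cost values, and preferred inter-landmark gap below are invented for illustration; real feature-image costs would replace the random unary terms:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: 6 landmarks, 4 candidate positions each.
n_lm, n_cand, pref_gap = 6, 4, 10.0
cand = np.sort(rng.uniform(0, 100, (n_lm, n_cand)), axis=1)  # positions
unary = rng.uniform(0, 1, (n_lm, n_cand))                    # appearance cost

def pair_cost(p, q):
    # Shape model: adjacent landmarks should sit ~pref_gap apart.
    return ((q - p) - pref_gap) ** 2 / 100.0

def total(cfg):
    t = sum(unary[i, c] for i, c in enumerate(cfg))
    return t + sum(pair_cost(cand[i, cfg[i]], cand[i + 1, cfg[i + 1]])
                   for i in range(n_lm - 1))

# Viterbi-style dynamic programming over the chain of landmarks.
cost = unary[0].copy()
back = np.zeros((n_lm, n_cand), dtype=int)
for i in range(1, n_lm):
    new = np.empty(n_cand)
    for j in range(n_cand):
        trans = [cost[k] + pair_cost(cand[i - 1, k], cand[i, j])
                 for k in range(n_cand)]
        back[i, j] = int(np.argmin(trans))
        new[j] = trans[back[i, j]] + unary[i, j]
    cost = new

# Backtrack the jointly optimal landmark configuration.
sel = [int(np.argmin(cost))]
for i in range(n_lm - 1, 0, -1):
    sel.append(int(back[i, sel[-1]]))
sel = sel[::-1]
best_total = float(cost.min())
```

Because the shape dependencies form a chain, the DP finds the global optimum over all candidate combinations in O(landmarks x candidates^2), with no initialization.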

  9. [To delay may be wise].

    PubMed

    Melfa, G; Bernardi, L; Tettamanti, M; Mangano, S

    2004-01-01

We report a case of acute renal failure with rapid evolution, in which the coexistence of parenchymal nephropathy and a renal mass prompted an unusual diagnostic and therapeutic approach, aimed at optimizing the interventional nephrology procedures with the use of various imaging techniques. A multidisciplinary therapeutic approach followed, employing dialysis, steroid therapy and surgical treatment.

  10. Time and frequency constrained sonar signal design for optimal detection of elastic objects.

    PubMed

    Hamschin, Brandon; Loughlin, Patrick J

    2013-04-01

In this paper, the task of model-based transmit signal design for optimizing detection is considered. Building on past work that designs the spectral magnitude for optimizing detection, two methods for synthesizing minimum-duration signals with this spectral magnitude are developed. The methods are applied to the design of signals that are optimal for detecting elastic objects in the presence of additive noise and self-noise. Elastic objects are modeled as linear time-invariant systems with known impulse responses, while additive noise (e.g., ocean noise or receiver noise) and acoustic self-noise (e.g., reverberation or clutter) are modeled as stationary Gaussian random processes with known power spectral densities. The first approach finds the waveform that preserves the optimal spectral magnitude while achieving the minimum temporal duration. The second approach yields a finite-length time-domain sequence by maximizing temporal energy concentration, subject to the constraint that the spectral magnitude is close (in a least-squares sense) to the optimal spectral magnitude. The two approaches are then connected analytically, showing that the former is a limiting case of the latter. Simulation examples that illustrate the theory are accompanied by discussions that address practical applicability and how one might satisfy the need for target and environmental models in the real world.

  11. Multiobjective optimization of urban water resources: Moving toward more practical solutions

    NASA Astrophysics Data System (ADS)

    Mortazavi, Mohammad; Kuczera, George; Cui, Lijie

    2012-03-01

    The issue of drought security is of paramount importance for cities located in regions subject to severe prolonged droughts. The prospect of "running out of water" for an extended period would threaten the very existence of the city. Managing drought security for an urban water supply is a complex task involving trade-offs between conflicting objectives. In this paper a multiobjective optimization approach for urban water resource planning and operation is developed to overcome practically significant shortcomings identified in previous work. A case study based on the headworks system for Sydney (Australia) demonstrates the approach and highlights the potentially serious shortcomings of Pareto optimal solutions conditioned on short climate records, incomplete decision spaces, and constraints to which system response is sensitive. Where high levels of drought security are required, optimal solutions conditioned on short climate records are flawed. Our approach addresses drought security explicitly by identifying approximate optimal solutions in which the system does not "run dry" in severe droughts with expected return periods up to a nominated (typically large) value. In addition, it is shown that failure to optimize the full mix of interacting operational and infrastructure decisions and to explore the trade-offs associated with sensitive constraints can lead to significantly more costly solutions.

  12. An optimization framework for measuring spatial access over healthcare networks.

    PubMed

    Li, Zihao; Serban, Nicoleta; Swann, Julie L

    2015-07-17

Measurement of healthcare spatial access over a network involves accounting for demand, supply, and network structure. Popular approaches are based on floating catchment areas; however, these methods can overestimate demand over the network and fail to capture cascading effects across the system. Optimization is presented as a framework to measure spatial access. Questions related to when and why optimization should be used are addressed. The accuracy of the optimization models compared to the two-step floating catchment area method and its variations is analytically demonstrated, and a case study of specialty care for Cystic Fibrosis over the continental United States is used to compare these approaches. The optimization models capture a patient's experience rather than their opportunities and avoid overestimating patient demand. They can also capture system-wide effects of changes, such as congestion. Furthermore, the optimization models provide more elements of access than traditional catchment methods. Optimization models can incorporate user choice and other variations, and they can be useful for targeting interventions to improve access. They can be easily adapted to measure access for different types of patients, over different provider types, or with capacity constraints in the network. Moreover, optimization models allow differences in access between rural and urban areas to be captured.
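An optimization-based access measure of the kind described here can be sketched as a small transportation LP: assign each center's demand to capacitated providers while minimizing total travel, then read off the per-center mean travel as an access measure. The toy network below (3 population centers, 2 clinics) is invented and is far simpler than the paper's national Cystic Fibrosis case study:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network: 3 population centers, 2 clinics.
demand = np.array([40.0, 25.0, 35.0])        # patients per center
capacity = np.array([60.0, 50.0])            # patients each clinic can serve
dist = np.array([[10.0, 40.0],               # travel distance center -> clinic
                 [30.0, 15.0],
                 [50.0, 20.0]])

n_c, n_f = dist.shape
c = dist.ravel()                             # minimize total patient travel

# Each center's demand must be fully assigned (equalities).
A_eq = np.zeros((n_c, n_c * n_f))
for i in range(n_c):
    A_eq[i, i * n_f:(i + 1) * n_f] = 1.0

# Clinic capacity constraints (inequalities).
A_ub = np.zeros((n_f, n_c * n_f))
for j in range(n_f):
    A_ub[j, j::n_f] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand,
              bounds=(0, None), method="highs")
flow = res.x.reshape(n_c, n_f)
access_dist = (flow * dist).sum(axis=1) / demand   # mean travel per patient
```

Unlike a catchment count, the assignment respects clinic capacities, so congestion at a nearby clinic spills demand to a farther one, which is exactly the cascading effect the abstract says catchment methods miss.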

  13. A robust optimization model for distribution and evacuation in the disaster response phase

    NASA Astrophysics Data System (ADS)

    Fereiduni, Meysam; Shahanaghi, Kamran

    2017-03-01

Natural disasters, such as earthquakes, affect thousands of people and can cause enormous financial loss. Therefore, an efficient response immediately following a natural disaster is vital to minimize these negative effects. This research paper presents a network design model for humanitarian logistics which will assist in location and allocation decisions for multiple disaster periods. First, a single-objective optimization model is presented that addresses the response phase of disaster management. This model will help decision makers to make the most optimal choices regarding location, allocation, and evacuation simultaneously. The proposed model also considers emergency tents as temporary medical centers. To cope with the uncertainty and dynamic nature of disasters and their consequences, our multi-period robust model considers the values of critical input data over a set of various scenarios. Second, because of probable disruption of the distribution infrastructure (such as bridges), Monte Carlo simulation is used for generating the related random numbers and different scenarios, and the p-robust approach is utilized to formulate the new network. The p-robust approach can predict possible damage along pathways and among relief bases. We present a case study of our robust optimization approach for a plausible earthquake in region 1 of Tehran. Sensitivity analysis experiments are proposed to explore the effects of various problem parameters. These experiments will give managerial insights and can guide decision makers under a variety of conditions. Then, the performances of the "robust optimization" approach and the "p-robust optimization" approach are evaluated. Intriguing results and practical insights are demonstrated by our analysis of this comparison.

  14. TU-EF-304-07: Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; UT Southwestern Medical Center, Dallas, TX; Tian, Z

    2015-06-15

Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to its capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC in IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot's dose calculation. This is not optimal, because computations are wasted on spots that turn out to have very small weights after solving the optimization problem. GPU memory-writing conflicts occurring at small beam sizes also reduce computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, the particles were sampled from all spots together with a Metropolis algorithm, such that the particle number per spot is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory-writing conflict problem. Results: We have validated the proposed MC-based optimization scheme in one prostate case. The total computation time of our method was ∼5–6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow is developed.
The high efficiency makes it attractive for clinical use.
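
    The budget-shifting idea behind the scheme can be sketched in a few lines. This is an illustrative stand-in, not the paper's Metropolis sampler: a fixed particle budget is split across spots in proportion to the latest optimized intensities, so near-zero-weight spots stop consuming simulation time.

```python
def allocate_particles(spot_weights, budget):
    """Split a fixed particle budget across spots in proportion to the
    latest optimized intensities (largest-remainder rounding), so that
    near-zero-weight spots receive few or no particles."""
    total = sum(spot_weights)
    if total == 0:
        return [0] * len(spot_weights)
    raw = [w / total * budget for w in spot_weights]
    counts = [int(r) for r in raw]
    # hand leftover particles to the largest fractional parts
    leftover = budget - sum(counts)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - counts[i], reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return counts

# most of the budget flows to the high-intensity spots
print(allocate_particles([5.0, 0.1, 3.0, 0.0], budget=1000))  # → [617, 12, 371, 0]
```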

  15. Statistical physics of hard combinatorial optimization: Vertex cover problem

    NASA Astrophysics Data System (ADS)

    Zhao, Jin-Hua; Zhou, Hai-Jun

    2014-07-01

    Typical-case computation complexity is a research topic at the boundary of computer science, applied mathematics, and statistical physics. In the last twenty years, the replica-symmetry-breaking mean field theory of spin glasses and the associated message-passing algorithms have greatly deepened our understanding of typical-case computation complexity. In this paper, we use the vertex cover problem, a basic nondeterministic-polynomial (NP)-complete combinatorial optimization problem of wide application, as an example to introduce the statistical physics methods and algorithms. We do not go into the technical details but emphasize mainly the intuitive physical meanings of the message-passing equations. An unfamiliar reader should be able to understand, to a large extent, the physics behind the mean field approaches and to adapt the mean field methods to other optimization problems.
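
    For readers unfamiliar with the problem itself, a minimal greedy heuristic (not one of the message-passing algorithms surveyed here) shows what is being optimized: choose vertices so that every edge has at least one chosen endpoint.

```python
from collections import defaultdict

def greedy_vertex_cover(edges):
    """Build a (not necessarily minimum) vertex cover by repeatedly
    picking the vertex of highest remaining degree.  Illustrates the
    combinatorial problem the statistical-physics methods target."""
    remaining = set(edges)
    cover = set()
    while remaining:
        deg = defaultdict(int)
        for u, v in remaining:
            deg[u] += 1
            deg[v] += 1
        best = max(deg, key=deg.get)
        cover.add(best)
        remaining = {e for e in remaining if best not in e}
    return cover

# a 5-cycle with a pendant edge; the minimum cover has size 3
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (2, 5)]
print(greedy_vertex_cover(edges))
```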

  16. An EGO-like optimization framework for sensor placement optimization in modal analysis

    NASA Astrophysics Data System (ADS)

    Morlier, Joseph; Basile, Aniello; Chiplunkar, Ankit; Charlotte, Miguel

    2018-07-01

    In aircraft design, ground/flight vibration tests are conducted to extract the aircraft’s modal parameters (natural frequencies, damping ratios and mode shapes), also known as the modal basis. The main problem in aircraft modal identification is the large number of sensors needed, which increases operational time and costs. The goal of this paper is to minimize the number of sensors by optimizing their locations so as to reconstruct a truncated modal basis of N mode shapes with a high level of accuracy. There are several methods to solve sensor placement optimization (SPO) problems, but for this case an original approach has been established, based on an iterative process for mode-shape reconstruction through an adaptive kriging metamodeling approach, called efficient global optimization SPO (EGO-SPO). The main idea in this publication is to solve an optimization problem in which the sensor locations are the variables and the objective is to maximize the trace of the so-called AutoMAC criterion. The results on a 2D wing demonstrate a reduction of sensors by 30% using our EGO-SPO strategy.
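
    The MAC-based criterion can be illustrated with a toy Modal Assurance Criterion computation; the mode shapes below are invented, and in the paper the objective is evaluated over candidate sensor locations inside the EGO loop.

```python
def mac(a, b):
    """Modal Assurance Criterion between two mode-shape vectors:
    1.0 for perfectly correlated shapes, 0.0 for orthogonal ones."""
    num = sum(x * y for x, y in zip(a, b)) ** 2
    den = sum(x * x for x in a) * sum(y * y for y in b)
    return num / den

def mac_trace(modes_ref, modes_rec):
    """Trace of the MAC matrix between a reference modal basis and its
    reconstruction from a sensor subset; close to N means faithful."""
    return sum(mac(m, r) for m, r in zip(modes_ref, modes_rec))

ref = [[1.0, 0.5, -0.5, -1.0], [1.0, -0.5, -0.5, 1.0]]    # invented shapes
rec = [[0.9, 0.55, -0.45, -1.1], [1.0, -0.5, -0.5, 1.0]]  # their reconstructions
print(mac_trace(ref, rec))  # close to 2 for a good reconstruction
```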

  17. Enabling Incremental Query Re-Optimization.

    PubMed

    Liu, Mengmeng; Ives, Zachary G; Loo, Boon Thau

    2016-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations.

  18. Enabling Incremental Query Re-Optimization

    PubMed Central

    Liu, Mengmeng; Ives, Zachary G.; Loo, Boon Thau

    2017-01-01

    As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations. PMID:28659658

  19. Advances in adaptive control theory: Gradient- and derivative-free approaches

    NASA Astrophysics Data System (ADS)

    Yucelen, Tansel

    In this dissertation, we present new approaches to improve standard designs in adaptive control theory, and novel adaptive control architectures. We first present a novel Kalman filter based approach for approximately enforcing a linear constraint in standard adaptive control design. One application is that this leads to alternative forms for well known modification terms such as e-modification. In addition, it leads to smaller tracking errors without incurring significant oscillations in the system response and without requiring high modification gain. We derive alternative forms of e- and adaptive loop recovery (ALR-) modifications. Next, we show how to use Kalman filter optimization to derive a novel adaptation law. This results in an optimization-based time-varying adaptation gain that reduces the need for adaptation gain tuning. A second major contribution of this dissertation is the development of a novel derivative-free, delayed weight update law for adaptive control. The assumption of constant unknown ideal weights is relaxed to the existence of time-varying weights, such that fast and possibly discontinuous variation in weights are allowed. This approach is particularly advantageous for applications to systems that can undergo a sudden change in dynamics, such as might be due to reconfiguration, deployment of a payload, docking, or structural damage, and for rejection of external disturbance processes. As a third and final contribution, we develop a novel approach for extending all the methods developed in this dissertation to the case of output feedback. The approach is developed only for the case of derivative-free adaptive control, and the extension of the other approaches developed previously for the state feedback case to output feedback is left as a future research topic. The proposed approaches of this dissertation are illustrated in both simulation and flight test.

  20. Computational databases, pathway and cheminformatics tools for tuberculosis drug discovery

    PubMed Central

    Ekins, Sean; Freundlich, Joel S.; Choi, Inhee; Sarker, Malabika; Talcott, Carolyn

    2010-01-01

    We are witnessing the growing menace of increasing cases of both drug-sensitive and drug-resistant Mycobacterium tuberculosis strains, and the challenge to produce the first new tuberculosis (TB) drug in well over 40 years. The TB community, having invested in extensive high-throughput screening efforts, is faced with the question of how to optimally leverage this data in order to move from a hit to a lead to a clinical candidate and potentially a new drug. Complementing this approach, yet conducted on a much smaller scale, cheminformatic techniques have been leveraged and are herein reviewed. We suggest these computational approaches should be more optimally integrated in a workflow with experimental approaches to accelerate TB drug discovery. PMID:21129975

  1. Multi-probe-based resonance-frequency electrical impedance spectroscopy for detection of suspicious breast lesions: improving performance using partial ROC optimization

    NASA Astrophysics Data System (ADS)

    Lederman, Dror; Zheng, Bin; Wang, Xingwei; Wang, Xiao Hui; Gur, David

    2011-03-01

    We have developed a multi-probe resonance-frequency electrical impedance spectroscope (REIS) system to detect breast abnormalities. Based on assessing asymmetry in REIS signals acquired between left and right breasts, we developed several machine learning classifiers to classify younger women (i.e., under 50 years old) into two groups of having high and low risk for developing breast cancer. In this study, we investigated a new method to optimize performance based on the area under a selected partial receiver operating characteristic (ROC) curve when optimizing an artificial neural network (ANN), and tested whether it could improve classification performance. From an ongoing prospective study, we selected a dataset of 174 cases for whom we have both REIS signals and diagnostic status verification. The dataset includes 66 "positive" cases recommended for biopsy due to detection of highly suspicious breast lesions and 108 "negative" cases determined by imaging based examinations. A set of REIS-based feature differences, extracted from the two breasts using a mirror-matched approach, was computed and constituted an initial feature pool. Using a leave-one-case-out cross-validation method, we applied a genetic algorithm (GA) to train the ANN with an optimal subset of features. Two optimization criteria were separately used in GA optimization, namely the area under the entire ROC curve (AUC) and the partial area under the ROC curve, up to a predetermined threshold (i.e., 90% specificity). The results showed that although the ANN optimized using the entire AUC yielded higher overall performance (AUC = 0.83 versus 0.76), the ANN optimized using the partial ROC area criterion achieved substantially higher operational performance (i.e., increasing sensitivity level from 28% to 48% at 95% specificity and/or from 48% to 58% at 90% specificity).
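
    The partial-area criterion itself is straightforward to compute from classifier scores. A sketch of the empirical partial AUC (trapezoidal rule over the low-FPR, i.e. high-specificity, region; the scores below are placeholders, not study data):

```python
def partial_auc(pos_scores, neg_scores, max_fpr=0.1):
    """Empirical area under the ROC curve restricted to FPR <= max_fpr,
    i.e. the high-specificity operating region."""
    thresholds = sorted(set(pos_scores + neg_scores), reverse=True)
    pts = []
    for t in [float("inf")] + thresholds:
        tpr = sum(s >= t for s in pos_scores) / len(pos_scores)
        fpr = sum(s >= t for s in neg_scores) / len(neg_scores)
        pts.append((fpr, tpr))
    pts.append((1.0, 1.0))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        lo, hi = x0, min(x1, max_fpr)
        if hi > lo:  # clip each ROC segment to [0, max_fpr] and integrate
            slope = (y1 - y0) / (x1 - x0)
            ya, yb = y0 + slope * (lo - x0), y0 + slope * (hi - x0)
            area += 0.5 * (ya + yb) * (hi - lo)
    return area

# placeholder scores from a perfectly separating classifier
print(partial_auc([0.9, 0.8], [0.2, 0.1], max_fpr=0.5))  # → 0.5
```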

  2. A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

    NASA Astrophysics Data System (ADS)

    Cheung, K. W.; So, H. C.; Ma, W.-K.; Chan, Y. T.

    2006-12-01

    The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above-described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
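
    As a hedged illustration of the underlying problem (not the CWLS estimator itself, which additionally enforces the quadratic constraint and weights the residuals), the classic linearized TOA least-squares fix subtracts one range equation to remove the quadratic terms:

```python
def toa_position(anchors, ranges):
    """Linearized least-squares TOA fix: subtracting the first range
    equation removes the quadratic terms, leaving a linear system
    solved here via the 2x2 normal equations (Cramer's rule)."""
    (x1, y1), r1 = anchors[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        rows.append((2 * (xi - x1), 2 * (yi - y1)))
        rhs.append(xi**2 - x1**2 + yi**2 - y1**2 + r1**2 - ri**2)
    a11 = sum(a * a for a, _ in rows)
    a12 = sum(a * b for a, b in rows)
    a22 = sum(b * b for _, b in rows)
    c1 = sum(a * v for (a, _), v in zip(rows, rhs))
    c2 = sum(b * v for (_, b), v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return (a22 * c1 - a12 * c2) / det, (a11 * c2 - a12 * c1) / det

# mobile at (3, 4); noiseless ranges measured from three anchors
print(toa_position([(0, 0), (10, 0), (0, 10)], [5.0, 65**0.5, 45**0.5]))
```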

  3. Efficient fractal-based mutation in evolutionary algorithms from iterated function systems

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.

    2018-03-01

    In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure consists of considering a set of IFSs able to generate fractal structures in a two-dimensional phase space, and using them to modify a current individual of the EP algorithm, instead of using random numbers from different probability density functions. We test this new proposal on a set of benchmark functions for continuous optimization problems, comparing the proposed mutation against classical Evolutionary Programming approaches with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both the benchmark problems and the real application, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
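
    A toy version of the idea, with an invented Sierpinski-type IFS: the chaos game iterates randomly chosen contractive affine maps, and the resulting fractal-structured points replace draws from a Gaussian or Cauchy density as mutation offsets.

```python
import random

# a simple contractive IFS (Sierpinski-type); iterating one randomly
# chosen map per step produces fractal-structured points in [0, 1]^2
IFS_MAPS = [
    lambda x, y: (0.5 * x,        0.5 * y),
    lambda x, y: (0.5 * x + 0.5,  0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
]

def ifs_offset(steps=20, rng=random):
    """Run the chaos game and centre the result so offsets can be +/-."""
    x, y = rng.random(), rng.random()
    for _ in range(steps):
        x, y = rng.choice(IFS_MAPS)(x, y)
    return x - 0.5, y - 0.5

def mutate(individual, scale=0.1, rng=random):
    """EP-style mutation drawing 2-D perturbations from the IFS
    attractor instead of a fixed probability density."""
    out = []
    for i in range(0, len(individual), 2):
        dx, dy = ifs_offset(rng=rng)
        out.append(individual[i] + scale * dx)
        if i + 1 < len(individual):
            out.append(individual[i + 1] + scale * dy)
    return out

random.seed(0)
print(mutate([1.0, 2.0, 3.0]))
```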

  4. Duodenal and jejunal Dieulafoy’s lesions: optimal management

    PubMed Central

    Yılmaz, Tonguç Utku; Kozan, Ramazan

    2017-01-01

    Dieulafoy’s lesions (DLs) are rare and cause gastrointestinal bleeding resulting from erosion of dilated submucosal vessels. The most common location for DL is the stomach, followed by the duodenum. There is little information about duodenal and jejunal DLs. Challenges for the diagnosis and treatment of Dieulafoy’s lesions include the rare nature of the disease, asymptomatic patients, bleeding symptoms often requiring rapid diagnosis and treatment in symptomatic patients, variability in the diagnosis and treatment methods resulting from different lesion locations, and the risk of re-bleeding. For these reasons, there is no universal consensus on the diagnostic and treatment approach. Only a few case reports and case series have been published recently. Most duodenal DLs are not evaluated separately in these studies, which makes it difficult to determine the optimal management approach. In this study, we summarize the general aspects of and recent approaches to treating duodenal DL. PMID:29158686

  5. A unified, multifidelity quasi-Newton optimization method with application to aero-structural design

    NASA Astrophysics Data System (ADS)

    Bryson, Dean Edward

    A model's level of fidelity may be defined as its accuracy in faithfully reproducing a quantity or behavior of interest of a real system. Increasing the fidelity of a model often goes hand in hand with increasing its cost in terms of time, money, or computing resources. The traditional aircraft design process relies upon low-fidelity models for expedience and resource savings. However, the reduced accuracy and reliability of low-fidelity tools often lead to the discovery of design defects or inadequacies late in the design process. These deficiencies result either in costly changes or the acceptance of a configuration that does not meet expectations. The unknown opportunity cost is the discovery of superior vehicles that leverage phenomena unknown to the designer and not illuminated by low-fidelity tools. Multifidelity methods attempt to blend the increased accuracy and reliability of high-fidelity models with the reduced cost of low-fidelity models. In building surrogate models, where mathematical expressions are used to cheaply approximate the behavior of costly data, low-fidelity models may be sampled extensively to resolve the underlying trend, while high-fidelity data are reserved to correct inaccuracies at key locations. Similarly, in design optimization a low-fidelity model may be queried many times in the search for new, better designs, with a high-fidelity model being exercised only once per iteration to evaluate the candidate design. In this dissertation, a new multifidelity, gradient-based optimization algorithm is proposed. It differs from the standard trust region approach in several ways, stemming from the new method maintaining an approximation of the inverse Hessian, that is, of the underlying curvature of the design problem. 
Whereas the typical trust region approach performs a full sub-optimization using the low-fidelity model at every iteration, the new technique finds a suitable descent direction and focuses the search along it, reducing the number of low-fidelity evaluations required. This narrowing of the search domain also alleviates the burden on the surrogate model corrections between the low- and high-fidelity data. Rather than requiring the surrogate to be accurate in a hyper-volume bounded by the trust region, the model needs only to be accurate along the forward-looking search direction. Maintaining the approximate inverse Hessian also allows the multifidelity algorithm to revert to high-fidelity optimization at any time. In contrast, the standard approach has no memory of the previously-computed high-fidelity data. The primary disadvantage of the proposed algorithm is that it may require modifications to the optimization software, whereas standard optimizers may be used as black-box drivers in the typical trust region method. A multifidelity, multidisciplinary simulation of aeroelastic vehicle performance is developed to demonstrate the optimization method. The numerical physics models include body-fitted Euler computational fluid dynamics; linear, panel aerodynamics; linear, finite-element computational structural mechanics; and reduced, modal structural bases. A central element of the multifidelity, multidisciplinary framework is a shared parametric, attributed geometric representation that ensures the analysis inputs are consistent between disciplines and fidelities. The attributed geometry also enables the transfer of data between disciplines. The new optimization algorithm, a standard trust region approach, and a single-fidelity quasi-Newton method are compared for a series of analytic test functions, using both polynomial chaos expansions and kriging to correct discrepancies between fidelity levels of data. 
In the aggregate, the new method requires fewer high-fidelity evaluations than the trust region approach in 51% of cases, and the same number of evaluations in 18%. The new approach also requires fewer low-fidelity evaluations, by up to an order of magnitude, in almost all cases. The efficacy of both multifidelity methods compared to single-fidelity optimization depends significantly on the behavior of the high-fidelity model and the quality of the low-fidelity approximation, though savings are realized in a large number of cases. The multifidelity algorithm is also compared to the single-fidelity quasi-Newton method for complex aeroelastic simulations. The vehicle design problem includes variables for planform shape, structural sizing, and cruise condition with constraints on trim and structural stresses. Considering the objective function reduction versus computational expenditure, the multifidelity process performs better in three of four cases in early iterations. However, the enforcement of a contracting trust region slows the multifidelity progress. Even so, leveraging the approximate inverse Hessian, the optimization can be seamlessly continued using high-fidelity data alone. Ultimately, the proposed new algorithm produced better designs in all four cases. Investigating the return on investment in terms of design improvement per computational hour confirms that the multifidelity advantage is greatest in early iterations, and managing the transition to high-fidelity optimization is critical.
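
    The curvature memory the method relies on is the standard BFGS inverse-Hessian update; a minimal generic sketch (not the author's implementation) showing the secant property H'y = s that makes the seamless continuation in high fidelity possible:

```python
def bfgs_inverse_update(H, s, y):
    """BFGS update of the inverse-Hessian approximation H from a design
    step s and the corresponding gradient change y:
        H' = (I - rho s y^T) H (I - rho y s^T) + rho s s^T,  rho = 1/(y^T s)."""
    n = len(s)
    rho = 1.0 / sum(si * yi for si, yi in zip(s, y))
    A = [[(1.0 if i == j else 0.0) - rho * s[i] * y[j] for j in range(n)]
         for i in range(n)]
    AH = [[sum(A[i][k] * H[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return [[sum(AH[i][k] * A[j][k] for k in range(n)) + rho * s[i] * s[j]
             for j in range(n)] for i in range(n)]

H1 = bfgs_inverse_update([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.5], [2.0, 5.0])
# the secant condition H1 @ y == s holds by construction
print([sum(H1[i][j] * [2.0, 5.0][j] for j in range(2)) for i in range(2)])
```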

  6. A comprehensive formulation for volumetric modulated arc therapy planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Dan; Lyu, Qihui; Ruan, Dan

    2016-07-15

    Purpose: Volumetric modulated arc therapy (VMAT) is a widely employed radiation therapy technique, showing comparable dosimetry to static beam intensity modulated radiation therapy (IMRT) with reduced monitor units and treatment time. However, current VMAT optimization employs various greedy heuristics for an empirical solution, which jeopardizes plan consistency and quality. The authors introduce a novel direct aperture optimization method for VMAT to overcome these limitations. Methods: The comprehensive VMAT (comVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term to penalize the difference between the optimized dose and the prescribed dose, as well as an anisotropic total variation term to promote piecewise continuity in the fluence maps, preparing it for direct aperture optimization. A level set function was used to describe the aperture shapes, and the difference between aperture shapes at adjacent angles was penalized to control MLC motion range. A proximal-class optimization solver was adopted to solve the large-scale optimization problem, and an alternating optimization strategy was implemented to solve for the fluence intensity and aperture shapes simultaneously. Single-arc comVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme case, a lung (LNG) case, and two head and neck cases—one with three PTVs (H&N3PTV) and one with four PTVs (H&N4PTV)—to test the efficacy. The plans were compared against the clinical VMAT (clnVMAT) plans utilizing two overlapping coplanar arcs for treatment. Results: The optimization of the comVMAT plans converged within 600 iterations of the block minimization algorithm. comVMAT plans were able to consistently reduce the dose to all organs-at-risk (OARs) compared to the clnVMAT plans. 
On average, comVMAT plans reduced the max and mean OAR dose by 6.59% and 7.45% of the prescription dose, respectively. Reductions in max and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N3PTV case. PTV coverage measured by D95, D98, and D99 was within 0.25% of the prescription dose. By comprehensively optimizing all beams, the comVMAT optimizer gained the freedom to allow selected beams to deliver higher intensities, yielding a dose distribution that resembles a static beam IMRT plan with beam orientation optimization. Conclusions: The novel nongreedy VMAT approach simultaneously optimizes all beams in an arc and then directly generates deliverable apertures. The single-arc VMAT approach thus fully utilizes the digital Linac’s capability in dose rate and gantry rotation speed modulation. In practice, the new single-arc VMAT algorithm generates plans superior to those of existing VMAT algorithms utilizing two arcs.
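
    A 1-D toy of the cost structure (L2 dose fidelity plus anisotropic total variation on the fluence), minimized here by projected subgradient steps; the paper uses a proximal-class solver with alternating optimization, so this is only a sketch of the objective, with an invented dose matrix.

```python
def tv_objective(x, D, d, lam):
    """L2 dose-fidelity term plus anisotropic total variation on a 1-D fluence."""
    resid = [sum(Dij * xj for Dij, xj in zip(row, x)) - di for row, di in zip(D, d)]
    return sum(r * r for r in resid) + lam * sum(abs(a - b) for a, b in zip(x, x[1:]))

def subgradient_step(x, D, d, lam, step):
    """One projected subgradient step; the fluence is kept nonnegative."""
    resid = [sum(Dij * xj for Dij, xj in zip(row, x)) - di for row, di in zip(D, d)]
    g = [2 * sum(D[i][j] * resid[i] for i in range(len(D))) for j in range(len(x))]
    for j in range(len(x)):  # subgradient of the TV term
        if j > 0:
            g[j] += lam * (1 if x[j] > x[j - 1] else -1 if x[j] < x[j - 1] else 0)
        if j + 1 < len(x):
            g[j] += lam * (1 if x[j] > x[j + 1] else -1 if x[j] < x[j + 1] else 0)
    return [max(0.0, xj - step * gj) for xj, gj in zip(x, g)]

D = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # toy dose matrix
d = [1.0, 1.0, 0.0]                                      # prescribed dose
x = [0.0, 0.0, 0.0]
for _ in range(200):
    x = subgradient_step(x, D, d, lam=0.1, step=0.05)
print(tv_objective(x, D, d, 0.1))  # well below the initial value of 2.0
```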

  7. Integrative holism in psychiatric-mental health nursing.

    PubMed

    Zahourek, Rothlyn P

    2008-10-01

    In this era of high-tech care, many Americans seek more holistic approaches and alternative and complementary treatments for health problems, including mental illness. Psychiatric-mental health (PMH) nurses need to be aware of these approaches as they assess clients, maintain a holistic approach, and in some cases, provide skilled, specific modalities. This article reviews holistic philosophy and integrative approaches relevant to PMH nurses. The emphasis is that whichever modality PMH nurses practice, a holistic framework is essential for providing optimal PMH care.

  8. The economics of transboundary air pollution in Europe.

    PubMed

    Van Ierland, E C

    1991-01-01

    Acid rain is causing substantial damage in all Eastern and Western European countries. This article presents a stepwise linear optimisation model that places transboundary air pollution by SO2 and NOx in a game-theoretical framework. The national authorities of 28 countries are perceived as players in a game in which they can choose optimal strategies. It is illustrated that optimal national abatement programmes may be far from optimal when considered from an international point of view. Several scenarios are discussed, including a reference case, full cooperation, Pareto optimality and a critical loads approach. The need for international cooperation and regional differentiation of abatement programmes is emphasised.

  9. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    PubMed

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.
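
    The max-min fuzzy aggregation step can be sketched independently of the SLP/IPM machinery. The membership targets and candidate values below are invented for illustration; each objective is mapped to a satisfaction level in [0, 1], and a plan is judged by its worst-satisfied objective.

```python
def membership(value, best, worst):
    """Linear fuzzy membership: 1 at the target (best), 0 at the
    acceptable limit (worst), clipped in between."""
    if best == worst:
        return 1.0
    return max(0.0, min(1.0, (worst - value) / (worst - best)))

def overall_satisfaction(objectives, targets):
    # max-min aggregation: a plan is only as good as its worst objective
    return min(membership(v, *targets[k]) for k, v in objectives.items())

candidates = [
    {"loss_MW": 5.2, "vstab_index": 0.12, "mvar_reserve": -30.0},
    {"loss_MW": 4.1, "vstab_index": 0.18, "mvar_reserve": -35.0},
]
targets = {"loss_MW": (3.0, 6.0), "vstab_index": (0.10, 0.25),
           "mvar_reserve": (-40.0, -20.0)}
best = max(candidates, key=lambda c: overall_satisfaction(c, targets))
print(best)
```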

  10. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation

    PubMed Central

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783

  11. A kriging metamodel-assisted robust optimization method based on a reverse model

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao

    2018-02-01

    The goal of robust optimization methods is to obtain a solution that is both optimum and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures where a large amount of computational effort is required because the robustness of each candidate solution delivered from the outer level should be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced into a single-loop optimization structure to ease the computational burden. Ignoring the interpolation uncertainties from kriging, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed because of the interpolation uncertainties from the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.

  12. Optimal charges in lead progression: a structure-based neuraminidase case study.

    PubMed

    Armstrong, Kathryn A; Tidor, Bruce; Cheng, Alan C

    2006-04-20

    Collective experience in structure-based lead progression has found electrostatic interactions to be more difficult to optimize than shape-based ones. A major reason for this is that the net electrostatic contribution observed includes a significant nonintuitive desolvation component in addition to the more intuitive intermolecular interaction component. To investigate whether knowledge of the ligand optimal charge distribution can facilitate more intuitive design of electrostatic interactions, we took a series of small-molecule influenza neuraminidase inhibitors with known protein cocrystal structures and calculated the difference between the optimal and actual charge distributions. This difference from the electrostatic optimum correlates with the calculated electrostatic contribution to binding (r² = 0.94) despite small changes in binding modes caused by chemical substitutions, suggesting that the optimal charge distribution is a useful design goal. Furthermore, detailed suggestions for chemical modification generated by this approach are in many cases consistent with observed improvements in binding affinity, and the method appears to be useful despite discrete chemical constraints. Taken together, these results suggest that charge optimization is useful in facilitating generation of compound ideas in lead optimization. Our results also provide insight into design of neuraminidase inhibitors.
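
    Because the electrostatic binding free energy is quadratic in the ligand partial charges, the optimal charge distribution has a closed form; a sketch with invented coefficients (2x2 for brevity, assuming a symmetric positive definite desolvation matrix A):

```python
def optimal_charges(A, b):
    """Minimizer of the quadratic electrostatic binding free energy
    dG(q) = q^T A q + b^T q, where A (symmetric positive definite)
    carries the desolvation penalty: q* = -A^{-1} b / 2, solved in
    closed form for the 2x2 case."""
    (a11, a12), (_, a22) = A
    det = a11 * a22 - a12 * a12
    b1, b2 = b
    return (-(a22 * b1 - a12 * b2) / (2 * det),
            -(a11 * b2 - a12 * b1) / (2 * det))

q = optimal_charges([[2.0, 0.0], [0.0, 4.0]], [-4.0, -8.0])
print(q)  # the gradient 2 A q + b vanishes at this charge distribution
```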

  13. Training Post-9/11 Police Officers with a Counter-Terrorism Reality-Based Training Model: A Case Study

    ERIC Educational Resources Information Center

    Biddle, Christopher J.

    2013-01-01

    The purpose of this qualitative holistic multiple-case study was to identify the optimal theoretical approach for a Counter-Terrorism Reality-Based Training (CTRBT) model to train post-9/11 police officers to perform effectively in their counter-terrorism assignments. Post-9/11 police officers assigned to counter-terrorism duties are not trained…

  14. Chemical library subset selection algorithms: a unified derivation using spatial statistics.

    PubMed

    Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F

    2002-01-01

    If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed, or (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defensible; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as the realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as the optimization criterion rather than approximations thereof, and propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.
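
    The closest-packing limiting case can be sketched as a greedy maximin design over descriptor space; this is a simple stand-in for the covariance-based prediction-error and entropy criteria the paper advocates, with invented 2-D descriptors.

```python
def maximin_design(candidates, k):
    """Greedy space-filling subset: start from the first candidate and
    repeatedly add the compound whose descriptor vector is farthest
    (in squared Euclidean distance) from the current selection."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: min(dist2(c, s) for s in chosen))
        chosen.append(best)
    return chosen

pts = [(0, 0), (0.1, 0), (1, 1), (0, 1), (1, 0), (0.5, 0.5)]
print(maximin_design(pts, 3))  # → [(0, 0), (1, 1), (0, 1)]
```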

  15. Multi Sensor Fusion Using Fitness Adaptive Differential Evolution

    NASA Astrophysics Data System (ADS)

    Giri, Ritwik; Ghosh, Arnob; Chowdhury, Aritra; Das, Swagatam

    The rising popularity of multi-source, multi-sensor networks supporting real-life applications calls for an efficient and intelligent approach to information fusion. Traditional optimization techniques often fail to meet the demands. The evolutionary approach provides a valuable alternative due to its inherent parallel nature and its ability to deal with difficult problems. We present a new evolutionary approach based on a modified version of Differential Evolution (DE), called Fitness Adaptive Differential Evolution (FiADE). FiADE treats sensors in the network as distributed intelligent agents with various degrees of autonomy. Existing approaches based on intelligent agents cannot completely answer the question of how their agents could coordinate their decisions in a complex environment. The proposed approach is formulated to produce good results for problems that are high-dimensional, highly nonlinear, and random, and it gives better results for the optimal allocation of sensors. Its performance is compared with that of the coordination generalized particle model (C-GPM), an evolutionary algorithm.
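    A minimal sketch of the fitness-adaptive idea follows; the adaptation rule, crossover rate, and bounds handling are simplified guesses rather than the FiADE scheme itself. Individuals with poor relative fitness mutate with a larger scale factor F (exploration), while near-best individuals use a smaller F (exploitation).

```python
import numpy as np

def fiade_minimize(f, bounds, pop_size=30, gens=100, seed=0):
    """Differential Evolution with a fitness-adaptive scale factor
    (sketch of the FiADE idea, not the published adaptation rule)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        best, worst = fit.min(), fit.max()
        for i in range(pop_size):
            # adaptive F in [0.3, 0.9] based on relative fitness
            F = 0.3 + 0.6 * (fit[i] - best) / (worst - best + 1e-12)
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(len(lo)) < 0.9     # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                      # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()
```

On a smooth low-dimensional test function such as the 2-D sphere, this sketch converges to near-zero cost within a few thousand evaluations.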

  16. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. 
Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
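    The core propagation step, a quadratic response surface sampled under Gaussian input uncertainty, can be sketched as follows (a two-variable toy with hypothetical coefficients, not the study's regression-rate model):

```python
import numpy as np

def quadratic_rs(x, beta):
    """Quadratic response surface with single and two-factor interaction
    terms: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = x
    b0, b1, b2, b11, b22, b12 = beta
    return b0 + b1*x1 + b2*x2 + b11*x1**2 + b22*x2**2 + b12*x1*x2

def dispersed_response(beta, x_nom, x_sigma, n=10000, seed=0):
    """Monte Carlo dispersion of the response given Gaussian uncertainty
    around nominal operating/mixture conditions."""
    rng = np.random.default_rng(seed)
    X = rng.normal(x_nom, x_sigma, size=(n, len(x_nom)))
    y = np.array([quadratic_rs(x, beta) for x in X])
    return y.mean(), y.std()
```

Sweeping the nominal conditions over the design space and recording the dispersed mean and spread at each point is, in essence, how the dispersed regression-rate maps described above are built.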

  17. Optimal sensors placement and spillover suppression

    NASA Astrophysics Data System (ADS)

    Hanis, Tomas; Hromcik, Martin

    2012-04-01

    A new approach to optimal sensor placement (OSP) in mechanical structures is presented. In contrast to existing methods, the presented procedure enables a designer to seek a trade-off between the presence of desirable modes in captured measurements and the elimination of the influence of those mode shapes that are not of interest in a given situation. An efficient numerical algorithm is presented, developed from an existing routine based on Fisher information matrix analysis. We consider two requirements in the optimal sensor placement procedure: on top of the classical EFI approach, the sensor configuration should also minimize spillover of unwanted higher modes. We use the information approach to OSP, based on the effective independence method (EFI), and modify the underlying criterion to meet both requirements, maximizing useful signals and minimizing spillover of unwanted modes at the same time. Performance of our approach is demonstrated by means of examples and a flexible Blended Wing Body (BWB) aircraft case study related to the running European FP7 research project 'ACFA 2020 - Active Control for Flexible Aircraft'.
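    The classical EFI backbone of such a procedure can be sketched as below; the spillover penalty in `efi_spillover` is a plausible guess at how a modified criterion might weigh unwanted-mode energy, not the authors' exact formulation.

```python
import numpy as np

def efi_placement(Phi, n_sensors):
    """Effective Independence (EFI): iteratively delete the candidate DOF
    contributing least to the Fisher information matrix Phi^T Phi."""
    keep = np.arange(Phi.shape[0])
    while len(keep) > n_sensors:
        P = Phi[keep]
        # effective-independence distribution: diag(P (P^T P)^{-1} P^T)
        Ed = np.einsum('ij,ij->i', P, P @ np.linalg.inv(P.T @ P))
        keep = np.delete(keep, Ed.argmin())
    return keep

def efi_spillover(Phi_want, Phi_unwant, n_sensors, w=1.0):
    """Sketch of a modified criterion: rank DOFs by wanted-mode
    independence minus a penalty for unwanted-mode (spillover) energy."""
    keep = np.arange(Phi_want.shape[0])
    while len(keep) > n_sensors:
        P = Phi_want[keep]
        Ed = np.einsum('ij,ij->i', P, P @ np.linalg.inv(P.T @ P))
        spill = (Phi_unwant[keep] ** 2).sum(axis=1)
        keep = np.delete(keep, (Ed - w * spill).argmin())
    return keep
```

`Phi` holds the target mode shapes (rows are candidate sensor locations, columns are modes); the retained rows are the selected sensor positions.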

  18. Construction of joint confidence regions for the optimal true class fractions of Receiver Operating Characteristic (ROC) surfaces and manifolds.

    PubMed

    Bantis, Leonidas E; Nakas, Christos T; Reiser, Benjamin; Myall, Daniel; Dalrymple-Alford, John C

    2017-06-01

    The three-class approach is used for progressive disorders when clinicians and researchers want to diagnose or classify subjects as members of one of three ordered categories based on a continuous diagnostic marker. The decision thresholds or optimal cut-off points required for this classification are often chosen to maximize the generalized Youden index (Nakas et al., Stat Med 2013; 32: 995-1003). The effectiveness of these chosen cut-off points can be evaluated by estimating their corresponding true class fractions and their associated confidence regions. Recently, in the two-class case, parametric and non-parametric methods were investigated for the construction of confidence regions for the pair of Youden-index-based optimal sensitivity and specificity fractions that can take into account the correlation introduced between sensitivity and specificity when the optimal cut-off point is estimated from the data (Bantis et al., Biometrics 2014; 70: 212-223). A parametric approach based on the Box-Cox transformation to normality often works well, while for markers having more complex distributions a non-parametric procedure using logspline density estimation can be used instead. The true class fractions that correspond to the optimal cut-off points estimated by the generalized Youden index are correlated similarly to the two-class case. In this article, we generalize these methods to the three-class and to the general k-class case, which involves the classification of subjects into three or more ordered categories, where ROC surface or ROC manifold methodology, respectively, is typically employed for the evaluation of the discriminatory capacity of a diagnostic marker. We obtain three- and multi-dimensional joint confidence regions for the optimal true class fractions. We illustrate this with an application to the Trail Making Test Part A that has been used to characterize cognitive impairment in patients with Parkinson's disease.
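    For the three-class case, the generalized-Youden-index cut-offs can be estimated empirically by a grid search over ordered threshold pairs (a simple nonparametric sketch; the paper's inferential machinery for joint confidence regions is not reproduced here):

```python
import numpy as np

def generalized_youden(x1, x2, x3, grid=None):
    """Grid search for the cut-off pair (t1 < t2) maximizing the
    generalized Youden index J3 = TCF1 + TCF2 + TCF3 for three ordered
    classes measured on a continuous marker."""
    if grid is None:
        grid = np.unique(np.concatenate([x1, x2, x3]))
    best = (-np.inf, None)
    for i, t1 in enumerate(grid):
        for t2 in grid[i + 1:]:
            tcf1 = np.mean(x1 <= t1)                 # class 1 below t1
            tcf2 = np.mean((x2 > t1) & (x2 <= t2))   # class 2 between
            tcf3 = np.mean(x3 > t2)                  # class 3 above t2
            J = tcf1 + tcf2 + tcf3
            if J > best[0]:
                best = (J, (t1, t2))
    return best
```

For a marker with well-separated class distributions, J3 approaches its maximum of 3 and the estimated cut-offs fall between the class means.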

  19. Enhanced genetic algorithm optimization model for a single reservoir operation based on hydropower generation: case study of Mosul reservoir, northern Iraq.

    PubMed

    Al-Aqeeli, Yousif H; Lee, T S; Abd Aziz, S

    2016-01-01

    Achieving optimal hydropower generation from the operation of water reservoirs is a complex problem. The purpose of this study was to formulate and improve a genetic algorithm optimization model (GAOM) in order to maximize annual hydropower generation for a single reservoir. For this purpose, two simulation algorithms were drafted and applied independently in the GAOM over 20 scenarios (years) of operation of Mosul reservoir, northern Iraq. The first algorithm was based on the traditional simulation of reservoir operation, whilst the second algorithm (Salg) enhanced the GAOM by changing the population values of the GA through a new simulation process of reservoir operation. The performances of the two algorithms were evaluated by comparing their optimal values of annual hydropower generation over the 20 operating scenarios. The GAOM achieved an increase in hydropower generation in 17 scenarios using these two algorithms, with the Salg being superior in all scenarios. All of this was done prior to adding evaporation (Ev) and precipitation (Pr) to the water balance equation. Next, the GAOM using the Salg was applied taking the volumes of these two parameters into consideration. In this case, the optimal values obtained from the GAOM were compared, firstly with their counterparts found using the same algorithm without taking Ev and Pr into consideration, and secondly with the observed values. The first comparison showed that the optimal values obtained in this case decreased in all scenarios, whilst the second showed that they remained good compared with the observed values. The results proved the effectiveness of the Salg in increasing hydropower generation through the enhanced GAOM, and indicated the importance of taking Ev and Pr into account in the modelling of reservoir operation.
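    The GA-plus-simulation loop can be illustrated with a toy steady-state GA coupled to a deliberately simplified monthly water-balance surrogate (the storage-dependent head term and all constants are hypothetical, not the Mosul reservoir model; evaporation and precipitation are omitted, as in the study's first stage):

```python
import numpy as np

def hydropower(releases, inflows, s0, s_min, s_max, eff=0.9):
    """Annual hydropower proxy for a monthly release schedule; infeasible
    storage trajectories are penalized (toy surrogate model)."""
    s, power = s0, 0.0
    for r, q in zip(releases, inflows):
        s = s + q - r                      # water balance (no Ev/Pr here)
        if not (s_min <= s <= s_max):
            return -1e9                    # penalty: rejected by the GA
        power += eff * r * np.sqrt(s)      # head grows with storage (toy)
    return power

def ga_operate(inflows, s0, s_min, s_max, pop=50, gens=200, seed=0):
    """Plain steady-state GA: tournament selection, blend crossover,
    Gaussian mutation, worst-replacement."""
    rng = np.random.default_rng(seed)
    T = len(inflows)
    P = rng.uniform(0, max(inflows), size=(pop, T))
    fit = np.array([hydropower(x, inflows, s0, s_min, s_max) for x in P])
    for _ in range(gens):
        i, j = rng.integers(pop, size=2)
        parent_a = P[i] if fit[i] > fit[j] else P[j]
        i, j = rng.integers(pop, size=2)
        parent_b = P[i] if fit[i] > fit[j] else P[j]
        child = np.clip(0.5 * (parent_a + parent_b)
                        + rng.normal(0, 0.1, T), 0, None)
        fc = hydropower(child, inflows, s0, s_min, s_max)
        worst = fit.argmin()
        if fc > fit[worst]:
            P[worst], fit[worst] = child, fc
    return P[fit.argmax()], fit.max()
```

The Salg idea described above would replace the random population update with release values re-derived from a fresh simulation of reservoir operation at each generation.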

  20. Lung Segmentation Refinement based on Optimal Surface Finding Utilizing a Hybrid Desktop/Virtual Reality User Interface

    PubMed Central

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations, utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets that showed local segmentation errors after an automated OSF-based lung segmentation. The experiments exhibited a significant increase in performance in terms of mean absolute surface distance error (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. 
The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. PMID:23415254

  1. Individualized drug dosing using RBF-Galerkin method: Case of anemia management in chronic kidney disease.

    PubMed

    Mirinejad, Hossein; Gaweda, Adam E; Brier, Michael E; Zurada, Jacek M; Inanc, Tamer

    2017-09-01

    Anemia is a common comorbidity in patients with chronic kidney disease (CKD) and is frequently associated with a decreased physical component of quality of life, as well as adverse cardiovascular events. Current treatment methods for renal anemia are mostly population-based approaches treating individual patients with a one-size-fits-all model. However, FDA recommendations stipulate individualized anemia treatment with precise control of the hemoglobin concentration and minimal drug utilization. In accordance with these recommendations, this work presents an individualized drug dosing approach to anemia management by leveraging the theory of optimal control. A Multiple Receding Horizon Control (MRHC) approach based on the RBF-Galerkin optimization method is proposed for individualized anemia management in CKD patients. Recently developed by the authors, the RBF-Galerkin method uses radial basis function approximation along with Galerkin error projection to solve constrained optimal control problems numerically. The proposed approach is applied to generate optimal dosing recommendations for individual patients. Performance of the proposed approach (MRHC) is compared in silico with that of a population-based anemia management protocol and an individualized multiple model predictive control method for two case scenarios: hemoglobin measurement with and without observational errors. The in silico comparison indicates that the hemoglobin concentration under the MRHC method shows the least variation among the three methods, especially in the presence of measurement errors. In addition, the average achieved hemoglobin level from the MRHC is significantly closer to the target hemoglobin than that of the other two methods, according to the analysis of variance (ANOVA) statistical test. Furthermore, drug dosages recommended by the MRHC are more stable and accurate and reach the steady-state value notably faster than those generated by the other two methods. 
The proposed method is highly efficient for the control of hemoglobin level, yet provides accurate dosage adjustments in the treatment of CKD anemia. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Monte Carlo methods and their analysis for Coulomb collisions in multicomponent plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobylev, A.V., E-mail: alexander.bobylev@kau.se; Potapenko, I.F., E-mail: firena@yandex.ru

    2013-08-01

    Highlights: •A general approach to Monte Carlo methods for multicomponent plasmas is proposed. •We show numerical tests for the two-component (electrons and ions) case. •An optimal choice of parameters for speeding up the computations is discussed. •A rigorous estimate of the error of approximation is proved. -- Abstract: A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of Landau–Fokker–Planck equations by Boltzmann equations of quasi-Maxwellian kind. It means that the total collision frequency for the corresponding Boltzmann equation does not depend on the velocities. This makes the simulation process very simple, since the collision pairs can be chosen arbitrarily, without restriction. It is shown that this approach includes the well-known methods of Takizuka and Abe (1977) [12] and Nanbu (1997) as particular cases, and generalizes the approach of Bobylev and Nanbu (2000). The numerical scheme of this paper is simpler than the schemes by Takizuka and Abe [12] and by Nanbu. We derive it for the general case of multicomponent plasmas and show some numerical tests for the two-component (electrons and ions) case. An optimal choice of parameters for speeding up the computations is also discussed. It is also proved that the order of approximation is not worse than O(√(ε)), where ε is a parameter of approximation equivalent to the time step Δt in earlier methods. A similar estimate is obtained for the methods of Takizuka and Abe and Nanbu.
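    The key simplification, a velocity-independent total collision frequency that lets collision pairs be chosen arbitrarily, can be sketched for the equal-mass, single-species case (the pair-selection and isotropic-scattering details below are illustrative, not the paper's multicomponent scheme):

```python
import numpy as np

def collide_pairs(v, dt, nu, seed=0):
    """One step of a quasi-Maxwellian collision Monte Carlo: because the
    total collision frequency nu is velocity-independent, pairs are drawn
    uniformly at random. Each collision rotates the relative velocity to
    a random direction, conserving momentum and kinetic energy."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(v))
    for a, b in zip(idx[::2], idx[1::2]):
        if rng.random() < nu * dt:            # velocity-independent rate
            g = v[a] - v[b]
            # rotate g to a uniformly random direction, same magnitude
            u = rng.normal(size=3)
            g_new = np.linalg.norm(g) * u / np.linalg.norm(u)
            vcm = 0.5 * (v[a] + v[b])
            v[a], v[b] = vcm + 0.5 * g_new, vcm - 0.5 * g_new
    return v
```

Because the center-of-mass velocity and the magnitude of the relative velocity are preserved in each collision, total momentum and kinetic energy are conserved exactly (up to floating-point error).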

  3. Learning Biological Networks via Bootstrapping with Optimized GO-based Gene Similarity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Ronald C.; Sanfilippo, Antonio P.; McDermott, Jason E.

    2010-08-02

    Microarray gene expression data provide a unique information resource for learning biological networks using "reverse engineering" methods. However, there are a variety of cases in which we know which genes are involved in a given pathology of interest, but we do not have enough experimental evidence to support the use of fully supervised/reverse-engineering learning methods. In this paper, we explore a novel semi-supervised approach in which biological networks are learned from a reference list of genes and a partial set of links for these genes extracted automatically from PubMed abstracts, using a knowledge-driven bootstrapping algorithm. We show how new relevant links across genes can be iteratively derived using a gene similarity measure based on the Gene Ontology that is optimized on the input network at each iteration. We describe an application of this approach to the TGFB pathway as a case study and show how the ensuing results prove the feasibility of the approach as an alternative or complementary technique to fully supervised methods.

  4. A Large-Telescope Natural Guide Star AO System

    NASA Technical Reports Server (NTRS)

    Redding, David; Milman, Mark; Needels, Laura

    1994-01-01

    No formal abstract given. From the overview and conclusions: Keck Telescope case study. Objectives: low cost, good sky coverage. Approach: natural guide star at 0.8 um, correcting at 2.2 um. Conclusions: good performance is possible for Keck with a natural guide star AO system (SR > 0.2 to magnitude 17+). An AO-optimized CCD should be very effective. Optimizing td is very effective. Spatial coadding is not effective except perhaps at extreme low light levels.

  5. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly, by assuming that the population size is fixed and known, or implicitly, through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size of single- and two-arm clinical trials in the general case of a primary endpoint with a distribution of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or its expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^1/2) or O(N*^1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with Bernoulli- and Poisson-distributed responses, showing that the asymptotic approximations can also be reasonable for relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
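    The decision-theoretic trade-off can be illustrated numerically with a deliberately simplified utility: a fixed, known effect size and normal-approximation power, rather than the paper's one-parameter exponential family formulation, so the growth rate of the optimum here need not match the paper's O(N^1/2) result. Future patients benefit only if the trial picks the better arm:

```python
import math

def power_two_arm(n, delta, sigma=1.0):
    """Normal-approximation power of a two-arm trial with n per arm,
    one-sided alpha of 0.025 (critical value 1.96)."""
    z = delta * math.sqrt(n / 2.0) / sigma - 1.96
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def optimal_n(N, delta=0.3):
    """Scan n to maximize expected gain to a population of size N:
    the N - 2n future patients gain delta only if the trial correctly
    identifies the better arm (toy utility, not the paper's)."""
    best_n, best_u = 1, -1.0
    for n in range(1, N // 2):
        u = (N - 2 * n) * delta * power_two_arm(n, delta)
        if u > best_u:
            best_n, best_u = n, u
    return best_n
```

As in the paper's setting, the optimal trial size grows with the population size while remaining a small fraction of it.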

  6. Optimal Control Problems with Switching Points. Ph.D. Thesis, 1990 Final Report

    NASA Technical Reports Server (NTRS)

    Seywald, Hans

    1991-01-01

    The main idea of this report is to give an overview of the problems and difficulties that arise in solving optimal control problems with switching points. A brief discussion of existing optimality conditions is given and a numerical approach for solving the multipoint boundary value problems associated with the first-order necessary conditions of optimal control is presented. Two real-life aerospace optimization problems are treated explicitly. These are altitude maximization for a sounding rocket (Goddard Problem) in the presence of a dynamic pressure limit, and range maximization for a supersonic aircraft flying in the vertical, also in the presence of a dynamic pressure limit. In the second problem singular control appears along arcs with active dynamic pressure limit, which in the context of optimal control, represents a first-order state inequality constraint. An extension of the Generalized Legendre-Clebsch Condition to the case of singular control along state/control constrained arcs is presented and is applied to the aircraft range maximization problem stated above. A contribution to the field of Jacobi Necessary Conditions is made by giving a new proof for the non-optimality of conjugate paths in the Accessory Minimum Problem. Because of its simple and explicit character, the new proof may provide the basis for an extension of Jacobi's Necessary Condition to the case of the trajectories with interior point constraints. Finally, the result that touch points cannot occur for first-order state inequality constraints is extended to the case of vector valued control functions.

  7. A Neuroscience Approach to Optimizing Brain Resources for Human Performance in Extreme Environments

    PubMed Central

    Paulus, Martin P.; Potterat, Eric G.; Taylor, Marcus K.; Van Orden, Karl F.; Bauman, James; Momen, Nausheen; Padilla, Genieleah A.; Swain, Judith L.

    2009-01-01

    Extreme environments requiring optimal cognitive and behavioral performance occur in a wide variety of situations ranging from complex combat operations to elite athletic competitions. Although a large literature characterizes psychological and other aspects of individual differences in performances in extreme environments, virtually nothing is known about the underlying neural basis for these differences. This review summarizes the cognitive, emotional, and behavioral consequences of exposure to extreme environments, discusses predictors of performance, and builds a case for the use of neuroscience approaches to quantify and understand optimal cognitive and behavioral performance. Extreme environments are defined as an external context that exposes individuals to demanding psychological and/or physical conditions, and which may have profound effects on cognitive and behavioral performance. Examples of these types of environments include combat situations, Olympic-level competition, and expeditions in extreme cold, at high altitudes, or in space. Optimal performance is defined as the degree to which individuals achieve a desired outcome when completing goal-oriented tasks. It is hypothesized that individual variability with respect to optimal performance in extreme environments depends on a well “contextualized” internal body state that is associated with an appropriate potential to act. This hypothesis can be translated into an experimental approach that may be useful for quantifying the degree to which individuals are particularly suited to performing optimally in demanding environments. PMID:19447132

  8. Optimal Verification of Entangled States with Local Measurements

    NASA Astrophysics Data System (ADS)

    Pallister, Sam; Linden, Noah; Montanaro, Ashley

    2018-04-01

    Consider the task of verifying that a given quantum device, designed to produce a particular entangled state, does indeed produce that state. One natural approach would be to characterize the output state by quantum state tomography, or alternatively, to perform some kind of Bell test, tailored to the state of interest. We show here that neither approach is optimal among local verification strategies for 2-qubit states. We find the optimal strategy in this case and show that quadratically fewer total measurements are needed to verify to within a given fidelity than in published results for quantum state tomography, Bell test, or fidelity estimation protocols. We also give efficient verification protocols for any stabilizer state. Additionally, we show that requiring that the strategy be constructed from local, nonadaptive, and noncollective measurements only incurs a constant-factor penalty over a strategy without these restrictions.

  9. 3D gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.

    2017-04-01

    Nonlinear gravity inversion in sedimentary basins is a classical problem in applied geophysics. Although a 2D approximation is widely used, 3D models have also been proposed to better take the basin geometry into account. A common nonlinear approach to this 3D problem consists in modeling the basin as a set of right rectangular prisms with prescribed density contrast, whose depths are the unknowns. The problem is then iteratively solved via local optimization techniques from an initial model computed using some simplifications or estimated from prior geophysical models. Nevertheless, this kind of approach is highly dependent on the prior information that is used, and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we use the family of global Particle Swarm Optimization (PSO) optimizers for the 3D gravity inversion and model appraisal of the solution adopted for basement relief estimation in sedimentary basins. Synthetic and real cases are presented, showing that robust results are obtained. PSO therefore seems to be a very good alternative for 3D gravity inversion and uncertainty assessment of basement relief when used in a sampling-while-optimizing approach. In this way, important geological questions can be answered probabilistically, supporting risk assessment in the decisions that are made.
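    A plain global PSO of the kind used for such inversions can be sketched as follows; the misfit in the usage example uses a linear toy forward operator in place of the prism gravity formula, and all tuning constants are illustrative:

```python
import numpy as np

def pso_minimize(cost, dim, bounds, n_particles=40, iters=200, seed=0):
    """Plain global Particle Swarm Optimization. The per-iteration cost
    history is returned so the swarm's samples can also be used for
    posterior-style uncertainty appraisal ('sampling while optimizing')."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    history = []
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia 0.7, cognitive/social weights 1.5 (illustrative values)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
        history.append(c.copy())
    return g, pcost.min(), np.array(history)
```

For basement-relief estimation, `dim` would be the number of prisms, the unknowns their depths, and `cost` the misfit between observed and forward-modeled gravity anomalies.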

  10. Efficient amplitude-modulated pulses for triple- to single-quantum coherence conversion in MQMAS NMR.

    PubMed

    Colaux, Henri; Dawson, Daniel M; Ashbrook, Sharon E

    2014-08-07

    The conversion between multiple- and single-quantum coherences is integral to many nuclear magnetic resonance (NMR) experiments of quadrupolar nuclei. This conversion is relatively inefficient when effected by a single pulse, and many composite pulse schemes have been developed to improve this efficiency. To provide the maximum improvement, such schemes typically require time-consuming experimental optimization. Here, we demonstrate an approach for generating amplitude-modulated pulses to enhance the efficiency of the triple- to single-quantum conversion. The optimization is performed using the SIMPSON and MATLAB packages and results in efficient pulses that can be used without experimental reoptimisation. Most significant signal enhancements are obtained when good estimates of the inherent radio-frequency nutation rate and the magnitude of the quadrupolar coupling are used as input to the optimization, but the pulses appear robust to reasonable variations in either parameter, producing significant enhancements compared to a single-pulse conversion, and also comparable or improved efficiency over other commonly used approaches. In all cases, the ease of implementation of our method is advantageous, particularly for cases with low sensitivity, where the improvement is most needed (e.g., low gyromagnetic ratio or high quadrupolar coupling). Our approach offers the potential to routinely improve the sensitivity of high-resolution NMR spectra of nuclei and systems that would, perhaps, otherwise be deemed "too challenging".

  12. Optimization of noise in a non-integrated instrumentation amplifier for the amplification of very low electrophysiological signals: the case of electrocardiographic (ECG) signals.

    PubMed

    Ngounou, Guy Merlin; Kom, Martin

    2014-12-01

    In this paper we present an instrumentation amplifier built from discrete elements with optimized noise for the amplification of very low-level signals. In amplifying signals of very weak amplitude, noise can completely swamp the signals if the amplifier used does not offer the optimal guarantee of minimizing noise. Based on related research and a review of recent patents (Journal of Medical Systems, 30:205-209, 2006), we suggest a more thorough approach to noise reduction in amplification, and we deduce from it the general criteria necessary and essential to achieve this optimization. Comparison of these criteria with the provisions adopted in practice reveals the inadequacy of conventional amplifiers for effective noise reduction. The amplifier we propose is an instrumentation amplifier with active negative feedback and optimized noise for the amplification of signals with very low amplitude. Application of this method to electrocardiographic (ECG) signals provides simulation results fully in line with forecasts.

  13. Promoting Affordability in Defense Acquisitions: A Multi-Period Portfolio Approach

    DTIC Science & Technology

    2014-04-30

    has evolved out of many areas of research, ranging from economics to modern control theory (Powell, 2011). The general form of a dynamic programming...states • Balance expected profit (performance) against risk (variance) in...investments (Markowitz, 1952) • Efficiency frontier of optimal portfolios given investor risk averseness • Extends to the multi-period case with various

  14. Optimal design of zero-water discharge rinsing systems.

    PubMed

    Thöming, Jorg

    2002-03-01

    This paper is about zero liquid discharge in processes that use water for rinsing. Emphasis was given to those systems that contaminate process water with valuable process liquor and compounds. The approach involved the synthesis of optimal rinsing and recycling networks (RRN) that had a priori excluded water discharge. The total annualized costs of the RRN were minimized by the use of a mixed-integer nonlinear program (MINLP). This MINLP was based on a hyperstructure of the RRN and contained eight counterflow rinsing stages and three regenerator units: electrodialysis, reverse osmosis, and ion exchange columns. A "large-scale nickel plating process" case study showed that by means of zero-water discharge and optimized rinsing the total waste could be reduced by 90.4% at a revenue of $448,000/yr. Furthermore, with the optimized RRN, the rinsing performance can be improved significantly at a low-cost increase. In all the cases, the amount of valuable compounds reclaimed was above 99%.

  15. Robust optimal design of diffusion-weighted magnetic resonance experiments for skin microcirculation

    NASA Astrophysics Data System (ADS)

    Choi, J.; Raguin, L. G.

    2010-10-01

    Skin microcirculation plays an important role in several diseases including chronic venous insufficiency and diabetes. Magnetic resonance (MR) has the potential to provide quantitative information and a better penetration depth compared with other non-invasive methods such as laser Doppler flowmetry or optical coherence tomography. The continuous progress in hardware resulting in higher sensitivity must be coupled with advances in data acquisition schemes. In this article, we first introduce a physical model for quantifying skin microcirculation using diffusion-weighted MR (DWMR) based on an effective dispersion model for skin leading to a q-space model of the DWMR complex signal, and then design the corresponding robust optimal experiments. The resulting robust optimal DWMR protocols improve the worst-case quality of parameter estimates using nonlinear least squares optimization by exploiting available a priori knowledge of model parameters. Hence, our approach optimizes the gradient strengths and directions used in DWMR experiments to robustly minimize the size of the parameter estimation error with respect to model parameter uncertainty. Numerical evaluations are presented to demonstrate the effectiveness of our approach as compared to conventional DWMR protocols.
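    The robust (min-max) design idea can be illustrated on a far simpler estimation problem than DWMR. In the sketch below, which is our own toy model and not from the article, a measurement time is chosen for an exponential-decay experiment: the nominal design optimizes for one assumed rate constant, while the robust design minimizes the worst-case error proxy over an uncertainty interval, mirroring the article's treatment of model-parameter uncertainty.

```python
import math

def sensitivity(t, k):
    """|dS/dk| for the decay model S(t) = exp(-k t); higher sensitivity
    implies a smaller variance for the estimate of k."""
    return t * math.exp(-k * t)

def robust_design(times, k_grid):
    """Pick the measurement time minimizing the worst-case (over k)
    error proxy 1/sensitivity, i.e. a min-max design."""
    def worst_case(t):
        return max(1.0 / sensitivity(t, k) for k in k_grid)
    return min(times, key=worst_case)

times = [0.05 * i for i in range(1, 101)]        # candidate measurement times
k_grid = [0.5 + 0.05 * i for i in range(31)]     # uncertainty: k in [0.5, 2.0]
t_robust = robust_design(times, k_grid)
t_nominal = min(times, key=lambda t: 1.0 / sensitivity(t, 1.0))  # known k = 1
```

    The robust choice (t = 0.5 here) hedges against the fastest plausible decay, whereas the nominal choice (t = 1.0) is optimal only if the assumed parameter is correct.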

  16. Optimal design of implants for magnetically mediated hyperthermia: A wireless power transfer approach

    NASA Astrophysics Data System (ADS)

    Lang, Hans-Dieter; Sarris, Costas D.

    2017-09-01

    In magnetically mediated hyperthermia (MMH), an externally applied alternating magnetic field interacts with a mediator (such as a magnetic nanoparticle or an implant) inside the body to heat up the tissue in its proximity. Producing heat via induced currents in this manner is strikingly similar to wireless power transfer (WPT) for implants, where power is transferred from a transmitter outside of the body to an implanted receiver, in most cases via magnetic fields as well. Leveraging this analogy, a systematic method to design MMH implants for optimal heating efficiency is introduced, akin to the design of WPT systems for optimal power transfer efficiency. This paper provides analytical formulas for the achievable heating efficiency bounds as well as the optimal operating frequency and the implant material. Multiphysics simulations validate the approach and further demonstrate that optimization with respect to maximum heating efficiency is accompanied by minimizing heat delivery to healthy tissue. This is a property that is highly desirable when considering MMH as a key component or complementary method of cancer treatment and other applications.
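    The WPT analogy can be made concrete with the figure of merit commonly quoted for inductive links. As a hedged sketch (the paper's own analytical formulas are more detailed), the peak link efficiency at the optimal load is often written in terms of x = k²Q₁Q₂; the coupling and quality-factor values below are hypothetical.

```python
import math

def max_wpt_efficiency(k, q1, q2):
    """Peak link efficiency of an inductively coupled coil pair at the
    optimal load: eta = x / (1 + sqrt(1 + x))^2, figure of merit
    x = k^2 * Q1 * Q2."""
    x = (k ** 2) * q1 * q2
    return x / (1.0 + math.sqrt(1.0 + x)) ** 2

# Hypothetical weakly coupled implant link
eta = max_wpt_efficiency(k=0.05, q1=100.0, q2=50.0)
```

    Even at a coupling of only k = 0.05, moderate coil quality factors keep the optimal efficiency above 50%, which is the intuition behind transferring this design discipline from WPT to MMH implants.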

  17. On process optimization considering LCA methodology.

    PubMed

    Pieragostini, Carla; Mussati, Miguel C; Aguirre, Pío

    2012-04-15

    The goal of this work is to survey the state of the art in process optimization techniques and tools based on LCA, focused on the process engineering field. A collection of methods, approaches, applications, specific software packages, and insights regarding experiences and progress made in applying the LCA methodology coupled to optimization frameworks is provided, and general trends are identified. The "cradle-to-gate" concept for defining the system boundaries is the most used approach in practice, instead of the "cradle-to-grave" approach. Normally, the relationship between inventory data and impact category indicators is expressed linearly through the characterization factors; synergistic effects of the contaminants are therefore neglected. Among the LCIA methods, the eco-indicator 99, which is based on the endpoint category and the panel method, is the most used in practice. A single environmental impact function, resulting from the aggregation of environmental impacts, is formulated as the environmental objective in most analyzed cases. SimaPro is the most used software for LCA applications in the analyzed literature. Multi-objective optimization is the most used approach for dealing with this kind of problem, and the ε-constraint method is the most applied technique for generating the Pareto set. However, a renewed interest in formulating a single economic objective function in optimization frameworks can be observed, favored by the development of life cycle cost software and progress made in assessing the costs of environmental externalities. Finally, a trend toward handling multi-period scenarios in integrated LCA-optimization frameworks can be distinguished, providing more accurate results when data are available. Copyright © 2011 Elsevier Ltd. All rights reserved.
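    The ε-constraint method mentioned in the record can be sketched on a toy discrete problem: minimize one objective while sweeping an upper bound (ε) on the other, and collect the minimizers. The candidate designs below are invented for illustration; real LCA-optimization problems would use a solver rather than enumeration.

```python
# Hypothetical candidate designs scored as (cost, environmental impact)
designs = [(10, 9), (12, 7), (15, 5), (16, 6), (20, 3), (25, 2)]

def epsilon_constraint(points, epsilons):
    """Sweep an upper bound (epsilon) on objective 2 and minimize
    objective 1 subject to it; the minimizers trace out the Pareto set."""
    pareto = set()
    for eps in epsilons:
        feasible = [p for p in points if p[1] <= eps]
        if feasible:
            pareto.add(min(feasible))   # tuple order: min cost first
    return sorted(pareto)

front = epsilon_constraint(designs, epsilons=range(2, 10))
```

    Note that the dominated design (16, 6) never appears in the front: for every ε admitting it, the cheaper design (15, 5) is also feasible.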

  18. Estimating Origin-Destination Matrices Using AN Efficient Moth Flame-Based Spatial Clustering Approach

    NASA Astrophysics Data System (ADS)

    Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali

    2017-09-01

    Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, the AFC data are utilized to analyze and extract mobility patterns in a public transportation system. For this purpose, the smart card data are fed into a proposed metaheuristic-based aggregation model and then converted to an O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) for estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moths in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, the gray wolf optimization algorithm (GWO), and the genetic algorithm (GA). The sum of the intra-cluster distances and the computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers, and the optimality of the solutions of the different algorithms is measured in detail. The travelers' behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform the other evaluated approaches in terms of convergence tendency and optimality of the results, and that it can be utilized as an efficient approach to estimating transit O-D matrices.
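    The evaluation criterion shared by all the compared optimizers (MFO, GWO, GA, K-means) is the sum of intra-cluster distances. A minimal sketch of that fitness function, on invented stop coordinates rather than the Haaglanden data, looks like this:

```python
import math

def intra_cluster_distance(points, labels, centers):
    """Sum over all points of the Euclidean distance to the centre of
    the cluster they are assigned to; the quantity the optimizers minimize."""
    total = 0.0
    for (x, y), lab in zip(points, labels):
        cx, cy = centers[lab]
        total += math.hypot(x - cx, y - cy)
    return total

# Toy stop coordinates and a candidate 2-cluster assignment
stops = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0), (11.0, 0.0)]
labels = [0, 0, 1, 1]
centers = [(0.5, 0.0), (10.5, 0.0)]
fitness = intra_cluster_distance(stops, labels, centers)
```

    A metaheuristic such as MFO searches over assignments and centre positions to minimize this value; a bad assignment (e.g. interleaving the two groups) yields a much larger fitness.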

  19. Concurrent tailoring of fabrication process and interphase layer to reduce residual stresses in metal matrix composites

    NASA Technical Reports Server (NTRS)

    Saravanos, D. A.; Chamis, C. C.; Morel, M.

    1991-01-01

    A methodology is presented to reduce the residual matrix stresses in continuous fiber metal matrix composites (MMC) by optimizing the fabrication process and interphase layer characteristics. The response of the fabricated MMC was simulated based on nonlinear micromechanics. Application cases include fabrication tailoring, interphase tailoring, and concurrent fabrication-interphase optimization. Two composite systems, silicon carbide/titanium and graphite/copper, are considered. Results illustrate the merits of each approach, indicate that concurrent fabrication/interphase optimization produces significant reductions in the matrix residual stresses, and demonstrate the strong coupling between fabrication and interphase tailoring.

  20. Replica Analysis for Portfolio Optimization with Single-Factor Model

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2017-06-01

    In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model, and we compare the findings obtained from our proposed method for correlated return rates with those obtained for independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.
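    The qualitative effect the record describes, correlation raising investment risk, can be seen without replica analysis in the equal-weight case. Under a single-factor model Rᵢ = βᵢF + εᵢ, the common factor contributes a variance term that does not diversify away. The numbers below are illustrative, not from the paper.

```python
def portfolio_variance(betas, sigma_f2, sigma_e2):
    """Equal-weight portfolio variance under the single-factor model
    R_i = beta_i * F + eps_i, with Var F = sigma_f2 and independent
    idiosyncratic noise Var eps_i = sigma_e2."""
    n = len(betas)
    w = 1.0 / n
    beta_p = w * sum(betas)                # portfolio factor exposure
    systematic = beta_p ** 2 * sigma_f2    # common-factor (correlated) risk
    idiosyncratic = w ** 2 * n * sigma_e2  # diversifies away as n grows
    return systematic + idiosyncratic

n = 100
risk_correlated = portfolio_variance([1.0] * n, sigma_f2=0.04, sigma_e2=0.09)
risk_independent = portfolio_variance([0.0] * n, sigma_f2=0.04, sigma_e2=0.09)
```

    With 100 assets the idiosyncratic part shrinks to 0.0009, but the systematic part (0.04) remains in full, so the correlated portfolio carries far more risk than the independent one.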

  1. Optimal causal inference: estimating stored information and approximating causal architecture.

    PubMed

    Still, Susanne; Crutchfield, James P; Ellison, Christopher J

    2010-09-01

    We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.

  2. Mitigation of epidemics in contact networks through optimal contact adaptation *

    PubMed Central

    Youssef, Mina; Scoglio, Caterina

    2013-01-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimization of total infection cases and minimization of contact weight reduction. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find the near-optimal solution in a decentralized way, we propose two heuristics based on a bang-bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known bang-bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results provide insight into the infection level at which the mitigation strategies are most effectively applied to the contact weights. PMID:23906209

  3. Mitigation of epidemics in contact networks through optimal contact adaptation.

    PubMed

    Youssef, Mina; Scoglio, Caterina

    2013-08-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimization of total infection cases and minimization of contact weight reduction. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find the near-optimal solution in a decentralized way, we propose two heuristics based on a bang-bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known bang-bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results provide insight into the infection level at which the mitigation strategies are most effectively applied to the contact weights.
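    The core mechanism, reducing contact weights to reduce total infection cases, can be demonstrated with a deterministic mean-field SIR simulation on a tiny weighted network. This is a toy forward simulation with invented parameters, not the paper's optimal control formulation; it only shows the monotone effect that the controller exploits.

```python
def total_infections(w, beta=0.6, gamma=0.3, steps=2000, dt=0.05, seed_node=0):
    """Deterministic mean-field SIR on a weighted contact network (Euler
    scheme). Returns the cumulative infection cases summed over nodes."""
    n = len(w)
    s = [1.0] * n                    # susceptible fraction per node
    x = [0.0] * n                    # infected fraction per node
    s[seed_node], x[seed_node] = 0.9, 0.1
    for _ in range(steps):
        force = [sum(w[i][j] * x[j] for j in range(n)) for i in range(n)]
        s, x = ([s[i] - dt * beta * s[i] * force[i] for i in range(n)],
                [x[i] + dt * (beta * s[i] * force[i] - gamma * x[i])
                 for i in range(n)])
    return sum(1.0 - s_i for s_i in s)

# Fully connected 4-node network; halving contact weights lowers total cases
full = [[0.0 if i == j else 1.0 for j in range(4)] for i in range(4)]
reduced = [[0.5 * w_ij for w_ij in row] for row in full]
cases_full = total_infections(full)
cases_reduced = total_infections(reduced)
```

    The optimal control problem in the paper then asks how much, and when, to reduce each weight so that the infection savings justify the contact-reduction cost.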

  4. Efficient Gradient-Based Shape Optimization Methodology Using Inviscid/Viscous CFD

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1997-01-01

    The formerly developed preconditioned-biconjugate-gradient (PBCG) solvers for the analysis and the sensitivity equations had resulted in very large error reductions per iteration; quadratic convergence was achieved whenever the solution entered the domain of attraction to the root. Its memory requirement was also lower as compared to a direct inversion solver. However, this memory requirement was high enough to preclude the realistic, high grid-density design of a practical 3D geometry. This limitation served as the impetus to the first-year activity (March 9, 1995 to March 8, 1996). Therefore, the major activity for this period was the development of the low-memory methodology for the discrete-sensitivity-based shape optimization. This was accomplished by solving all the resulting sets of equations using an alternating-direction-implicit (ADI) approach. The results indicated that shape optimization problems which required large numbers of grid points could be resolved with a gradient-based approach. Therefore, to better utilize the computational resources, it was recommended that a number of coarse grid cases, using the PBCG method, should initially be conducted to better define the optimization problem and the design space, and obtain an improved initial shape. Subsequently, a fine grid shape optimization, which necessitates using the ADI method, should be conducted to accurately obtain the final optimized shape. The other activity during this period was the interaction with the members of the Aerodynamic and Aeroacoustic Methods Branch of Langley Research Center during one stage of their investigation to develop an adjoint-variable sensitivity method using the viscous flow equations. This method had algorithmic similarities to the variational sensitivity methods and the control-theory approach. However, unlike the prior studies, it was considered for the three-dimensional, viscous flow equations. 
The major accomplishment in the second period of this project (March 9, 1996 to March 8, 1997) was the extension of the shape optimization methodology to the Thin-Layer Navier-Stokes (TLNS) equations. Both the Euler-based and the TLNS-based analyses compared well with the analyses obtained using the CFL3D code. The sensitivities, again from both levels of the flow equations, also compared very well with the finite-differenced sensitivities. A fairly large set of shape optimization cases was conducted to study a number of issues previously not well understood. The testbed for these cases was the shaping of an arrow wing in Mach 2.4 flow. All the final shapes, obtained either from a coarse-grid-based or a fine-grid-based optimization, using either a Euler-based or a TLNS-based analysis, were re-analyzed using a fine-grid TLNS solution for their function evaluations. This allowed for a fairer comparison of their relative merits. From the aerodynamic performance standpoint, the fine-grid TLNS-based optimization produced the best shape, and the fine-grid Euler-based optimization produced the lowest cruise efficiency.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Shaughnessy, Eric; Ardani, Kristen; Cutler, Dylan

    Solar 'plus' refers to an emerging approach to distributed solar photovoltaic (PV) deployment that uses energy storage and controllable devices to optimize customer economics. The solar plus approach increases customer system value through technologies such as electric batteries, smart domestic water heaters, smart air-conditioner (AC) units, and electric vehicles. We use an NREL optimization model to explore the customer-side economics of solar plus under various utility rate structures and net metering rates. We explore optimal solar plus applications in five case studies with different net metering rates and rate structures. The model deploys different configurations of PV, batteries, smart domestic water heaters, and smart AC units in response to different rate structures and customer load profiles. The results indicate that solar plus improves the customer economics of PV and may mitigate some of the negative impacts of evolving rate structures on PV economics. Solar plus may become an increasingly viable model for optimizing PV customer economics in an evolving rate environment.
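    One ingredient of such customer-side optimization is battery dispatch against a time-of-use tariff. The sketch below is a deliberately crude greedy heuristic with invented prices, not NREL's model (which solves a full optimization); it even ignores intra-day ordering constraints, but it conveys the arbitrage logic: charge in cheap hours, discharge in expensive ones, as long as the round trip is profitable after efficiency losses.

```python
def battery_arbitrage(prices, capacity, efficiency=0.9):
    """Greedy sketch: charge 1 unit in each of the cheapest hours and
    discharge it in the priciest hours, while the round trip is profitable.
    Intra-day ordering (charge before discharge) is ignored for brevity."""
    order = sorted(range(len(prices)), key=lambda h: prices[h])
    profit, charge, discharge = 0.0, [], []
    n_pairs = min(capacity, len(prices) // 2)
    for lo, hi in zip(order[:n_pairs], order[::-1][:n_pairs]):
        gain = efficiency * prices[hi] - prices[lo]
        if lo != hi and gain > 0.0:
            profit += gain
            charge.append(lo)
            discharge.append(hi)
    return profit, charge, discharge

tou_prices = [0.08, 0.08, 0.10, 0.22, 0.30, 0.30, 0.12, 0.09]  # $/kWh by hour
profit, charge_hours, discharge_hours = battery_arbitrage(tou_prices, capacity=2)
```

    A production model would additionally co-optimize the battery with PV output, controllable loads, and demand charges, which is what makes the "plus" technologies interact.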

  6. Optimal design approach for heating irregular-shaped objects in three-dimensional radiant furnaces using a hybrid genetic algorithm-artificial neural network method

    NASA Astrophysics Data System (ADS)

    Darvishvand, Leila; Kamkari, Babak; Kowsary, Farshad

    2018-03-01

    In this article, a new hybrid method based on the combination of a genetic algorithm (GA) and an artificial neural network (ANN) is developed to optimize the design of three-dimensional (3-D) radiant furnaces. A 3-D irregularly shaped design body (DB) heated inside a 3-D radiant furnace is considered as a case study. Uniform thermal conditions on the DB surfaces are obtained by minimizing an objective function. An ANN is developed to predict the objective function value and is trained on data produced by applying the Monte Carlo method. The trained ANN is used in conjunction with the GA to find the optimal design variables. The results show that the computational time of the GA-ANN approach is significantly less than that of the conventional method. It is concluded that the integration of the ANN with the GA is an efficient technique for optimization of radiant furnaces.

  7. A Novel Computer-Aided Approach for Parametric Investigation of Custom Design of Fracture Fixation Plates.

    PubMed

    Chen, Xiaozhong; He, Kunjin; Chen, Zhengming

    2017-01-01

    The present study proposes an integrated computer-aided approach combining femur surface modeling, fracture evidence recovery, plate creation, and plate modification in order to conduct a parametric investigation of the design of a custom plate for a specific patient. The approach improves the design efficiency of patient-specific plates based on the patient's femur parameters and fracture information, and it opens the way to further plate modification and optimization. The three-dimensional (3D) surface model of a detailed femur and the corresponding fixation plate were represented with high-level feature parameters, and the shape of the specific plate was recursively modified in order to obtain the optimal plate for a specific patient. The proposed approach was tested and verified on a case study, and it could help orthopedic surgeons design and modify plates to fit the specific femur anatomy and fracture information.

  8. Convex optimisation approach to constrained fuel optimal control of spacecraft in close relative motion

    NASA Astrophysics Data System (ADS)

    Massioni, Paolo; Massari, Mauro

    2018-05-01

    This paper describes an interesting and powerful approach to the constrained fuel-optimal control of spacecraft in close relative motion. The proposed approach is well suited for problems under linear dynamic equations, therefore perfectly fitting to the case of spacecraft flying in close relative motion. If the solution of the optimisation is approximated as a polynomial with respect to the time variable, then the problem can be approached with a technique developed in the control engineering community, known as "Sum Of Squares" (SOS), and the constraints can be reduced to bounds on the polynomials. Such a technique allows rewriting polynomial bounding problems in the form of convex optimisation problems, at the cost of a certain amount of conservatism. The principles of the techniques are explained and some application related to spacecraft flying in close relative motion are shown.
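    The SOS machinery turns polynomial bounds into convex (semidefinite) feasibility problems: a polynomial is certified nonnegative by exhibiting a PSD Gram matrix Q with p = zᵀQz over a monomial vector z. The univariate toy below (our own example, far simpler than the spacecraft trajectory polynomials in the paper) uses a rank-one Gram matrix, which is PSD by construction.

```python
import random

# SOS certificate sketch: p(t) = t^4 + 2t^2 + 1 equals z^T Q z for the monomial
# vector z = (1, t, t^2) with Q = v v^T, v = (1, 0, 1). A Gram matrix of the
# form v v^T is PSD by construction, so p(t) = (v . z)^2 = (t^2 + 1)^2 >= 0.
v = (1.0, 0.0, 1.0)

def p(t):
    return t ** 4 + 2.0 * t ** 2 + 1.0

def sos_form(t):
    z = (1.0, t, t ** 2)
    inner = sum(vi * zi for vi, zi in zip(v, z))
    return inner ** 2          # a perfect square, hence provably nonnegative

random.seed(0)
max_gap = max(abs(p(t) - sos_form(t))
              for t in (random.uniform(-5.0, 5.0) for _ in range(100)))
```

    In the general case, finding a PSD Gram matrix is a semidefinite program, which is the convex reformulation (with its attendant conservatism) the paper exploits for constrained fuel-optimal control.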

  9. Providers' Perspectives on Case Management of a Healthy Start Program: A Qualitative Study

    PubMed Central

    Moise, Imelda K.; Mulhall, Peter F.

    2016-01-01

    Although Healthy Start case managers recognized the benefits of case management for facilitating optimal service delivery to women and their families, structural factors impact effective implementation. This study investigated case managers' views of 1) the structural challenges faced in implementing case management for program participants, and 2) possible strategies to enhance case management in medical home settings. Two focus groups were conducted separately with case managers from the four program service sites to gain insight into these issues noted above. Each group was co-facilitated by two evaluators using a previously developed semi-structured interview guide. The group discussions were audio recorded and the case managers' comments were transcribed verbatim. Transcripts were analyzed using thematic analysis, a deductive approach. Data were collected in 2013 and analyzed in 2015. Case managers are challenged by externalities (demographic shifts in target populations, poverty); contractual requirements (predefined catchment neighborhoods, caseload); limited support (client incentives, tailored training, and a high staff turnover rate); and logistic difficulties (organizational issues). Their approach to case management tends to be focused on linking clients to adequate services rather than reporting performance. Case managers favored measurable deliverables rather than operational work products. A proposed solution to current challenges emphasizes and encourages the iterative learning process and shared decision making between program targets, funders and providers. Case managers are aware of the challenging environment in which they operate for their clients and for themselves. However, future interventions will require clearly identified performance measures and increased systems support. PMID:27149061

  10. An optimal control approach to the design of moving flight simulators

    NASA Technical Reports Server (NTRS)

    Sivan, R.; Ish-Shalom, J.; Huang, J.-K.

    1982-01-01

    An abstract flight simulator design problem is formulated in the form of an optimal control problem, which is solved for the linear-quadratic-Gaussian special case using a mathematical model of the vestibular organs. The optimization criterion used is the mean-square difference between the physiological outputs of the vestibular organs of the pilot in the aircraft and the pilot in the simulator. The dynamical equations are linearized, and the output signal is modeled as a random process with rational power spectral density. The method described yields the optimal structure of the simulator's motion generator, or 'washout filter'. A two-degree-of-freedom flight simulator design, including single output simulations, is presented.

  11. A comprehensive approach to reactive power scheduling in restructured power systems

    NASA Astrophysics Data System (ADS)

    Shukla, Meera

    Financial constraints, regulatory pressure, and the need for more economical power transfers have increased the loading of interconnected transmission systems. As a consequence, power systems have been operated close to their maximum power transfer capability limits, making the system more vulnerable to voltage instability events. The problem of voltage collapse, characterized by a severe local voltage depression, is generally believed to be associated with inadequate VAr support at key buses. The goal of reactive power planning is to maintain a high level of voltage security through the installation of properly sized and located reactive sources and their optimal scheduling. In the case of vertically operated power systems, the reactive requirement of the system is normally satisfied by using all of its reactive sources. But under the different scenarios of restructured power systems, one may consider a fixed amount of reactive power exchange through tie lines. The reviewed literature suggests a need for optimal scheduling of reactive power generation under fixed inter-area reactive power exchange. The present work proposed a novel approach for reactive power source placement and a novel approach for its scheduling. The VAr source placement technique was based on the property of system connectivity. This was followed by the development of an optimal reactive power dispatch formulation that facilitated fixed inter-area tie-line reactive power exchange. This formulation used a Line Flow-Based (LFB) model of power flow analysis and determined the generation schedule for fixed inter-area tie-line reactive power exchange. Different operating scenarios were studied to analyze the impact of the VAr management approach for vertically operated and restructured power systems. The system loadability, losses, generation, and cost of generation were the performance measures used to study the impact of the VAr management strategy. The novel approach was demonstrated on the IEEE 30-bus system.

  12. Calculating complete and exact Pareto front for multiobjective optimization: a new deterministic approach for discrete problems.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel

    2013-06-01

    Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted where, by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
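    Once a finite set of candidate solutions is in hand (in the paper, via k-best single-objective algorithms; here we simply enumerate an invented set of routes), the exact front over that set is recovered by a plain dominance filter. This sketch shows only that filtering step, not the paper's ripple-spreading construction.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Exhaustive dominance filter: exact front for a finite candidate set."""
    return sorted(p for p in points if not any(dominates(q, p) for q in points))

# Hypothetical routes scored by (travel time, exposure risk), both minimized
routes = [(4, 7), (5, 5), (6, 4), (6, 6), (8, 3), (9, 3)]
front = pareto_front(routes)
```

    The filter is exact but only over the candidates supplied; the paper's theoretical conditions are precisely what guarantee that the k-best candidate sets contain the whole true front.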

  13. Encoder-Decoder Optimization for Brain-Computer Interfaces

    PubMed Central

    Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam

    2015-01-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919

  14. Encoder-decoder optimization for brain-computer interfaces.

    PubMed

    Merel, Josh; Pianto, Donald M; Cunningham, John P; Paninski, Liam

    2015-06-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.
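    The encoder-decoder pairing can be illustrated in a one-dimensional linear-Gaussian stand-in for the paper's richer framework (all quantities below are our simplification): the user controls an encoder gain e in y = e·x + n, and for any fixed e the MMSE linear decoder and its error have closed forms. Stronger learned encoding, matched by the corresponding fixed decoder, monotonically reduces the error.

```python
def optimal_decoder(enc_gain, var_x, var_n):
    """MMSE linear decoder for y = e * x + n:
    d* = e * var_x / (e^2 * var_x + var_n)."""
    return enc_gain * var_x / (enc_gain ** 2 * var_x + var_n)

def mse(enc_gain, dec_gain, var_x, var_n):
    # E[(x - d * (e * x + n))^2] with x and n zero-mean and independent
    return (1.0 - dec_gain * enc_gain) ** 2 * var_x + dec_gain ** 2 * var_n

var_x, var_n = 1.0, 0.5
errors = [mse(e, optimal_decoder(e, var_x, var_n), var_x, var_n)
          for e in (0.5, 1.0, 2.0)]   # stronger learned encoding, matched decoder
```

    This mirrors the paper's claim in miniature: once the decoder is fixed at the optimum for the learned encoder, there is nothing further for co-adaptation to gain in this model.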

  15. Optimizing α for better statistical decisions: a case study involving the pace-of-life syndrome hypothesis: optimal α levels set to minimize Type I and II errors frequently result in different conclusions from those using α = 0.05.

    PubMed

    Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E

    2012-12-01

    Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, across different scales of biological organization. While some of the conclusions reached using optimal α were consistent with those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced probabilities of Type I and Type II errors, and ensured statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages with consistent reliance on the traditional but arbitrary α = 0.05 significance level. Copyright © 2012 WILEY Periodicals, Inc.
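
The idea of an optimal α can be illustrated for a one-sided z-test: given a standardized effect size and sample size (both assumed here, not taken from the paper), choose the α that minimizes the equally weighted sum of Type I and Type II error rates. A minimal SciPy sketch:

```python
import numpy as np
from scipy import optimize, stats

def total_error(alpha, effect, n):
    # one-sided z-test: beta is the Type II error at the standardized effect size
    z_crit = stats.norm.ppf(1 - alpha)
    beta = stats.norm.cdf(z_crit - effect * np.sqrt(n))
    return alpha + beta             # equal weighting of the two error types

# assumed design: standardized effect 0.5, n = 25
res = optimize.minimize_scalar(total_error, bounds=(1e-6, 0.5),
                               args=(0.5, 25), method="bounded")
print(round(res.x, 4))              # optimal alpha, about 0.106 here
```

For this configuration the optimum sits near α ≈ 0.11, well above 0.05, which matches the paper's point that the conventional threshold is rarely the error-minimizing choice.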

  16. Optimal Force Control of Vibro-Impact Systems for Autonomous Drilling Applications

    NASA Technical Reports Server (NTRS)

    Aldrich, Jack B.; Okon, Avi B.

    2012-01-01

    The need to maintain optimal energy efficiency is critical during the drilling operations performed on future and current planetary rover missions (see figure). Specifically, this innovation seeks to solve the following problem. Given a spring-loaded percussive drill driven by a voice-coil motor, one needs to determine the optimal input voltage waveform (periodic function) and the optimal hammering period that minimizes the dissipated energy, while ensuring that the hammer-to-rock impacts are made with sufficient (user-defined) impact velocity (or impact energy). To solve this problem, it was first observed that when voice-coil-actuated percussive drills are driven at high power, it is of paramount importance to ensure that the electrical current of the device remains in phase with the velocity of the hammer. Otherwise, negative work is performed and the drill experiences a loss of performance (i.e., reduced impact energy) and an increase in Joule heating (i.e., reduction in energy efficiency). This observation has motivated many drilling products to incorporate the standard bang-bang control approach for driving their percussive drills. However, the bang-bang control approach is significantly less efficient than the optimal energy-efficient control approach solved herein. To obtain this solution, the standard tools of classical optimal control theory were applied. It is worth noting that these tools inherently require the solution of a two-point boundary value problem (TPBVP), i.e., a system of differential equations where half the equations have unknown boundary conditions. Typically, the TPBVP is impossible to solve analytically for high-dimensional dynamic systems. However, for the case of the spring-loaded vibro-impactor, this approach yields the exact optimal control solution as the sum of four analytic functions whose coefficients are determined using a simple, easy-to-implement algorithm. Once the optimal control waveform is determined, it can be used optimally in the context of both open-loop and closed-loop control modes (using standard real-time control hardware).
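
The key observation, that net work per cycle collapses when the force (proportional to coil current) falls out of phase with hammer velocity, can be checked numerically on a toy sinusoidal cycle (amplitudes normalized, not the drill's actual dynamics):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)           # one normalized hammering period
v = np.sin(2 * np.pi * t)                 # hammer velocity

def net_work(phase):
    force = np.sin(2 * np.pi * t + phase) # force ~ coil current
    p = force * v
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))  # trapezoid rule

print(net_work(0.0), net_work(np.pi / 2), net_work(np.pi))
# in phase: maximum positive work; 90 deg off: zero net work; antiphase: negative
```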

  17. Automating Initial Guess Generation for High Fidelity Trajectory Optimization Tools

    NASA Technical Reports Server (NTRS)

    Villa, Benjamin; Lantoine, Gregory; Sims, Jon; Whiffen, Gregory

    2013-01-01

    Many academic studies in spaceflight dynamics rely on simplified dynamical models, such as restricted three-body models or averaged forms of the equations of motion of an orbiter. In practice, the end result of these preliminary orbit studies needs to be transformed into more realistic models, in particular to generate good initial guesses for high-fidelity trajectory optimization tools like Mystic. This paper reviews and extends some of the approaches used in the literature to perform such a task, and explores the inherent trade-offs of such a transformation with a view toward automating it for the case of ballistic arcs. Sample test cases in the libration point regimes and small body orbiter transfers are presented.

  18. Global dynamic optimization approach to predict activation in metabolic pathways.

    PubMed

    de Hijas-Liste, Gundián M; Klipp, Edda; Balsa-Canto, Eva; Banga, Julio R

    2014-01-06

    During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles considering simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques considering two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework results in more biologically meaningful results. The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions which were comparable to or better than those reported in the previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to metabolic networks with arbitrary topologies, non-linear dynamics and constraints.
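
The multi-objective idea can be sketched, independently of the paper's metabolic models, by sweeping a weighted-sum scalarization of two conflicting objectives through a global optimizer. The two toy objectives below (a speed-like cost versus total enzyme usage) are invented for illustration:

```python
import numpy as np
from scipy.optimize import differential_evolution

# invented stand-in objectives over an "enzyme allocation" u in [0, 1]^3:
f1 = lambda u: 1.0 / (0.1 + u[0] * u[1] * u[2])   # time-like cost (wants big u)
f2 = lambda u: float(np.sum(u))                   # enzyme usage (wants small u)

front = []
for w in np.linspace(0.05, 0.95, 7):              # weighted-sum scalarization
    res = differential_evolution(
        lambda u: w * f1(u) + (1 - w) * f2(u),
        bounds=[(0.0, 1.0)] * 3, seed=1, tol=1e-8)
    front.append((f1(res.x), f2(res.x)))

print(front)   # trade-off curve: f1 falls while f2 rises along the sweep
```

Each weight produces one point of the trade-off curve; the best compromise is then a modeling decision, which is the sense in which the multi-objective view is "more biologically meaningful" than a single scalar objective.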

  19. Integrated configurable equipment selection and line balancing for mass production with serial-parallel machining systems

    NASA Astrophysics Data System (ADS)

    Battaïa, Olga; Dolgui, Alexandre; Guschinsky, Nikolai; Levin, Genrikh

    2014-10-01

    Solving equipment selection and line balancing problems together allows better line configurations to be reached and avoids locally optimal solutions. This article considers these two decision problems jointly for mass production lines with serial-parallel workplaces. This study was motivated by the design of production lines based on machines with rotary or mobile tables. Nevertheless, the results are more general and can be applied to assembly and production lines with similar structures. The designers' objectives and the constraints are studied in order to suggest a relevant mathematical model and an efficient optimization approach to solve it. A real case study is used to validate the model and the developed approach.

  20. Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2001-01-01

    A design optimization method for turbopumps of cryogenic rocket engines has been developed. A Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimization. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, single-stage centrifugal pump and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.
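
Extracting the Pareto-optimal subset from a set of evaluated designs is a small, self-contained step of any MOEA; a minimal non-dominated filter, with invented design scores (negative total head and input power, both to be minimized):

```python
import numpy as np

def pareto_front(points):
    """Non-dominated subset, both columns to be minimized."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

# invented design scores: (negative total head, input power)
designs = [(-10.0, 5.0), (-9.0, 4.0), (-10.0, 4.9), (-8.0, 6.0), (-11.0, 6.5)]
front = pareto_front(designs)
print(front)    # three designs survive; the other two are dominated
```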

  1. Case study: technology initiative led to advanced lead optimization screening processes at Bristol-Myers Squibb, 2004-2009.

    PubMed

    Zhang, Litao; Cvijic, Mary Ellen; Lippy, Jonathan; Myslik, James; Brenner, Stephen L; Binnie, Alastair; Houston, John G

    2012-07-01

    In this paper, we review the key solutions that enabled evolution of the lead optimization screening support process at Bristol-Myers Squibb (BMS) between 2004 and 2009. During this time, technology infrastructure investment and scientific expertise integration laid the foundations to build and tailor lead optimization screening support models across all therapeutic groups at BMS. Together, harnessing advanced screening technology platforms and an expanded panel screening strategy led to a paradigm shift at BMS in supporting lead optimization screening capability. Parallel SAR and structure-liability relationship (SLR) screening approaches were introduced broadly for the first time to empower more rapid and better-informed decisions about chemical synthesis strategy and to broaden options for identifying high-quality drug candidates during lead optimization. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. A modular approach to intensity-modulated arc therapy optimization with noncoplanar trajectories

    NASA Astrophysics Data System (ADS)

    Papp, Dávid; Bortfeld, Thomas; Unkelbach, Jan

    2015-07-01

    Utilizing noncoplanar beam angles in volumetric modulated arc therapy (VMAT) has the potential to combine the benefits of arc therapy, such as short treatment times, with the benefits of noncoplanar intensity modulated radiotherapy (IMRT) plans, such as improved organ sparing. Recently, vendors introduced treatment machines that allow for simultaneous couch and gantry motion during beam delivery to make noncoplanar VMAT treatments possible. Our aim is to provide a reliable optimization method for noncoplanar isocentric arc therapy plan optimization. The proposed solution is modular in the sense that it can incorporate different existing beam angle selection and coplanar arc therapy optimization methods. Treatment planning is performed in three steps. First, a number of promising noncoplanar beam directions are selected using an iterative beam selection heuristic; these beams serve as anchor points of the arc therapy trajectory. In the second step, continuous gantry/couch angle trajectories are optimized using a simple combinatorial optimization model to define a beam trajectory that efficiently visits each of the anchor points. Treatment time is controlled by limiting the time the beam needs to trace the prescribed trajectory. In the third and final step, an optimal arc therapy plan is found along the prescribed beam trajectory. In principle any existing arc therapy optimization method could be incorporated into this step; for this work we use a sliding window VMAT algorithm. The approach is demonstrated using two particularly challenging cases. The first one is a lung SBRT patient whose planning goals could not be satisfied with fewer than nine noncoplanar IMRT fields when the patient was treated in the clinic. The second one is a brain tumor patient, where the target volume overlaps with the optic nerves and the chiasm and it is directly adjacent to the brainstem. Both cases illustrate that the large number of angles utilized by isocentric noncoplanar VMAT plans can help improve dose conformity, homogeneity, and organ sparing simultaneously using the same beam trajectory length and delivery time as a coplanar VMAT plan.

  3. Optimization of seasonal ARIMA models using differential evolution - simulated annealing (DESA) algorithm in forecasting dengue cases in Baguio City

    NASA Astrophysics Data System (ADS)

    Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.

    2016-10-01

    Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models for forecasting dengue epidemics specific to the young and adult populations of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since the least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.
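
The winsorized, reweighted estimation idea can be sketched on an AR(1) toy series with a single additive outlier. The model order, outlier size, and tuning constants are illustrative, not the paper's SARIMA setup:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
y = np.zeros(n)
for t in range(1, n):                    # AR(1) with phi = 0.6
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()
y[50] += 12.0                            # one additive outlier

def ar1_ols(y):
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

def ar1_winsorized(y, c=2.5, iters=5):
    phi = ar1_ols(y)
    for _ in range(iters):               # re-estimate on winsorized residuals
        resid = y[1:] - phi * y[:-1]
        scale = np.median(np.abs(resid)) / 0.6745     # MAD-based robust scale
        z = phi * y[:-1] + np.clip(resid, -c * scale, c * scale)
        phi = np.dot(y[:-1], z) / np.dot(y[:-1], y[:-1])
    return phi

phi_plain, phi_robust = ar1_ols(y), ar1_winsorized(y)
print(phi_plain, phi_robust)             # the winsorized fit resists the outlier
```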

  4. A dynamic feedback-control toll pricing methodology : a case study on Interstate 95 managed lanes.

    DOT National Transportation Integrated Search

    2013-06-01

    Recently, congestion pricing emerged as a cost-effective and efficient strategy to mitigate the congestion problem on freeways. This study develops a feedback-control based dynamic toll approach to formulate and solve for optimal tolls. The study com...

  5. Optimization of wastewater treatment alternative selection by hierarchy grey relational analysis.

    PubMed

    Zeng, Guangming; Jiang, Ru; Huang, Guohe; Xu, Min; Li, Jianbing

    2007-01-01

    This paper describes an innovative systematic approach, namely hierarchy grey relational analysis for optimal selection of wastewater treatment alternatives, based on the application of analytic hierarchy process (AHP) and grey relational analysis (GRA). It can be applied for complicated multicriteria decision-making to obtain scientific and reasonable results. The effectiveness of this approach was verified through a real case study. Four wastewater treatment alternatives (A2/O, triple oxidation ditch, anaerobic single oxidation ditch and SBR) were evaluated and compared against multiple economic, technical and administrative performance criteria, including capital cost, operation and maintenance (O and M) cost, land area, removal of nitrogenous and phosphorous pollutants, sludge disposal effect, stability of plant operation, maturity of technology and professional skills required for O and M. The result illustrated that the anaerobic single oxidation ditch was the optimal scheme and would obtain the maximum general benefits for the wastewater treatment plant to be constructed.
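
The GRA step can be sketched directly: normalize the criteria, measure each alternative's distance to the ideal series, convert to grey relational coefficients, and take an AHP-style weighted average. All scores and weights below are invented for illustration:

```python
import numpy as np

# rows: four alternatives; columns: three criteria (invented scores, all
# treated as benefit criteria where larger is better)
scores = np.array([[0.8, 0.6, 0.7],
                   [0.6, 0.9, 0.5],
                   [0.9, 0.7, 0.9],
                   [0.7, 0.8, 0.6]])
weights = np.array([0.5, 0.3, 0.2])      # AHP-style criterion weights (assumed)

norm = (scores - scores.min(axis=0)) / np.ptp(scores, axis=0)
delta = 1.0 - norm                       # distance to the ideal reference series
rho = 0.5                                # distinguishing coefficient
coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grade = coef @ weights                   # grey relational grade per alternative
print(grade, int(grade.argmax()))        # alternative 2 ranks first here
```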

  6. Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.

    PubMed

    Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo

    2016-09-01

    In this paper, an approach using polynomial phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed as impulses undergoing group velocity dispersion while propagating along a multipath neural connection. Mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs is proposed using chirp models. A Particle Swarm Optimization algorithm is used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia level and SCI changes. The proposed automatic optimization based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. The method implementation in Matlab technical computing language is provided online.
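
A polynomial-phase chirp can also be parameterized without a stochastic optimizer; as a simpler stand-in for the paper's PSO step, the sketch below recovers assumed chirp parameters by least-squares fitting a quadratic to the unwrapped phase of the analytic signal:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
f0, k = 20.0, 15.0                        # assumed start frequency / chirp rate
phase_true = 2 * np.pi * (f0 * t + 0.5 * k * t ** 2)
s = np.exp(-((t - 0.5) / 0.3) ** 2) * np.cos(phase_true)   # SEP-like transient

phase_est = np.unwrap(np.angle(hilbert(s)))
sel = (t > 0.1) & (t < 0.9)               # keep clear of Hilbert edge effects
a2, a1, _ = np.polyfit(t[sel], phase_est[sel], 2)
f0_hat, k_hat = a1 / (2 * np.pi), a2 / np.pi
print(f0_hat, k_hat)                      # near the assumed 20 Hz and 15 Hz/s
```

A stochastic search such as PSO becomes useful when, unlike here, the model is fit directly to a noisy waveform and the cost surface is multimodal.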

  7. An efficient approach for inverse kinematics and redundancy resolution scheme of hyper-redundant manipulators

    NASA Astrophysics Data System (ADS)

    Chembuly, V. V. M. J. Satish; Voruganti, Hari Kumar

    2018-04-01

    Hyper-redundant manipulators have a larger number of degrees of freedom (DOF) than required to perform a given task. The additional DOF provide the flexibility to work in highly cluttered environments and in constrained workspaces. Inverse kinematics (IK) of hyper-redundant manipulators is complicated by the large number of DOF, and these manipulators have multiple IK solutions. The redundancy gives a choice of selecting the best solution out of multiple solutions based on certain criteria such as obstacle avoidance, singularity avoidance, joint limit avoidance and joint torque minimization. This paper focuses on the IK solution and redundancy resolution of hyper-redundant manipulators using a classical optimization approach. Joint positions are computed by optimizing various criteria for serial hyper-redundant manipulators while traversing different paths in the workspace. Several cases are addressed using this scheme to obtain the inverse kinematic solution while optimizing criteria like obstacle avoidance and joint limit avoidance.
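
The redundancy-resolution idea, optimizing secondary criteria while reaching a task-space target, can be sketched for a toy 4-link planar arm (link lengths, cost weights, and joint limits are all assumed):

```python
import numpy as np
from scipy.optimize import minimize

L = np.array([1.0, 1.0, 1.0, 1.0])        # 4-link planar arm: redundant in 2-D
target = np.array([2.0, 1.5])

def fk(q):                                # forward kinematics of the chain
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def cost(q):
    task = 100.0 * np.sum((fk(q) - target) ** 2)            # primary: reach target
    posture = 0.1 * np.sum(q ** 2)                          # secondary: small joint motion
    limits = np.sum(np.maximum(np.abs(q) - 2.0, 0.0) ** 2)  # soft joint limits
    return task + posture + limits

res = minimize(cost, x0=np.full(4, 0.3), method="BFGS")
print(fk(res.x), np.linalg.norm(fk(res.x) - target))
```

With four joints and a two-dimensional task, the posture and limit terms pick one solution out of the infinitely many that reach the target.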

  8. Optimal Variable-Structure Control Tracking of Spacecraft Maneuvers

    NASA Technical Reports Server (NTRS)

    Crassidis, John L.; Vadali, Srinivas R.; Markley, F. Landis

    1999-01-01

    An optimal control approach using variable-structure (sliding-mode) tracking for large angle spacecraft maneuvers is presented. The approach expands upon a previously derived regulation result using a quaternion parameterization for the kinematic equations of motion. This parameterization is used since it is free of singularities. The main contribution of this paper is the utilization of a simple term in the control law that produces a maneuver to the reference attitude trajectory in the shortest distance. Also, a multiplicative error quaternion between the desired and actual attitude is used to derive the control law. Sliding-mode switching surfaces are derived using an optimal-control analysis. Control laws are given using either external torque commands or reaction wheel commands. Global asymptotic stability is shown for both cases using a Lyapunov analysis. Simulation results are shown which use the new control strategy to stabilize the motion of the Microwave Anisotropy Probe spacecraft.
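
The multiplicative error quaternion between desired and actual attitudes is easy to make concrete; a minimal sketch using the scalar-last Hamilton product (conventions assumed here, not taken from the paper):

```python
import numpy as np

def quat_mul(q, r):
    # Hamilton product, scalar-last convention [x, y, z, w]
    x1, y1, z1, w1 = q
    x2, y2, z2, w2 = r
    return np.array([w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
                     w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
                     w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
                     w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2])

def error_quat(q_desired, q_actual):
    # multiplicative attitude error: dq = q_actual * conj(q_desired)
    conj = np.array([-q_desired[0], -q_desired[1], -q_desired[2], q_desired[3]])
    return quat_mul(q_actual, conj)

q = np.array([0.0, 0.0, np.sin(0.25), np.cos(0.25)])   # 0.5 rad about z
print(error_quat(q, q))                  # identity error: [0, 0, 0, 1]
```

Unlike a subtractive error, this error quaternion always represents the shortest rotation carrying the actual attitude onto the desired one, which is what the control law acts on.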

  9. Parallel optimization algorithm for drone inspection in the building industry

    NASA Astrophysics Data System (ADS)

    Walczyński, Maciej; Bożejko, Wojciech; Skorupka, Dariusz

    2017-07-01

    In this paper we present an approach to the Vehicle Routing Problem with Drones (VRPD) for the case of building inspection from the air. In an autonomous inspection process there is a need to determine the optimal route for the inspection drone. This is an especially important issue because of the very limited flight time of modern multicopters. The method of determining solutions for the Traveling Salesman Problem (TSP) described in this paper is based on a Parallel Evolutionary Algorithm (ParEA) with cooperative and independent approaches to communication between threads. This method, first described by Bożejko and Wodecki [1], is based on the observation that if some elements occupy the same positions in a number of permutations which are local minima, then those elements will occupy the same positions in the optimal solution of the TSP. Numerical experiments were carried out on the BEM computational cluster using the MPI library.
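
The Bożejko-Wodecki observation can be illustrated on a tiny TSP instance: run several independent 2-opt local searches, canonicalize the resulting tours (fix rotation and direction), and record the positions on which all local minima agree. Instance size and restart count below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 12
pts = rng.random((n, 2)) * 100.0          # toy inspection points on a facade
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

def tour_len(tour):
    return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

def two_opt(tour):                        # simple local search to a local minimum
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_len(cand) < tour_len(tour) - 1e-9:
                    tour, improved = cand, True
    return tour

def canon(t):                             # fix rotation/direction of a cyclic tour
    k = t.index(0)
    t = t[k:] + t[:k]
    return t if t[1] <= t[-1] else [t[0]] + t[1:][::-1]

minima = [canon(two_opt(list(rng.permutation(n)))) for _ in range(4)]
frozen = [i for i in range(n) if len({m[i] for m in minima}) == 1]
print(len(frozen), "positions agree across all local minima")
```

Positions that agree across independent restarts are candidates for freezing, shrinking the search space for subsequent parallel threads.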

  10. Combinatorial approaches to gene recognition.

    PubMed

    Roytberg, M A; Astakhova, T V; Gelfand, M S

    1997-01-01

    Recognition of genes via exon assembly approaches leads naturally to the use of dynamic programming. We consider the general graph-theoretical formulation of the exon assembly problem and analyze in detail some specific variants: multicriterial optimization in the case of non-linear gene-scoring functions; context-dependent schemes for scoring exons and related procedures for exon filtering; and highly specific recognition of arbitrary gene segments, oligonucleotide probes and polymerase chain reaction (PCR) primers.
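
The dynamic-programming flavor of exon assembly can be sketched as weighted chaining of non-overlapping candidate exons (coordinates and scores invented; real gene-scoring functions are far richer, and touching exons with end == next start are treated as compatible here):

```python
import bisect

# candidate exons as (start, end, score)
exons = [(0, 100, 5.0), (50, 180, 7.0), (120, 200, 4.0), (210, 300, 6.0)]
exons.sort(key=lambda e: e[1])
ends = [e[1] for e in exons]

best = [0.0] * (len(exons) + 1)           # best[i]: top score using first i exons
for i, (s, e, sc) in enumerate(exons):
    j = bisect.bisect_right(ends, s, 0, i)   # last exon ending at or before s
    best[i + 1] = max(best[i], best[j] + sc) # skip exon i, or chain it
print(best[-1])                           # 15.0 for this toy instance
```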

  11. Variable structure control of spacecraft reorientation maneuvers

    NASA Technical Reports Server (NTRS)

    Sira-Ramirez, H.; Dwyer, T. A. W., III

    1986-01-01

    A Variable Structure Control (VSC) approach is presented for multi-axial spacecraft reorientation maneuvers. A nonlinear sliding surface is proposed which results in an asymptotically stable, ideal linear sliding motion of Cayley-Rodriques attitude parameters. By imposing a desired equivalent dynamics on the attitude parameters, the approach is devoid of optimal control considerations. The single axis case provides a design scheme for the multiple axes design problem. Illustrative examples are presented.

  12. An algorithm for the optimal collection of wet waste.

    PubMed

    Laureri, Federica; Minciardi, Riccardo; Robba, Michela

    2016-02-01

    This work develops an approach for planning wet waste (food waste and similar) collection at a metropolitan scale. Some specific modeling features distinguish this waste collection problem from the others. For instance, there may be significant differences in the values of the parameters (such as weight and volume) characterizing the various collection points. As with classical waste collection planning, even in the case of wet waste one has to deal with difficult combinatorial problems, where determining an optimal solution may require a very large computational effort for problem instances of considerable dimensionality. For this reason, in this work, a heuristic procedure for the optimal planning of wet waste collection is developed and applied to problem instances drawn from a real case study. The performances that can be obtained by applying such a procedure are evaluated by comparison with those obtainable via a general-purpose mathematical programming software package, as well as those obtained by applying very simple decision rules commonly used in practice. The case study considered consists of an area corresponding to the historical center of the Municipality of Genoa. Copyright © 2015 Elsevier Ltd. All rights reserved.
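
As a stand-in for the paper's heuristic (whose details the abstract does not give), one of the "simple decision rules commonly used in practice" for capacitated collection is nearest-neighbor routing with a return to the depot whenever the vehicle fills up. All coordinates, weights, and the capacity below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
pts = rng.random((15, 2)) * 10.0          # collection points (toy coordinates)
weight = rng.uniform(0.2, 1.0, 15)        # wet-waste weight at each point
capacity = 4.0                            # vehicle capacity per trip
depot = np.array([0.0, 0.0])

unvisited = set(range(15))
routes, route, pos, load = [], [], depot, 0.0
while unvisited:                          # nearest neighbor with capacity returns
    nxt = min(unvisited, key=lambda i: np.linalg.norm(pts[i] - pos))
    if load + weight[nxt] > capacity:     # vehicle full: close trip at the depot
        routes.append(route)
        route, pos, load = [], depot, 0.0
        continue
    route.append(nxt)
    load += weight[nxt]
    pos = pts[nxt]
    unvisited.discard(nxt)
routes.append(route)
print(len(routes), routes)
```

Rules like this serve as the baseline against which a dedicated heuristic or an exact mathematical-programming solution is judged.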

  13. Thermodynamic efficiency limits of classical and bifacial multi-junction tandem solar cells: An analytical approach

    NASA Astrophysics Data System (ADS)

    Alam, Muhammad Ashraful; Khan, M. Ryyan

    2016-10-01

    Bifacial tandem cells promise to reduce three fundamental losses (i.e., above-bandgap, below-bandgap, and the uncollected light between panels) inherent in classical single-junction photovoltaic (PV) systems. The successive filtering of light through the bandgap cascade and the requirement of current continuity make optimization of tandem cells difficult and accessible only to numerical solution through computer modeling. The challenge is even more complicated for bifacial designs. In this paper, we use an elegantly simple analytical approach to show that the essential physics of optimization is intuitively obvious, and deeply insightful results can be obtained with a few lines of algebra. This powerful approach reproduces, as special cases, all of the known results of conventional and bifacial tandem cells and highlights the asymptotic efficiency gain of these technologies.

  14. Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation

    NASA Technical Reports Server (NTRS)

    Drewry, Darren T; Reynolds, Paul F, Jr; Emanuel, William R

    2006-01-01

    The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.
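
External consistency between resolutions can be phrased as a small optimization: tune a coarse (monthly) model parameter so that it reproduces the aggregated predictions of a fine (daily) model. Both "models" below are invented stand-ins, not the forest canopy models of the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

days = np.arange(360)
daily = 5.0 + 3.0 * np.sin(2 * np.pi * days / 360)   # fine (daily) model output
monthly_obs = daily.reshape(12, 30).sum(axis=1)       # aggregated predictions

months = np.arange(12)
shape = 5.0 + 3.0 * np.sin(2 * np.pi * (months + 0.5) / 12)  # coarse model shape
coarse = lambda a: a * shape                          # one free scale parameter

# external consistency: pick the scale that best reproduces the aggregates
res = minimize_scalar(lambda a: np.sum((coarse(a) - monthly_obs) ** 2))
print(res.x)                                          # close to 30 (days/month)
```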

  15. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2006-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
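
The cost argument, one adjoint solve plus a matrix-vector product instead of one linearized solve per design variable, can be checked on a small linear toy problem (all operators below are random stand-ins for the mesh-movement linearization, not a real mesh scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 6                         # state size, number of design variables
K = rng.standard_normal((n, n)) + n * np.eye(n)   # stand-in linearized operator
B = rng.standard_normal((n, m))      # d(rhs)/d(design)
g = rng.standard_normal(n)           # objective J = g . u
x = rng.standard_normal(m)
u = np.linalg.solve(K, B @ x)        # state ("mesh movement") solve

# adjoint: ONE extra solve plus a matrix-vector product gives the whole gradient
lam = np.linalg.solve(K.T, g)
grad_adjoint = B.T @ lam

# brute force for comparison: one linearized solve per design variable
grad_direct = np.array([g @ np.linalg.solve(K, B[:, j]) for j in range(m)])
print(np.max(np.abs(grad_adjoint - grad_direct)))   # agreement to round-off
```

The saving grows with the number of design variables, which is why the adjoint route makes large-scale design optimization practical.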

  16. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2005-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.

  17. From properties to materials: An efficient and simple approach.

    PubMed

    Huwig, Kai; Fan, Chencheng; Springborg, Michael

    2017-12-21

    We present an inverse-design method, the poor man's materials optimization, that is designed to identify materials within a very large class with optimized values for a pre-chosen property. The method combines an efficient genetic-algorithm-based optimization, an automatic approach for generating modified molecules, a simple approach for calculating the property of interest, and a mathematical formulation of the quantity whose value shall be optimized. In order to illustrate the performance of our approach, we study the properties of organic molecules related to those used in dye-sensitized solar cells, whereby we, for the sake of proof of principle, consider benzene as a simple test system. Using a genetic algorithm, the substituents attached to the organic backbone are varied and the best performing molecules are identified. We consider several properties to describe the performance of organic molecules, including the HOMO-LUMO gap, the sunlight absorption, the spatial distance of the orbitals, and the reorganisation energy. The results show that our method is able to identify a large number of good candidate structures within a short time. In some cases, chemical/physical intuition can be used to rationalize the substitution pattern of the best structures, although this is not always possible. The present investigations provide a solid foundation for dealing with more complex and technically relevant systems such as porphyrins. Furthermore, our "properties first, materials second" approach is not limited to solar-energy harvesting but can be applied to many other fields, as is briefly discussed in the paper.
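
A genetic search over discrete substituent choices can be sketched with a made-up additive property model; the sites, groups, per-group shifts, and target value below are all hypothetical, standing in for a real property calculation:

```python
import numpy as np

rng = np.random.default_rng(1)
# six substituent sites, five hypothetical groups per site; each group shifts a
# toy property by a fixed, made-up amount
shift = rng.uniform(-1.0, 1.0, size=(6, 5))
target = 1.8                              # desired property value (assumed)

def fitness(mol):                         # mol: one group index per site
    prop = 3.0 + sum(shift[s, g] for s, g in enumerate(mol))
    return -abs(prop - target)            # closer to the target is better

pop = rng.integers(0, 5, size=(20, 6))
for _ in range(40):
    order = np.argsort([fitness(m) for m in pop])[::-1]
    parents = pop[order[:10]]             # truncation selection
    kids = parents[rng.integers(0, 10, 20)].copy()
    mask = rng.random(kids.shape) < 0.15  # point mutations
    kids[mask] = rng.integers(0, 5, mask.sum())
    kids[0] = pop[order[0]]               # elitism: never lose the best molecule
    pop = kids

best = max(pop, key=fitness)
print(best, fitness(best))
```

Swapping the toy `fitness` for a quantum-chemical property evaluation recovers the overall shape of a "properties first, materials second" search.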

  18. From properties to materials: An efficient and simple approach

    NASA Astrophysics Data System (ADS)

    Huwig, Kai; Fan, Chencheng; Springborg, Michael

    2017-12-01

    We present an inverse-design method, the poor man's materials optimization, that is designed to identify materials within a very large class with optimized values for a pre-chosen property. The method combines an efficient genetic-algorithm-based optimization, an automatic approach for generating modified molecules, a simple approach for calculating the property of interest, and a mathematical formulation of the quantity whose value shall be optimized. In order to illustrate the performance of our approach, we study the properties of organic molecules related to those used in dye-sensitized solar cells, whereby we, for the sake of proof of principle, consider benzene as a simple test system. Using a genetic algorithm, the substituents attached to the organic backbone are varied and the best performing molecules are identified. We consider several properties to describe the performance of organic molecules, including the HOMO-LUMO gap, the sunlight absorption, the spatial distance of the orbitals, and the reorganisation energy. The results show that our method is able to identify a large number of good candidate structures within a short time. In some cases, chemical/physical intuition can be used to rationalize the substitution pattern of the best structures, although this is not always possible. The present investigations provide a solid foundation for dealing with more complex and technically relevant systems such as porphyrins. Furthermore, our "properties first, materials second" approach is not limited to solar-energy harvesting but can be applied to many other fields, as is briefly discussed in the paper.

  19. An efficient inverse radiotherapy planning method for VMAT using quadratic programming optimization.

    PubMed

    Hoegele, W; Loeschel, R; Merkle, N; Zygmanski, P

    2012-01-01

    The purpose of this study is to investigate the feasibility of an inverse planning optimization approach for volumetric modulated arc therapy (VMAT) based on quadratic programming and the projection method. The performance of this method is evaluated against a reference commercial planning system (Eclipse(TM) for RapidArc(TM)) for clinically relevant cases. The inverse problem is posed in terms of a linear combination of basis functions representing arclet dose contributions, with their respective linear coefficients as degrees of freedom. MLC motion is decomposed into basic motion patterns in an intuitive manner, leading to a system of equations with a relatively small number of equations and unknowns. These equations are solved using quadratic programming under certain limiting physical conditions on the solution, such as the avoidance of negative dose during optimization and Monitor Unit reduction. The modeling by the projection method assures a unique treatment plan with beneficial properties, such as an explicit relation between organ weightings and the final dose distribution. Clinical cases studied include prostate and spine treatments. The optimized plans are evaluated by comparing isodose lines, DVH profiles for target and normal organs, and Monitor Units to those obtained with the clinical treatment planning system Eclipse(TM). The resulting dose distributions for a prostate case (with rectum and bladder as organs at risk) and for a spine case (with kidneys, liver, lung and heart as organs at risk) are presented. Overall, the results indicate that plan qualities similar to those of RapidArc(TM) could be achieved by quadratic programming (QP) with significantly less computational and planning effort. Additionally, results for the Quasimodo phantom [Bohsung et al., "IMRT treatment planning: A comparative inter-system and inter-centre planning exercise of the ESTRO Quasimodo group," Radiother. Oncol. 76(3), 354-361 (2005)] are presented as an example of an extreme concave case. Quadratic programming is an alternative approach to inverse planning that generates clinically satisfactory plans comparable to those of the clinical system and constitutes an efficient optimization process characterized by uniqueness and reproducibility of the solution.
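
    The core computational step described above (nonnegative arclet weights fitted to a prescription by quadratic programming) can be sketched as follows. The arclet dose matrix and prescription here are random toy stand-ins, not clinical data, and the QP is solved by simple projected gradient descent rather than the authors' solver:

```python
import numpy as np

# Toy sketch of QP-based inverse planning: dose = A @ w, where column j of A
# is the dose contribution of arclet j and w holds nonnegative arclet weights.
# A and the prescription are synthetic stand-ins, not clinical data.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(50, 8))   # 50 voxels x 8 arclet basis functions
d_target = A @ rng.uniform(0.0, 2.0, 8)   # an achievable prescription

# Minimize ||A w - d_target||^2 subject to w >= 0 (no negative dose),
# via projected gradient descent with step 1/L.
L = 2.0 * np.linalg.norm(A.T @ A, 2)      # Lipschitz constant of the gradient
w = np.zeros(8)
for _ in range(5000):
    grad = 2.0 * A.T @ (A @ w - d_target)
    w = np.maximum(w - grad / L, 0.0)     # projection enforces nonnegativity

rel_residual = np.linalg.norm(A @ w - d_target) / np.linalg.norm(d_target)
```

    The projection step is what encodes the "avoidance of negative dose" constraint mentioned in the abstract; a production solver would handle the remaining physical constraints as well.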

  20. Quality by design approach for developing chitosan-Ca-alginate microspheres for colon delivery of celecoxib-hydroxypropyl-β-cyclodextrin-PVP complex.

    PubMed

    Mennini, N; Furlanetto, S; Cirri, M; Mura, P

    2012-01-01

    The aim of the present work was to develop a new multiparticulate system designed for colon-specific delivery of celecoxib for both systemic (in chronotherapic treatment of arthritis) and local (in prophylaxis of colon carcinogenesis) therapy. The system simultaneously benefits from ternary complexation with hydroxypropyl-β-cyclodextrin and PVP (polyvinylpyrrolidone), to increase drug solubility, and from vectorization in chitosan-Ca-alginate microspheres, to exploit the colon-specific carrier properties of these polymers. Statistical experimental design was employed to investigate the combined effect of four formulation variables, i.e., the percentages of alginate, CaCl₂, and chitosan and the cross-linking time, on microsphere entrapment efficiency (EE%) and the drug amount released after 4 h in colonic medium, considered as the responses to be optimized. Design of experiments was used in the context of Quality by Design, which requires a multivariate approach to understanding the multifactorial relationships among formulation parameters. A Doehlert design allowed a design space to be defined, which revealed that variations of the considered factors had, in most cases, opposite influences on the two responses. A desirability function was used to attain simultaneous optimization of both responses. The desired goals were achieved for both systemic and local use of celecoxib. Experimental values obtained from the optimized formulations were in both cases very close to the predicted values, thus confirming the validity of the generated mathematical model. These results demonstrate the effectiveness of the proposed joint use of drug-cyclodextrin complexation and chitosan-Ca-alginate microsphere vectorization, as well as the usefulness of the multivariate approach for the preparation of colon-targeted celecoxib microspheres with optimized properties. Copyright © 2011 Elsevier B.V. All rights reserved.
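
    The desirability step can be illustrated with a minimal Derringer-Suich sketch. The formulation names, response ranges, and the choice to maximize both responses are invented for illustration, not taken from the paper:

```python
# Hypothetical desirability-function sketch: two responses per candidate
# formulation -- entrapment efficiency EE% and % drug released after 4 h --
# are mapped to [0, 1] desirabilities and combined by a geometric mean.
# Values and ranges below are illustrative, not the paper's data.

def desirability(y, low, high):
    """Larger-is-better Derringer-Suich desirability, clipped to [0, 1]."""
    return min(max((y - low) / (high - low), 0.0), 1.0)

candidates = {            # name: (EE%, % released after 4 h)
    "F1": (62.0, 70.0),
    "F2": (88.0, 45.0),
    "F3": (80.0, 78.0),
}

overall = {
    name: (desirability(ee, 50, 95) * desirability(rel, 20, 85)) ** 0.5
    for name, (ee, rel) in candidates.items()
}
best = max(overall, key=overall.get)   # formulation with the highest overall D
```

    The geometric mean ensures that a formulation scoring zero on either response is rejected outright, which is the usual rationale for this combination rule.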

  1. Optimal actuator placement in adaptive precision trusses

    NASA Technical Reports Server (NTRS)

    Baycan, C. M.; Utku, S.; Das, S. K.; Wada, B. K.

    1992-01-01

    Actuator placement in adaptive truss structures caters to two needs: displacement control of precision points and preloading of the elements to overcome joint slackness. Owing to technological and financial considerations, the number of available actuators is much smaller than the number of degrees of freedom of the precision points to be controlled and the degree of redundancy of the structure. An approach for optimal actuator location is outlined. Test cases demonstrating the effectiveness of the scheme are applied to the Precision Segmented Reflector Truss.

  2. Is the linear modeling technique good enough for optimal form design? A comparison of quantitative analysis models.

    PubMed

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. The consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms into the design process. The approach uses quantification theory type I (QTTI), grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and the product form elements of personal digital assistants (PDAs). The results of the performance comparison show that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although PDA form design is used as the case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.

  3. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    PubMed Central

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. The consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms into the design process. The approach uses quantification theory type I (QTTI), grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and the product form elements of personal digital assistants (PDAs). The results of the performance comparison show that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although PDA form design is used as the case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  4. Identifying strategic sites for Green-Infrastructures (GI) to manage stormwater in a miscellaneous use urban African watershed

    NASA Astrophysics Data System (ADS)

    Selker, J. S.; Kahsai, S. K.

    2017-12-01

    Green infrastructure (GI), or low-impact development (LID), is a land-use planning and design approach that aims to mitigate the environmental impacts of land development, and it is increasingly looked to as a way to lessen runoff and pollutant loading to receiving water bodies. Broad-scale approaches for siting GI/LID have been developed for agricultural watersheds but are rare for urban watersheds, largely because of greater land-use complexity. The task is even more challenging in urban Africa, owing to the combination of poor data quality, rapid and unplanned development, and civic institutions unable to reliably carry out regular maintenance. We present a spatio-temporal, simulation-based approach to identify an optimal prioritization of sites for GI/LID based on a DEM, land use, and land cover. A multi-objective optimization tool is coupled with an urban storm water management model (SWMM) to identify the most cost-effective combination of LID/GI. The approach was applied to the Lubigi catchment, a mixed-use urban watershed in NW Kampala, Uganda, notorious for being heavily flooded every year, as a case study.

  5. Web-GIS oriented systems viability for municipal solid waste selective collection optimization in developed and transient economies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rada, E.C., E-mail: Elena.Rada@ing.unitn.it; Ragazzi, M.; Fedrizzi, P.

    Highlights: ► An appropriate solution for MSW management in developed and transient countries. ► An option to increase the efficiency of MSW selective collection. ► An opportunity to integrate MSW management needs and services inventories. ► A tool to develop Urban Mining actions. - Abstract: Municipal solid waste management is a multidisciplinary activity that includes generation, source separation, storage, collection, transfer and transport, processing and recovery, and, last but not least, disposal. The optimization of waste collection, through source separation, is compulsory where a landfill-based management must be overcome. In this paper, a few aspects related to the implementation of a Web-GIS based system are analyzed. This approach is critically examined with reference to the experience of two Italian case studies and two additional extra-European case studies. The first case is one of the best examples of selective collection optimization in Italy: the obtained efficiency is very high, with 80% of waste source-separated for recycling purposes. In the second reference case, the local administration is facing the optimization of waste collection through Web-GIS oriented technologies for the first time, starting from a scenario far from an optimized management of municipal solid waste. The last two case studies concern pilot experiences in China and Malaysia. Each step of the Web-GIS oriented strategy is comparatively discussed with reference to typical scenarios of developed and transient economies. The main result is that transient economies are ready to move toward Web oriented tools for MSW management, but this opportunity is not yet well exploited in the sector.

  6. Deep ECGNet: An Optimal Deep Learning Framework for Monitoring Mental Stress Using Ultra Short-Term ECG Signals.

    PubMed

    Hwang, Bosun; You, Jiwoo; Vaessen, Thomas; Myin-Germeys, Inez; Park, Cheolsoo; Zhang, Byoung-Tak

    2018-02-08

    Stress recognition using electrocardiogram (ECG) signals requires the intractable long-term heart rate variability (HRV) parameter extraction process. This study proposes a novel deep learning framework, the Deep ECGNet, to recognize stressful states using ultra short-term raw ECG signals without any feature engineering. The Deep ECGNet was developed through various experiments and analysis of ECG waveforms. We propose an optimal recurrent and convolutional neural network architecture, together with an optimal convolution filter length (related to the P, Q, R, S, and T wave durations of ECG) and pooling length (related to the heart beat period), based on optimization experiments and analysis of the waveform characteristics of ECG signals. Experiments were also conducted with conventional methods using HRV parameters and frequency features as a benchmark test. The data used in this study were obtained from Kwangwoon University in Korea (13 subjects, Case 1) and KU Leuven University in Belgium (9 subjects, Case 2). Experiments were designed according to various experimental protocols to elicit stressful conditions. The proposed framework outperformed the conventional approaches with accuracies of 87.39% for Case 1 and 73.96% for Case 2, improvements of 16.22% and 10.98%, respectively, over the conventional HRV method. We thus propose an optimal deep learning architecture and its parameters for stress recognition, together with theoretical considerations on how to design the deep learning structure based on the periodic patterns of raw ECG data. The experimental results show that the proposed deep learning model, the Deep ECGNet, is an effective structure for recognizing stress conditions from ultra short-term ECG data.

  7. Assessing the Value of Information for Identifying Optimal Floodplain Management Portfolios

    NASA Astrophysics Data System (ADS)

    Read, L.; Bates, M.; Hui, R.; Lund, J. R.

    2014-12-01

    Floodplain management is a complex portfolio problem that can be analyzed from an integrated perspective incorporating traditionally structural and nonstructural options. One method to identify effective strategies for preparing for, responding to, and recovering from floods is to optimize over a portfolio of temporary (emergency) and permanent floodplain management options. A risk-based optimization approach to this problem assigns probabilities to specific flood events and calculates the associated expected damages. This approach is currently limited by: (1) the assumption of perfect flood forecast information, i.e., implementing temporary management activities according to the actual flood event may differ from optimizing based on forecasted information; and (2) the inability to assess system resilience across a range of possible future events (a risk-centric approach). Resilience is defined here as the ability of a system to absorb and recover from a severe disturbance or extreme event. In our analysis, resilience is a system property that requires integration of physical, social, and information domains. This work employs a 3-stage linear program to identify the optimal mix of floodplain management options, using conditional probabilities to represent perfect and imperfect flood stages (forecast vs. actual events). We assess the value of information in terms of minimizing damage costs for two theoretical cases - urban and rural systems. We use portfolio analysis to explore how the set of optimal management options differs depending on whether the goal is for the system to be risk-averse to a specified event or resilient over a range of events.
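
    A stripped-down version of the portfolio idea, with fixed costs for management measures plus probability-weighted residual damages, can be sketched as follows. The options, probabilities, and damage figures are invented and the linear-program structure is reduced to brute-force enumeration:

```python
from itertools import combinations

# Toy floodplain-portfolio sketch: choose a subset of management options that
# minimizes upfront cost + expected flood damage over a set of flood events.
# All numbers are illustrative stand-ins, not calibrated data.

options = {               # name: (upfront cost, fractional damage reduction)
    "levee":    (120.0, 0.50),
    "zoning":   ( 40.0, 0.20),
    "sandbags": ( 10.0, 0.10),   # temporary measure, modeled crudely here
}
events = [(0.02, 5000.0), (0.10, 2000.0)]   # (annual probability, base damage)

def expected_total(portfolio):
    """Upfront cost plus expected residual damage for a set of options."""
    reduction = min(sum(options[o][1] for o in portfolio), 1.0)
    damage = sum(p * d * (1.0 - reduction) for p, d in events)
    return sum(options[o][0] for o in portfolio) + damage

all_portfolios = [frozenset(c) for r in range(len(options) + 1)
                  for c in combinations(options, r)]
best = min(all_portfolios, key=expected_total)
```

    With these made-up numbers the full portfolio wins; the paper's contribution is to replace this enumeration with a staged linear program and to distinguish forecast-driven temporary deployment from the actual event.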

  8. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free trajectory tracking of multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforward without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed in a set of optimization problems assigned to each separate single-input single-output control channel that ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.
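
    For a linear plant, the decomposition/recomposition step can be sketched with plain least squares. The plant matrix and primitive library below are random stand-ins, whereas the paper learns the reference-input primitives model-free through ILC:

```python
import numpy as np

# Primitive-based recomposition sketch for a linear plant y = G @ u (lifted
# over one trial). Library: reference-input primitives U (columns) with their
# recorded output primitives Y = G @ U. G and U are random stand-ins.
rng = np.random.default_rng(3)
G = rng.normal(size=(30, 30))          # unknown plant, used only to simulate
U = rng.normal(size=(30, 4))           # 4 learned reference-input primitives
Y = G @ U                              # corresponding output primitives

# New desired trajectory, assumed to lie in the span of the output primitives:
y_desired = Y @ np.array([0.5, -1.0, 2.0, 0.25])

# Decompose the desired trajectory on the output primitives ...
coeffs, *_ = np.linalg.lstsq(Y, y_desired, rcond=None)
# ... and recompose the optimal reference input from the input primitives.
u_ref = U @ coeffs

tracking_error = np.linalg.norm(G @ u_ref - y_desired)
```

    The point of the scheme is visible here: once the primitive library exists, a new trajectory is tracked by one least-squares solve instead of repeated executions of the task.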

  9. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. 
These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
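
    A minimal differential evolution loop (DE/rand/1/bin) on a toy objective shows the search mechanism used in the second contribution. The real method couples such an algorithm to the finite-element evaluations and multiple objectives; here the objective is a cheap stand-in:

```python
import numpy as np

# Minimal DE/rand/1/bin sketch on a toy objective (sphere function), standing
# in for the expensive finite-element machine evaluation.
rng = np.random.default_rng(1)

def objective(x):
    return float(np.sum(x ** 2))       # stand-in for an FE-based cost

dim, pop_size, F, CR = 5, 20, 0.7, 0.9
pop = rng.uniform(-5.0, 5.0, (pop_size, dim))
fitness = np.array([objective(x) for x in pop])

for _ in range(300):                   # generations
    for i in range(pop_size):
        idx = [j for j in range(pop_size) if j != i]
        a, b, c = pop[rng.choice(idx, 3, replace=False)]
        mutant = a + F * (b - c)                        # differential mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                 # keep >= 1 mutant gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = objective(trial)
        if f_trial <= fitness[i]:                       # greedy selection
            pop[i], fitness[i] = trial, f_trial

best_cost = float(fitness.min())
```

    Because each candidate evaluation is a full finite-element solve in the dissertation's setting, the efficiency of the electromagnetic solver directly bounds how large a population and how many generations are affordable.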

  10. D-Optimal Experimental Design for Contaminant Source Identification

    NASA Astrophysics Data System (ADS)

    Sai Baba, A. K.; Alexanderian, A.

    2016-12-01

    Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be expressed mathematically as an inverse problem, with a linear observation operator or parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters - in our case, the sparsity of the sensors - to maximize the information gain subject to physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental design involving large-scale PDE-constrained problems with high-dimensional parameter fields. A major challenge for OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluations of the objective function and gradient involving the determinants of large, dense matrices - a cost that can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
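
    The flavor of such randomized estimators can be shown with Hutchinson's trace estimator, which replaces an exact matrix computation by a few matrix-vector products. The matrix here is a random stand-in, and the paper's estimators target log-determinants rather than the plain trace:

```python
import numpy as np

# Hutchinson's randomized trace estimator: tr(A) ~ (1/n) sum_k z_k^T A z_k
# with Rademacher probes z_k. Only matrix-vector products with A are needed,
# which is the key to scaling OED criteria to large dense operators.
# A is a synthetic SPD stand-in for the (prior-preconditioned) Hessian.
rng = np.random.default_rng(7)
B = rng.normal(size=(100, 100))
A = B @ B.T                            # symmetric positive semidefinite

n_probes = 2000
Z = rng.choice([-1.0, 1.0], size=(100, n_probes))   # Rademacher probes
trace_estimate = np.einsum("ik,ik->", Z, A @ Z) / n_probes

exact_trace = np.trace(A)
rel_error = abs(trace_estimate - exact_trace) / exact_trace
```

    In the D-optimal setting the same idea is applied, with additional machinery, to log-det terms inside the objective and gradient, so that no large dense determinant is ever formed explicitly.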

  11. Real-time PCR probe optimization using design of experiments approach.

    PubMed

    Wadle, S; Lehnert, M; Rubenwolf, S; Zengerle, R; von Stetten, F

    2016-03-01

    Primer and probe sequence designs are among the most critical input factors in real-time polymerase chain reaction (PCR) assay optimization. In this study, we present the use of statistical design of experiments (DOE) approach as a general guideline for probe optimization and more specifically focus on design optimization of label-free hydrolysis probes that are designated as mediator probes (MPs), which are used in reverse transcription MP PCR (RT-MP PCR). The effect of three input factors on assay performance was investigated: distance between primer and mediator probe cleavage site; dimer stability of MP and target sequence (influenza B virus); and dimer stability of the mediator and universal reporter (UR). The results indicated that the latter dimer stability had the greatest influence on assay performance, with RT-MP PCR efficiency increased by up to 10% with changes to this input factor. With an optimal design configuration, a detection limit of 3-14 target copies/10 μl reaction could be achieved. This improved detection limit was confirmed for another UR design and for a second target sequence, human metapneumovirus, with 7-11 copies/10 μl reaction detected in an optimum case. The DOE approach for improving oligonucleotide designs for real-time PCR not only produces excellent results but may also reduce the number of experiments that need to be performed, thus reducing costs and experimental times.

  12. Building Better Decision-Support by Using Knowledge Discovery.

    ERIC Educational Resources Information Center

    Jurisica, Igor

    2000-01-01

    Discusses knowledge-based decision-support systems that use artificial intelligence approaches. Addresses the issue of how to create an effective case-based reasoning system for complex and evolving domains, focusing on automated methods for system optimization and domain knowledge evolution that can supplement knowledge acquired from domain…

  13. Lead optimization attrition analysis (LOAA): a novel and general methodology for medicinal chemistry.

    PubMed

    Munson, Mark; Lieberman, Harvey; Tserlin, Elina; Rocnik, Jennifer; Ge, Jie; Fitzgerald, Maria; Patel, Vinod; Garcia-Echeverria, Carlos

    2015-08-01

    Herein, we report a novel and general method, lead optimization attrition analysis (LOAA), to benchmark two distinct small-molecule lead series using a relatively unbiased, simple technique and commercially available software. We illustrate this approach with data collected during lead optimization of two independent oncology programs as a case study. Easily generated graphics and attrition curves enabled us to calibrate progress and support go/no go decisions on each program. We believe that this data-driven technique could be used broadly by medicinal chemists and management to guide strategic decisions during drug discovery. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Procuring load curtailment from local customers under uncertainty.

    PubMed

    Mijatović, Aleksandar; Moriarty, John; Vogrinc, Jure

    2017-08-13

    Demand side response (DSR) provides a flexible approach to managing constrained power network assets. This is valuable if future asset utilization is uncertain. However, there may be uncertainty over the process of procuring DSR from customers. In this context we combine probabilistic modelling, simulation and optimization to identify economically optimal procurement policies from heterogeneous customers local to the asset, under chance constraints on the adequacy of the procured DSR. Mathematically this gives rise to a search over permutations, and we provide an illustrative example implementation and case study. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
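
    The search over permutations under a chance constraint can be sketched with a brute-force Monte Carlo version. The customers, availability probabilities, adequacy level, and prefix-procurement policy are all invented for illustration and are not the paper's model:

```python
import itertools
import random

# Toy DSR procurement sketch: procure customers in some order until the
# simulated probability of meeting the required curtailment reaches 90%,
# then pick the permutation whose accepted prefix costs least.
random.seed(42)
customers = {   # name: (procurement cost, curtailment kW, prob. of delivering)
    "A": (50.0, 40.0, 0.90),
    "B": (30.0, 30.0, 0.80),
    "C": (20.0, 25.0, 0.60),
    "D": (45.0, 35.0, 0.95),
}
REQUIRED, LEVEL, N_SIMS = 60.0, 0.90, 5000

# Pre-simulate each customer's delivery in N_SIMS scenarios.
delivered = {
    name: [kw if random.random() < p else 0.0 for _ in range(N_SIMS)]
    for name, (cost, kw, p) in customers.items()
}

def prefix_cost(order):
    """Cost of the shortest adequate prefix of `order` (None if inadequate)."""
    totals = [0.0] * N_SIMS
    cost = 0.0
    for name in order:
        cost += customers[name][0]
        totals = [t + d for t, d in zip(totals, delivered[name])]
        if sum(t >= REQUIRED for t in totals) / N_SIMS >= LEVEL:
            return cost
    return None

feasible = {o: prefix_cost(o) for o in itertools.permutations(customers)}
best_order = min((o for o, c in feasible.items() if c is not None),
                 key=feasible.get)
```

    With these numbers, no single customer or pair satisfies the chance constraint, so the cheapest adequate prefix procures three customers; the exhaustive permutation search stands in for the paper's optimization.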

  15. Enhanced ergonomics approaches for product design: a user experience ecosystem perspective and case studies.

    PubMed

    Xu, Wei

    2014-01-01

    This paper first discusses the major inefficiencies faced in current human factors and ergonomics (HFE) approaches: (1) delivering an optimal end-to-end user experience (UX) to users of a solution across its solution lifecycle stages; (2) strategically influencing the product business and technology capability roadmaps from a UX perspective and (3) proactively identifying new market opportunities and influencing the platform architecture capabilities on which the UX of end products relies. In response to these challenges, three case studies are presented to demonstrate how enhanced ergonomics design approaches have effectively addressed the challenges faced in current HFE approaches. Then, the enhanced ergonomics design approaches are conceptualised by a user-experience ecosystem (UXE) framework, from a UX ecosystem perspective. Finally, evidence supporting the UXE, the advantage and the formalised process for executing UXE and methodological considerations are discussed. Practitioner Summary: This paper presents enhanced ergonomics approaches to product design via three case studies to effectively address current HFE challenges by leveraging a systematic end-to-end UX approach, UX roadmaps and emerging UX associated with prioritised user needs and usages. Thus, HFE professionals can be more strategic, creative and influential.

  16. A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study.

    PubMed

    Kaplan, David; Chen, Jianshen

    2012-07-01

    A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for three methods of implementation: propensity score stratification, weighting, and optimal full matching. Three simulation studies and one case study are presented to elaborate the proposed two-step Bayesian propensity score approach. Results of the simulation studies reveal that greater precision in the propensity score equation yields better recovery of the frequentist-based treatment effect. A slight advantage is shown for the Bayesian approach in small samples. Results also reveal that greater precision around the wrong treatment effect can lead to seriously distorted results. However, greater precision around the correct treatment effect parameter yields quite good results, with slight improvement seen with greater precision in the propensity score equation. A comparison of coverage rates for the conventional frequentist approach and proposed Bayesian approach is also provided. The case study reveals that credible intervals are wider than frequentist confidence intervals when priors are non-informative.

  17. Optimizing Power–Frequency Droop Characteristics of Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guggilam, Swaroop S.; Zhao, Changhong; Dall'Anese, Emiliano

    This paper outlines a procedure to design power-frequency droop slopes for distributed energy resources (DERs) installed in distribution networks so that they optimally participate in primary frequency response. In particular, the droop slopes are engineered such that DERs respond in proportion to their power ratings and are not unfairly penalized in power provisioning based on their location in the distribution network. The main contribution of our approach is that a specified level of frequency regulation can be guaranteed at the feeder head, while ensuring that the outputs of individual DERs conform to a well-defined notion of fairness. The approach we adopt leverages an optimization-based perspective and suitable linearizations of the power-flow equations to embed notions of fairness, and information regarding the physics of the power flows within the distribution network, into the droop slopes. Time-domain simulations from a differential-algebraic-equation model of the 39-bus New England test-case system, augmented with three instances of the IEEE 37-node distribution network with frequency-sensitive DERs, are provided to validate our approach.
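
    The rating-proportional fairness notion can be sketched in a few lines. This ignores the paper's network-aware linearized power-flow corrections and simply splits a required aggregate droop gain in proportion to DER ratings; all numbers are illustrative:

```python
# Simplified droop-design sketch: split a required aggregate droop gain
# M_req (kW per Hz at the feeder head) across DERs in proportion to their
# power ratings. Ratings and M_req are illustrative numbers; the paper's
# method additionally embeds linearized network physics into the slopes.
ratings_kw = {"der1": 5.0, "der2": 10.0, "der3": 25.0}
M_req = 80.0                       # required aggregate droop gain, kW/Hz

total = sum(ratings_kw.values())
droop = {name: M_req * r / total for name, r in ratings_kw.items()}

# Each DER's primary-frequency response to a frequency deviation delta_f:
delta_f = -0.1                     # Hz (under-frequency event)
response = {name: -m * delta_f for name, m in droop.items()}
```

    By construction the slopes sum to M_req, so the feeder-head regulation target is met while each DER contributes in proportion to its rating.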

  18. Design of experiments applications in bioprocessing: concepts and approach.

    PubMed

    Kumar, Vijesh; Bhalla, Akriti; Rathore, Anurag S

    2014-01-01

    Most biotechnology unit operations are complex in nature, with numerous process variables, feed material attributes, and raw material attributes that can have a significant impact on the performance of the process. A design of experiments (DOE)-based approach offers a solution to this conundrum and allows for an efficient estimation of the main effects and the interactions with a minimal number of experiments. Numerous publications illustrate the application of DOE towards development of different bioprocessing unit operations. However, a systematic approach for evaluating the different DOE designs and choosing the optimal design for a given application has not been published yet. Through this work we have compared the I-optimal and D-optimal designs to the commonly used central composite and Box-Behnken designs for bioprocess applications. A systematic methodology is proposed for construction of the model and for precise prediction of the responses for three case studies involving some of the commonly used unit operations in downstream processing. Use of the Akaike information criterion for model selection has been examined and found to be suitable for the applications under consideration. © 2013 American Institute of Chemical Engineers.
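
    The D-optimality comparison behind such design evaluations reduces to the determinant of the information matrix XᵀX. A two-factor sketch with invented candidate designs:

```python
import numpy as np

# D-criterion sketch: score candidate designs by det(X'X) of the model matrix
# for y = b0 + b1*x1 + b2*x2 + b12*x1*x2 (coded +/-1 factors). The designs
# below are illustrative, not from the cited case studies.
def model_matrix(runs):
    return np.array([[1.0, x1, x2, x1 * x2] for x1, x2 in runs])

full_factorial = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
poor_design    = [(1, 1), (1, -1), (1, 1), (-1, 1)]   # duplicated run

def d_criterion(runs):
    X = model_matrix(runs)
    return float(np.linalg.det(X.T @ X))

score_full = d_criterion(full_factorial)   # orthogonal design: X'X = 4*I
score_poor = d_criterion(poor_design)      # rank-deficient: criterion ~ 0
```

    A D-optimal design maximizes this determinant, shrinking the joint confidence region of the fitted coefficients; I-optimal designs instead minimize average prediction variance, which is why the two can rank candidate designs differently.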

  19. A systematic reactor design approach for the synthesis of active pharmaceutical ingredients.

    PubMed

    Emenike, Victor N; Schenkendorf, René; Krewer, Ulrike

    2018-05-01

    Today's highly competitive pharmaceutical industry is in dire need of an accelerated transition from the drug development phase to the drug production phase. At the heart of this transition are chemical reactors that facilitate the synthesis of active pharmaceutical ingredients (APIs) and whose design can affect subsequent processing steps. Inspired by this challenge, we present a model-based approach for systematic reactor design. The proposed concept is based on the elementary process functions (EPF) methodology to select an optimal reactor configuration from existing state-of-the-art reactor types or can possibly lead to the design of novel reactors. As a conceptual study, this work summarizes the essential steps in adapting the EPF approach to optimal reactor design problems in the field of API syntheses. Practically, the nucleophilic aromatic substitution of 2,4-difluoronitrobenzene was analyzed as a case study of pharmaceutical relevance. Here, a small-scale tubular coil reactor with controlled heating was identified as the optimal set-up reducing the residence time by 33% in comparison to literature values. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. A statistical approach to optimizing concrete mixture design.

    PubMed

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.
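
    The 3³ factorial-plus-regression workflow can be sketched end to end. The coded levels and the synthetic (noise-free) strength model below are illustrative, not the paper's data:

```python
import itertools
import numpy as np

# Sketch of the 3^3 full factorial + polynomial regression workflow with
# coded factor levels -1/0/+1 for w/cm ratio, cementitious content, and
# fine/total aggregate ratio. The "measured" strengths come from a made-up
# noise-free linear model so the fit is exactly recoverable.
levels = (-1.0, 0.0, 1.0)
design = list(itertools.product(levels, repeat=3))     # 27 runs

def true_strength(w, c, f):
    return 40.0 - 5.0 * w + 3.0 * c + 2.0 * f          # illustrative model

y = np.array([true_strength(*run) for run in design])
X = np.array([[1.0, w, c, f] for w, c, f in design])   # first-order model

coef, *_ = np.linalg.lstsq(X, y, rcond=None)           # [b0, bw, bc, bf]
```

    The paper's model additionally carries quadratic and interaction terms and is fitted to replicated, noisy measurements, with ANOVA used to judge which terms are significant.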

  1. A Statistical Approach to Optimizing Concrete Mixture Design

    PubMed Central

    Alghamdi, Saeid A.

    2014-01-01

A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405

  2. Solving bi-level optimization problems in engineering design using kriging models

    NASA Astrophysics Data System (ADS)

    Xia, Yi; Liu, Xiaojie; Du, Gang

    2018-05-01

Stackelberg game-theoretic approaches are applied extensively in engineering design to handle distributed collaboration decisions. Bi-level genetic algorithms (BLGAs) and response surfaces have been used to solve the corresponding bi-level programming models. However, the computational costs for BLGAs often increase rapidly with the complexity of lower-level programs, and optimal solution functions sometimes cannot be approximated by response surfaces. This article proposes a new method, namely the optimal solution function approximation by kriging model (OSFAKM), in which kriging models are used to approximate the optimal solution functions. A detailed example demonstrates that OSFAKM can obtain better solutions than BLGAs and response surface-based methods, and at the same time markedly reduce the computational workload. Five benchmark problems and a case study of the optimal design of a thin-walled pressure vessel are also presented to illustrate the feasibility and potential of the proposed method for bi-level optimization in engineering design.
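The core idea, fitting a kriging surrogate to sampled values of the lower-level optimal solution function, can be sketched with a minimal numpy-only Gaussian-process interpolant. The toy inner problem (whose optimum is sin(x)), the squared-exponential covariance, and the length-scale are all assumptions of this sketch, not OSFAKM itself.

```python
import numpy as np

# Toy lower-level problem: y(x) = argmin_z (z - sin(x))^2, so y(x) = sin(x).
# In OSFAKM-style methods this would be an expensive inner optimization.
def lower_level_optimum(x):
    return np.sin(x)

X_train = np.linspace(0.0, np.pi, 8)      # sampled upper-level decisions
y_train = lower_level_optimum(X_train)    # exact inner optima at the samples

def kriging_predict(X_new, X_tr, y_tr, length=0.5, nugget=1e-10):
    """Simple kriging predictor with a squared-exponential covariance."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X_tr, X_tr) + nugget * np.eye(len(X_tr))
    w = np.linalg.solve(K, y_tr)          # covariance-weighted sample weights
    return k(X_new, X_tr) @ w

# The surrogate now replaces the inner optimization at new points.
X_test = np.array([0.3, 1.1, 2.5])
y_pred = kriging_predict(X_test, X_train, y_train)
err = np.max(np.abs(y_pred - np.sin(X_test)))
```

Eight samples suffice here because the toy optimal solution function is smooth; the upper-level search can then query the surrogate instead of re-solving the lower-level program at every iterate.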

  3. A multi-material topology optimization approach for wrinkle-free design of cable-suspended membrane structures

    NASA Astrophysics Data System (ADS)

    Luo, Yangjun; Niu, Yanzhuang; Li, Ming; Kang, Zhan

    2017-06-01

    In order to eliminate stress-related wrinkles in cable-suspended membrane structures and to provide simple and reliable deployment, this study presents a multi-material topology optimization model and an effective solution procedure for generating optimal connected layouts for membranes and cables. On the basis of the principal stress criterion of membrane wrinkling behavior and the density-based interpolation of multi-phase materials, the optimization objective is to maximize the total structural stiffness while satisfying principal stress constraints and specified material volume requirements. By adopting the cosine-type relaxation scheme to avoid the stress singularity phenomenon, the optimization model is successfully solved through a standard gradient-based algorithm. Four-corner tensioned membrane structures with different loading cases were investigated to demonstrate the effectiveness of the proposed method in automatically finding the optimal design composed of curved boundary cables and wrinkle-free membranes.

  4. Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.

    PubMed

    Otero-Muras, Irene; Banga, Julio R

    2017-07-21

In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaptation, and fold-change detection, respectively.
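The Pareto-set computation at the heart of such a framework reduces to non-dominance filtering over candidate designs. A minimal sketch, with made-up two-objective scores (both minimized, e.g. protein cost and response time):

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset, minimizing every objective."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some q is <= p everywhere and < p somewhere.
        dominated = any(np.all(q <= p) and np.any(q < p) for q in pts)
        if not dominated:
            keep.append(i)
    return pts[keep]

# Hypothetical (objective1, objective2) scores for five candidate circuits.
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.9)]
front = pareto_front(designs)   # (3.0, 4.0) is dominated by (2.0, 3.0)
```

In the paper's setting the candidate scores come from a mixed-integer dynamic optimization solver rather than a fixed list, but the trade-off set returned to the designer is exactly this non-dominated subset.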

  5. Optimization of end-pumped, actively Q-switched quasi-III-level lasers.

    PubMed

    Jabczynski, Jan K; Gorajek, Lukasz; Kwiatkowski, Jacek; Kaskow, Mateusz; Zendzian, Waldemar

    2011-08-15

A new model of end-pumped quasi-three-level lasers, accounting for transient pumping processes, ground-state depletion, and up-conversion effects, was developed. The model consists of two parts, a pumping stage and a Q-switched stage, which can be separated in the case of an active Q-switching regime. For the pumping stage, a semi-analytical model was developed, enabling calculation of the final occupation of the upper laser level for a given pump power and duration, spatial profile of the pump beam, and length and dopant level of the gain medium. For quasi-stationary inversion, an optimization procedure for the Q-switching regime based on the Lagrange multiplier technique was developed. A new approach to optimizing the CW regime of quasi-three-level lasers was developed to optimize Q-switched lasers operating at high repetition rates. Both optimization methods enable calculation of the optimal absorbance of the gain medium and the output losses for a given pump rate. © 2011 Optical Society of America

  6. Optimal control in adaptive optics modeling of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Herrmann, J.

The problem of using an adaptive optics system to correct for nonlinear effects like thermal blooming is addressed using a model containing nonlinear lenses through which Gaussian beams are propagated. The best correction of this nonlinear system can be formulated as a deterministic open loop optimal control problem. This treatment gives a limit for the best possible correction. Aspects of adaptive control and servo systems are not included at this stage. An attempt is made to determine the control in the transmitter plane that minimizes the time-averaged area or maximizes the fluence in the target plane. The standard minimization procedure leads to a two-point-boundary-value problem, which is ill-conditioned in this case. The optimal control problem was solved using an iterative gradient technique. An instantaneous correction is introduced and compared with the optimal correction. The results of the calculations show that for short times or weak nonlinearities the instantaneous correction is close to the optimal correction, but that for long times and strong nonlinearities a large difference develops between the two types of correction. For these cases the steady state correction becomes better than the instantaneous correction and approaches the optimum correction.
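The iterative gradient technique can be illustrated on a toy open-loop optimal control problem. The scalar dynamics and quadratic cost below are stand-ins for the nonlinear beam-propagation model; only the method (roll out, form the gradient, descend) is the point.

```python
import numpy as np

# Steer x_{k+1} = x_k + u_k from x_0 = 0 toward a target, with a small
# control-effort penalty r. All numbers are illustrative.
N, r, target = 10, 0.01, 1.0
u = np.zeros(N)                            # open-loop control sequence

def rollout(u):
    """State trajectory x_0..x_N under controls u."""
    return np.concatenate([[0.0], np.cumsum(u)])

def cost(u):
    x = rollout(u)
    return (x[-1] - target) ** 2 + r * np.sum(u ** 2)

for _ in range(500):                       # iterative gradient descent
    x_T = rollout(u)[-1]
    grad = 2 * (x_T - target) + 2 * r * u  # dJ/du_k = 2(x_T - target) + 2 r u_k
    u -= 0.05 * grad

final = rollout(u)[-1]                     # close to the target
```

For this linear-quadratic toy the analytic optimum spreads the control evenly, u_k = target / (N + r), so the final state lands just short of the target; in the paper's setting the gradient of the fluence objective through the nonlinear propagation replaces the closed-form expression above.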

  7. Accurate position estimation methods based on electrical impedance tomography measurements

    NASA Astrophysics Data System (ADS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones.
The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.

  8. Efficacy of very fast simulated annealing global optimization method for interpretation of self-potential anomaly by different forward formulation over 2D inclined sheet type structure

    NASA Astrophysics Data System (ADS)

    Biswas, A.; Sharma, S. P.

    2012-12-01

The self-potential (SP) method is an important geophysical technique that measures the electrical potential due to natural sources of current in the Earth's subsurface. An inclined sheet is a very familiar model structure associated with mineralization, fault planes, groundwater flow and many other geological features that exhibit self-potential anomalies. A number of linearized and global inversion approaches have been developed for the interpretation of SP anomalies over different structures for various purposes. The mathematical expression for the forward response over a two-dimensional dipping sheet type structure can be written in three different ways, using five variables in each case, and the complexity of the inversion differs among the three forward approaches. In the present study, an interpretation of self-potential anomalies using very fast simulated annealing (VFSA) global optimization has been developed, which yielded new insight into the uncertainty and equivalence of model parameters. Interpretation of the measured data yields the location of the causative body, depth to the top, extension, dip and quality of the causative body. A comparative performance study of the three forward approaches in the interpretation of self-potential anomalies is carried out to assess the efficacy of each approach in resolving possible ambiguity. Although each forward formulation yields the same forward response, optimizing different sets of variables using different forward problems poses different kinds of ambiguity in the interpretation. The performance of the three approaches in optimization has been compared, and it is observed that one of the three is best suited for this kind of study. Our VFSA approach has been tested on synthetic, noisy and field data for the three methods to show the efficacy and suitability of the best method.
It is important to use the forward problem in the optimization that yields the best result without ambiguity and with the smallest uncertainty. Keywords: SP anomaly, inclined sheet, 2D structure, forward problems, VFSA optimization.
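A generic very fast simulated annealing loop can be sketched as follows. The fast-decaying temperature schedule and the heavy-tailed, temperature-dependent generation function are standard VFSA ingredients; the two-parameter quadratic misfit is a placeholder for the SP sheet misfit, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def misfit(m):
    """Toy objective standing in for the SP-anomaly misfit; optimum at (1, 1)."""
    return float(np.sum((m - 1.0) ** 2))

lo, hi = np.array([-10.0, -10.0]), np.array([10.0, 10.0])  # model bounds
m = rng.uniform(lo, hi)
best_m, best_f = m.copy(), misfit(m)
T0, c, D = 1.0, 1.0, 2                    # schedule constants, model dimension

for k in range(1, 4000):
    T = T0 * np.exp(-c * k ** (1.0 / D))  # VFSA temperature schedule
    u = rng.uniform(size=D)
    # Temperature-dependent generation: near-zero steps at low T, but with
    # a heavy tail that still allows occasional large jumps.
    step = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2 * u - 1) - 1.0)
    m_new = np.clip(m + step * (hi - lo), lo, hi)
    f_new = misfit(m_new)
    if f_new < best_f:
        best_m, best_f = m_new.copy(), f_new
    # Metropolis acceptance at the current temperature.
    if f_new <= misfit(m) or rng.uniform() < np.exp(-(f_new - misfit(m)) / T):
        m = m_new
```

In the actual interpretation problem, `misfit` would compare the observed SP profile with the forward response of the five sheet parameters, and the same loop would be run once per forward formulation to compare them.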

  9. Rapid design and optimization of low-thrust rendezvous/interception trajectory for asteroid deflection missions

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Zhu, Yongsheng; Wang, Yukai

    2014-02-01

Asteroid deflection techniques are essential in order to protect the Earth from catastrophic impacts by hazardous asteroids. Rapid design and optimization of low-thrust rendezvous/interception trajectories is considered one of the key technologies to successfully deflect potentially hazardous asteroids. In this paper, we address a general framework for the rapid design and optimization of low-thrust rendezvous/interception trajectories for future asteroid deflection missions. The design and optimization process includes three closely associated steps. Firstly, shape-based approaches and a genetic algorithm (GA) are adopted to perform the preliminary design, which provides a reasonable initial guess for the subsequent accurate optimization. Secondly, the Radau pseudospectral method is utilized to transcribe the low-thrust trajectory optimization problem into a discrete nonlinear programming (NLP) problem. Finally, sequential quadratic programming (SQP) is used to efficiently solve the nonlinear programming problem and obtain the optimal low-thrust rendezvous/interception trajectories. The rapid design and optimization algorithms developed in this paper are validated by three simulation cases with different performance indexes and boundary constraints.

  10. Robust Sliding Mode Control Based on GA Optimization and CMAC Compensation for Lower Limb Exoskeleton

    PubMed Central

    Long, Yi; Du, Zhi-jiang; Wang, Wei-dong; Dong, Wei

    2016-01-01

A lower limb assistive exoskeleton is designed to help operators walk or carry payloads. The exoskeleton is required to shadow human motion intent accurately and compliantly to prevent incoordination. If the user's intention is estimated accurately, a precise position control strategy will improve collaboration between the user and the exoskeleton. In this paper, a hybrid position control scheme, combining sliding mode control (SMC) with a cerebellar model articulation controller (CMAC) neural network, is proposed to control the exoskeleton to react appropriately to human motion intent. A genetic algorithm (GA) is utilized to determine the optimal sliding surface and the sliding control law to improve the performance of SMC. The proposed control strategy (SMC_GA_CMAC) is compared with three other approaches, that is, conventional SMC without optimization, optimal SMC with GA (SMC_GA), and SMC with CMAC compensation (SMC_CMAC), all of which are employed to track the desired joint angular position deduced from Clinical Gait Analysis (CGA) data. Position tracking performance is investigated with cosimulation using ADAMS and MATLAB/SIMULINK in two cases, of which the first is without disturbances while the second is with a bounded disturbance. The cosimulation results show the effectiveness of the proposed control strategy, which can be employed in similar exoskeleton systems. PMID:27069353
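The sliding-mode part of the scheme can be sketched on a double-integrator joint model tracking a sinusoidal trajectory under a bounded disturbance. The gains, boundary layer, and disturbance below are illustrative assumptions, and no GA tuning or CMAC compensation is included.

```python
import numpy as np

dt, T = 1e-3, 10.0
lam, k, phi = 5.0, 2.0, 0.1    # sliding-surface slope, switching gain, boundary layer
t = np.arange(0.0, T, dt)
xd, xd_dot, xd_ddot = np.sin(t), np.cos(t), -np.sin(t)   # desired joint trajectory

x, v = 0.5, 0.0                # joint angle and velocity, starting off-target
err = np.empty_like(t)
for i in range(len(t)):
    e, e_dot = x - xd[i], v - xd_dot[i]
    s = e_dot + lam * e                        # sliding variable
    sat = np.clip(s / phi, -1.0, 1.0)          # smoothed sign() to limit chattering
    u = xd_ddot[i] - lam * e_dot - k * sat     # equivalent control + switching term
    d = 0.2 * np.sin(5.0 * t[i])               # bounded disturbance, |d| <= 0.2 < k
    v += (u + d) * dt                          # double-integrator joint: x'' = u + d
    x += v * dt
    err[i] = e

steady_err = np.max(np.abs(err[t > 5.0]))      # tracking error after the transient
```

Because the switching gain `k` exceeds the disturbance bound, the sliding variable is driven into the boundary layer and the steady-state tracking error stays small despite the unmodeled disturbance, which is the property the GA tuning and CMAC compensation in the paper then refine.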

  11. Generalized t-statistic for two-group classification.

    PubMed

    Komori, Osamu; Eguchi, Shinto; Copas, John B

    2015-06-01

    In the classic discriminant model of two multivariate normal distributions with equal variance matrices, the linear discriminant function is optimal both in terms of the log likelihood ratio and in terms of maximizing the standardized difference (the t-statistic) between the means of the two distributions. In a typical case-control study, normality may be sensible for the control sample but heterogeneity and uncertainty in diagnosis may suggest that a more flexible model is needed for the cases. We generalize the t-statistic approach by finding the linear function which maximizes a standardized difference but with data from one of the groups (the cases) filtered by a possibly nonlinear function U. We study conditions for consistency of the method and find the function U which is optimal in the sense of asymptotic efficiency. Optimality may also extend to other measures of discriminatory efficiency such as the area under the receiver operating characteristic curve. The optimal function U depends on a scalar probability density function which can be estimated non-parametrically using a standard numerical algorithm. A lasso-like version for variable selection is implemented by adding L1-regularization to the generalized t-statistic. Two microarray data sets in the study of asthma and various cancers are used as motivating examples. © 2014, The International Biometric Society.
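For the special case where the filter U is the identity, the generalized t-statistic reduces to the classical linear discriminant: the direction w maximizing the standardized mean difference is w ∝ S⁻¹(m₁ − m₀). A minimal sketch on synthetic two-group data (the data and dimensions are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
controls = rng.normal([0.0, 0.0], 1.0, size=(200, 2))   # "control" sample
cases    = rng.normal([2.0, 1.0], 1.0, size=(200, 2))   # "case" sample

m0, m1 = controls.mean(axis=0), cases.mean(axis=0)
S = 0.5 * (np.cov(controls.T) + np.cov(cases.T))        # pooled covariance
w = np.linalg.solve(S, m1 - m0)                          # Fisher discriminant direction

def t_stat(w):
    """Standardized difference of group means along direction w."""
    return (w @ (m1 - m0)) / np.sqrt(w @ S @ w)
```

Because w maximizes this Rayleigh-type quotient, its t-statistic is at least as large as that of any other direction, e.g. either coordinate axis; the paper's generalization replaces the case sample's contribution with a nonlinearly filtered version to handle heterogeneous cases.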

  12. Robust Sliding Mode Control Based on GA Optimization and CMAC Compensation for Lower Limb Exoskeleton.

    PubMed

    Long, Yi; Du, Zhi-Jiang; Wang, Wei-Dong; Dong, Wei

    2016-01-01

    A lower limb assistive exoskeleton is designed to help operators walk or carry payloads. The exoskeleton is required to shadow human motion intent accurately and compliantly to prevent incoordination. If the user's intention is estimated accurately, a precise position control strategy will improve collaboration between the user and the exoskeleton. In this paper, a hybrid position control scheme, combining sliding mode control (SMC) with a cerebellar model articulation controller (CMAC) neural network, is proposed to control the exoskeleton to react appropriately to human motion intent. A genetic algorithm (GA) is utilized to determine the optimal sliding surface and the sliding control law to improve performance of SMC. The proposed control strategy (SMC_GA_CMAC) is compared with three other types of approaches, that is, conventional SMC without optimization, optimal SMC with GA (SMC_GA), and SMC with CMAC compensation (SMC_CMAC), all of which are employed to track the desired joint angular position which is deduced from Clinical Gait Analysis (CGA) data. Position tracking performance is investigated with cosimulation using ADAMS and MATLAB/SIMULINK in two cases, of which the first case is without disturbances while the second case is with a bounded disturbance. The cosimulation results show the effectiveness of the proposed control strategy which can be employed in similar exoskeleton systems.

  13. Application of multiobjective optimization to scheduling capacity expansion of urban water resource systems

    NASA Astrophysics Data System (ADS)

    Mortazavi-Naeini, Mohammad; Kuczera, George; Cui, Lijie

    2014-06-01

    Significant population increase in urban areas is likely to result in a deterioration of drought security and level of service provided by urban water resource systems. One way to cope with this is to optimally schedule the expansion of system resources. However, the high capital costs and environmental impacts associated with expanding or building major water infrastructure warrant the investigation of scheduling system operational options such as reservoir operating rules, demand reduction policies, and drought contingency plans, as a way of delaying or avoiding the expansion of water supply infrastructure. Traditionally, minimizing cost has been considered the primary objective in scheduling capacity expansion problems. In this paper, we consider some of the drawbacks of this approach. It is shown that there is no guarantee that the social burden of coping with drought emergencies is shared equitably across planning stages. In addition, it is shown that previous approaches do not adequately exploit the benefits of joint optimization of operational and infrastructure options and do not adequately address the need for the high level of drought security expected for urban systems. To address these shortcomings, a new multiobjective optimization approach to scheduling capacity expansion in an urban water resource system is presented and illustrated in a case study involving the bulk water supply system for Canberra. The results show that the multiobjective approach can address the temporal equity issue of sharing the burden of drought emergencies and that joint optimization of operational and infrastructure options can provide solutions superior to those just involving infrastructure options.

  14. Exploiting variability for energy optimization of parallel programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lavrijsen, Wim; Iancu, Costin; de Jong, Wibe

    2016-04-18

In this paper we present optimizations that use DVFS mechanisms to reduce the total energy usage in scientific applications. Our main insight is that noise is intrinsic to large scale parallel executions and it appears whenever shared resources are contended. The presence of noise allows us to identify and manipulate any program regions amenable to DVFS. When compared to previous energy optimizations that make per core decisions using predictions of the running time, our scheme uses a qualitative approach to recognize the signature of executions amenable to DVFS. By recognizing the "shape of variability" we can optimize codes with highly dynamic behavior, which pose challenges to all existing DVFS techniques. We validate our approach using offline and online analyses for one-sided and two-sided communication paradigms. We have applied our methods to NWChem, and we show best case improvements in energy use of 12% at no loss in performance when using online optimizations running on 720 Haswell cores with one-sided communication. With NWChem on MPI two-sided and offline analysis, capturing the initialization, we find energy savings of up to 20%, with less than 1% performance cost.

  15. Universal optimal working cycles of molecular motors.

    PubMed

    Efremov, Artem; Wang, Zhisong

    2011-04-07

Molecular motors capable of directional track-walking or rotation are abundant in living cells, and inspire the emerging field of artificial nanomotors. Some biomotors can convert 90% of free energy from chemical fuels into usable mechanical work, and the same motors still maintain a speed sufficient for cellular functions. This study exposed a new regime of universal optimization that amounts to a thermodynamically best working regime for molecular motors but is unfamiliar in macroscopic engines. For the ideal case of zero energy dissipation, the universally optimized working cycle for molecular motors is infinitely slow, like the Carnot cycle for heat engines. But when a small amount of energy dissipation reduces energy efficiency linearly from 100%, the speed is recovered exponentially due to Boltzmann's law. Experimental data on a major biomotor (kinesin) suggest that the regime of universal optimization has been largely approached in living cells, underpinning the extreme efficiency-speed trade-off in biomotors. The universal optimization and its practical approachability are unique thermodynamic advantages of molecular systems over macroscopic engines in facilitating motor functions. The findings have important implications for the natural evolution of biomotors as well as the development of artificial counterparts.

  16. Computational Tools and Algorithms for Designing Customized Synthetic Genes

    PubMed Central

    Gould, Nathan; Hendy, Oliver; Papamichail, Dimitris

    2014-01-01

    Advances in DNA synthesis have enabled the construction of artificial genes, gene circuits, and genomes of bacterial scale. Freedom in de novo design of synthetic constructs provides significant power in studying the impact of mutations in sequence features, and verifying hypotheses on the functional information that is encoded in nucleic and amino acids. To aid this goal, a large number of software tools of variable sophistication have been implemented, enabling the design of synthetic genes for sequence optimization based on rationally defined properties. The first generation of tools dealt predominantly with singular objectives such as codon usage optimization and unique restriction site incorporation. Recent years have seen the emergence of sequence design tools that aim to evolve sequences toward combinations of objectives. The design of optimal protein-coding sequences adhering to multiple objectives is computationally hard, and most tools rely on heuristics to sample the vast sequence design space. In this review, we study some of the algorithmic issues behind gene optimization and the approaches that different tools have adopted to redesign genes and optimize desired coding features. We utilize test cases to demonstrate the efficiency of each approach, as well as identify their strengths and limitations. PMID:25340050
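The simplest of the singular objectives mentioned, codon-usage optimization, can be sketched as a greedy most-frequent-codon substitution. The tiny usage table below is made up for illustration and covers only the residues in the example; real tools use organism-specific tables and balance multiple objectives.

```python
# Hypothetical codon usage table: amino acid -> {codon: relative frequency}.
usage = {
    'M': {'ATG': 1.00},
    'K': {'AAA': 0.74, 'AAG': 0.26},
    'F': {'TTT': 0.57, 'TTC': 0.43},
    '*': {'TAA': 0.61, 'TGA': 0.30, 'TAG': 0.09},  # stop codons
}

def optimize(protein):
    """Greedy codon optimization: pick the most frequent codon per residue."""
    return ''.join(max(usage[aa], key=usage[aa].get) for aa in protein)

dna = optimize('MKF*')   # 'ATG' + 'AAA' + 'TTT' + 'TAA'
```

Greedy selection handles this one objective exactly, which is why the first-generation tools could use it; the combinatorial difficulty discussed in the review arises only when several sequence objectives (restriction sites, secondary structure, repeats) must be satisfied at once.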

  17. Lung segmentation refinement based on optimal surface finding utilizing a hybrid desktop/virtual reality user interface.

    PubMed

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R

    2013-01-01

Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The experiments showed a significant increase in performance in terms of mean absolute surface distance errors (2.54±0.75 mm prior to refinement vs. 1.11±0.43 mm post-refinement, p≪0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction was about 2 min per case. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result.
The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. On the optimal systems of subalgebras for the equations of hydrodynamic stability analysis of smooth shear flows and their group-invariant solutions

    NASA Astrophysics Data System (ADS)

    Hau, Jan-Niklas; Oberlack, Martin; Chagelishvili, George

    2017-04-01

    We present a unifying solution framework for the linearized compressible equations for two-dimensional linearly sheared unbounded flows using the Lie symmetry analysis. The full set of symmetries that are admitted by the underlying system of equations is employed to systematically derive the one- and two-dimensional optimal systems of subalgebras, whose connected group reductions lead to three distinct invariant ansatz functions for the governing sets of partial differential equations (PDEs). The purpose of this analysis is threefold and explicitly we show that (i) there are three invariant solutions that stem from the optimal system. These include a general ansatz function with two free parameters, as well as the ansatz functions of the Kelvin mode and the modal approach. Specifically, the first approach unifies these well-known ansatz functions. By considering two limiting cases of the free parameters and related algebraic transformations, the general ansatz function is reduced to either of them. This fact also proves the existence of a link between the Kelvin mode and modal ansatz functions, as these appear to be the limiting cases of the general one. (ii) The Lie algebra associated with the Lie group admitted by the PDEs governing the compressible dynamics is a subalgebra associated with the group admitted by the equations governing the incompressible dynamics, which allows an additional (scaling) symmetry. Hence, any consequences drawn from the compressible case equally hold for the incompressible counterpart. (iii) In any of the systems of ordinary differential equations, derived by the three ansatz functions in the compressible case, the linearized potential vorticity is a conserved quantity that allows us to analyze vortex and wave mode perturbations separately.

  19. A predictive control framework for optimal energy extraction of wind farms

    NASA Astrophysics Data System (ADS)

    Vali, M.; van Wingerden, J. W.; Boersma, S.; Petrović, V.; Kühn, M.

    2016-09-01

    This paper proposes an adjoint-based model predictive control for optimal energy extraction of wind farms. It employs the axial induction factor of wind turbines to influence their aerodynamic interactions through the wake. The performance index is defined here as the total power production of the wind farm over a finite prediction horizon. A medium-fidelity wind farm model is utilized to predict the inflow propagation in advance. The adjoint method is employed to solve the formulated optimization problem in a cost effective way and the first part of the optimal solution is implemented over the control horizon. This procedure is repeated at the next controller sample time providing the feedback into the optimization. The effectiveness and some key features of the proposed approach are studied for a two turbine test case through simulations.
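The underlying trade-off can be shown in a drastically simplified steady-state form: the upstream turbine sacrifices some of its own power by lowering its axial induction below the single-turbine (Betz) optimum, leaving more energy in the wake for the downstream machine. The actuator-disc power curve and the crude wake-recovery factor below are assumptions of this sketch, not the paper's medium-fidelity model or its adjoint-based MPC.

```python
import numpy as np

def farm_power(a1, a2, u_inf=1.0, recovery=0.7):
    """Total power of two aligned turbines under a toy wake model."""
    cp = lambda a: 4.0 * a * (1.0 - a) ** 2          # actuator-disc power coefficient
    u2 = u_inf * (1.0 - 2.0 * a1 * (1.0 - recovery)) # partially recovered wake inflow
    return cp(a1) * u_inf ** 3 + cp(a2) * u2 ** 3

# Brute-force search over both induction factors (the paper uses adjoints).
a = np.linspace(0.0, 0.5, 501)
A1, A2 = np.meshgrid(a, a, indexing='ij')
P = farm_power(A1, A2)
i, j = np.unravel_index(np.argmax(P), P.shape)
a1_opt, a2_opt = a[i], a[j]

greedy = farm_power(1/3, 1/3)   # each turbine individually Betz-optimal
best = P[i, j]                  # farm-level optimum beats greedy operation
```

The optimizer keeps the downstream turbine at the Betz value (its inflow is whatever arrives) but derates the upstream one below 1/3, and the farm total exceeds the greedy setting, which is exactly the cooperative behavior the predictive controller exploits dynamically.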

  20. Optimality of the barrier strategy in de Finetti's dividend problem for spectrally negative Lévy processes: An alternative approach

    NASA Astrophysics Data System (ADS)

    Yin, Chuancun; Wang, Chunwei

    2009-11-01

    The optimal dividend problem proposed in de Finetti [1] is to find the dividend-payment strategy that maximizes the expected discounted value of dividends which are paid to the shareholders until the company is ruined. Avram et al. [9] studied the case when the risk process is modelled by a general spectrally negative Lévy process and Loeffen [10] gave sufficient conditions under which the optimal strategy is of the barrier type. Recently Kyprianou et al. [11] strengthened the result of Loeffen [10] which established a larger class of Lévy processes for which the barrier strategy is optimal among all admissible ones. In this paper we use an analytical argument to re-investigate the optimality of barrier dividend strategies considered in the three recent papers.

  1. Direct aperture optimization using an inverse form of back-projection.

    PubMed

    Zhu, Xiaofeng; Cullip, Timothy; Tracton, Gregg; Tang, Xiaoli; Lian, Jun; Dooley, John; Chang, Sha X

    2014-03-06

    Direct aperture optimization (DAO) has been used to produce intensity-modulated radiotherapy (IMRT) treatment plans of high dosimetric quality with fast treatment delivery by directly modeling the multileaf collimator (MLC) segment shapes and weights. To improve plan quality and reduce treatment time for our in-house treatment planning system, we implemented a new DAO approach that does not use a global objective function. An index concept is introduced as an inverse form of the back-projection used in the CT multiplicative algebraic reconstruction technique (MART); the index introduced here for IMRT optimization is analogous to the multiplicand in MART. The index is defined as the ratio of the optimal value to the current value. It is assigned to each voxel and beamlet to optimize the fluence map, and the indices for beamlets and segments are used to optimize MLC segment shapes and segment weights, respectively. Preliminary data show that, without sacrificing dosimetric quality, the DAO implementation reduced average IMRT treatment time from 13 min to 8 min for the prostate and from 15 min to 9 min for the head and neck using our in-house treatment planning system PlanUNC. The DAO approach has also shown promise in optimizing rotational IMRT with burst mode in a head-and-neck test case.
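    A minimal sketch of the multiplicative index update, assuming a linear dose model D = A·w on a toy two-voxel, two-beamlet problem; the numbers and helper names are illustrative assumptions, not the PlanUNC implementation:

```python
def optimize_fluence(A, prescribed, weights, iterations=500):
    # Multiplicative, MART-style update: each voxel index is the ratio of
    # its optimal (prescribed) dose to its current dose, and each beamlet
    # weight is scaled by a dose-weighted average of the voxel indices.
    n_vox, n_beam = len(A), len(A[0])
    for _ in range(iterations):
        dose = [sum(A[v][b] * weights[b] for b in range(n_beam))
                for v in range(n_vox)]
        idx = [prescribed[v] / dose[v] if dose[v] > 0 else 1.0
               for v in range(n_vox)]
        for b in range(n_beam):
            num = sum(A[v][b] * idx[v] for v in range(n_vox))
            den = sum(A[v][b] for v in range(n_vox))
            if den > 0:
                weights[b] *= num / den
    return weights

# Two voxels, two beamlets: the prescription is reachable, so the
# indices converge toward 1 and the dose toward the prescription.
A = [[1.0, 0.2], [0.2, 1.0]]
prescribed = [2.0, 1.0]
w = optimize_fluence(A, prescribed, [1.0, 1.0])
dose = [sum(A[v][b] * w[b] for b in range(2)) for v in range(2)]
```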

  2. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network; for example, cost and flow measures are both important. As a result, there has been recent interest in solving the bicriteria network optimization problem. The bicriteria network optimization problem is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach using a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto-optimal solutions that give the maximum possible flow with minimum cost. The approach also incorporates the adaptive weight approach (AWA), which utilizes information from the current population to readjust the weights and obtain search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems show the effectiveness of the proposed method.
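    The priority-based decoding step can be sketched as follows; the graph, the priority genes, and the greedy rule are illustrative assumptions rather than the authors' exact operators:

```python
def decode_path(priorities, adj, source, sink):
    # Priority-based chromosome decoding: walk from source to sink,
    # always moving to the unvisited neighbour with the highest
    # priority gene. Crossover/mutation act on the priority vector,
    # so every chromosome decodes to some path.
    path, node, visited = [source], source, {source}
    while node != sink:
        candidates = [n for n in adj[node] if n not in visited]
        if not candidates:
            return None  # dead end: infeasible chromosome
        node = max(candidates, key=lambda n: priorities[n])
        visited.add(node)
        path.append(node)
    return path

adj = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
# A high priority gene on node 2 routes the decoded path through it.
path = decode_path({0: 0, 1: 1, 2: 9, 3: 5}, adj, 0, 3)
```

    Changing the genes changes the route without any repair step, which is why this encoding composes cleanly with standard GA operators.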

  3. Conceptual Design and Structural Optimization of NASA Environmentally Responsible Aviation (ERA) Hybrid Wing Body Aircraft

    NASA Technical Reports Server (NTRS)

    Quinlan, Jesse R.; Gern, Frank H.

    2016-01-01

    Simultaneously achieving the fuel consumption and noise reduction goals set forth by NASA's Environmentally Responsible Aviation (ERA) project requires innovative and unconventional aircraft concepts. In response, advanced hybrid wing body (HWB) aircraft concepts have been proposed and analyzed as a means of meeting these objectives. For the current study, several HWB concepts were analyzed using the Hybrid wing body Conceptual Design and structural optimization (HCDstruct) analysis code. HCDstruct is a medium-fidelity, finite-element-based conceptual design and structural optimization tool developed to fill the critical analysis gap between lower-order structural sizing approaches and detailed, often finite-element-based sizing methods for HWB aircraft concepts. Whereas prior versions of the tool used a half-model approach in building the representative finite element model, a full wing-tip-to-wing-tip modeling capability was recently added to HCDstruct, replacing the symmetry constraints at the model centerline with a free-flying model and allowing more realistic center body, aft body, and wing loading and trim response. The latest version of HCDstruct was applied to two ERA reference cases, including the Boeing Open Rotor Engine Integration On an HWB (OREIO) concept and the Boeing ERA-0009H1 concept, and results agreed favorably with detailed Boeing design data and related Flight Optimization System (FLOPS) analyses. Following these benchmark cases, HCDstruct was used to size NASA's ERA HWB concepts and to perform a related scaling study.

  4. Optimizing a Workplace Learning Pattern: A Case Study from Aviation

    ERIC Educational Resources Information Center

    Mavin, Timothy John; Roth, Wolff-Michael

    2015-01-01

    Purpose: This study aims to contribute to current research on team learning patterns. It specifically addresses some negative perceptions of the job performance learning pattern. Design/methodology/approach: Over a period of three years, qualitative and quantitative data were gathered on pilot learning in the workplace. The instructional modes…

  5. Spatiotemporal radiotherapy planning using a global optimization approach

    NASA Astrophysics Data System (ADS)

    Adibi, Ali; Salari, Ehsan

    2018-02-01

    This paper aims at quantifying the extent of potential therapeutic gain, measured using biologically effective dose (BED), that can be achieved by altering the radiation dose distribution over treatment sessions in fractionated radiotherapy. To that end, a spatiotemporally integrated planning approach is developed, where the spatial and temporal dose modulations are optimized simultaneously. The concept of equivalent uniform BED (EUBED) is used to quantify and compare the clinical quality of spatiotemporally heterogeneous dose distributions in target and critical structures. This gives rise to a large-scale non-convex treatment-plan optimization problem, which is solved using global optimization techniques. The proposed spatiotemporal planning approach is tested on two stylized cancer cases resembling two different tumor sites and sensitivity analysis is performed for radio-biological and EUBED parameters. Numerical results validate that spatiotemporal plans are capable of delivering a larger BED to the target volume without increasing the BED in critical structures compared to conventional time-invariant plans. In particular, this additional gain is attributed to the irradiation of different regions of the target volume at different treatment sessions. Additionally, the trade-off between the potential therapeutic gain and the number of distinct dose distributions is quantified, which suggests a diminishing marginal gain as the number of dose distributions increases.
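    The BED bookkeeping behind the spatiotemporal argument follows directly from the linear-quadratic model: for a fixed total physical dose, BED depends on how that dose is split across fractions. The α/β value and fraction doses below are illustrative:

```python
def bed(doses_per_fraction, alpha_beta):
    # Linear-quadratic biologically effective dose:
    # BED = sum over fractions of d * (1 + d / (alpha/beta)).
    return sum(d * (1 + d / alpha_beta) for d in doses_per_fraction)

alpha_beta_tumor = 10.0
uniform = [2.0] * 5                    # 10 Gy in five equal fractions
spatiotemporal = [5.0, 5.0, 0, 0, 0]   # same 10 Gy in two fractions

bed_uniform = bed(uniform, alpha_beta_tumor)        # 10 * (1 + 0.2) = 12 Gy
bed_varied = bed(spatiotemporal, alpha_beta_tumor)  # 10 * (1 + 0.5) = 15 Gy
```

    Concentrating dose into fewer fractions raises BED for the same physical dose, which is why irradiating different target subregions at different sessions can increase target BED without raising it in critical structures.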

  6. Persistent Mullerian Duct Syndrome: an interesting case report.

    PubMed

    Farag, S; Sutton, P; Leow, K S; Kosai, N R; Razman, J; Hanafiah, H; Das, S

    2013-01-01

    Transverse testicular ectopia is an uncommon disorder of testicular ectopia. Nearly thirty percent of cases are associated with persistent Mullerian duct syndrome, which is characterized by karyotypically normal males with retained Mullerian derivatives. Understanding the natural course of the condition and its association with malignant potential allows for a better understanding of the optimal surgical approach. We report the case of a young male who presented with a left-sided inguinal hernia in which the sac contained both testes and a uterus. A review of the literature on the syndrome is also discussed.

  7. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient-based, probabilistic search algorithms, based on a simplified social model and closely tied to swarming theory. Although PSO algorithms have several properties attractive to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reducing the elapsed time is to use coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented synchronously, with all design points within a design iteration evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multidisciplinary optimization of a typical transport aircraft wing as an example.
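    The synchronous-versus-asynchronous distinction can be sketched with a toy swarm: each particle is updated and resubmitted as soon as its own evaluation returns, instead of waiting for the whole iteration. The sphere objective, coefficient values, and thread pool below are illustrative assumptions, not the benchmarked implementation:

```python
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def sphere(x):
    # Cheap stand-in for an expensive design-point analysis.
    return sum(xi * xi for xi in x)

def async_pso(dim=3, n_particles=8, evaluations=200, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [float("inf")] * n_particles
    gbest, gbest_f = pos[0][:], float("inf")
    done = 0
    with ThreadPoolExecutor(max_workers=4) as ex:
        futures = {ex.submit(sphere, list(pos[i])): i
                   for i in range(n_particles)}
        while done < evaluations:
            for fut in as_completed(list(futures)):
                i = futures.pop(fut)
                f = fut.result()
                done += 1
                if f < pbest_f[i]:
                    pbest_f[i], pbest[i] = f, pos[i][:]
                if f < gbest_f:
                    gbest_f, gbest = f, pos[i][:]
                # Asynchronous step: update this particle and resubmit it
                # immediately, without waiting for the rest of the swarm.
                for d in range(dim):
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                futures[ex.submit(sphere, list(pos[i]))] = i
                if done >= evaluations:
                    break
    return gbest_f

best_value = async_pso()
```

    With real analyses of varying cost on heterogeneous nodes, this pattern keeps every worker busy, which is the source of the improved parallel efficiency.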

  8. A neural network based implementation of an MPC algorithm applied in the control systems of electromechanical plants

    NASA Astrophysics Data System (ADS)

    Marusak, Piotr M.; Kuntanapreeda, Suwat

    2018-01-01

    The paper considers the application of a neural-network-based implementation of a model predictive control (MPC) algorithm to electromechanical plants. The properties of such control plants imply that a relatively short sampling time should be used. However, in such a case, finding the control value numerically may be too time-consuming. Therefore, the current paper tests a solution based on transforming the MPC optimization problem into a set of differential equations whose solution is the same as that of the original optimization problem. This set of differential equations can be interpreted as a dynamic neural network. In such an approach, the constraints can be introduced into the optimization problem with relative ease. Moreover, the solution of the optimization problem can be obtained faster than when a standard numerical quadratic programming routine is used, although very careful tuning of the algorithm is needed to achieve this. A DC motor and an electrohydraulic actuator are taken as illustrative examples. The feasibility and effectiveness of the proposed approach are demonstrated through numerical simulations.
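    The idea of recasting the MPC quadratic program as differential equations can be sketched as a projected gradient flow whose equilibrium is the constrained optimizer; the matrices, box bounds, and forward-Euler integration below are illustrative assumptions, not the paper's network:

```python
def solve_qp_flow(H, c, lo, hi, dt=0.01, steps=5000):
    # Minimize 0.5*u'Hu + c'u subject to lo <= u <= hi by integrating
    # the "neuron" dynamics du/dt = -(Hu + c), projecting onto the box
    # after each Euler step. The flow's equilibrium is the QP solution.
    n = len(c)
    u = [0.0] * n
    for _ in range(steps):
        grad = [sum(H[i][j] * u[j] for j in range(n)) + c[i]
                for i in range(n)]
        u = [min(hi[i], max(lo[i], u[i] - dt * grad[i])) for i in range(n)]
    return u

H = [[2.0, 0.0], [0.0, 2.0]]
c = [-2.0, -6.0]                 # unconstrained optimum at (1, 3)
u = solve_qp_flow(H, c, lo=[-1.0, -1.0], hi=[2.0, 2.0])
# The box constraint clips the second coordinate at its upper bound of 2.
```

    Constraints enter simply as the projection step, which is the "relative ease" the abstract refers to; in an MPC loop the flow is re-integrated at each sampling instant from the previous solution.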

  9. Municipal solid waste transportation optimisation with vehicle routing approach: case study of Pontianak City, West Kalimantan

    NASA Astrophysics Data System (ADS)

    Kamal, M. A.; Youlla, D.

    2018-03-01

    Municipal solid waste (MSW) transportation in Pontianak City is an issue that needs to be tackled by the relevant agencies. The MSW transportation service in Pontianak City currently requires very high resources, especially in vehicle usage. Increasing the number of fleets has not been able to increase service levels, while garbage volume grows every year along with population growth. In this research, a vehicle routing optimization approach was used to find optimal, cost-efficient vehicle routes for transporting garbage from several Temporary Garbage Dumps (TGD) to the Final Garbage Dump (FGD). One complication of MSW transportation is that a TGD whose load exceeds the vehicle capacity must be visited more than once. The optimal computation results suggest that the municipal authorities need only 3 of the 5 vehicles provided, with a total minimum cost of IDR 778,870. The computation time needed to find the optimal route and minimal cost is very long, a problem influenced by the number of constraints and integer-valued decision variables.
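    The capacity complication can be sketched with a greedy trip builder in which an over-capacity TGD is simply visited again on a later trip; the loads, capacity, and greedy rule are illustrative assumptions, not the paper's routing model:

```python
def plan_trips(loads, capacity):
    # Build vehicle trips to the FGD, splitting any TGD whose load
    # exceeds the remaining (or total) vehicle capacity across visits.
    trips, current, remaining = [], [], capacity
    for tgd, load in loads.items():
        while load > 0:
            take = min(load, remaining)
            current.append((tgd, take))
            load -= take
            remaining -= take
            if remaining == 0:          # vehicle full: return to the FGD
                trips.append(current)
                current, remaining = [], capacity
    if current:
        trips.append(current)
    return trips

loads = {"TGD-A": 4, "TGD-B": 9, "TGD-C": 3}   # tonnes; B exceeds capacity
trips = plan_trips(loads, capacity=6)           # B is visited three times
```

    A real formulation would additionally minimize travel cost over the road network, which is where the integer decision variables and long computation times arise.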

  10. Towards inverse modeling of turbidity currents: The inverse lock-exchange problem

    NASA Astrophysics Data System (ADS)

    Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison

    2011-04-01

    A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for turbidite modeling has so far been hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as may in practice be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.
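    The surrogate-based, derivative-free loop can be sketched in one dimension: fit a quadratic surrogate through the best samples of an expensive misfit function and move to its vertex. The misfit function below is a hypothetical stand-in for a full flow simulation compared against well-log data:

```python
def misfit(param):
    # Stand-in for "run a simulation, compare the predicted deposit with
    # reference data"; the best-fit parameter is 0.7 by construction.
    return (param - 0.7) ** 2 + 0.05

def parabola_vertex(x0, x1, x2, y0, y1, y2):
    # Vertex of the quadratic interpolating three sample points.
    d0, d1, d2 = y0 * (x1 - x2), y1 * (x2 - x0), y2 * (x0 - x1)
    num = d0 * (x1 + x2) + d1 * (x2 + x0) + d2 * (x0 + x1)
    return num / (2.0 * (d0 + d1 + d2))

def optimize(f, lo, hi, iters=10):
    xs = [lo, (lo + hi) / 2, hi]
    for _ in range(iters):
        xv = parabola_vertex(*xs, *[f(x) for x in xs])
        xv = min(hi, max(lo, xv))
        if min(abs(xv - x) for x in xs) < 1e-9:
            break                           # new sample repeats an old one
        xs = sorted(xs + [xv], key=f)[:3]   # keep the three best samples
    return min(xs + [xv], key=f)

best = optimize(misfit, 0.0, 1.0)
```

    In the actual study the surrogate is multidimensional and each misfit evaluation is a direct numerical simulation, but the fit-then-resample structure is the same.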

  11. Sub-optimal control of fuzzy linear dynamical systems under granular differentiability concept.

    PubMed

    Mazandarani, Mehran; Pariz, Naser

    2018-05-01

    This paper deals with the sub-optimal control of a fuzzy linear dynamical system. The aim is to keep the state variables of the fuzzy linear dynamical system close to zero in an optimal manner. In the fuzzy dynamical system, the fuzzy derivative is considered as the granular derivative, and all the coefficients and initial conditions can be uncertain. The criterion for assessing optimality is a granular integral whose integrand is a quadratic function of the state variables and control inputs. Using relative-distance-measure (RDM) fuzzy interval arithmetic and the calculus of variations, the optimal control law is presented as fuzzy state-variable feedback. Since the optimal feedback gains are obtained as fuzzy functions, they need to be defuzzified; this results in the sub-optimal control law. This paper also sheds light on the restrictions imposed by approaches that are based on fuzzy standard interval arithmetic (FSIA) and use strongly generalized Hukuhara and generalized Hukuhara differentiability concepts for obtaining the optimal control law. The notion of granular eigenvalues is also defined. Using an RLC circuit mathematical model, it is shown that, due to their unnatural behavior in modeling phenomena, the FSIA-based approaches may obtain eigenvalue sets that differ from the inherent eigenvalue set of the fuzzy dynamical system. This is, however, not the case with the approach proposed in this study. The notions of granular controllability and granular stabilizability of the fuzzy linear dynamical system are also presented in this paper. Moreover, a sub-optimal control for regulating a Boeing 747 in the longitudinal direction with uncertain initial conditions and parameters is obtained. In addition, an uncertain suspension system of one of the four wheels of a bus is regulated using the sub-optimal control introduced in this paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  12. A simulation study to compare three self-controlled case series approaches: correction for violation of assumption and evaluation of bias.

    PubMed

    Hua, Wei; Sun, Guoying; Dodd, Caitlin N; Romio, Silvana A; Whitaker, Heather J; Izurieta, Hector S; Black, Steven; Sturkenboom, Miriam C J M; Davis, Robert L; Deceuninck, Genevieve; Andrews, N J

    2013-08-01

    The assumption that the occurrence of an outcome event must not alter the subsequent exposure probability is critical to preserving the validity of the self-controlled case series (SCCS) method. This assumption is violated in scenarios in which the event constitutes a contraindication for exposure. In this simulation study, we compared the performance of the standard SCCS approach and two alternative approaches when the event-independent exposure assumption was violated. Using the 2009 H1N1 and seasonal influenza vaccines and Guillain-Barré syndrome as a model, we simulated a scenario in which an individual may encounter multiple unordered exposures and each exposure may be contraindicated by the occurrence of the outcome event. The degree of contraindication was varied at 0%, 50%, and 100%. The first alternative approach used only cases occurring after exposure, with follow-up time starting from exposure. The second used a pseudo-likelihood method. When the event-independent exposure assumption was satisfied, the standard SCCS approach produced nearly unbiased relative incidence estimates. When this assumption was partially or completely violated, two alternative SCCS approaches could be used. While the post-exposure-cases-only approach could handle only one exposure, the pseudo-likelihood approach was able to correct the bias for both exposures. Violation of the event-independent exposure assumption leads to an overestimation of relative incidence, which can be corrected by the alternative SCCS approaches. In multiple-exposure situations, the pseudo-likelihood approach is optimal; the post-exposure-cases-only approach is limited in handling a second exposure and may introduce additional bias, and thus should be used with caution. Copyright © 2013 John Wiley & Sons, Ltd.

  13. Anatomical approach to permanent His bundle pacing: Optimizing His bundle capture.

    PubMed

    Vijayaraman, Pugazhendhi; Dandamudi, Gopi

    2016-01-01

    Permanent His bundle pacing is a physiological alternative to right ventricular pacing. In this article we describe our approach to His bundle pacing in patients with AV nodal and intra-Hisian conduction disease. It is essential for the implanters to understand the anatomic variations of the His bundle course and its effect on the type of His bundle pacing achieved. We describe several case examples to illustrate our anatomical approach to permanent His bundle pacing in this article. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Explicit optimization of plan quality measures in intensity-modulated radiation therapy treatment planning.

    PubMed

    Engberg, Lovisa; Forsgren, Anders; Eriksson, Kjell; Hårdemark, Björn

    2017-06-01

    To formulate convex planning objectives of treatment plan multicriteria optimization with explicit relationships to the dose-volume histogram (DVH) statistics used in plan quality evaluation. Conventional planning objectives are designed to minimize the violation of DVH statistics thresholds using penalty functions. Although successful in guiding the DVH curve towards these thresholds, conventional planning objectives offer limited control of the individual points on the DVH curve (doses-at-volume) used to evaluate plan quality. In this study, we abandon the usual penalty-function framework and propose planning objectives that more closely relate to DVH statistics. The proposed planning objectives are based on mean-tail-dose, resulting in convex optimization. We also demonstrate how to adapt a standard optimization method to the proposed formulation in order to obtain a substantial reduction in computational cost. We investigated the potential of the proposed planning objectives as tools for optimizing DVH statistics through juxtaposition with the conventional planning objectives on two patient cases. Sets of treatment plans with differently balanced planning objectives were generated using either the proposed or the conventional approach. Dominance in the sense of better distributed doses-at-volume was observed in plans optimized within the proposed framework. The initial computational study indicates that the DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives than using the conventional approach. © 2017 American Association of Physicists in Medicine.
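    The mean-tail-dose quantity at the heart of the proposed objectives can be sketched directly: it is the average over the hottest (or coldest) fraction of voxels, a CVaR-type statistic that bounds the corresponding dose-at-volume while remaining convex in the dose vector. The dose values below are illustrative:

```python
def dose_at_volume(doses, v):
    # D_v: the minimum dose received by the hottest v-fraction of voxels
    # (the DVH statistic used in plan quality evaluation).
    k = max(1, int(round(v * len(doses))))
    return sorted(doses, reverse=True)[k - 1]

def upper_mean_tail_dose(doses, v):
    # Mean dose over the hottest v-fraction: convex in the dose vector
    # and an upper bound on the corresponding dose-at-volume D_v.
    k = max(1, int(round(v * len(doses))))
    hottest = sorted(doses, reverse=True)[:k]
    return sum(hottest) / k

doses = [50.0, 52.0, 54.0, 60.0, 70.0]    # Gy, five voxels
d20 = dose_at_volume(doses, 0.2)          # hottest 20% -> 70.0
mtd20 = upper_mean_tail_dose(doses, 0.2)  # 70.0
mtd40 = upper_mean_tail_dose(doses, 0.4)  # (70 + 60) / 2 = 65.0
```

    Penalizing the mean-tail-dose therefore pushes down the dose-at-volume it bounds, while keeping the optimization problem convex, which is the motivation the abstract describes.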

  15. A subgradient approach for constrained binary optimization via quantum adiabatic evolution

    NASA Astrophysics Data System (ADS)

    Karimi, Sahar; Ronagh, Pooya

    2017-08-01

    An outer approximation method has been proposed in the literature for solving the Lagrangian dual of a constrained binary quadratic programming problem via quantum adiabatic evolution. This would be an efficient prescription for solving the Lagrangian dual problem given an ideally noise-free quantum adiabatic system; current implementations of quantum annealing systems, however, demand methods that are efficient at handling possible sources of noise. In this paper, we consider a subgradient method for finding an optimal primal-dual pair for the Lagrangian dual of a constrained binary polynomial programming problem. We then study the quadratic stable set (QSS) problem as a case study. We see that this method applied to the QSS problem can be viewed as an instance-dependent penalty-term approach that avoids large penalty coefficients. Finally, we report experimental results from the D-Wave 2X quantum annealer and conclude that our approach helps this quantum processor succeed more often in solving these problems compared with the usual penalty-term approaches.
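    The subgradient scheme can be sketched with a brute-force inner solver standing in for the annealer; the toy objective, constraint, and diminishing step-size rule are illustrative assumptions, not the QSS instances from the paper:

```python
from itertools import product

def solve_dual(f, g, n_bits, iters=50):
    # Subgradient ascent on the Lagrangian dual: the inner unconstrained
    # binary minimization (brute force here; the annealer in the paper)
    # yields x*, and g(x*) is a subgradient of the dual at lam.
    lam = 0.0
    for t in range(1, iters + 1):
        x_star = min(product([0, 1], repeat=n_bits),
                     key=lambda x: f(x) + lam * g(x))
        lam = max(0.0, lam + (1.0 / t) * g(x_star))  # diminishing step
    return lam, x_star

# Toy problem: maximize item values subject to picking at most one item,
# written as minimize f(x) with constraint g(x) <= 0.
f = lambda x: -(2 * x[0] + 3 * x[1] + 4 * x[2])
g = lambda x: (x[0] + x[1] + x[2]) - 1
lam, x = solve_dual(f, g, n_bits=3)
```

    The multiplier settles near the smallest value that makes the inner minimizer feasible, which is the sense in which the method acts as an instance-dependent penalty that avoids needlessly large coefficients.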

  16. A Novel Approach of Battery Energy Storage for Improving Value of Wind Power in Deregulated Markets

    NASA Astrophysics Data System (ADS)

    Nguyen, Y. Minh; Yoon, Yong Tae

    2013-06-01

    Wind power producers face many regulation costs in a deregulated environment, which remarkably lowers the value of wind power in comparison with conventional sources. One of these costs is associated with the real-time variation of power output and is paid in the frequency control market according to the variation band. In this regard, this paper presents a new approach to the scheduling and operation of battery energy storage installed in a wind generation system. The approach uses statistics of wind generation and predictions of frequency control market prices to determine the optimal real-time charging and discharging of the batteries, which ultimately gives the minimum frequency regulation cost for wind power producers. The optimization problem is formulated as the trade-off between the decrease in regulation payments and the increase in the cost of using battery energy storage. The approach is illustrated in a case study, and the simulation results show its effectiveness.

  17. SU-E-T-539: Fixed Versus Variable Optimization Points in Combined-Mode Modulated Arc Therapy Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kainz, K; Prah, D; Ahunbay, E

    2014-06-01

    Purpose: A novel modulated arc therapy technique, mARC, enables superposition of step-and-shoot IMRT segments upon a subset of the optimization points (OPs) of a continuous-arc delivery. We compare two approaches to mARC planning: one with the number of OPs fixed throughout optimization, and another where the planning system determines the number of OPs in the final plan, subject to an upper limit defined at the outset. Methods: Fixed-OP mARC planning was performed for representative cases using Panther v. 5.01 (Prowess, Inc.), while variable-OP mARC planning used Monaco v. 5.00 (Elekta, Inc.). All Monaco planning used an upper limit of 91 OPs; those OPs with minimal MU were removed during optimization. Plans were delivered, and delivery times recorded, on a Siemens Artiste accelerator using a flat 6MV beam with 300 MU/min rate. Dose distributions measured using ArcCheck (Sun Nuclear Corporation, Inc.) were compared with the plan calculation; the two were deemed consistent if they agreed to within 3.5% in absolute dose and 3.5 mm in distance-to-agreement among > 95% of the diodes within the direct beam. Results: Example cases included a prostate and a head-and-neck planned with a single arc and fraction doses of 1.8 and 2.0 Gy, respectively. Aside from slightly more uniform target dose for the variable-OP plans, the DVHs for the two techniques were similar. For the fixed-OP technique, the number of OPs was 38 and 39, and the delivery time was 228 and 259 seconds, respectively, for the prostate and head-and-neck cases. For the final variable-OP plans, there were 91 and 85 OPs, and the delivery time was 296 and 440 seconds, correspondingly longer than for fixed-OP. Conclusion: For mARC, both the fixed-OP and variable-OP approaches produced comparable-quality plans whose delivery was successfully verified. To keep delivery time per fraction short, a fixed-OP planning approach is preferred.

  18. Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey Dewayne

    2004-01-01

    The optimization tool described herein addresses and emphasizes the use of computer tools to model a system, and focuses on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system, and particularly on the development of the optimized system using new techniques. The methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo simulations, and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions are summarized here. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the case minimizing Design, Development, Test and Evaluation (DDT&E) cost than for the case minimizing Gross Liftoff Weight. In turn, the DDT&E cost was 53% higher for the optimized Gross Liftoff Weight case than for the case minimizing DDT&E cost. A 53% increase in DDT&E cost therefore yields a 67% reduction in Gross Liftoff Weight. Second, the tool outputs define the sensitivity of propulsion parameters, technology, and cost factors, and show how these parameters differ when cost and weight are optimized separately. A key finding was that, for a Space Shuttle Main Engine thrust level, an oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight, rather than the ratio of 5.2 that maximizes specific impulse, demonstrating the relationships among specific impulse, engine weight, tank volume, and tank weight.
Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch as compared to 3162 for the Design, Development, Test and Evaluation cost optimization case. This chamber pressure range is close to 3000 pounds per square inch for the Space Shuttle Main Engine.

  19. Multidisciplinary Optimization and Damage Tolerance of Stiffened Structures

    NASA Astrophysics Data System (ADS)

    Jrad, Mohamed

    The structural optimization of a cantilever aircraft wing with curvilinear spars, ribs, and stiffeners is described. For the optimization of a complex wing, a common strategy is to divide the optimization procedure into two subsystems: the global wing optimization, which optimizes the geometry of spars, ribs, and wing skins; and the local panel optimization, which optimizes the design variables of local panels bordered by spars and ribs. The stiffeners are placed on the local panels to increase stiffness and buckling resistance. During the local panel optimization, the stress information is taken from the global model as a displacement boundary condition on the panel edges using the so-called "Global-Local Approach". Particle swarm optimization is used in the integration of global/local optimization to optimize the SpaRibs. A parallel computing approach has been developed in the Python programming language to reduce the CPU time. The license cycle-check method and the memory self-adjustment method are two approaches applied in the parallel framework to optimize resource use by reducing license and memory limitations and making the code robust. The integrated global-local optimization approach has been applied to the subsonic NASA common research model (CRM) wing, demonstrating that the methodology scales to medium-fidelity FEM analysis. The structural weight of the wing was reduced by 42%, and the parallel implementation reduced the CPU time by 89%. The aforementioned Global-Local Approach is then investigated and applied to a composite panel with a crack at its center. Because of the heterogeneity of composite laminates, accurate analysis of such panels requires substantial computation time and storage.
    A possible alternative that reduces the computational complexity is global-local analysis, which involves an approximate analysis of the whole structure followed by a detailed analysis of a significantly smaller region of interest. Buckling analysis of a composite panel with attached longitudinal stiffeners under compressive loads is performed using the Ritz method with trigonometric functions. Results are then compared to those from Abaqus FEA for different shell elements. The cases of a composite panel with one, two, and three stiffeners are investigated. The effect of the distance between the stiffeners on the buckling load is also studied, as is the variation of the buckling load and buckling modes with the stiffeners' height. It is shown that there is an optimum value of stiffener height beyond which the structural response of the stiffened panel is not improved and the buckling load does not increase. Furthermore, there exist different critical values of stiffener height at which the buckling mode of the structure changes. Next, buckling analysis of a composite panel with two straight stiffeners and a crack at the center is performed. Finally, buckling analysis of a composite panel with curvilinear stiffeners and a crack at the center is also conducted. Results show that panels with a larger crack have a reduced buckling load, and that the buckling load decreases slightly when higher-order 2D shell FEM elements are used. A damage tolerance framework, EBF3PanelOpt, has been developed to design and analyze curvilinearly stiffened panels. The framework is written in the scripting language Python and interacts with the commercial software MSC Patran (for geometry and mesh creation), MSC Nastran (for finite element analysis), and MSC Marc (for damage tolerance analysis). The crack location is set to the location of the maximum value of the major principal stress, while its orientation is set normal to the major principal axis direction.
    The effective stress intensity factor is calculated using the Virtual Crack Closure Technique and compared to the fracture toughness of the material to decide whether the crack will propagate. The ratio of these two quantities is used as a constraint, along with the buckling factor, the Kreisselmeier-Steinhauser criterion, and the crippling factor. The EBF3PanelOpt framework is integrated within a two-step particle swarm optimization (PSO) to minimize the weight of the panel while satisfying the aforementioned constraints, using all the shape and thickness parameters as design variables. The result of the PSO is then used as the initial guess for a gradient-based optimization in VisualDOC, using only the thickness parameters as design variables. A stiffened panel with two curvilinear stiffeners is optimized for two load cases; in both cases, the panel's weight is significantly reduced.

  20. Toward an optimal online checkpoint solution under a two-level HPC checkpoint model

    DOE PAGES

    Di, Sheng; Robert, Yves; Vivien, Frederic; ...

    2016-03-29

    The traditional single-level checkpointing method suffers from significant overhead on large-scale platforms. Hence, multilevel checkpointing protocols have been studied extensively in recent years. The multilevel checkpoint approach allows different levels of checkpoints to be set (each with different checkpoint overheads and recovery abilities), in order to further improve the fault tolerance performance of extreme-scale HPC applications. How to optimize the checkpoint intervals for each level, however, is an extremely difficult problem. In this paper, we construct an easy-to-use two-level checkpoint model. Checkpoint level 1 deals with errors with low checkpoint/recovery overheads such as transient memory errors, while checkpoint level 2 deals with hardware crashes such as node failures. Compared with previous optimization work, our new optimal checkpoint solution offers two improvements: (1) it is an online solution without requiring knowledge of the job length in advance, and (2) it shows that periodic patterns are optimal and determines the best pattern. We evaluate the proposed solution and compare it with the most up-to-date related approaches on an extreme-scale simulation testbed constructed based on a real HPC application execution. Simulation results show that our proposed solution outperforms other optimized solutions and can improve the performance significantly in some cases. Specifically, with the new solution the wall-clock time can be reduced by up to 25.3% over that of other state-of-the-art approaches. Lastly, a brute-force comparison with all possible patterns shows that our solution is always within 1% of the best pattern in the experiments.
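The paper's two-level online solution is not reproduced here, but the classical single-level baseline it improves upon can be sketched. Below is the first-order Young/Daly optimal checkpoint period, a standard textbook result; the function names and the 60 s / 1 h figures are illustrative assumptions, not values from the paper:

```python
import math

def young_daly_period(checkpoint_cost, mtbf):
    """Classic single-level first-order optimal checkpoint period:
    W = sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def expected_overhead(period, checkpoint_cost, mtbf):
    """First-order waste fraction: checkpointing overhead plus expected
    rework after a failure (half a period lost, on average)."""
    return checkpoint_cost / period + period / (2.0 * mtbf)

C, M = 60.0, 3600.0          # 60 s checkpoints, 1 h MTBF (illustrative)
w_star = young_daly_period(C, M)

# The computed optimum should beat nearby candidate periods:
better = all(
    expected_overhead(w_star, C, M) <= expected_overhead(w, C, M)
    for w in (0.5 * w_star, 2.0 * w_star)
)
```

A two-level scheme generalizes this trade-off by assigning each level its own cost and failure rate, which is what makes the interval optimization studied in the paper substantially harder.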

  1. An algorithmic framework for multiobjective optimization.

    PubMed

    Ganesan, T; Elamvazuthi, I; Shaari, Ku Zilati Ku; Vasant, P

    2013-01-01

    Multiobjective (MO) optimization is an emerging field encountered in a growing number of disciplines. Various metaheuristic techniques, such as differential evolution (DE), the genetic algorithm (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO), have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with multiple objectives (particularly more than two). In addition, hybrid algorithms incur extensive computational overhead. This paper addresses these issues by proposing an alternative framework that uses algorithmic concepts related to the problem structure to generate new high-performance algorithms with minimal computational overhead for MO optimization.
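The weighted-sum scalarization mentioned above can be sketched on a toy bi-objective problem: each weight turns the MO problem into a single-objective one, and sweeping the weights traces an approximate Pareto front. The NBI method is not shown; the objective functions, candidate grid, and weights below are all illustrative:

```python
def weighted_sum_front(f1, f2, candidates, weights):
    """Approximate a Pareto front by minimizing w*f1 + (1-w)*f2
    over a finite candidate set, one scalarized problem per weight."""
    front = []
    for w in weights:
        best = min(candidates, key=lambda x: w * f1(x) + (1.0 - w) * f2(x))
        front.append(best)
    return front

# Toy bi-objective problem: minimize x^2 and (x - 2)^2 over [0, 2].
f1 = lambda x: x * x
f2 = lambda x: (x - 2.0) ** 2
xs = [i / 100.0 for i in range(201)]          # candidate grid in [0, 2]
front = weighted_sum_front(f1, f2, xs, [0.0, 0.25, 0.5, 0.75, 1.0])
```

A known limitation, and one motivation for NBI, is that the weighted sum cannot reach points on non-convex regions of the Pareto front no matter how the weights are chosen.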

  2. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging field encountered in a growing number of disciplines. Various metaheuristic techniques, such as differential evolution (DE), the genetic algorithm (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO), have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with multiple objectives (particularly more than two). In addition, hybrid algorithms incur extensive computational overhead. This paper addresses these issues by proposing an alternative framework that uses algorithmic concepts related to the problem structure to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795

  3. Multilevel decomposition approach to integrated aerodynamic/dynamic/structural optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Pritchard, Jocelyn I.; Adelman, Howard M.; Mantay, Wayne R.

    1994-01-01

    This paper describes an integrated aerodynamic, dynamic, and structural (IADS) optimization procedure for helicopter rotor blades. The procedure combines performance, dynamics, and structural analyses with a general-purpose optimizer using multilevel decomposition techniques. At the upper level, the structure is defined in terms of global quantities (stiffnesses, mass, and average strains). At the lower level, the structure is defined in terms of local quantities (detailed dimensions of the blade structure and stresses). The IADS procedure provides an optimization technique that is compatible with industrial design practices, in which the aerodynamic and dynamic design is performed at a global level and the structural design is carried out at a detailed level, with considerable dialogue and compromise among the aerodynamic, dynamic, and structural groups. The IADS procedure is demonstrated for several cases.

  4. Efficiency of operation of wind turbine rotors optimized by the Glauert and Betz methods

    NASA Astrophysics Data System (ADS)

    Okulov, V. L.; Mikkelsen, R.; Litvinov, I. V.; Naumov, I. V.

    2015-11-01

    The models of two types of rotors with blades constructed using different optimization methods are compared experimentally. In the first case, Glauert's optimization by the impulse (momentum) method is used, applied independently to each individual blade cross section. This method remains the main approach in designing rotors of various duties. The construction of the other rotor is based on Betz's idea of optimizing rotors by determining a special distribution of circulation over the blade that ensures a helical structure of the wake behind the rotor. Direct experimental comparison establishes for the first time that the rotor constructed using the Betz method extracts more kinetic energy from a homogeneous incoming flow.

  5. Optimization of brushless direct current motor design using an intelligent technique.

    PubMed

    Shabanian, Alireza; Tousiwas, Armin Amini Poustchi; Pourmandi, Massoud; Khormali, Aminollah; Ataei, Abdolhay

    2015-07-01

    This paper presents a method for the optimal design of a slotless permanent magnet brushless DC (BLDC) motor with surface-mounted magnets using an improved bee algorithm (IBA). The characteristics of the motor are expressed as functions of the motor geometries. The objective function is a combination of losses, volume, and cost to be minimized simultaneously. The method is based on the capability of swarm-based algorithms to find the optimal solution. One sample case is used to illustrate the performance of the design approach and optimization technique. The IBA shows better performance and faster convergence than the bee algorithm (BA). Simulation results show that the proposed method performs very efficiently. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Polarimetric SAR Interferometry to Monitor Land Subsidence in Tehran

    NASA Astrophysics Data System (ADS)

    Sadeghi, Zahra; Valadan Zoej, Mohammad Javad; Muller, Jan-Peter

    2016-08-01

    This letter combines advanced differential interferometric SAR (ADInSAR) with a coherence optimization method. Polarimetric DInSAR can enhance pixel phase quality and thus the density of coherent pixels. The coherence optimization method is a search-based approach for finding the optimized scattering mechanism, introduced by Navarro-Sanchez [1]. The study area is the southwestern Tehran basin, in the north of Iran, which suffers from a high rate of land subsidence and is covered by agricultural fields. Such an area would usually decorrelate significantly, but with polarimetric ADInSAR a more coherent pixel coverage can be obtained. A set of dual-pol TerraSAR-X images was acquired for the polarimetric ADInSAR procedure. The coherence optimization method is shown to increase the density and phase quality of coherent pixels significantly.

  7. Multiple crack detection in 3D using a stable XFEM and global optimization

    NASA Astrophysics Data System (ADS)

    Agathos, Konstantinos; Chatzi, Eleni; Bordas, Stéphane P. A.

    2018-02-01

    A numerical scheme is proposed for the detection of multiple cracks in three-dimensional (3D) structures. The scheme is based on a variant of the extended finite element method (XFEM) coupled with a hybrid optimizer. The proposed XFEM variant is particularly well suited to the simulation of 3D fracture problems and thus serves as an efficient solver for the forward problem. A set of heuristic optimization algorithms is recombined into a multiscale optimization scheme. The introduced approach proves effective in tackling the complex inverse problem involved, in which multiple flaws are identified from sparse measurements collected near the structural boundary. The potential of the scheme is demonstrated through a set of numerical case studies of varying complexity.

  8. Sustainable Improvement of Urban River Network Water Quality and Flood Control Capacity by a Hydrodynamic Control Approach-Case Study of Changshu City

    NASA Astrophysics Data System (ADS)

    Xie, Chen; Yang, Fan; Liu, Guoqing; Liu, Yang; Wang, Long; Fan, Ziwu

    2017-01-01

    The water environment of urban rivers degrades under the impacts of urban expansion, especially in the Yangtze River Delta. The water area in cities has decreased sharply, and some rivers were cut off by estate development, bringing problems of urban flooding, flow stagnation, and water deterioration. The approach aims to enhance flood control capability and improve urban river water quality by planning gate-pump stations surrounding the cities and optimizing the locations and functions of the pumps, sluice gates, and weirs in the urban river network. These gate-pump stations, together with the sluice gates and weirs, make it possible to control the water level in the rivers and to create a hydraulic gradient artificially, guided by a mathematical model. The flow velocity therefore increases, which raises the rate of water exchange, the DO concentration, and the water body's self-purification ability. Through site survey and prototype measurement, the river problems are evaluated and basic data are collected. A hydrodynamic model of the river network is established and calibrated to simulate the scenarios. The schemes for water quality improvement, including optimizing the layout of the water distribution projects, improving the flow discharge in the river network, and planning the drainage capacity, are decided by comprehensive analysis. Finally, the paper introduces a case study in Changshu City, where the approach was successfully implemented.

  9. Treatment of rabies in the 21st century: curing the incurable?

    PubMed

    Franka, Richard; Rupprecht, Charles E

    2011-10-01

    Despite the extreme case fatality attributable to rabies, reports of survivors provide a faint glimpse of a possibility of overcoming this deadly disease, even after clinical symptoms manifest. At present, no existing approach fulfills modern medical criteria for the optimal therapy of rabies. Until new efficacious antiviral compounds and optimized treatment protocols are developed, animal population management and vaccination of major reservoirs and vectors, minimization of the risk of viral exposures, and appropriate and early postexposure prophylaxis, remain the hallmarks of modern public health intervention.

  10. Constraint Force Equation Methodology for Modeling Multi-Body Stage Separation Dynamics

    NASA Technical Reports Server (NTRS)

    Toniolo, Matthew D.; Tartabini, Paul V.; Pamadi, Bandu N.; Hotchko, Nathaniel

    2008-01-01

    This paper discusses a generalized approach to the multi-body separation problems in a launch vehicle staging environment based on constraint force methodology and its implementation into the Program to Optimize Simulated Trajectories II (POST2), a widely used trajectory design and optimization tool. This development facilitates the inclusion of stage separation analysis into POST2 for seamless end-to-end simulations of launch vehicle trajectories, thus simplifying the overall implementation and providing a range of modeling and optimization capabilities that are standard features in POST2. Analysis and results are presented for two test cases that validate the constraint force equation methodology in a stand-alone mode and its implementation in POST2.

  11. Co-optimization of CO2-EOR and Storage Processes under Geological Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ampomah, William; Balch, Robert; Will, Robert

    This paper presents an integrated numerical framework to co-optimize EOR and CO2 storage performance in the Farnsworth field unit (FWU), Ochiltree County, Texas. The framework includes a field-scale compositional reservoir flow model, an uncertainty quantification model, and a neural network optimization process. The reservoir flow model has been constructed based on the field geophysical, geological, and engineering data. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the thermodynamic minimum miscibility pressure (MMP). A history match of primary and secondary recovery processes was conducted to estimate the reservoir and multiphase flow parameters as the baseline case for analyzing the effect of recycling produced gas, infill drilling, and water alternating gas (WAG) cycles on oil recovery and CO2 storage. A multi-objective optimization model was defined for maximizing both oil recovery and CO2 storage. The uncertainty quantification model, comprising Latin Hypercube sampling, Monte Carlo simulation, and sensitivity analysis, was used to study the effects of uncertain variables on the defined objective functions. Uncertain variables such as bottomhole injection pressure, WAG cycle, injection and production group rates, and gas-oil ratio, among others, were selected. The most significant variables were selected as control variables for the optimization process. A neural network optimization algorithm was used to optimize the objective function both with and without geological uncertainty. The vertical permeability anisotropy (Kv/Kh) was selected as one of the uncertain parameters in the optimization process. The simulation results were compared to a baseline scenario that predicted CO2 storage of 74%. The results showed an improved approach for optimizing oil recovery and CO2 storage in the FWU. 
The optimization process predicted more than 94% CO2 storage and, most importantly, about 28% incremental oil recovery. The sensitivity analysis reduced the number of control variables to decrease computational time. A risk aversion factor was used to represent results at various confidence levels to assist management in the decision-making process. The defined objective functions proved to be a robust approach for co-optimizing oil recovery and CO2 storage. The Farnsworth CO2 project will serve as a benchmark for future CO2-EOR or CCUS projects in the Anadarko basin or geologically similar basins throughout the world.
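The uncertainty quantification step above relies on Latin Hypercube sampling. A minimal sketch follows; the two uncertain variables (Kv/Kh anisotropy and WAG cycle length) echo the record, but their ranges and the sample count are invented for illustration:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin Hypercube sampling: each variable's range is split into
    n_samples equal strata, and each stratum is sampled exactly once."""
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)          # pair strata across dimensions at random
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n_samples   # uniform within stratum s
            samples[i][d] = lo + u * (hi - lo)
    return samples

# Hypothetical uncertain variables: Kv/Kh anisotropy and WAG cycle (months).
pts = latin_hypercube(10, [(0.01, 0.5), (0.5, 6.0)])
```

Compared with plain Monte Carlo, this stratification guarantees that each variable's range is covered evenly even with few samples, which matters when every sample is an expensive reservoir simulation.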

  12. Co-optimization of CO2-EOR and Storage Processes under Geological Uncertainty

    DOE PAGES

    Ampomah, William; Balch, Robert; Will, Robert; ...

    2017-07-01

    This paper presents an integrated numerical framework to co-optimize EOR and CO2 storage performance in the Farnsworth field unit (FWU), Ochiltree County, Texas. The framework includes a field-scale compositional reservoir flow model, an uncertainty quantification model, and a neural network optimization process. The reservoir flow model has been constructed based on the field geophysical, geological, and engineering data. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the thermodynamic minimum miscibility pressure (MMP). A history match of primary and secondary recovery processes was conducted to estimate the reservoir and multiphase flow parameters as the baseline case for analyzing the effect of recycling produced gas, infill drilling, and water alternating gas (WAG) cycles on oil recovery and CO2 storage. A multi-objective optimization model was defined for maximizing both oil recovery and CO2 storage. The uncertainty quantification model, comprising Latin Hypercube sampling, Monte Carlo simulation, and sensitivity analysis, was used to study the effects of uncertain variables on the defined objective functions. Uncertain variables such as bottomhole injection pressure, WAG cycle, injection and production group rates, and gas-oil ratio, among others, were selected. The most significant variables were selected as control variables for the optimization process. A neural network optimization algorithm was used to optimize the objective function both with and without geological uncertainty. The vertical permeability anisotropy (Kv/Kh) was selected as one of the uncertain parameters in the optimization process. The simulation results were compared to a baseline scenario that predicted CO2 storage of 74%. The results showed an improved approach for optimizing oil recovery and CO2 storage in the FWU. 
The optimization process predicted more than 94% CO2 storage and, most importantly, about 28% incremental oil recovery. The sensitivity analysis reduced the number of control variables to decrease computational time. A risk aversion factor was used to represent results at various confidence levels to assist management in the decision-making process. The defined objective functions proved to be a robust approach for co-optimizing oil recovery and CO2 storage. The Farnsworth CO2 project will serve as a benchmark for future CO2-EOR or CCUS projects in the Anadarko basin or geologically similar basins throughout the world.

  13. Construction of large signaling pathways using an adaptive perturbation approach with phosphoproteomic data.

    PubMed

    Melas, Ioannis N; Mitsos, Alexander; Messinis, Dimitris E; Weiss, Thomas S; Saez-Rodriguez, Julio; Alexopoulos, Leonidas G

    2012-04-01

    Construction of large and cell-specific signaling pathways is essential to understand information processing under normal and pathological conditions. On this front, gene-based approaches offer the advantage of large pathway exploration, whereas phosphoproteomic approaches offer a more reliable view of pathway activities but are applicable only to small pathway sizes. In this paper, we demonstrate an experimentally adaptive approach to construct large signaling pathways from phosphoproteomic data within a 3-day time frame. Our approach, which takes advantage of the fast turnaround time of the xMAP technology, is carried out in four steps: (i) screen optimal pathway inducers, (ii) select the responsive ones, (iii) combine them in a combinatorial fashion to construct a phosphoproteomic dataset, and (iv) optimize a reduced generic pathway via an Integer Linear Programming formulation. As a case study, we uncover novel players and their corresponding pathways in primary human hepatocytes by interrogating the signal transduction downstream of 81 receptors of interest and constructing a detailed model for the responsive part of the network comprising 177 species (of which 14 are measured) and 365 interactions.

  14. A distributed approach to the OPF problem

    NASA Astrophysics Data System (ADS)

    Erseghe, Tomaso

    2015-12-01

    This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resources and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that the penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to ensure convergence, and a certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderately sized networks (up to 300 nodes, even in the presence of a severe partition of the network), the approach guarantees performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. Compared with the literature, which focuses mostly on convex SDP approximations, the chosen approach guarantees adherence to the reference problem and requires a smaller local computational effort.
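The ADMM-like iteration with a constantly increased penalty parameter can be sketched on a toy convex consensus problem. This is not the paper's OPF formulation; the quadratic local objectives, closed-form local solves, and parameter values are assumptions for illustration:

```python
def consensus_admm(targets, iters=60, rho0=1.0, growth=1.05):
    """ADMM-style consensus for min sum_i (x - a_i)^2, one quadratic term
    per agent, with a penalty parameter increased every iteration."""
    n = len(targets)
    x = [0.0] * n       # local copies, one per agent
    y = [0.0] * n       # dual variables
    z = 0.0             # consensus variable
    rho = rho0
    for _ in range(iters):
        # Local step: closed-form minimizer of (x-a)^2 + y(x-z) + rho/2 (x-z)^2.
        x = [(2 * a - yi + rho * z) / (2 + rho) for a, yi in zip(targets, y)]
        # Consensus step: average of the penalized local copies.
        z = sum(xi + yi / rho for xi, yi in zip(x, y)) / n
        # Dual step: price the disagreement between local copies and consensus.
        y = [yi + rho * (xi - z) for xi, yi in zip(x, y)]
        rho *= growth   # constantly increasing penalty, as in the paper
    return z

z = consensus_admm([1.0, 3.0])   # optimum of (x-1)^2 + (x-3)^2 is x = 2
```

Only local data (`a_i`, `y_i`) and the shared consensus variable are exchanged, which is what makes the scheme suitable for decentralized control.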

  15. Optimal timing in biological processes

    USGS Publications Warehouse

    Williams, B.K.; Nichols, J.D.

    1984-01-01

    A general approach for obtaining solutions to a class of biological optimization problems is provided. The general problem is one of determining the appropriate time to take some action, when the action can be taken only once during some finite time frame. The approach can also be extended to cover a number of other problems involving animal choice (e.g., mate selection, habitat selection). Returns (assumed to index fitness) are treated as random variables with time-specific distributions, and can be either observable or unobservable at the time action is taken. In the case of unobservable returns, the organism is assumed to base decisions on some ancillary variable that is associated with returns. Optimal policies are derived for both situations and their properties are discussed. Various extensions are also considered, including objective functions based on functions of returns other than the mean; nonmonotonic relationships between the observable variable and returns; possible death of the organism before action is taken; and discounting of future returns. A general feature of the optimal solutions for many of these problems is that an organism should be very selective (i.e., should act only when returns or expected returns are relatively high) at the beginning of the time frame and should become less and less selective as time progresses. An example of the application of optimal timing to a problem involving the timing of bird migration is discussed, and a number of other examples for which the approach is applicable are described.
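The backward-induction logic behind such one-shot timing problems can be sketched for the observable-returns case with a finite return distribution. The distribution below is invented for illustration; the thresholds it produces show the qualitative behavior described above:

```python
def stopping_thresholds(period_values):
    """Backward induction for a one-shot timing problem: at each period the
    organism observes a return drawn from a finite distribution and may act
    once. Acting at time t is optimal iff the observed return beats the
    expected value of continuing to wait."""
    # period_values[t] = list of (return, probability) pairs for period t
    T = len(period_values)
    continue_value = 0.0          # after the last period, nothing remains
    thresholds = [0.0] * T
    for t in range(T - 1, -1, -1):
        thresholds[t] = continue_value
        # Value of reaching period t and playing the threshold policy:
        continue_value = sum(
            p * max(x, continue_value) for x, p in period_values[t]
        )
    return thresholds

# Same return distribution every period: 0 or 10 with equal odds, 4 periods.
dist = [(0.0, 0.5), (10.0, 0.5)]
th = stopping_thresholds([dist] * 4)
```

The thresholds decrease over time, reproducing the paper's general result: the organism should be most selective early in the time frame and progressively less selective as the deadline approaches.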

  16. The price of conserving avian phylogenetic diversity: a global prioritization approach

    PubMed Central

    Nunes, Laura A.; Turvey, Samuel T.; Rosindell, James

    2015-01-01

    The combination of rapid biodiversity loss and limited funds available for conservation represents a major global concern. While there are many approaches for conservation prioritization, few are framed as financial optimization problems. We use recently published avian data to conduct a global analysis of the financial resources required to conserve different quantities of phylogenetic diversity (PD). We introduce a new prioritization metric (ADEPD) that After Downlisting a species gives the Expected Phylogenetic Diversity at some future time. Unlike other metrics, ADEPD considers the benefits to future PD associated with downlisting a species (e.g. moving from Endangered to Vulnerable in the International Union for Conservation of Nature Red List). Combining ADEPD scores with data on the financial cost of downlisting different species provides a cost–benefit prioritization approach for conservation. We find that under worst-case spending $3915 can save 1 year of PD, while under optimal spending $1 can preserve over 16.7 years of PD. We find that current conservation spending patterns are only expected to preserve one quarter of the PD that optimal spending could achieve with the same total budget. Maximizing PD is only one approach within the wider goal of biodiversity conservation, but our analysis highlights more generally the danger involved in uninformed spending of limited resources. PMID:25561665
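The cost-benefit prioritization idea above can be sketched as a greedy ranking by expected PD gained per dollar. This is not the ADEPD computation itself; the species names, costs, and PD gains below are invented for illustration:

```python
def prioritize(actions, budget):
    """Greedy cost-benefit prioritization: rank candidate downlisting
    actions by expected PD gained per dollar and fund them in order
    until the budget runs out."""
    ranked = sorted(actions, key=lambda a: a["pd_gain"] / a["cost"], reverse=True)
    funded, spent, pd = [], 0.0, 0.0
    for a in ranked:
        if spent + a["cost"] <= budget:
            funded.append(a["name"])
            spent += a["cost"]
            pd += a["pd_gain"]
    return funded, pd

# Hypothetical candidates: cost in dollars, gain in years of expected PD.
candidates = [
    {"name": "A", "cost": 100.0, "pd_gain": 50.0},   # 0.50 years per $
    {"name": "B", "cost": 300.0, "pd_gain": 60.0},   # 0.20 years per $
    {"name": "C", "cost": 50.0,  "pd_gain": 40.0},   # 0.80 years per $
]
funded, pd = prioritize(candidates, budget=200.0)
```

Ranking by benefit per unit cost, rather than by benefit alone, is what separates informed from uninformed spending in the abstract's comparison of worst-case and optimal budgets.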

  17. The price of conserving avian phylogenetic diversity: a global prioritization approach.

    PubMed

    Nunes, Laura A; Turvey, Samuel T; Rosindell, James

    2015-02-19

    The combination of rapid biodiversity loss and limited funds available for conservation represents a major global concern. While there are many approaches for conservation prioritization, few are framed as financial optimization problems. We use recently published avian data to conduct a global analysis of the financial resources required to conserve different quantities of phylogenetic diversity (PD). We introduce a new prioritization metric (ADEPD) that After Downlisting a species gives the Expected Phylogenetic Diversity at some future time. Unlike other metrics, ADEPD considers the benefits to future PD associated with downlisting a species (e.g. moving from Endangered to Vulnerable in the International Union for Conservation of Nature Red List). Combining ADEPD scores with data on the financial cost of downlisting different species provides a cost-benefit prioritization approach for conservation. We find that under worst-case spending $3915 can save 1 year of PD, while under optimal spending $1 can preserve over 16.7 years of PD. We find that current conservation spending patterns are only expected to preserve one quarter of the PD that optimal spending could achieve with the same total budget. Maximizing PD is only one approach within the wider goal of biodiversity conservation, but our analysis highlights more generally the danger involved in uninformed spending of limited resources.

  18. Mode selective generation of guided waves by systematic optimization of the interfacial shear stress profile

    NASA Astrophysics Data System (ADS)

    Yazdanpanah Moghadam, Peyman; Quaegebeur, Nicolas; Masson, Patrice

    2015-01-01

    Piezoelectric transducers are commonly used in structural health monitoring systems to generate and measure ultrasonic guided waves (GWs) by applying interfacial shear and normal stresses to the host structure. In most cases, advanced signal processing techniques are required to perform damage detection, since a minimum of two dispersive modes propagate in the host structure. In this paper, a systematic approach for mode selection is proposed by optimizing the interfacial shear stress profile applied to the host structure, representing the first step of a global optimization of selective mode actuator design. This approach has the potential of reducing the complexity of signal processing tools, as the number of propagating modes could be reduced. Using the superposition principle, an analytical method is first developed for GW excitation by a finite number of uniform segments, each contributing a given elementary shear stress profile. Based on this, cost functions are defined to minimize the undesired modes and amplify the selected mode, and the optimization problem is solved with a parallel genetic algorithm framework. Advantages of this method over more conventional transducer tuning approaches are that (1) the shear stress can be explicitly optimized to both excite one mode and suppress other undesired modes, (2) the size of the excitation area is not constrained, and mode-selective excitation is still possible even if the excitation width is smaller than all excited wavelengths, and (3) the selectivity is increased and the bandwidth extended. The complexity of the optimal shear stress profile obtained is shown for two cost functions with various optimal excitation widths and numbers of segments. Results illustrate that the desired mode (A0 or S0) can be excited dominantly over other modes up to a wave power ratio of 10^10 using an optimal shear stress profile.

  19. Clinical complexity and Occam's razor: navigating between Scylla and Charybdis of the geriatric practice. A case of secondary hypertension in a very old patient.

    PubMed

    Turco, Renato; Torpilliesi, Tiziana; Morghen, Sara; Bellelli, Giuseppe; Trabucchi, Marco

    2009-05-01

    The clinical approach toward elderly patients is often very complex and associated with an increased risk of medical errors. This case report is an example of how various objective (related to the patient) and subjective (related to physicians) factors may influence the optimal diagnostic approach in elderly frail patients. We also discuss geriatric practice, which must be characterized by the intellectual honesty to refuse prejudices of any sort (such as ageism) and by the skill to navigate between the Scylla (ie, viewing clinical problems as unrelated to each other) and the Charybdis (ie, applying Occam's razor principle) of the patient's complexity.

  20. Optimized strategy for the control and prevention of newly emerging influenza revealed by the spread dynamics model.

    PubMed

    Zhang, Wen-Dou; Zu, Zheng-Hu; Xu, Qing; Xu, Zhi-Jing; Liu, Jin-Jie; Zheng, Tao

    2014-01-01

    No matching vaccine is immediately available when a novel influenza strain breaks out. Several nonvaccine-related strategies must be employed to control an influenza epidemic, including antiviral treatment, patient isolation, and immigration detection. This paper presents the development and application of two regional dynamic models of influenza with Pontryagin's Maximum Principle to determine the optimal control strategies for an epidemic and the corresponding minimum antiviral stockpiles. Antiviral treatment was found to be the most effective measure to control new influenza outbreaks. In the case of inadequate antiviral resources, the preferred approach was the centralized use of antiviral resources in the early stage of the epidemic. Immigration detection was the least cost-effective measure; however, when used in combination with the other measures, it may play a larger role. A reasonable mix of the three control measures could substantially reduce the number of clinical cases and achieve optimal control of a new influenza outbreak.
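As a toy illustration of how antiviral treatment enters such spread dynamics models, the sketch below adds a treatment removal rate to a basic SIR model integrated by Euler's method. It is not the paper's regional model or its Pontryagin-based optimal control; all rates and initial conditions are assumptions:

```python
def sir_with_treatment(beta, gamma, treat_rate, days, dt=0.1):
    """Toy SIR model where antiviral treatment adds an extra removal
    rate for infectious individuals. Returns the epidemic peak (max
    infectious fraction) and the final removed fraction."""
    s, i, r = 0.99, 0.01, 0.0       # susceptible, infectious, removed
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i                  # new infections per day
        removed = (gamma + treat_rate) * i      # recovery + treatment
        s -= dt * new_inf
        i += dt * (new_inf - removed)
        r += dt * removed
        peak = max(peak, i)
    return peak, r

# Hypothetical rates: R0 = 4 untreated; treatment doubles the removal rate.
peak_untreated, _ = sir_with_treatment(0.4, 0.1, 0.0, days=200)
peak_treated, _ = sir_with_treatment(0.4, 0.1, 0.1, days=200)
```

Raising the removal rate lowers the effective reproduction number and flattens the epidemic peak, which is the mechanism behind the paper's finding that antiviral treatment is the most effective single measure.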

  1. Optimized Strategy for the Control and Prevention of Newly Emerging Influenza Revealed by the Spread Dynamics Model

    PubMed Central

    Zhang, Wen-Dou; Zu, Zheng-Hu; Xu, Qing; Xu, Zhi-Jing; Liu, Jin-Jie; Zheng, Tao

    2014-01-01

    No matching vaccine is immediately available when a novel influenza strain breaks out. Several nonvaccine-related strategies must be employed to control an influenza epidemic, including antiviral treatment, patient isolation, and immigration detection. This paper presents the development and application of two regional dynamic models of influenza with Pontryagin's Maximum Principle to determine the optimal control strategies for an epidemic and the corresponding minimum antiviral stockpiles. Antiviral treatment was found to be the most effective measure to control new influenza outbreaks. In the case of inadequate antiviral resources, the preferred approach was the centralized use of antiviral resources in the early stage of the epidemic. Immigration detection was the least cost-effective measure; however, when used in combination with the other measures, it may play a larger role. A reasonable mix of the three control measures could substantially reduce the number of clinical cases and achieve optimal control of a new influenza outbreak. PMID:24392151

  2. Discrete harmony search algorithm for scheduling and rescheduling the reprocessing problems in remanufacturing: a case study

    NASA Astrophysics Data System (ADS)

    Gao, Kaizhou; Wang, Ling; Luo, Jianping; Jiang, Hua; Sadollah, Ali; Pan, Quanke

    2018-06-01

    In this article, scheduling and rescheduling problems with increasing processing times and new job insertion are studied for reprocessing problems in the remanufacturing process. To handle the unpredictability of reprocessing times, an experience-based strategy is used. Rescheduling strategies are applied to account for the effects of increasing reprocessing times and the insertion of new subassemblies. To optimize the scheduling and rescheduling objectives, a discrete harmony search (DHS) algorithm is proposed, and a local search method is designed to speed up the convergence rate. The DHS is applied to two real-life cases for minimizing the maximum completion time and the mean of earliness and tardiness (E/T); these two objectives are also considered together as a bi-objective problem. Computational results and comparisons show that the proposed DHS is able to solve the scheduling and rescheduling problems effectively and efficiently. Using the proposed approach, satisfactory optimization results can be achieved for scheduling and rescheduling on a real-life shop floor.
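
    The improvisation loop of harmony search is easy to sketch. The following minimal discrete variant (a simplification for illustration, not the authors' DHS; the job data are invented) orders jobs on a single machine to minimise total tardiness:

```python
import random

# Minimal discrete harmony-search sketch (not the paper's DHS): order jobs
# on a single machine to minimise total tardiness. Job data are invented.
# A "harmony" is a job permutation; improvisation copies a remembered
# harmony (rate HMCR) and pitch-adjusts it by swapping two jobs (rate PAR).

random.seed(1)
proc = [4, 7, 2, 5, 6, 3]          # processing times (hypothetical)
due  = [6, 18, 4, 14, 25, 9]       # due dates (hypothetical)

def tardiness(perm):
    t, total = 0, 0
    for j in perm:
        t += proc[j]
        total += max(0, t - due[j])
    return total

def harmony_search(hms=8, hmcr=0.9, par=0.3, iters=2000):
    memory = [random.sample(range(len(proc)), len(proc)) for _ in range(hms)]
    for _ in range(iters):
        if random.random() < hmcr:
            new = random.choice(memory)[:]          # memory consideration
            if random.random() < par:               # pitch adjustment: swap
                a, b = random.sample(range(len(new)), 2)
                new[a], new[b] = new[b], new[a]
        else:                                       # random improvisation
            new = random.sample(range(len(proc)), len(proc))
        worst = max(memory, key=tardiness)
        if tardiness(new) < tardiness(worst):       # replace worst harmony
            memory[memory.index(worst)] = new
    return min(memory, key=tardiness)

best = harmony_search()
print(best, tardiness(best))
```

    Here memory consideration copies a stored permutation, pitch adjustment swaps two jobs, and the worst harmony in memory is replaced whenever the improvised one beats it; the paper's DHS adds problem-specific local search on top of this skeleton.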

  3. An Analytical Solution for Yaw Maneuver Optimization on the International Space Station and Other Orbiting Space Vehicles

    NASA Technical Reports Server (NTRS)

    Dobrinskaya, Tatiana

    2015-01-01

    This paper suggests a new method for optimizing yaw maneuvers on the International Space Station (ISS). Yaw rotations are the most common large maneuvers on the ISS, often used for docking and undocking operations as well as for other activities. With maneuver optimization, large maneuvers that previously required thrusters can be performed either using control moment gyroscopes (CMGs) or with significantly reduced thruster firings. Maneuver optimization helps to save expensive propellant and reduce structural loads - an important factor for the ISS service life. In addition, optimized maneuvers reduce contamination of critical elements of the vehicle structure, such as the solar arrays. This paper presents an analytical solution for optimizing yaw attitude maneuvers. Equations describing the pitch and roll motion needed to counteract the major torques during a yaw maneuver are obtained, and a yaw rate profile is proposed. The paper also describes the physical basis of the suggested optimization approach. In the optimized case obtained, the torques are significantly reduced. This torque reduction was compared with that of the existing optimization method, which relies on a computational solution; the attitude profiles and torque reductions of the two methods were shown to match well, and simulations using the ISS flight software showed similar propellant consumption for both. The analytical solution proposed in this paper has major benefits over the computational approach. In contrast to the current computational solution, which can only be calculated on the ground, the analytical solution does not require extensive computational resources and can be implemented in the onboard software, making maneuver execution automatic. An automatic maneuver significantly simplifies operations and, if necessary, allows a maneuver to be performed without communication with the ground. It also reduces the probability of command errors. The suggested analytical solution thus provides a method of maneuver optimization that is less complicated, automatic, and more universal. The maneuver optimization approach presented in this paper can be used not only for the ISS but also for other orbiting space vehicles.

  4. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan

    2018-02-01

    Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is thus not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.
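
    For readers unfamiliar with the classical algorithm in question, a bare-bones diffusion Monte Carlo run (on a 1D harmonic oscillator, unrelated to the spin Hamiltonians studied in the paper) shows the walker-branching mechanics whose efficiency is at issue:

```python
import math, random

# Minimal diffusion Monte Carlo sketch (toy problem, not the paper's
# Hamiltonians): estimate the ground-state energy of a 1D harmonic
# oscillator, V(x) = x^2 / 2, for which E0 = 0.5 in natural units.
# Walkers diffuse and branch with weight exp(-(V - E_T) * dt), and the
# trial energy E_T is fed back to hold the population roughly constant.

random.seed(0)

def dmc(n_walkers=400, dt=0.01, steps=4000):
    walkers = [0.0] * n_walkers
    e_t = 0.0                     # trial energy, updated by feedback
    estimates = []
    for step in range(steps):
        new = []
        for x in walkers:
            x += random.gauss(0.0, math.sqrt(dt))      # diffusion
            w = math.exp(-(0.5 * x * x - e_t) * dt)    # branching weight
            copies = int(w + random.random())          # stochastic rounding
            new.extend([x] * copies)
        walkers = new or [0.0]                         # guard extinction
        e_t += (1.0 - len(walkers) / n_walkers) * 0.1  # population control
        if step > steps // 2:                          # skip the transient
            estimates.append(e_t)
    return sum(estimates) / len(estimates)

energy = dmc()
print(energy)  # converges near 0.5
```

    The averaged trial energy estimates the ground-state energy. The paper's point is that for certain stoquastic Hamiltonians this walker dynamics, despite having no sign problem, still fails to track the adiabatic ground state efficiently.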

  5. Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach

    NASA Astrophysics Data System (ADS)

    Kumral, Mustafa; Ozer, Umit

    2013-03-01

    Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. Given multiple simulations, the dispersion variances of blocks can be used to account for technical uncertainties; however, the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the drilling configuration that minimizes grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers both data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of the simulated/estimated blocks in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) to solve the problem. The technique has two spaces: feasible drill hole configurations, in which the interpolation variance is minimized, and drill hole simulations, in which it is maximized. The two spaces interact to find a minmax solution iteratively. A case study was conducted to demonstrate the performance of the approach. The findings showed that the approach could be used to plan a new drilling campaign.
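
    The two-space idea can be sketched on a toy minmax problem (a deliberately simplified stand-in with a made-up objective, not the drilling formulation itself): one population evolves candidate solutions against an archive of scenarios, while the other evolves adversarial scenarios against the current best solution.

```python
import random

# Toy two-space coevolutionary GA sketch of a minmax problem: choose
# x in [-2, 2] minimising the worst case of f(x, s) = (x - s)^2 over
# scenarios s in [-1, 1]. The optimum is x = 0 with worst-case value 1.
# The solution space minimises against an archive of scenarios; the
# scenario space maximises damage to the current best solution.

random.seed(3)
f = lambda x, s: (x - s) ** 2

def evolve(pop, fitness, minimise, sigma, lo, hi):
    pop = sorted(pop, key=fitness, reverse=not minimise)
    parents = pop[: len(pop) // 2]                    # truncation selection
    children = [min(hi, max(lo, p + random.gauss(0, sigma)))
                for p in parents]                     # Gaussian mutation
    return parents + children

xs = [random.uniform(-2, 2) for _ in range(20)]       # candidate solutions
ss = [random.uniform(-1, 1) for _ in range(20)]       # adversarial scenarios
archive = list(ss)                                    # remembered scenarios

worst_of = lambda x: max(f(x, s) for s in archive)
for _ in range(100):
    xs = evolve(xs, worst_of, True, 0.2, -2.0, 2.0)   # minimisation space
    best_x = min(xs, key=worst_of)
    ss = evolve(ss, lambda s: f(best_x, s), False, 0.1, -1.0, 1.0)
    archive.extend(ss[:2])                            # keep strong adversaries

best_x = min(xs, key=worst_of)
print(best_x, worst_of(best_x))
```

    The archive prevents the two populations from merely cycling against each other; in the paper's setting, the scenario space holds orebody simulations and the solution space holds feasible drill-hole configurations.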

  6. A computational approach to compare regression modelling strategies in prediction research.

    PubMed

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9% to 94.9%, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
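
    Stripped to its essentials, the a priori comparison idea looks like the following sketch (pure-Python logistic regression on simulated data; the two strategies, sample sizes, and shrinkage level are illustrative assumptions, not the article's protocol):

```python
import math, random

# Bare-bones sketch of comparing modelling strategies by likelihood on data
# they were not fitted to: plain logistic regression vs. a ridge-shrunk
# variant, both fitted on a small development set and scored by held-out
# log-likelihood. All settings here are illustrative assumptions.

random.seed(11)

def fit_logistic(xs, ys, l2=0.0, lr=0.1, epochs=300):
    """Gradient-ascent MLE with optional L2 shrinkage on the slopes."""
    b0, b = 0.0, [0.0] * len(xs[0])
    n = len(xs)
    for _ in range(epochs):
        g0, g = 0.0, [0.0] * len(b)
        for x, y in zip(xs, ys):
            eta = b0 + sum(w * v for w, v in zip(b, x))
            p = 1.0 / (1.0 + math.exp(-eta))
            g0 += (y - p)
            for j, v in enumerate(x):
                g[j] += (y - p) * v
        b0 += lr * g0 / n
        b = [w + lr * (gj / n - l2 * w) for w, gj in zip(b, g)]
    return b0, b

def log_likelihood(model, xs, ys):
    b0, b = model
    ll = 0.0
    for x, y in zip(xs, ys):
        eta = b0 + sum(w * v for w, v in zip(b, x))
        p = min(max(1.0 / (1.0 + math.exp(-eta)), 1e-12), 1 - 1e-12)
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

def simulate(n):
    """True model: logit p = x1 - x2; x3..x5 are noise features."""
    xs, ys = [], []
    for _ in range(n):
        x = [random.gauss(0, 1) for _ in range(5)]
        p = 1.0 / (1.0 + math.exp(-(x[0] - x[1])))
        xs.append(x)
        ys.append(1 if random.random() < p else 0)
    return xs, ys

dev_x, dev_y = simulate(40)        # small development set
val_x, val_y = simulate(2000)      # large validation set
plain = fit_logistic(dev_x, dev_y, l2=0.0)
ridge = fit_logistic(dev_x, dev_y, l2=0.5)
print(log_likelihood(plain, val_x, val_y), log_likelihood(ridge, val_x, val_y))
```

    Each strategy is fitted only on the development data and scored on data it never saw; with small development sets the shrinkage strategy often, but not always, wins, which is exactly why the article argues the comparison must be repeated per data set.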

  7. Incarcerated giant uterine leiomyoma within an incisional hernia: a case report.

    PubMed

    Exarchos, Georgios; Vlahos, Nikolaos; Dellaportas, Dionysios; Metaxa, Linda; Theodosopoulos, Theodosios

    2017-11-01

    Uterine leiomyomas presenting as incarcerated or strangulated hernias are extremely rare surgical emergencies and should be considered in the differential diagnosis in patients with known uterine fibroids and an irreducible ventral abdominal wall hernia. A detailed history and a multidisciplinary approach optimize the diagnosis and decision making toward surgical treatment.

  8. Introducing Whole-Systems Design to First-Year Engineering Students with Case Studies

    ERIC Educational Resources Information Center

    Blizzard, Jackie; Klotz, Leidy; Pradhan, Alok; Dukes, Michael

    2012-01-01

    Purpose: A whole-systems approach, which seeks to optimize an entire system for multiple benefits, not isolated components for single benefits, is essential to engineering design for radically improved sustainability performance. Based on real-world applications of whole-systems design, the Rocky Mountain Institute (RMI) is developing educational…

  9. Small-angle neutron scattering reveals the assembly mode and oligomeric architecture of TET, a large, dodecameric aminopeptidase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appolaire, Alexandre; Girard, Eric; Colombo, Matteo

    2014-11-01

    The present work illustrates that small-angle neutron scattering, deuteration and contrast variation, combined with in vitro particle reconstruction, constitutes a very efficient approach to determine subunit architectures in large, symmetric protein complexes. In the case of the 468 kDa heterododecameric TET peptidase machine, it was demonstrated that the assembly of the 12 subunits is a highly controlled process and represents a way to optimize the catalytic efficiency of the enzyme. The specific self-association of proteins into oligomeric complexes is a common phenomenon in biological systems to optimize and regulate their function. However, de novo structure determination of these important complexes is often very challenging for atomic-resolution techniques. Furthermore, in the case of homo-oligomeric complexes, or complexes with very similar building blocks, the respective positions of subunits and their assembly pathways are difficult to determine using many structural biology techniques. Here, an elegant and powerful approach based on small-angle neutron scattering is applied, in combination with deuterium labelling and contrast variation, to elucidate the oligomeric organization of the quaternary structure and the assembly pathways of 468 kDa, hetero-oligomeric and symmetric Pyrococcus horikoshii TET2–TET3 aminopeptidase complexes. The results reveal that the topology of the PhTET2 and PhTET3 dimeric building blocks within the complexes is not casual but rather suggests that their quaternary arrangement optimizes the catalytic efficiency towards peptide substrates. This approach bears important potential for the determination of quaternary structures and assembly pathways of large oligomeric and symmetric complexes in biological systems.

  10. Extensively Drug-Resistant Tuberculosis: Principles of Resistance, Diagnosis, and Management.

    PubMed

    Wilson, John W; Tsukayama, Dean T

    2016-04-01

    Extensively drug-resistant (XDR) tuberculosis (TB) is an unfortunate by-product of mankind's medical and pharmaceutical ingenuity during the past 60 years. Although new drug developments have enabled TB to be more readily curable, inappropriate TB management has led to the emergence of drug-resistant disease. Extensively drug-resistant TB describes Mycobacterium tuberculosis that is collectively resistant to isoniazid, rifampin, a fluoroquinolone, and an injectable agent. It proliferates when established case management and infection control procedures are not followed. Optimized treatment outcomes necessitate time-sensitive diagnoses, along with expanded combinations and prolonged durations of antimicrobial drug therapy. The challenges to public health institutions are immense and most noteworthy in underresourced communities and in patients coinfected with human immunodeficiency virus. A comprehensive and multidisciplinary case management approach is required to optimize outcomes. We review the principles of TB drug resistance and the risk factors, diagnosis, and managerial approaches for extensively drug-resistant TB. Treatment outcomes, cost, and unresolved medical issues are also discussed. Copyright © 2016 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  11. [A modern Kaspar Hauser].

    PubMed

    Etzersdorfer, Elmar; Schäfer, Monika; Becker-Pfaff, Johannes

    2002-07-01

    The case of a mentally disturbed young man is described whose identity could not be established for more than four months. It is compared with that of Kaspar Hauser, and similarities and differences are highlighted. The "Kaspar Hauser of Stuttgart" was retrospectively revealed to be a young man with experiences of severe deprivation. The missing case history and collateral history were an obstacle to treatment, albeit not a fundamental one, and progress was achieved with a relationship-oriented approach. Information obtained after the young man's identity was clarified suggests an accumulation of unfavourable circumstances leading to suboptimal treatment and finally the "Kaspar Hauser situation". Contributing factors included his membership of a poorly integrated migrant group in Germany, severe deprivation within the family, and generally suboptimal care of mentally disturbed young patients.

  12. Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Oza, Nikunj C.

    2011-01-01

    In this paper we propose an innovative learning algorithm - a variation of the one-class nu Support Vector Machine (SVM) learning algorithm - that produces sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search towards the optimal set in much reduced time. The outcome of the proposed learning technique was compared with that of the benchmark one-class SVM algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class nu SVMs while reducing both training time and test time by several factors.

  13. A Bayesian approach for estimating under-reported dengue incidence with a focus on non-linear associations between climate and dengue in Dhaka, Bangladesh.

    PubMed

    Sharmin, Sifat; Glass, Kathryn; Viennet, Elvina; Harley, David

    2018-04-01

    Determining the relation between climate and dengue incidence is challenging due to under-reporting of disease and consequent biased incidence estimates. Non-linear associations between climate and incidence compound this. Here, we introduce a modelling framework to estimate dengue incidence from passive surveillance data while incorporating non-linear climate effects. We estimated the true number of cases per month using a Bayesian generalised linear model, developed in stages to adjust for under-reporting. A semi-parametric thin-plate spline approach was used to quantify non-linear climate effects. The approach was applied to data collected from the national dengue surveillance system of Bangladesh. The model estimated that only 2.8% (95% credible interval 2.7-2.8) of all cases in the capital Dhaka were reported through passive case reporting. The optimal mean monthly temperature for dengue transmission is 29℃ and average monthly rainfall above 15 mm decreases transmission. Our approach provides an estimate of true incidence and an understanding of the effects of temperature and rainfall on dengue transmission in Dhaka, Bangladesh.

  14. Optimization of affinity, specificity and function of designed influenza inhibitors using deep sequencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitehead, Timothy A.; Chevalier, Aaron; Song, Yifan

    2012-06-19

    We show that comprehensive sequence-function maps obtained by deep sequencing can be used to reprogram interaction specificity and to leapfrog over bottlenecks in affinity maturation by combining many individually small contributions not detectable in conventional approaches. We use this approach to optimize two computationally designed inhibitors against H1N1 influenza hemagglutinin and, in both cases, obtain variants with subnanomolar binding affinity. The most potent of these, a 51-residue protein, is broadly cross-reactive against all influenza group 1 hemagglutinins, including human H2, and neutralizes H1N1 viruses with a potency that rivals that of several human monoclonal antibodies, demonstrating that computational design followed by comprehensive energy landscape mapping can generate proteins with potential therapeutic utility.

  15. Methods to enable the design of bioactive small molecules targeting RNA

    PubMed Central

    Disney, Matthew D.; Yildirim, Ilyas; Childs-Disney, Jessica L.

    2014-01-01

    RNA is an immensely important target for small molecule therapeutics or chemical probes of function. However, methods that identify, annotate, and optimize RNA-small molecule interactions that could enable the design of compounds that modulate RNA function are in their infancies. This review describes recent approaches that have been developed to understand and optimize RNA motif-small molecule interactions, including Structure-Activity Relationships Through Sequencing (StARTS), quantitative structure-activity relationships (QSAR), chemical similarity searching, structure-based design and docking, and molecular dynamics (MD) simulations. Case studies described include the design of small molecules targeting RNA expansions, the bacterial A-site, viral RNAs, and telomerase RNA. These approaches can be combined to afford a synergistic method to exploit the myriad of RNA targets in the transcriptome. PMID:24357181

  16. Methods to enable the design of bioactive small molecules targeting RNA.

    PubMed

    Disney, Matthew D; Yildirim, Ilyas; Childs-Disney, Jessica L

    2014-02-21

    RNA is an immensely important target for small molecule therapeutics or chemical probes of function. However, methods that identify, annotate, and optimize RNA-small molecule interactions that could enable the design of compounds that modulate RNA function are in their infancies. This review describes recent approaches that have been developed to understand and optimize RNA motif-small molecule interactions, including structure-activity relationships through sequencing (StARTS), quantitative structure-activity relationships (QSAR), chemical similarity searching, structure-based design and docking, and molecular dynamics (MD) simulations. Case studies described include the design of small molecules targeting RNA expansions, the bacterial A-site, viral RNAs, and telomerase RNA. These approaches can be combined to afford a synergistic method to exploit the myriad of RNA targets in the transcriptome.

  17. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.

  18. Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks.

    PubMed

    Flassig, R J; Sundmacher, K

    2012-12-01

    Biochemical reaction networks in the form of coupled ordinary differential equations (ODEs) provide a powerful modeling tool for understanding the dynamics of biochemical processes. During the early phase of modeling, scientists have to deal with a large pool of competing nonlinear models. At this point, discrimination experiments can be designed and conducted to obtain optimal data for selecting the most plausible model. Since biological ODE models have widely distributed parameters due to, e.g., biologic variability or experimental variations, model responses become distributed. Therefore, a robust optimal experimental design (OED) for model discrimination can be used to discriminate models based on their response probability distribution functions (PDFs). In this work, we present an optimal control-based methodology for designing optimal stimulus experiments aimed at robust model discrimination. For estimating the time-varying model response PDF, which results from the nonlinear propagation of the parameter PDF under the ODE dynamics, we suggest using the sigma-point approach. Using the model overlap (expected likelihood) as a robust discrimination criterion to measure dissimilarities between expected model response PDFs, we benchmark the proposed nonlinear design approach against linearization with respect to prediction accuracy and design quality for two nonlinear biological reaction networks. As shown, the sigma-point approach outperforms linearization in the case of widely distributed parameter sets and/or existing multiple steady states. Since the sigma-point approach scales linearly with the number of model parameters, it can be applied to large systems for robust experimental planning. An implementation of the method in MATLAB/AMPL is available at http://www.uni-magdeburg.de/ivt/svt/person/rf/roed.html. Contact: flassig@mpi-magdeburg.mpg.de. Supplementary data are available at Bioinformatics online.
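
    The sigma-point idea itself fits in a few lines. In this sketch (a scalar toy problem, not the paper's reaction networks) the "model response" is y = exp(θ) with θ Gaussian, so the exact propagated mean is known in closed form and both approximations can be checked against it:

```python
import math

# Minimal sigma-point (unscented-transform) sketch: propagate a Gaussian
# parameter PDF through a nonlinear response y = exp(theta). For
# theta ~ N(mu, sigma^2) the exact mean is exp(mu + sigma^2 / 2), so the
# sigma-point estimate and plain linearisation can both be checked.

def sigma_point_mean(g, mu, var, kappa=2.0):
    """Unscented estimate of E[g(theta)] for scalar theta (n = 1)."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    points = [mu, mu + spread, mu - spread]        # 2n + 1 sigma points
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = [w0, wi, wi]
    return sum(w * g(p) for w, p in zip(weights, points))

mu, var = 0.0, 0.25
exact = math.exp(mu + var / 2.0)       # closed-form mean
linearised = math.exp(mu)              # first-order (linearised) estimate
unscented = sigma_point_mean(math.exp, mu, var)

print(linearised, unscented, exact)
```

    With three deterministically chosen points the unscented estimate recovers the curvature term exp(μ + σ²/2) almost exactly, while linearisation returns exp(μ) and misses it entirely: the same effect that favours the sigma-point approach for widely distributed parameters.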

  19. Scope of Gradient and Genetic Algorithms in Multivariable Function Optimization

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali; Sen, S. K.

    2007-01-01

    Global optimization of a multivariable function - constrained by bounds specified on each variable and also unconstrained - is an important problem with several real-world applications. Deterministic methods such as gradient algorithms as well as randomized methods such as genetic algorithms may be employed to solve these problems. In fact, there are optimization problems where a genetic algorithm/evolutionary approach is preferable, at least from the quality (accuracy) point of view. From the cost (complexity) point of view, both gradient and genetic approaches are usually polynomial-time, so there are no serious differences in this regard. However, for certain types of problems, such as those with unacceptably erroneous numerical partial derivatives and those with physically amplified analytical partial derivatives whose numerical evaluation involves undesirable errors and/or is messy, a genetic (stochastic) approach should be a better choice. We have presented here the pros and cons of both approaches so that the concerned reader/user can decide which approach is most suited for the problem at hand. Also, for a function that is known in tabular form instead of an analytical form, as is often the case in an experimental environment, we attempt to provide an insight into the approaches, focusing our attention on accuracy. Such an insight will help one to decide which method, out of the several available, should be employed to obtain the best (least error) output.
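
    A minimal side-by-side run on a smooth toy function (made up for illustration) shows the trade-off discussed above: the gradient method needs analytical partial derivatives, while the genetic algorithm needs only function values.

```python
import random

# Side-by-side sketch on a toy bound-constrained problem (invented for
# illustration): minimise f(x, y) = (x - 1)^2 + 4 * (y + 2)^2 over
# [-5, 5]^2, whose minimum is at (1, -2). Gradient descent uses the
# analytical partial derivatives; the genetic algorithm uses only f.

random.seed(7)
f = lambda p: (p[0] - 1) ** 2 + 4 * (p[1] + 2) ** 2
grad = lambda p: (2 * (p[0] - 1), 8 * (p[1] + 2))

def gradient_descent(start=(-4.0, 4.0), lr=0.1, steps=200):
    x, y = start
    for _ in range(steps):
        gx, gy = grad((x, y))
        x, y = x - lr * gx, y - lr * gy
    return x, y

def genetic_algorithm(pop_size=30, gens=150, sigma=0.3):
    pop = [(random.uniform(-5, 5), random.uniform(-5, 5))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                         # rank by objective value
        parents = pop[: pop_size // 2]          # truncation selection
        children = [(px + random.gauss(0, sigma), py + random.gauss(0, sigma))
                    for px, py in parents]      # mutation-only offspring
        pop = parents + children                # elitist replacement
    return min(pop, key=f)

gd = gradient_descent()
ga = genetic_algorithm()
print(gd)  # ≈ (1, -2)
print(ga)  # near (1, -2), found without derivatives
```

    On a smooth function like this, gradient descent converges faster and more precisely; the genetic algorithm's advantage appears when the derivatives in `grad` are unavailable, erroneous, or only a data table is known.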

  20. Fuzzy Naive Bayesian model for medical diagnostic decision support.

    PubMed

    Wagholikar, Kavishwar B; Vijayraghavan, Sundararajan; Deshpande, Ashok W

    2009-01-01

    This work relates to the development of computational algorithms to provide decision support to physicians. The authors propose a Fuzzy Naive Bayesian (FNB) model for medical diagnosis, which extends the Fuzzy Bayesian approach proposed by Okuda. A physician-interview-based method is described to define an orthogonal fuzzy symptom information system, required to apply the model. For the purpose of elaboration and elicitation of characteristics, the algorithm is applied to a simple simulated dataset and compared with the conventional Naive Bayes (NB) approach. As a preliminary evaluation of FNB in a real-world scenario, the comparison is repeated on a real fuzzy dataset of 81 patients diagnosed with infectious diseases. The case study on the simulated dataset elucidates that FNB can be optimal over NB for diagnosing patients with imprecise fuzzy information, on account of the following characteristics: 1) it can model the information that values of some attributes are semantically closer than values of other attributes, and 2) it offers a mechanism to temper exaggerations in patient information. Although the algorithm requires precise training data, its utility for fuzzy training data is argued for. This is supported by the case study on the infectious disease dataset, which indicates optimality of FNB over NB for the infectious disease domain. Further case studies on large datasets are required to establish the utility of FNB.
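
    The core of the fuzzy extension is that each symptom contributes a membership-weighted mixture of crisp likelihoods rather than a single table entry. A toy sketch with invented numbers (not the authors' data or their exact formulation):

```python
# Toy sketch of the fuzzy Naive Bayes idea (hypothetical numbers, not the
# authors' data): the patient's evidence is a fuzzy membership over symptom
# values, and each feature's likelihood is the membership-weighted mixture
# of the crisp conditional probabilities.

priors = {"flu": 0.5, "cold": 0.5}

# P(symptom value | disease): hypothetical conditional tables.
likelihood = {
    "fever": {"flu":  {"none": 0.1, "mild": 0.3, "high": 0.6},
              "cold": {"none": 0.4, "mild": 0.5, "high": 0.1}},
    "cough": {"flu":  {"absent": 0.4, "present": 0.6},
              "cold": {"absent": 0.2, "present": 0.8}},
}

def fuzzy_nb(evidence):
    """evidence: {symptom: {value: membership}}, memberships summing to 1."""
    posterior = {}
    for d, prior in priors.items():
        p = prior
        for symptom, membership in evidence.items():
            # Fuzzy evidence enters as a mixture of crisp likelihoods.
            p *= sum(mu * likelihood[symptom][d][v]
                     for v, mu in membership.items())
        posterior[d] = p
    z = sum(posterior.values())
    return {d: p / z for d, p in posterior.items()}

# A 37.8 °C reading is "somewhat mild, somewhat high" fever.
patient = {"fever": {"mild": 0.6, "high": 0.4},
           "cough": {"present": 1.0}}
post = fuzzy_nb(patient)
print(post)
```

    A crisp NB classifier would force the 37.8 °C reading into either "mild" or "high" and commit fully to that column; the fuzzy version tempers the evidence in proportion to its memberships, which is the behaviour the authors argue suits imprecise patient reports.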

  1. Analytical Approach to the Fuel Optimal Impulsive Transfer Problem Using Primer Vector Method

    NASA Astrophysics Data System (ADS)

    Fitrianingsih, E.; Armellin, R.

    2018-04-01

    One of the objectives of mission design is selecting an optimum orbital transfer, which is often translated as the transfer requiring minimum propellant consumption. In order to assure that the selected trajectory meets the requirement, the optimality of the transfer should first be analyzed, either by directly calculating the ΔV of the candidate trajectories and selecting the one that gives a minimum value, or by evaluating the trajectory according to certain criteria of optimality. The second method analyzes the profile of the modulus of the thrust direction vector, which is known as the primer vector. Both methods come with their own advantages and disadvantages. However, it is possible to use the primer vector method to verify whether the result from the direct method is truly optimal or whether the ΔV can be reduced further by implementing a correction maneuver to the reference trajectory. In addition to its capability to evaluate the transfer optimality without the need to calculate the transfer ΔV, the primer vector also enables us to identify the time and position at which to apply a correction maneuver in order to optimize a non-optimum transfer. This paper presents the analytical approach to the fuel-optimal impulsive transfer using the primer vector method. The validity of the method is confirmed by comparing the results to those from the numerical method. An investigation of the optimality of direct transfers is used as an example application of the method. The case under study is prograde elliptic transfers from Earth to Mars. The study enables us to identify the optimality of all the possible transfers.

  2. MO-AB-BRA-01: A Global Level Set Based Formulation for Volumetric Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, D; Lyu, Q; Ruan, D

    2016-06-15

    Purpose: The current clinical Volumetric Modulated Arc Therapy (VMAT) optimization is formulated as a non-convex problem, and various greedy heuristics have been employed for an empirical solution, jeopardizing plan consistency and quality. We introduce a novel global direct aperture optimization method for VMAT to overcome these limitations. Methods: The global VMAT (gVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term and an anisotropic total variation term. A level set function was used to describe the aperture shapes, and adjacent aperture shapes were penalized to control the MLC motion range. An alternating optimization strategy was implemented to solve for the fluence intensity and aperture shapes simultaneously. Single-arc gVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme (GBM) case, a lung (LNG) case, and 2 head and neck cases, one with 3 PTVs (H&N3PTV) and one with 4 PTVs (H&N4PTV). The plans were compared against the clinical VMAT (cVMAT) plans utilizing two overlapping coplanar arcs. Results: The optimization of the gVMAT plans converged within 600 iterations. gVMAT reduced the average max and mean OAR dose by 6.59% and 7.45% of the prescription dose. Reductions in max dose and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N3PTV case. PTV coverages (D95, D98, D99) were within 0.25% of the prescription dose. By globally considering all beams, the gVMAT optimizer allowed some beams to deliver higher intensities, yielding a dose distribution that resembles a static-beam IMRT plan with beam orientation optimization. Conclusions: The novel VMAT approach allows for the search of an optimal plan in the global solution space and generates deliverable apertures directly. The single-arc VMAT approach fully utilizes the digital linacs' capability in dose rate and gantry rotation speed modulation.
Varian Medical Systems, NIH grant R01CA188300, NIH grant R43CA183390.« less
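The gVMAT objective described above, an L2-norm fidelity term plus an anisotropic total-variation term, can be illustrated with a heavily simplified sketch. The dose matrix `A`, prescription `d`, penalty weight `lam`, and step size below are invented toy values, and the real method also optimizes aperture shapes via a level set function, which this 1-D subgradient-descent example omits entirely:

```python
import numpy as np

# Heavily simplified sketch of the stated objective: minimise
# ||A x - d||^2 + lam * TV(x) over a 1-D fluence profile x by
# subgradient descent. A, d, lam and the step size are toy values.
rng = np.random.default_rng(0)
A = rng.random((40, 20))                       # toy dose-deposition matrix
x_true = np.repeat([1.0, 3.0, 2.0, 0.5], 5)    # piecewise-constant fluence
d = A @ x_true                                 # "prescribed" dose

lam, step = 0.05, 1e-3
x = np.zeros(20)
for _ in range(5000):
    grad_fid = 2 * A.T @ (A @ x - d)           # gradient of the L2 fidelity term
    diff = np.diff(x)
    sub_tv = np.zeros_like(x)                  # subgradient of sum_i |x_{i+1} - x_i|
    sub_tv[:-1] -= np.sign(diff)
    sub_tv[1:] += np.sign(diff)
    x -= step * (grad_fid + lam * sub_tv)
```

The TV term favours piecewise-constant profiles, which is what makes the recovered `x` resemble deliverable aperture intensities rather than a noisy least-squares solution.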

  3. Random Versus Nonrandom Peer Review: A Case for More Meaningful Peer Review.

    PubMed

    Itri, Jason N; Donithan, Adam; Patel, Sohil H

    2018-05-10

Random peer review programs are not optimized to discover cases with diagnostic error and thus have inherent limitations with respect to educational and quality improvement value. Nonrandom peer review offers an alternative approach in which diagnostic error cases are targeted for collection during routine clinical practice. The objective of this study was to compare error cases identified through random and nonrandom peer review approaches at an academic center. During the 1-year study period, the number of discrepancy cases and their discrepancy scores were determined for each approach. The nonrandom peer review process collected 190 cases, of which 60 were scored as 2 (minor discrepancy), 94 as 3 (significant discrepancy), and 36 as 4 (major discrepancy). In the random peer review process, 1,690 cases were reviewed, of which 1,646 were scored as 1 (no discrepancy), 44 were scored as 2 (minor discrepancy), and none were scored as 3 or 4. Several teaching lessons and quality improvement measures were developed as a result of analysis of error cases collected through the nonrandom peer review process. Our experience supports the implementation of nonrandom peer review as a replacement for random peer review, with nonrandom peer review serving as a more effective method for collecting diagnostic error cases with educational and quality improvement value. Copyright © 2018 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  4. Introducing TreeCollapse: a novel greedy algorithm to solve the cophylogeny reconstruction problem.

    PubMed

    Drinkwater, Benjamin; Charleston, Michael A

    2014-01-01

Cophylogeny mapping is used to uncover deep coevolutionary associations between two or more phylogenetic histories at a macro coevolutionary scale. As cophylogeny mapping is NP-hard, this technique relies heavily on heuristics to solve all but the most trivial cases. One notable approach utilises a metaheuristic to search only a subset of the exponential number of fixed node orderings possible for the phylogenetic histories in question. This is of particular interest as it is the only known heuristic that guarantees biologically feasible solutions. This has enabled research to focus on larger coevolutionary systems, such as coevolutionary associations between figs and their pollinator wasps, including over 200 taxa. Although able to converge on solutions for problem instances of this size, a reduction from the current cubic running time is required to handle larger systems, such as Wolbachia and their insect hosts. Rather than solving this underlying problem optimally, this work presents a greedy algorithm called TreeCollapse, which uses common topological patterns to recover an approximation of the coevolutionary history where the internal node ordering is fixed. This approach offers a significant speed-up compared to previous methods, running in linear time. This algorithm has been applied to over 100 well-known coevolutionary systems, converging on Pareto optimal solutions in over 68% of test cases, including cases where the Pareto optimal solution had not previously been recoverable. Further, while TreeCollapse applies a local search technique, it can guarantee that solutions are biologically feasible, making this the fastest method that can provide such a guarantee. As a result, we argue that the newly proposed algorithm is a valuable addition to the field of coevolutionary research.
Not only does it offer a significantly faster method to estimate the cost of cophylogeny mappings but by using this approach, in conjunction with existing heuristics, it can assist in recovering a larger subset of the Pareto front than has previously been possible.

  5. Approximation of optimal filter for Ornstein-Uhlenbeck process with quantised discrete-time observation

    NASA Astrophysics Data System (ADS)

    Bania, Piotr; Baranowski, Jerzy

    2018-02-01

Quantisation of signals is a ubiquitous property of digital processing. In many cases, it introduces significant difficulties in state estimation and, in consequence, control. Popular approaches either do not properly address the problem of system disturbances or lead to biased estimates. Our intention was to find a method for state estimation of stochastic systems with quantised discrete-time observations that is free of the mentioned drawbacks. We have formulated a general form of the optimal filter derived by a solution of the Fokker-Planck equation. We then propose an approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process, and derive analytic formulae for the approximated optimal filter, also extending the results to the variant with control. Operation is illustrated with numerical experiments and compared with the classical discrete-continuous Kalman filter. The results of this comparison are substantially in favour of our approach, with over 20 times lower mean squared error. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for high orders of approximation, the state estimate is very close to the true process value. The results open the possibilities of further analysis, especially for more complex processes.
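For contrast, here is a minimal sketch of the baseline the authors compare against: a scalar discrete-time Kalman filter applied to a quantised observation of an Ornstein-Uhlenbeck process, with the quantisation error crudely modelled as additive measurement noise of variance Δ²/12. The parameter values are illustrative, not taken from the paper:

```python
import numpy as np

# Baseline sketch: discrete Kalman filter on a quantised OU observation,
# treating quantisation error as additive noise of variance Delta^2/12.
rng = np.random.default_rng(1)
theta, sigma, dt, Delta = 1.0, 1.0, 0.01, 0.5
a = np.exp(-theta * dt)                      # exact OU transition factor
q = sigma**2 * (1.0 - a**2) / (2.0 * theta)  # process-noise variance
r = Delta**2 / 12.0                          # quantisation "noise" variance

x, xhat, P = 0.0, 0.0, 1.0
sq_err = []
for _ in range(20000):
    x = a * x + np.sqrt(q) * rng.standard_normal()  # simulate one OU step
    y = Delta * np.round(x / Delta)                 # quantised observation
    xhat, P = a * xhat, a * a * P + q               # predict
    K = P / (P + r)                                 # Kalman gain
    xhat, P = xhat + K * (y - xhat), (1.0 - K) * P  # update
    sq_err.append((x - xhat) ** 2)

mse = float(np.mean(sq_err))
```

The uniform-noise model of quantisation is exactly the approximation the paper argues breaks down when signal amplitudes are comparable to the quantisation thresholds, which is where its Galerkin-approximated optimal filter gains the most.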

  6. Fractional Programming for Communication Systems—Part I: Power Control and Beamforming

    NASA Astrophysics Data System (ADS)

    Shen, Kaiming; Yu, Wei

    2018-05-01

This two-part paper explores the use of fractional programming (FP) in the design and optimization of communication systems. Part I of this paper focuses on FP theory and on solving continuous problems. The main theoretical contribution is a novel quadratic transform technique for tackling the multiple-ratio concave-convex FP problem, in contrast to conventional FP techniques that mostly can only deal with the single-ratio or the max-min-ratio case. Multiple-ratio FP problems are important for the optimization of communication networks, because system-level design often involves multiple signal-to-interference-plus-noise ratio (SINR) terms. This paper considers the applications of FP to solving continuous problems in communication system design, particularly for power control, beamforming, and energy efficiency maximization. These application cases illustrate that the proposed quadratic transform can greatly facilitate the optimization involving ratios by recasting the original nonconvex problem as a sequence of convex problems. This FP-based problem reformulation gives rise to an efficient iterative optimization algorithm with provable convergence to a stationary point. The paper further demonstrates close connections between the proposed FP approach and other well-known algorithms in the literature, such as the fixed-point iteration and the weighted minimum mean-square-error beamforming. The optimization of discrete problems is discussed in Part II of this paper.
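A hedged sketch of the quadratic transform idea for a multiple-ratio problem: sum-of-SINR maximisation by power control over two interfering links. The channel gains, noise level, and power cap below are invented, and the paper's applications are far more general:

```python
import numpy as np

# Toy quadratic-transform iteration for max sum_i A_i(p)/B_i(p):
# alternate y_i = sqrt(A_i)/B_i with maximising the concave surrogate
# sum_i 2 y_i sqrt(g_ii p_i) - y_i^2 B_i(p) over the power box.
G = np.array([[1.0, 0.5],    # G[i, j]: gain from transmitter j to receiver i
              [2.0, 0.6]])
noise, pmax = 0.1, 1.0

def sinr(p):
    sig = np.diag(G) * p
    return sig / (G @ p - sig + noise)

p = np.full(2, pmax)
obj = [sinr(p).sum()]
for _ in range(50):
    sig = np.diag(G) * p
    intf = G @ p - sig + noise
    y = np.sqrt(sig) / intf       # y-update: y_i = sqrt(A_i) / B_i
    # p-update: closed-form maximiser of the surrogate, clipped to [0, pmax]
    for i in range(2):
        denom = sum(y[j] ** 2 * G[j, i] for j in range(2) if j != i)
        p[i] = pmax if denom == 0 else min(pmax, (y[i] * np.sqrt(G[i, i]) / denom) ** 2)
    obj.append(sinr(p).sum())
```

Because both updates are exact maximisers of the surrogate, the objective sequence `obj` is non-decreasing, which is the mechanism behind the stationary-point convergence the paper proves.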

  7. New human-centered linear and nonlinear motion cueing algorithms for control of simulator motion systems

    NASA Astrophysics Data System (ADS)

    Telban, Robert J.

While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm's vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory motion sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms in simulating aircraft maneuvers was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. 
Control input analysis shows that pilot-induced oscillations on a straight-in approach are less prevalent with the nonlinear algorithm than with the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  8. Optimal selection of markers for validation or replication from genome-wide association studies.

    PubMed

    Greenwood, Celia M T; Rangrej, Jagadish; Sun, Lei

    2007-07-01

With reductions in genotyping costs and the fast pace of improvements in genotyping technology, it is not uncommon for the individuals in a single study to undergo genotyping using several different platforms, where each platform may contain different numbers of markers selected via different criteria. For example, a set of cases and controls may be genotyped at markers in a small set of carefully selected candidate genes, and shortly thereafter, the same cases and controls may be used for a genome-wide single nucleotide polymorphism (SNP) association study. After such initial investigations, often, a subset of "interesting" markers is selected for validation or replication. Specifically, by validation, we refer to the investigation of associations between the selected subset of markers and the disease in independent data. However, it is not obvious how to choose the best set of markers for this validation. There may be a prior expectation that some sets of genotyping data are more likely to contain real associations. For example, it may be more likely for markers in plausible candidate genes to show disease associations than markers in a genome-wide scan. Hence, it would be desirable to select proportionally more markers from the candidate gene set. When a fixed number of markers are selected for validation, we propose an approach that identifies an optimal marker-selection configuration by minimizing the stratified false discovery rate. We illustrate this approach using a case-control study of colorectal cancer from Ontario, Canada, and we show that this approach leads to substantial reductions in the estimated false discovery rates in the Ontario dataset for the selected markers, as well as reductions in the expected false discovery rates for the proposed validation dataset. Copyright 2007 Wiley-Liss, Inc.
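The configuration search can be sketched in miniature. In this hedged toy (simulated p-values, not the Ontario data), a fixed total of K markers is split between a signal-enriched candidate-gene stratum and a genome-wide stratum so as to minimise an estimated stratified FDR; the estimator m_s · p_(k_s) for the expected false discoveries in stratum s is a standard BH-style approximation, not the authors' exact procedure:

```python
import numpy as np

# Toy stratified marker selection: choose (k1, k2) with k1 + k2 = K
# minimising an estimate of the FDR across two strata of p-values.
rng = np.random.default_rng(2)
# Stratum 1: candidate genes (signal-enriched); stratum 2: genome-wide scan.
p1 = np.sort(np.concatenate([rng.uniform(0, 1e-3, 20), rng.uniform(0, 1, 180)]))
p2 = np.sort(np.concatenate([rng.uniform(0, 1e-3, 10), rng.uniform(0, 1, 1990)]))

K = 50                       # total markers carried forward to validation
best = None
for k1 in range(K + 1):
    k2 = K - k1
    # Estimated false discoveries when taking the top k_s from stratum s.
    fd = (len(p1) * p1[k1 - 1] if k1 else 0.0) + (len(p2) * p2[k2 - 1] if k2 else 0.0)
    if best is None or fd / K < best[0]:
        best = (fd / K, k1, k2)

fdr, k1, k2 = best
```

Because the candidate-gene stratum is enriched for true signals, the minimising configuration takes disproportionately many markers from it, which is exactly the behaviour the abstract motivates.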

  9. Multigrid solution strategies for adaptive meshing problems

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1995-01-01

    This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.
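As a concrete miniature of the multigrid machinery discussed above, here is a two-grid V-cycle for the 1-D Poisson problem -u'' = f with homogeneous Dirichlet boundary conditions. The paper's unstructured, adaptively refined meshes require considerably more infrastructure (agglomerated or regenerated coarse levels), which this structured sketch ignores:

```python
import numpy as np

# Minimal two-grid cycle: pre-smooth, restrict the residual, solve the
# coarse problem exactly, interpolate the correction back, post-smooth.
n = 63                                 # fine-grid interior points
h = 1.0 / (n + 1)
f = np.ones(n)

def residual(u, rhs):
    r = rhs - (2 * u - np.roll(u, 1) - np.roll(u, -1)) / h**2
    r[0] = rhs[0] - (2 * u[0] - u[1]) / h**2      # undo the wrap-around of
    r[-1] = rhs[-1] - (2 * u[-1] - u[-2]) / h**2  # np.roll at the boundaries
    return r

def smooth(u, rhs, sweeps, w=2.0 / 3.0):          # weighted-Jacobi smoother
    for _ in range(sweeps):
        u = u + w * (h**2 / 2.0) * residual(u, rhs)
    return u

def two_grid(u, rhs):
    u = smooth(u, rhs, 3)                                  # pre-smooth
    r = residual(u, rhs)
    rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])      # full weighting
    nc, hc = rc.size, 2 * h
    Ac = (2 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / hc**2
    ec = np.linalg.solve(Ac, rc)                           # exact coarse solve
    e = np.zeros(n)                                        # linear interpolation
    e[1::2] = ec
    ecp = np.concatenate([[0.0], ec, [0.0]])
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])
    return smooth(u + e, rhs, 3)                           # post-smooth

u = np.zeros(n)
for _ in range(30):
    u = two_grid(u, f)
```

Replacing the exact coarse solve with a recursive call to the same cycle gives the usual multilevel V-cycle; the coupling issues the paper studies arise when the grid hierarchy must track an adaptively changing fine mesh.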

  10. Optimization of groundwater artificial recharge systems using a genetic algorithm: a case study in Beijing, China

    NASA Astrophysics Data System (ADS)

    Hao, Qichen; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Huang, Linxian

    2018-05-01

    An optimization approach is used for the operation of groundwater artificial recharge systems in an alluvial fan in Beijing, China. The optimization model incorporates a transient groundwater flow model, which allows for simulation of the groundwater response to artificial recharge. The facilities' operation with regard to recharge rates is formulated as a nonlinear programming problem to maximize the volume of surface water recharged into the aquifers under specific constraints. This optimization problem is solved by the parallel genetic algorithm (PGA) based on OpenMP, which could substantially reduce the computation time. To solve the PGA with constraints, the multiplicative penalty method is applied. In addition, the facilities' locations are implicitly determined on the basis of the results of the recharge-rate optimizations. Two scenarios are optimized and the optimal results indicate that the amount of water recharged into the aquifers will increase without exceeding the upper limits of the groundwater levels. Optimal operation of this artificial recharge system can also contribute to the more effective recovery of the groundwater storage capacity.
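The multiplicative-penalty idea can be sketched in miniature. This toy genetic algorithm (not the paper's parallel GA or flow model; the cost vector, cap, and GA settings are invented) maximises total recharge across five hypothetical facilities under a single aggregate constraint standing in for the groundwater-level limits:

```python
import numpy as np

# Toy GA with a multiplicative penalty: maximise sum(x) subject to an
# invented aggregate constraint c @ x <= cap and per-facility bounds.
rng = np.random.default_rng(3)
c = np.array([1.0, 2.0, 1.5, 0.5, 1.0])
cap, xmax = 6.0, 3.0

def fitness(x):
    violation = max(0.0, c @ x - cap)
    return x.sum() / (1.0 + violation)       # multiplicative penalty

pop = rng.uniform(0, xmax, (40, 5))
for _ in range(300):
    fit = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(fit)[-20:]]     # truncation selection
    idx = rng.integers(0, 20, (40, 2))
    alpha = rng.random((40, 1))
    children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
    children += rng.normal(0.0, 0.1, children.shape)       # mutation
    pop = np.clip(children, 0.0, xmax)

best = max(pop, key=fitness)
```

In the paper the fitness evaluation wraps a transient groundwater flow simulation, which is why the authors parallelise the GA with OpenMP; here the objective is cheap, so a serial loop suffices.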

  11. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE PAGES

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...

    2017-10-01

GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  12. Stochastic optimization of GeantV code by use of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  13. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.

GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  14. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    PubMed

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on sources or how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the well-known ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to it as the c-NCA approach. We also presented the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators on corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed by the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference-rejection problems can be solved by the c-NCA with proximal splitting algorithms by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.

  15. Optimal control design of turbo spin‐echo sequences with applications to parallel‐transmit systems

    PubMed Central

    Hoogduin, Hans; Hajnal, Joseph V.; van den Berg, Cornelis A. T.; Luijten, Peter R.; Malik, Shaihan J.

    2016-01-01

    Purpose The design of turbo spin‐echo sequences is modeled as a dynamic optimization problem which includes the case of inhomogeneous transmit radiofrequency fields. This problem is efficiently solved by optimal control techniques making it possible to design patient‐specific sequences online. Theory and Methods The extended phase graph formalism is employed to model the signal evolution. The design problem is cast as an optimal control problem and an efficient numerical procedure for its solution is given. The numerical and experimental tests address standard multiecho sequences and pTx configurations. Results Standard, analytically derived flip angle trains are recovered by the numerical optimal control approach. New sequences are designed where constraints on radiofrequency total and peak power are included. In the case of parallel transmit application, the method is able to calculate the optimal echo train for two‐dimensional and three‐dimensional turbo spin echo sequences in the order of 10 s with a single central processing unit (CPU) implementation. The image contrast is maintained through the whole field of view despite inhomogeneities of the radiofrequency fields. Conclusion The optimal control design sheds new light on the sequence design process and makes it possible to design sequences in an online, patient‐specific fashion. Magn Reson Med 77:361–373, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine PMID:26800383

  16. A nonlinear bi-level programming approach for product portfolio management.

    PubMed

    Ma, Shuang

    2016-01-01

Product portfolio management (PPM) is a critical decision-making activity for companies across various industries in today's competitive environment. Traditional studies of the PPM problem have been oriented toward engineering feasibility and marketing, paying relatively little attention to competitors' actions and the competitive relations among firms, especially in the mathematical optimization domain. The key challenge lies in how to construct a mathematical optimization model that describes this Stackelberg game-based leader-follower PPM problem and the competitive relations between the two parties. The primary contribution of this paper is a decision framework and an optimization model that jointly address the PPM problems of the leader and the follower. A nonlinear, integer bi-level programming model is developed based on the decision framework. Furthermore, a bi-level nested genetic algorithm is put forward to solve this nonlinear bi-level programming model for the leader-follower PPM problem. A case study of notebook computer product portfolio optimization is reported. Results and analyses reveal that the leader-follower bi-level optimization model is robust and can empower product portfolio optimization.

  17. On the effects of alternative optima in context-specific metabolic model predictions

    PubMed Central

    Robaina-Estévez, Semidán; Nikoloski, Zoran

    2017-01-01

    The integration of experimental data into genome-scale metabolic models can greatly improve flux predictions. This is achieved by restricting predictions to a more realistic context-specific domain, like a particular cell or tissue type. Several computational approaches to integrate data have been proposed—generally obtaining context-specific (sub)models or flux distributions. However, these approaches may lead to a multitude of equally valid but potentially different models or flux distributions, due to possible alternative optima in the underlying optimization problems. Although this issue introduces ambiguity in context-specific predictions, it has not been generally recognized, especially in the case of model reconstructions. In this study, we analyze the impact of alternative optima in four state-of-the-art context-specific data integration approaches, providing both flux distributions and/or metabolic models. To this end, we present three computational methods and apply them to two particular case studies: leaf-specific predictions from the integration of gene expression data in a metabolic model of Arabidopsis thaliana, and liver-specific reconstructions derived from a human model with various experimental data sources. The application of these methods allows us to obtain the following results: (i) we sample the space of alternative flux distributions in the leaf- and the liver-specific case and quantify the ambiguity of the predictions. In addition, we show how the inclusion of ℓ1-regularization during data integration reduces the ambiguity in both cases. (ii) We generate sets of alternative leaf- and liver-specific models that are optimal to each one of the evaluated model reconstruction approaches. We demonstrate that alternative models of the same context contain a marked fraction of disparate reactions. 
Further, we show that a careful balance between model sparsity and metabolic functionality helps in reducing the discrepancies between alternative models. Finally, our findings indicate that alternative optima must be taken into account for rendering the context-specific metabolic model predictions less ambiguous. PMID:28557990
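The phenomenon of alternative optima is easy to reproduce in miniature. In this toy flux-balance sketch (using scipy's `linprog`; the two-pathway network is invented, not one of the paper's genome-scale models), the optimal objective value is unique but the flux split between the parallel pathways is not, and a flux-variability step quantifies the ambiguity:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 imports A; v2 and v3 are parallel A -> B conversions;
# v4 exports B and is the objective. At the optimum, v2 + v3 is fixed
# but the individual split is an alternative optimum.
S = np.array([[1, -1, -1, 0],    # metabolite A: v1 - v2 - v3 = 0
              [0, 1, 1, -1]])    # metabolite B: v2 + v3 - v4 = 0
bounds = [(0, 10)] * 4

# Step 1: maximise the objective flux v4 (linprog minimises, hence -v4).
opt = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
v4_max = -opt.fun

# Step 2: flux variability of v2 with the objective pinned at its optimum.
fva = []
for sign in (1, -1):
    r = linprog(c=[0, sign, 0, 0], A_eq=np.vstack([S, [0, 0, 0, 1]]),
                b_eq=[0, 0, v4_max], bounds=bounds)
    fva.append(sign * r.fun)
v2_min, v2_max = fva
```

Here `v2` can lie anywhere in [0, 10] without changing the optimal objective, a one-reaction analogue of the disparate-but-equally-optimal context-specific models the study quantifies at genome scale.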

  18. On the effects of alternative optima in context-specific metabolic model predictions.

    PubMed

    Robaina-Estévez, Semidán; Nikoloski, Zoran

    2017-05-01

    The integration of experimental data into genome-scale metabolic models can greatly improve flux predictions. This is achieved by restricting predictions to a more realistic context-specific domain, like a particular cell or tissue type. Several computational approaches to integrate data have been proposed-generally obtaining context-specific (sub)models or flux distributions. However, these approaches may lead to a multitude of equally valid but potentially different models or flux distributions, due to possible alternative optima in the underlying optimization problems. Although this issue introduces ambiguity in context-specific predictions, it has not been generally recognized, especially in the case of model reconstructions. In this study, we analyze the impact of alternative optima in four state-of-the-art context-specific data integration approaches, providing both flux distributions and/or metabolic models. To this end, we present three computational methods and apply them to two particular case studies: leaf-specific predictions from the integration of gene expression data in a metabolic model of Arabidopsis thaliana, and liver-specific reconstructions derived from a human model with various experimental data sources. The application of these methods allows us to obtain the following results: (i) we sample the space of alternative flux distributions in the leaf- and the liver-specific case and quantify the ambiguity of the predictions. In addition, we show how the inclusion of ℓ1-regularization during data integration reduces the ambiguity in both cases. (ii) We generate sets of alternative leaf- and liver-specific models that are optimal to each one of the evaluated model reconstruction approaches. We demonstrate that alternative models of the same context contain a marked fraction of disparate reactions. 
Further, we show that a careful balance between model sparsity and metabolic functionality helps in reducing the discrepancies between alternative models. Finally, our findings indicate that alternative optima must be taken into account for rendering the context-specific metabolic model predictions less ambiguous.

  19. A multidimensional model of optimal participation of children with physical disabilities.

    PubMed

    Kang, Lin-Ju; Palisano, Robert J; King, Gillian A; Chiarello, Lisa A

    2014-01-01

    To present a conceptual model of optimal participation in recreational and leisure activities for children with physical disabilities. The conceptualization of the model was based on review of contemporary theories and frameworks, empirical research and the authors' practice knowledge. A case scenario is used to illustrate application to practice. The model proposes that optimal participation in recreational and leisure activities involves the dynamic interaction of multiple dimensions and determinants of participation. The three dimensions of participation are physical, social and self-engagement. Determinants of participation encompass attributes of the child, family and environment. Experiences of optimal participation are hypothesized to result in long-term benefits including better quality of life, a healthier lifestyle and emotional and psychosocial well-being. Consideration of relevant child, family and environment determinants of dimensions of optimal participation should assist children, families and health care professionals to identify meaningful goals and outcomes and guide the selection and implementation of innovative therapy approaches and methods of service delivery. Implications for Rehabilitation Optimal participation is proposed to involve the dynamic interaction of physical, social and self-engagement and attributes of the child, family and environment. The model emphasizes the importance of self-perceptions and participation experiences of children with physical disabilities. Optimal participation may have a positive influence on quality of life, a healthy lifestyle and emotional and psychosocial well-being. Knowledge of child, family, and environment determinants of physical, social and self-engagement should assist children, families and professionals in identifying meaningful goals and guiding innovative therapy approaches.

  20. Aeroelastic Optimization Study Based on the X-56A Model

    NASA Technical Reports Server (NTRS)

    Li, Wesley W.; Pak, Chan-Gi

    2014-01-01

One way to increase aircraft fuel efficiency is to reduce structural weight while maintaining adequate structural airworthiness, both statically and aeroelastically. A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high-fidelity finite element models to characterize the design space was successfully developed. This paper presents two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements, including strength, buckling, and flutter. Such an approach exploits the anisotropic capabilities of the fiber composite materials chosen for this analytical exercise through the ply stacking sequence. A hybrid and discretization optimization approach improves the accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass-balancing optimization study for the fabricated flexible wing of the X-56A model, since a desired flutter speed band is required for the active flutter suppression demonstration during flight testing. The results of the second study provide guidance to modify the wing design and move the design flutter speeds back into the flight envelope so that the original objective of the X-56A flight test can be accomplished successfully. The second case also demonstrates that the object-oriented MDAO tool can handle multiple analytical configurations in a single optimization run.

  1. Multiobjective optimization techniques for structural design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

Multiobjective programming techniques are important in the design of complex structural systems whose quality generally depends on a number of different, and often conflicting, objective functions which cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems. Specifically, the parameter optimization of a cantilever beam with a tip mass and a three-degree-of-freedom vibration isolation system and the trajectory optimization of a cantilever beam are considered. The solutions of these multicriteria design problems are attempted by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It has been observed that the game theory approach required the maximum computational effort, but it yielded better optimum solutions, with proper balance of the various objective functions, in all the cases.
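The global criterion method mentioned above is straightforward to sketch: each objective is first minimized alone to obtain its ideal (utopia) value, and the compromise design minimizes the summed squared relative deviation from that ideal point. The sketch below applies it to a hypothetical two-objective sizing problem (mass versus tip deflection); the objective functions and bounds are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Two conflicting objectives for a hypothetical beam sizing variable x
# (cross-section scale): structural mass grows with x, tip deflection shrinks.
f1 = lambda x: x            # mass (to minimize)
f2 = lambda x: 1.0 / x**3   # deflection (to minimize)
lo, hi = 0.5, 3.0

# Ideal (utopia) values: each objective minimized alone over the bounds.
f1_star = f1(lo)
f2_star = f2(hi)

# Global criterion: minimize the summed squared relative deviation
# from the utopia point.
F = lambda x: ((f1(x) - f1_star) / f1_star) ** 2 + ((f2(x) - f2_star) / f2_star) ** 2

res = minimize_scalar(F, bounds=(lo, hi), method="bounded")
print(f"compromise design x = {res.x:.3f}, mass = {f1(res.x):.3f}, deflection = {f2(res.x):.4f}")
```

The other methods listed (utility function, goal programming, goal attainment) differ mainly in how the individual objectives are aggregated or prioritized.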

  2. Improvement of Automated POST Case Success Rate Using Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Zwack, Matthew R.; Dees, Patrick D.

    2017-01-01

    During early conceptual design of complex systems, concept down selection can have a large impact upon program life-cycle cost. Therefore, any concepts selected during early design will inherently commit program costs and affect the overall probability of program success. For this reason it is important to consider as large a design space as possible in order to better inform the down selection process. For conceptual design of launch vehicles, trajectory analysis and optimization often presents the largest obstacle to evaluating large trade spaces. This is due to the sensitivity of the trajectory discipline to changes in all other aspects of the vehicle design. Small deltas in the performance of other subsystems can result in relatively large fluctuations in the ascent trajectory because the solution space is non-linear and multi-modal [1]. In order to help capture large design spaces for new launch vehicles, the authors have performed previous work seeking to automate the execution of the industry standard tool, Program to Optimize Simulated Trajectories (POST). This work initially focused on implementation of analyst heuristics to enable closure of cases in an automated fashion, with the goal of applying the concepts of design of experiments (DOE) and surrogate modeling to enable near instantaneous throughput of vehicle cases [2]. Additional work was then completed to improve the DOE process by utilizing a graph theory based approach to connect similar design points [3]. The conclusion of the previous work illustrated the utility of the graph theory approach for completing a DOE through POST. However, this approach was still dependent upon the use of random repetitions to generate seed points for the graph. As noted in [3], only 8% of these random repetitions resulted in converged trajectories. This ultimately affects the ability of the random reps method to confidently approach the global optima for a given vehicle case in a reasonable amount of time. 
With only an 8% pass rate, tens or hundreds of thousands of repetitions may be needed to be confident that the best repetition is at least close to the global optimum. However, typical design study time constraints require that fewer repetitions be attempted, sometimes resulting in seed points that have only a handful of successful completions. If a small number of successful repetitions is used to generate a seed point, the graph method may inherit some inaccuracies as it chains DOE cases from non-global-optimal seed points. This creates inherent noise in the graph data, which can limit the accuracy of the resulting surrogate models. For this reason, the goal of this work is to improve the seed point generation method and ultimately the accuracy of the resulting POST surrogate model. The work focuses on increasing the case pass rate for seed point generation.

  3. A continuous optimization approach for inferring parameters in mathematical models of regulatory networks.

    PubMed

    Deng, Zhimin; Tian, Tianhai

    2014-07-29

The advances of systems biology have produced a large number of sophisticated mathematical models for describing the dynamic properties of complex biological systems. One of the major steps in developing mathematical models is to estimate the unknown parameters of the model based on experimentally measured quantities. However, experimental conditions limit the amount of data available for mathematical modelling. The number of unknown parameters in a mathematical model may be larger than the number of observations. This imbalance between the number of experimental observations and the number of unknown parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. This approach first uses spline interpolation to generate continuous functions of the system dynamics, as well as the first and second order derivatives of these continuous functions. The expanded dataset is the basis for inferring unknown model parameters using various continuous optimization criteria, including the error of the simulation only; the error of both the simulation and the first derivative; or the error of the simulation as well as the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed approach. Compared with the corresponding discrete criteria using experimental data at the measurement time points only, numerical results for the ERK kinase activation module show that the continuous absolute-error criteria using both the function and higher order derivatives generate estimates with better accuracy. This result is also supported by the second and third case studies for the G1/S transition network and the MAP kinase pathway, respectively. This suggests that the continuous absolute-error criteria lead to more accurate estimates than the corresponding discrete criteria.
We also study the robustness property of these three models to examine the reliability of estimates. Simulation results show that the models with estimated parameters using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
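The core of the continuous criterion can be illustrated in a few lines: interpolate sparse measurements with a spline, differentiate the interpolant, and fit the model parameter by matching the model right-hand side to the spline derivative on a dense grid. The toy logistic module and its rate constant below are invented stand-ins, not the ERK model from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

# Toy regulatory module: logistic activation x' = k*x*(1-x), true k = 2.0.
# Sparse "measurements" stand in for experimental time points.
k_true = 2.0
t_obs = np.linspace(0.2, 3.0, 8)
x_obs = 1.0 / (1.0 + 9.0 * np.exp(-k_true * t_obs))  # analytic solution, x(0) = 0.1

# Step 1: spline interpolation gives a continuous x(t) and its derivative.
spline = CubicSpline(t_obs, x_obs)
t_dense = np.linspace(t_obs[0], t_obs[-1], 200)
x_dense = spline(t_dense)
dx_dense = spline(t_dense, 1)  # first derivative of the interpolant

# Step 2: continuous absolute-error criterion -- match the model RHS to the
# spline derivative over the dense grid, not just at the 8 measurement points.
def cost(k):
    return np.mean(np.abs(dx_dense - k * x_dense * (1.0 - x_dense)))

res = minimize_scalar(cost, bounds=(0.1, 10.0), method="bounded")
print(f"estimated k = {res.x:.3f} (true 2.0)")
```

The paper's higher-order criteria add the second derivative of the interpolant to the mismatch term in the same way.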

  4. Scalable Heuristics for Planning, Placement and Sizing of Flexible AC Transmission System Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, Vladmir; Backhaus, Scott N.; Chertkov, Michael

Aiming to relieve transmission grid congestion and improve or extend the feasibility domain of operations, we build optimization heuristics, generalizing standard AC Optimal Power Flow (OPF), for placement and sizing of Flexible Alternating Current Transmission System (FACTS) devices of the Series Compensation (SC) and Static VAR Compensation (SVC) type. One use of these devices is in resolving the case when the AC OPF solution does not exist because of congestion. Another application is developing a long-term investment strategy for placement and sizing of the SC and SVC devices to reduce operational cost and improve power system operation. SC and SVC devices are represented by modification of the transmission line inductances and by reactive power nodal corrections, respectively. We find one placement and sizing of FACTS devices for multiple scenarios and optimal settings for each scenario simultaneously. Our solution of the nonlinear and nonconvex generalized AC-OPF consists of building a convergent sequence of convex optimizations containing only linear constraints and shows good computational scaling to larger systems. The approach is illustrated on single- and multi-scenario examples of the Matpower case-30 model.

  5. Grid generation and adaptation via Monge-Kantorovich optimization in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Delzanno, Gian Luca; Chacon, Luis; Finn, John M.

    2008-11-01

In a recent paper [1], Monge-Kantorovich (MK) optimization was proposed as a method of grid generation/adaptation in two dimensions (2D). The method is based on the minimization of the L2 norm of grid point displacement, constrained to producing a given positive-definite cell volume distribution (equidistribution constraint). The procedure gives rise to the Monge-Ampère (MA) equation: a single, non-linear scalar equation with no free parameters. The MA equation was solved in Ref. [1] with the Jacobian-Free Newton-Krylov technique, and several challenging test cases were presented in square domains in 2D. Here, we extend the work of Ref. [1]. We first formulate the MK approach in physical domains with curved boundary elements and in 3D. We then show the results of applying it to these more general cases. We show that MK optimization produces optimal grids in which the constraint is satisfied numerically to truncation error. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, submitted to Journal of Computational Physics (2008).
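In one dimension the equidistribution constraint reduces to placing grid points at equal increments of the cumulative density, which conveys the idea without the Monge-Ampère machinery of the 2D/3D method. The density function below is a made-up example, not one of the paper's test cases.

```python
import numpy as np

# 1D analogue of the equidistribution constraint: find grid points x_i such
# that each cell holds equal "mass" of a prescribed density rho(x) on [0, 1].
rho = lambda x: 1.0 + 10.0 * np.exp(-100.0 * (x - 0.5) ** 2)  # cluster near 0.5

xf = np.linspace(0.0, 1.0, 2001)   # fine reference grid
mass = np.concatenate(([0.0],
    np.cumsum(0.5 * (rho(xf[1:]) + rho(xf[:-1])) * np.diff(xf))))  # trapezoid rule
mass /= mass[-1]                   # normalized cumulative mass

# Invert the cumulative mass: equal mass increments -> adapted grid points.
n = 41
x_adapt = np.interp(np.linspace(0.0, 1.0, n), mass, xf)

# Cells near x = 0.5, where rho is large, come out much smaller.
dx = np.diff(x_adapt)
print(f"min cell {dx.min():.4f}, max cell {dx.max():.4f}")
```

In 2D and 3D the same constraint couples the coordinate directions, which is what produces the single scalar MA equation solved in the paper.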

  6. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.

    PubMed

    Lee, Leng-Feng; Umberger, Brian R

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. 
This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility.
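As a minimal illustration of direct collocation (using scipy's SLSQP rather than OpenSim, IPOPT, or fmincon), the sketch below transcribes a toy rest-to-rest minimum-effort problem for a unit point mass into a nonlinear program: states and controls at the collocation nodes are the decision variables, and trapezoidal defect constraints enforce the dynamics. The problem and its analytic optimum (cost 12) are textbook material, not the paper's musculoskeletal models.

```python
import numpy as np
from scipy.optimize import minimize

# Direct collocation (trapezoidal) for a toy problem: move a unit point mass
# from rest at p=0 to rest at p=1 in T=1 while minimizing the integral of u^2.
N = 20                      # collocation intervals
h = 1.0 / N
n = N + 1                   # nodes

def unpack(z):
    return z[:n], z[n:2*n], z[2*n:]   # positions, velocities, controls

def objective(z):
    _, _, u = unpack(z)               # trapezoidal quadrature of u^2
    return h * (0.5 * u[0]**2 + (u[1:-1]**2).sum() + 0.5 * u[-1]**2)

def defects(z):
    p, v, u = unpack(z)
    dp = p[1:] - p[:-1] - 0.5 * h * (v[1:] + v[:-1])   # trapezoidal dynamics
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])
    bc = [p[0], v[0], p[-1] - 1.0, v[-1]]              # boundary conditions
    return np.concatenate([dp, dv, bc])

z0 = np.zeros(3 * n)
res = minimize(objective, z0, method="SLSQP",
               constraints={"type": "eq", "fun": defects},
               options={"maxiter": 300})
p, v, u = unpack(res.x)
print(f"cost = {res.fun:.3f} (analytic optimum 12), u(0) = {u[0]:.2f}")
```

The advantage the paper reports for IPOPT comes from exploiting the sparsity of exactly these defect constraints, which a dense solver such as fmincon (or SLSQP here) cannot do.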

  7. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB

    PubMed Central

    Lee, Leng-Feng

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1–2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. 
This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility. PMID:26835184

  8. Improving spacecraft design using a multidisciplinary design optimization methodology

    NASA Astrophysics Data System (ADS)

    Mosher, Todd Jon

    2000-10-01

    Spacecraft design has gone from maximizing performance under technology constraints to minimizing cost under performance constraints. This is characteristic of the "faster, better, cheaper" movement that has emerged within NASA. Currently spacecraft are "optimized" manually through a tool-assisted evaluation of a limited set of design alternatives. With this approach there is no guarantee that a systems-level focus will be taken and "feasibility" rather than "optimality" is commonly all that is achieved. To improve spacecraft design in the "faster, better, cheaper" era, a new approach using multidisciplinary design optimization (MDO) is proposed. Using MDO methods brings structure to conceptual spacecraft design by casting a spacecraft design problem into an optimization framework. Then, through the construction of a model that captures design and cost, this approach facilitates a quicker and more straightforward option synthesis. The final step is to automatically search the design space. As computer processor speed continues to increase, enumeration of all combinations, while not elegant, is one method that is straightforward to perform. As an alternative to enumeration, genetic algorithms are used and find solutions by reviewing fewer possible solutions with some limitations. Both methods increase the likelihood of finding an optimal design, or at least the most promising area of the design space. This spacecraft design methodology using MDO is demonstrated on three examples. A retrospective test for validation is performed using the Near Earth Asteroid Rendezvous (NEAR) spacecraft design. For the second example, the premise that aerobraking was needed to minimize mission cost and was mission enabling for the Mars Global Surveyor (MGS) mission is challenged. While one might expect no feasible design space for an MGS without aerobraking mission, a counterintuitive result is discovered. 
Several design options that don't use aerobraking are feasible and cost effective. The third example is an original commercial lunar mission entitled Eagle-eye. This example shows how an MDO approach is applied to an original mission with a larger feasible design space. It also incorporates a simplified business case analysis.

  9. SU-E-T-549: A Combinatorial Optimization Approach to Treatment Planning with Non-Uniform Fractions in Intensity Modulated Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papp, D; Unkelbach, J

    2014-06-01

Purpose: Non-uniform fractionation, i.e., delivering distinct dose distributions in two subsequent fractions, can potentially improve outcomes by increasing the biological dose to the target without increasing the dose to healthy tissues. This is possible if both fractions deliver a similar dose to normal tissues (exploiting the fractionation effect) but high single-fraction doses to subvolumes of the target (hypofractionation). Optimization of such treatment plans can be formulated using biological equivalent dose (BED), but leads to intractable nonconvex optimization problems. We introduce a novel optimization approach to address this challenge. Methods: We first optimize a reference IMPT plan using standard techniques that delivers a homogeneous target dose in both fractions. The method then divides the pencil beams into two sets, which are assigned to either fraction one or fraction two. The total intensity of each pencil beam, and therefore the physical dose, remains unchanged compared to the reference plan. The objectives are to maximize the mean BED in the target and to minimize the mean BED in normal tissues, which is a quadratic function of the pencil beam weights. The optimal reassignment of pencil beams to one of the two fractions is formulated as a binary quadratic optimization problem. A near-optimal solution to this problem can be obtained by convex relaxation and randomized rounding. Results: The method is demonstrated for a large arteriovenous malformation (AVM) case treated in two fractions. The algorithm yields a treatment plan which delivers a high dose to parts of the AVM in one of the fractions, but similar doses in both fractions to the normal brain tissue adjacent to the AVM. Using the approach, the mean BED in the target was increased by approximately 10% compared with what would have been possible with a uniform reference plan for the same normal-tissue mean BED.
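The relax-and-round step described above can be sketched generically: solve the continuous relaxation of a binary quadratic program over the box [0,1]^n, then sample binary vectors with probabilities given by the relaxed solution and keep the best. The random convex quadratic below is a stand-in for the paper's BED objective over pencil-beam assignments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal relax-and-round sketch for a binary quadratic program:
# minimize f(x) = x^T Q x + c^T x over x in {0,1}^n, Q positive semidefinite.
n = 12
A = rng.normal(size=(n, n))
Q = A @ A.T / n                    # PSD, so the [0,1]^n relaxation is convex
c = rng.normal(size=n)
f = lambda x: x @ Q @ x + c @ x

# Convex relaxation over the box [0,1]^n via projected gradient descent.
x = np.full(n, 0.5)
for _ in range(500):
    x = np.clip(x - 0.05 * (2 * Q @ x + c), 0.0, 1.0)

# Randomized rounding: sample binary vectors with P(x_i = 1) = relaxed x_i
# and keep the best sample. The relaxed value lower-bounds the binary optimum.
samples = [(rng.random(n) < x).astype(float) for _ in range(200)]
best = min(samples, key=f)
print(f"relaxed value {f(x):.3f}, best rounded value {f(best):.3f}")
```

The rounded solution is feasible by construction, and the gap between the relaxed and rounded values bounds how far it can be from optimal.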

  10. Multicriteria plan optimization in the hands of physicians: a pilot study in prostate cancer and brain tumors.

    PubMed

    Müller, Birgit S; Shih, Helen A; Efstathiou, Jason A; Bortfeld, Thomas; Craft, David

    2017-11-06

    The purpose of this study was to demonstrate the feasibility of physician driven planning in intensity modulated radiotherapy (IMRT) with a multicriteria optimization (MCO) treatment planning system and template based plan optimization. Exploiting the full planning potential of MCO navigation, this alternative planning approach intends to improve planning efficiency and individual plan quality. Planning was retrospectively performed on 12 brain tumor and 10 post-prostatectomy prostate patients previously treated with MCO-IMRT. For each patient, physicians were provided with a template-based generated Pareto surface of optimal plans to navigate, using the beam angles from the original clinical plans. We compared physician generated plans to clinically delivered plans (created by dosimetrists) in terms of dosimetric differences, physician preferences and planning times. Plan qualities were similar, however physician generated and clinical plans differed in the prioritization of clinical goals. Physician derived prostate plans showed significantly better sparing of the high dose rectum and bladder regions (p(D1) < 0.05; D1: dose received by 1% of the corresponding structure). Physicians' brain tumor plans indicated higher doses for targets and brainstem (p(D1) < 0.05). Within blinded plan comparisons physicians preferred the clinical plans more often (brain: 6:3 out of 12, prostate: 2:6 out of 10) (not statistically significant). While times of physician involvement were comparable for prostate planning, the new workflow reduced the average involved time for brain cases by 30%. Planner times were reduced for all cases. Subjective benefits, such as a better understanding of planning situations, were observed by clinicians through the insight into plan optimization and experiencing dosimetric trade-offs. We introduce physician driven planning with MCO for brain and prostate tumors as a feasible planning workflow. 
The proposed approach standardizes the planning process by utilizing site specific templates and integrates physicians more tightly into treatment planning. Physicians' navigated plan qualities were comparable to the clinical plans. Given the reduction of planning time of the planner and the equal or lower planning time of physicians, this approach has the potential to improve departmental efficiencies.

  11. Sensitivity Analysis and Optimization of Enclosure Radiation with Applications to Crystal Growth

    NASA Technical Reports Server (NTRS)

    Tiller, Michael M.

    1995-01-01

In engineering, simulation software is often used as a convenient means for carrying out experiments to evaluate physical systems. The benefit of using simulations as 'numerical' experiments is that the experimental conditions can be easily modified and repeated at much lower cost than the comparable physical experiment. The goal of these experiments is to 'improve' the process or result of the experiment. In most cases, the computational experiments employ the same trial-and-error approach as their physical counterparts. When using this approach for complex systems, the cause-and-effect relationships of the system may never be fully understood and efficient strategies for improvement never utilized. However, it is possible when running simulations to accurately and efficiently determine the sensitivity of the system results with respect to simulation parameters (e.g., initial conditions, boundary conditions, and material properties) by manipulating the underlying computations. This results in a better understanding of the system dynamics and gives us efficient means to improve processing conditions. We begin by discussing the steps involved in performing simulations. Then we consider how sensitivity information about simulation results can be obtained and ways this information may be used to improve the process or result of the experiment. Next, we discuss optimization and the efficient algorithms which use sensitivity information. We draw on all this information to propose a generalized approach for integrating simulation and optimization, with an emphasis on software programming issues. After discussing our approach to simulation and optimization we consider an application involving crystal growth. This application is interesting because it includes radiative heat transfer.
We discuss the computation of radiative view factors and the impact this mode of heat transfer has on our approach. Finally, we demonstrate the results of our optimization.

  12. Intervention in gene regulatory networks with maximal phenotype alteration.

    PubMed

    Yousefi, Mohammadmahdi R; Dougherty, Edward R

    2013-07-15

A basic issue for translational genomics is to model gene interaction via gene regulatory networks (GRNs) and thereby provide an informatics environment to study the effects of intervention (say, via drugs) and to derive effective intervention strategies. Taking the view that the phenotype is characterized by the long-run behavior (steady-state distribution) of the network, we desire interventions that optimally move the probability mass from undesirable to desirable states. Heretofore, two external control approaches have been taken to shift the steady-state mass of a GRN: (i) use a user-defined cost function for which a desirable shift of the steady-state mass is a by-product and (ii) use heuristics to design a greedy algorithm. Neither approach provides an optimal control policy relative to long-run behavior. We use a linear programming approach to optimally shift the steady-state mass from undesirable to desirable states; that is, optimization is directly based on the amount of shift and therefore must outperform previously proposed methods. Moreover, the same basic linear programming structure is used for both unconstrained and constrained optimization, where in the latter case, constraints on the optimization limit the amount of mass that may be shifted to 'ambiguous' states, these being states that are not directly undesirable relative to the pathology of interest but which bear some perceived risk. We apply the method to probabilistic Boolean networks, but the theory applies to any Markovian GRN. Supplementary materials, including the simulation results, MATLAB source code and a description of suboptimal methods, are available at http://gsp.tamu.edu/Publications/supplementary/yousefi13b. Contact: edward@ece.tamu.edu. Supplementary data are available at Bioinformatics online.
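A standard way to realize the linear-programming idea is the average-reward LP over occupation measures y(s, a), whose stationarity constraints encode the steady-state distribution induced by a control policy. The sketch below applies it to a tiny random Markovian network; it is a generic illustration, not the authors' formulation or data.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Average-reward LP over occupation measures y(s, a): choose a control policy
# for a small Markovian network so that the long-run (steady-state) probability
# mass on the desirable states is maximized.
S, A = 4, 2
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
desirable = np.array([0, 0, 1, 1])           # states 2 and 3 are desirable

# Maximize sum_{s,a} desirable[s] * y(s, a)  (linprog minimizes, so negate).
cost = -np.repeat(desirable, A).astype(float)

# Stationarity: for each s', sum_a y(s', a) - sum_{s,a} P(s' | s, a) y(s, a) = 0,
# plus the normalization sum y = 1.
A_eq = np.zeros((S + 1, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = (1.0 if s == sp else 0.0) - P[s, a, sp]
A_eq[S, :] = 1.0
b_eq = np.append(np.zeros(S), 1.0)

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
y = res.x.reshape(S, A)
print(f"maximal desirable steady-state mass: {-res.fun:.3f}")
```

The paper's constrained variant adds linear bounds on the mass that may land in 'ambiguous' states, which fits the same LP structure.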

  13. Optimal land use management for soil erosion control by using an interval-parameter fuzzy two-stage stochastic programming approach.

    PubMed

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  14. Optimal Land Use Management for Soil Erosion Control by Using an Interval-Parameter Fuzzy Two-Stage Stochastic Programming Approach

    NASA Astrophysics Data System (ADS)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  15. Sound-field reproduction in-room using optimal control techniques: simulations in the frequency domain.

    PubMed

    Gauthier, Philippe-Aubert; Berry, Alain; Woszczyk, Wieslaw

    2005-02-01

This paper describes the simulations and results obtained when applying optimal control to progressive sound-field reproduction (mainly for audio applications) over an area using multiple monopole loudspeakers. The model simulates a reproduction system that operates either in free field or in a closed space approaching a typical listening room, and is based on optimal control in the frequency domain. This rather simple approach is chosen for the purpose of physical investigation, especially in terms of sensing microphone and reproduction loudspeaker configurations. Other issues of interest concern the comparison with wave-field synthesis and the control mechanisms. The results suggest that in-room reproduction of a sound field using active control can be achieved with a residual normalized squared error significantly lower than that of open-loop wave-field synthesis in the same situation. Active reproduction techniques have the advantage of automatically compensating for the room's natural dynamics. For the considered cases, the simulations show that optimal control results are not sensitive (in terms of reproduction error) to wall absorption in the reproduction room. A special surrounding configuration of sensors is introduced for a sensor-free listening area in free field.
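At a single frequency, the optimal control described here amounts to a Tikhonov-regularized least-squares problem for the loudspeaker source strengths driving the pressure at the sensing microphones toward a target field. The geometry, frequency, and regularization weight below are illustrative choices for the free-field case, not the paper's configuration.

```python
import numpy as np

# Frequency-domain optimal control sketch for sound-field reproduction:
# regularized least squares drives 16 monopole loudspeakers so that the
# pressure at 36 sensing microphones matches a target plane wave.
freq, c = 300.0, 343.0
k = 2 * np.pi * freq / c

# Sources on a circle of radius 2 m; microphones on a grid in the listening area.
src = np.array([[np.cos(a), np.sin(a)]
                for a in np.linspace(0, 2 * np.pi, 16, endpoint=False)]) * 2.0
mic = np.array([[x, y] for x in np.linspace(-0.5, 0.5, 6)
                       for y in np.linspace(-0.5, 0.5, 6)])

# Free-field monopole transfer matrix G[m, l] between sources and microphones.
r = np.linalg.norm(mic[:, None, :] - src[None, :, :], axis=2)
G = np.exp(-1j * k * r) / (4 * np.pi * r)

# Target: unit-amplitude plane wave travelling along +x.
p_target = np.exp(-1j * k * mic[:, 0])

# Tikhonov-regularized least-squares source strengths (the "optimal control").
lam = 1e-4 * np.trace(G.conj().T @ G).real / G.shape[1]
q = np.linalg.solve(G.conj().T @ G + lam * np.eye(G.shape[1]),
                    G.conj().T @ p_target)

err = np.linalg.norm(G @ q - p_target) ** 2 / np.linalg.norm(p_target) ** 2
print(f"normalized squared reproduction error: {err:.5f}")
```

In a room, G would include reflections measured or modelled at the sensors, which is how the closed-loop approach compensates for the room's dynamics automatically.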

  16. Policy tree optimization for adaptive management of water resources systems

    NASA Astrophysics Data System (ADS)

    Herman, Jonathan; Giuliani, Matteo

    2017-04-01

Water resources systems must cope with irreducible uncertainty in supply and demand, requiring policy alternatives capable of adapting to a range of possible future scenarios. Recent studies have developed adaptive policies based on "signposts" or "tipping points" that signal the need to update the policy. However, there remains a need for a general method to optimize the choice of the signposts to be used and their threshold values. This work contributes a general framework and computational algorithm for designing adaptive policies as a tree structure (i.e., a hierarchical set of logical rules) using a simulation-optimization approach based on genetic programming. Given a set of feature variables (e.g., reservoir level, inflow observations, inflow forecasts), the resulting policy defines both the optimal reservoir operations and the conditions under which those operations should be triggered. We demonstrate the approach using Folsom Reservoir (California) as a case study, in which operating policies must balance the risk of both floods and droughts. Numerical results show that the tree-based policies outperform those designed via Dynamic Programming. In addition, they display good adaptive capacity to a changing climate, successfully adapting the reservoir operations across a large set of uncertain climate scenarios.
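A policy tree of the kind described can be represented as nested (feature, threshold, branch, branch) rules with actions at the leaves. In the paper such trees are optimized by genetic programming; the tree, thresholds, and inflow model below are hand-picked toys that only illustrate how a tree policy is evaluated inside a simulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tree nodes: (feature_index, threshold, low_branch, high_branch); leaves are
# release fractions. Feature 0 = current storage. A toy reservoir policy:
tree = (0, 40.0,
        0.1,                  # storage < 40: conserve (release 10%)
        (0, 80.0, 0.3, 0.6))  # else: moderate release, or 60% when nearly full

def evaluate(tree, features):
    """Walk the nested rules until a leaf (an action) is reached."""
    if not isinstance(tree, tuple):
        return tree
    idx, thr, low, high = tree
    return evaluate(low if features[idx] < thr else high, features)

# Simulate storage dynamics under random inflows, capped at capacity 100.
storage, spills = 50.0, 0.0
for inflow in rng.gamma(2.0, 5.0, size=365):
    release = evaluate(tree, [storage]) * storage
    storage = storage - release + inflow
    if storage > 100.0:
        spills += storage - 100.0
        storage = 100.0
print(f"end-of-year storage {storage:.1f}, total spill {spills:.1f}")
```

Genetic programming searches over tree shapes, features, thresholds, and leaf actions by scoring each candidate tree with exactly this kind of simulation.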

  17. Validating the Performance of the FHWA Work Zone Model Version 1.0: A Case Study Along I-91 in Springfield, Massachusetts

    DOT National Transportation Integrated Search

    2017-08-01

    Central to the effective design of work zones is being able to understand how drivers behave as they approach and enter a work zone area. States use simulation tools in modeling freeway work zones to predict work zone impacts and to select optimal de...

  18. Stuttering Treatment for a School-Age Child with Down Syndrome: A Descriptive Case Report

    ERIC Educational Resources Information Center

    Harasym, Jessica; Langevin, Marilyn

    2012-01-01

    Background: Little is known about optimal treatment approaches and stuttering treatment outcomes for children with Down syndrome. Aims and method: The purpose of this study was to investigate outcomes for a child with Down syndrome who received a combination of fluency shaping therapy and parent delivered contingencies for normally fluent speech,…

  19. Understanding the Development of Mathematical Work in the Context of the Classroom

    ERIC Educational Resources Information Center

    Kuzniak, Alain; Nechache, Assia; Drouhard, J. P.

    2016-01-01

    According to our approach to mathematics education, the optimal aim of the teaching of mathematics is to assist students in achieving efficient mathematical work. But, what does efficient exactly mean in that case? And how can teachers reach this objective? The model of Mathematical Working Spaces with its three dimensions--semiotic, instrumental,…

  20. Optimization of parameter values for complex pulse sequences by simulated annealing: application to 3D MP-RAGE imaging of the brain.

    PubMed

    Epstein, F H; Mugler, J P; Brookeman, J R

    1994-02-01

    A number of pulse sequence techniques, including magnetization-prepared gradient echo (MP-GRE), segmented GRE, and hybrid RARE, employ a relatively large number of variable pulse sequence parameters and acquire the image data during a transient signal evolution. These sequences have recently been proposed and/or used for clinical applications in the brain, spine, liver, and coronary arteries. Thus, the need for a method of deriving optimal pulse sequence parameter values for this class of sequences now exists. Due to the complexity of these sequences, conventional optimization approaches, such as applying differential calculus to signal difference equations, are inadequate. We have developed a general framework for adapting the simulated annealing algorithm to pulse sequence parameter value optimization, and applied this framework to the specific case of optimizing the white matter-gray matter signal difference for a T1-weighted variable flip angle 3D MP-RAGE sequence. Using our algorithm, the values of 35 sequence parameters, including the magnetization-preparation RF pulse flip angle and delay time, 32 flip angles in the variable flip angle gradient-echo acquisition sequence, and the magnetization recovery time, were derived. Optimized 3D MP-RAGE achieved up to a 130% increase in white matter-gray matter signal difference compared with optimized 3D RF-spoiled FLASH with the same total acquisition time. The simulated annealing approach was effective at deriving optimal parameter values for a specific 3D MP-RAGE imaging objective, and may be useful for other imaging objectives and sequences in this general class.
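    Although the objective in the record above is an MR signal-difference model over 35 sequence parameters, the simulated annealing core it adapts is generic. A minimal sketch on a toy two-parameter objective (all values illustrative, not the paper's pulse-sequence model):

```python
import math
import random

def simulated_annealing(cost, x0, step, t0=1.0, cooling=0.995,
                        iters=5000, seed=0):
    """Generic simulated annealing: always accept improvements, and
    accept worse moves with probability exp(-delta/T) so the search
    can escape local minima while the temperature T cools."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Toy stand-in for a "signal difference" objective (maximization
# posed as minimizing the negative); true optimum at (0.5, 0.2).
cost = lambda p: (p[0] - 0.5) ** 2 + (p[1] - 0.2) ** 2
best, fbest = simulated_annealing(cost, [0.0, 0.0], step=0.1)
print(best, fbest)
```

    The same loop applies unchanged when `cost` is a Bloch-equation simulation of the transient signal evolution; only the evaluation becomes expensive.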

  1. Optimizing DER Participation in Inertial and Primary-Frequency Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Zhao, Changhong; Guggilam, Swaroop

    This paper develops an approach to enable the optimal participation of distributed energy resources (DERs) in inertial and primary-frequency response alongside conventional synchronous generators. Leveraging a reduced-order model description of frequency dynamics, DERs' synthetic inertias and droop coefficients are designed to meet time-domain performance objectives of frequency overshoot and steady-state regulation. Furthermore, an optimization-based method centered around classical economic dispatch is developed to ensure that DERs share the power injections for inertial- and primary-frequency response in proportion to their power ratings. Simulations for a modified New England test-case system composed of ten synchronous generators and six instances of the IEEE 37-node test feeder with frequency-responsive DERs validate the design strategy.

  2. Optimal intervention strategies for cholera outbreak by education and chlorination

    NASA Astrophysics Data System (ADS)

    Bakhtiar, Toni

    2016-01-01

    This paper discusses the control of infectious diseases in the framework of the optimal control approach. A case study on cholera control was conducted by considering two control strategies, namely education and chlorination. We split the education control into one component addressing person-to-person behaviour and another concerning person-to-environment conduct. The model comprises two interacting populations: a human population, which follows an SIR model, and a pathogen population. The Pontryagin maximum principle was applied to derive a set of differential equations consisting of the dynamical and adjoint systems as optimality conditions. The fourth-order Runge-Kutta method was then exploited to solve the equation system numerically. An illustrative example was provided to assess the effectiveness of the control strategies across a set of control scenarios.
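    The numerical workhorse mentioned above, the classical fourth-order Runge-Kutta step, can be sketched on a bare SIR system. The rates below are illustrative, not the paper's calibrated cholera model (which also tracks a pathogen compartment and control terms):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y),
    with y given as a list of state components."""
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# SIR dynamics with illustrative transmission/recovery rates.
beta, gamma = 0.4, 0.1

def sir(t, y):
    s, i, r = y
    return [-beta*s*i, beta*s*i - gamma*i, gamma*i]

y, h = [0.99, 0.01, 0.0], 0.1
for step in range(1000):          # integrate out to t = 100
    y = rk4_step(sir, step*h, y, h)
print(y)  # by t = 100 most of the population has moved to R
```

    In a forward-backward sweep for the optimality conditions, the same stepper integrates the state system forward and the adjoint system backward in time.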

  3. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckdahn, Rainer, E-mail: Rainer.Buckdahn@univ-brest.fr; Li, Juan, E-mail: juanli@sdu.edu.cn; Ma, Jin, E-mail: jinma@usc.edu

    In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend nonlinearly on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second order variational equations and the corresponding second order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  5. Long range personalized cancer treatment strategies incorporating evolutionary dynamics.

    PubMed

    Yeang, Chen-Hsiang; Beckman, Robert A

    2016-10-22

    Current cancer precision medicine strategies match therapies to static consensus molecular properties of an individual's cancer, thus determining the next therapeutic maneuver. These strategies typically maintain a constant treatment while the cancer is not worsening. However, cancers feature complicated sub-clonal structure and dynamic evolution. We have recently shown, in a comprehensive simulation of two non-cross resistant therapies across a broad parameter space representing realistic tumors, that substantial improvement in cure rates and median survival can be obtained utilizing dynamic precision medicine strategies. These dynamic strategies explicitly consider intratumoral heterogeneity and evolutionary dynamics, including predicted future drug resistance states, and reevaluate optimal therapy every 45 days. However, the optimization is performed in single 45 day steps ("single-step optimization"). Herein we evaluate analogous strategies that think multiple therapeutic maneuvers ahead, considering potential outcomes at 5 steps ahead ("multi-step optimization") or 40 steps ahead ("adaptive long term optimization (ALTO)") when recommending the optimal therapy in each 45 day block, in simulations involving both 2 and 3 non-cross resistant therapies. We also evaluate an ALTO approach for situations where simultaneous combination therapy is not feasible ("Adaptive long term optimization: serial monotherapy only (ALTO-SMO)"). Simulations utilize populations of 764,000 and 1,700,000 virtual patients for 2 and 3 drug cases, respectively. Each virtual patient represents a unique clinical presentation including sizes of major and minor tumor subclones, growth rates, evolution rates, and drug sensitivities. While multi-step optimization and ALTO provide no significant average survival benefit, cure rates are significantly increased by ALTO. 
Furthermore, in the subset of individual virtual patients demonstrating clinically significant difference in outcome between approaches, by far the majority show an advantage of multi-step or ALTO over single-step optimization. ALTO-SMO delivers cure rates superior or equal to those of single- or multi-step optimization, in 2 and 3 drug cases respectively. In selected virtual patients incurable by dynamic precision medicine using single-step optimization, analogous strategies that "think ahead" can deliver long-term survival and cure without any disadvantage for non-responders. When therapies require dose reduction in combination (due to toxicity), optimal strategies feature complex patterns involving rapidly interleaved pulses of combinations and high dose monotherapy. This article was reviewed by Wendy Cornell, Marek Kimmel, and Andrzej Swierniak. Wendy Cornell and Andrzej Swierniak are external reviewers (not members of the Biology Direct editorial board). Andrzej Swierniak was nominated by Marek Kimmel.

  6. How to resolve the SLOSS debate: lessons from species-diversity models.

    PubMed

    Tjørve, Even

    2010-05-21

    The SLOSS debate--whether a single large reserve will conserve more species than several small--of the 1970s and 1980s never came to a resolution. The first rule of reserve design states that one large reserve will conserve the most species, a rule which has been heavily contested. Empirical data seem to undermine the reliance on general rules, indicating that the best strategy varies from case to case. Modeling has also been deployed in this debate. We may divide the modeling approaches to the SLOSS enigma into dynamic and static approaches. Dynamic approaches, covered by the equilibrium theory of island biogeography and metapopulation theory, look at immigration, emigration, and extinction. Static approaches, such as the one in this paper, illustrate how several factors affect the number of reserves that will save the most species. This article examines the effect of different factors through the application of species-diversity models. These models combine species-area curves for two or more reserves, correcting for the species overlap between them. Such models generate several predictions on how different factors affect the optimal number of reserves. The main predictions are: Fewer and larger reserves are favored by increased species overlap between reserves, by faster growth in number of species with reserve area increase, by higher minimum-area requirements, by spatial aggregation and by uneven species abundances. The effect of increased distance between smaller reserves depends on the two counteracting factors: decreased species density caused by isolation (which enhances minimum-area effect) and decreased overlap between isolates. The first decreases the optimal number of reserves; the second increases the optimal number. The effect of total reserve-system area depends both on the shape of the species-area curve and on whether overlap between reserves changes with scale. 
The approach to modeling presented here has several implications for conservational strategies. It illustrates well how the SLOSS enigma can be reduced to a question of the shape of the species-area curve that is expected or generated from reserves of different sizes and a question of overlap between isolates (or reserves). Copyright (c) 2010 Elsevier Ltd. All rights reserved.
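    The species-area reasoning above can be sketched with the standard power law S = cA^z, combining two equal reserves while correcting for overlap. The constants and overlap fractions below are purely illustrative:

```python
# Species-area sketch: S = c * A^z. Compare one large reserve of
# area 2A against two reserves of area A whose species lists overlap
# by a fraction "ov". Parameters c, z, ov are hypothetical.

def species(area, c=10.0, z=0.25):
    return c * area ** z

def two_reserves(area, ov):
    s = species(area)
    return 2 * s - ov * s          # union of two equal lists, overlap ov

one_large = species(2.0)
# High overlap favors the single large reserve; low overlap favors
# several small ones, matching the predictions above.
print(two_reserves(1.0, ov=0.9) < one_large < two_reserves(1.0, ov=0.2))  # True
```

    The crossover overlap at which the two strategies tie follows directly from setting `two_reserves(A, ov) == species(2*A)` and solving for `ov`.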

  7. Visualising Pareto-optimal trade-offs helps move beyond monetary-only criteria for water management decisions

    NASA Astrophysics Data System (ADS)

    Hurford, Anthony; Harou, Julien

    2014-05-01

    Water related eco-system services are important to the livelihoods of the poorest sectors of society in developing countries. Degradation or loss of these services can increase the vulnerability of people, decreasing their capacity to support themselves. New approaches to help guide water resources management decisions are needed which account for the non-market value of ecosystem goods and services. In case studies from Brazil and Kenya we demonstrate the capability of many-objective Pareto-optimal trade-off analysis to help decision makers balance economic and non-market benefits from the management of existing multi-reservoir systems. A multi-criteria search algorithm is coupled to a water resources management simulator of each basin to generate a set of Pareto-approximate trade-offs representing the best-case management decisions. In both cases, volume dependent reservoir release rules are the management decisions being optimised. In the Kenyan case we further assess the impacts of proposed irrigation investments, and how the possibility of new investments impacts the system's trade-offs. During the multi-criteria search (optimisation), performance of different sets of management decisions (policies) is assessed against case-specific objective functions representing provision of water supply and irrigation, hydropower generation and maintenance of ecosystem services. Results are visualised as trade-off surfaces to help decision makers understand the impacts of different policies on a broad range of stakeholders and to assist in decision-making. These case studies show how the approach can reveal unexpected opportunities for win-win solutions, and quantify the trade-offs between investing to increase agricultural revenue and negative impacts on protected ecosystems which support rural livelihoods.
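    Independent of any particular basin model, the non-dominated filtering behind a Pareto-approximate trade-off set can be sketched in a few lines. The policy scores below are hypothetical (revenue vs. ecosystem-service provision, larger is better):

```python
def pareto_front(points):
    """Return the non-dominated subset of objective vectors
    (larger is better on every objective)."""
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (revenue, ecosystem-service) scores for candidate policies.
policies = [(3, 9), (5, 5), (7, 4), (6, 6), (9, 1), (4, 4)]
# (5, 5) and (4, 4) are dominated by (6, 6); the rest are trade-offs.
print(sorted(pareto_front(policies)))  # [(3, 9), (6, 6), (7, 4), (9, 1)]
```

    A multi-criteria search algorithm maintains exactly such a non-dominated archive while the simulator scores each candidate set of release rules.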

  8. Real-time terminal area trajectory planning for runway independent aircraft

    NASA Astrophysics Data System (ADS)

    Xue, Min

    The increasing demand for commercial air transportation results in delays due to traffic queues that form bottlenecks along final approach and departure corridors. In urban areas, it is often infeasible to build new runways, and regardless of automation upgrades traffic must remain separated to avoid the wakes of previous aircraft. Vertical or short takeoff and landing aircraft serving as Runway Independent Aircraft (RIA) can increase passenger throughput at major urban airports via the use of vertiports or stub runways. The concept of simultaneous non-interfering (SNI) operations has been proposed to reduce traffic delays by creating approach and departure corridors that do not intersect existing fixed-wing routes. However, SNI trajectories open new routes that may overfly noise-sensitive areas, and RIA may generate more noise than traditional jet aircraft, particularly on approach. In this dissertation, we develop efficient SNI noise abatement procedures applicable to RIA. First, we introduce a methodology based on modified approximated cell-decomposition and Dijkstra's search algorithm to optimize longitudinal plane (2-D) RIA trajectories over a cost function that minimizes noise, time, and fuel use. Then, we extend the trajectory optimization model to 3-D with a k-ary tree as the discrete search space. We incorporate geographic information system (GIS) data, specifically population, into our objective function, and focus on a practical case study: the design of SNI RIA approach procedures to Baltimore-Washington International airport. Because solutions were represented as trim state sequences, we incorporated smooth transition between segments to enable more realistic cost estimates. Due to the significant computational complexity, we investigated alternative more efficient optimization techniques applicable to our nonlinear, non-convex, heavily constrained, and discontinuous objective function. 
    Comparing genetic algorithm (GA) and adaptive simulated annealing (ASA) with our original Dijkstra's algorithm, ASA is identified as the most efficient algorithm for terminal area trajectory optimization. The effects of design parameter discretization are analyzed, with results indicating a SNI procedure with 3-4 segments effectively balances simplicity with cost minimization. Finally, pilot control commands were implemented and generated via optimization-based inverse simulation to validate execution of the optimal approach trajectories.
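    The Dijkstra search underlying the 2-D trajectory optimization can be sketched over a toy cell graph; the edge weights stand in for a combined noise/time/fuel cost and are purely illustrative:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path by Dijkstra's algorithm over a weighted digraph
    given as {node: [(neighbor, cost), ...]}."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:              # walk predecessors back
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Tiny cell graph; weights are hypothetical composite costs.
graph = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
}
print(dijkstra(graph, "A", "D"))  # (['A', 'B', 'C', 'D'], 4.0)
```

    In the cell-decomposition setting, nodes are cells of the terminal airspace and each edge weight is evaluated from the noise, time, and fuel models along that transition.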

  9. Phase averaging method for the modeling of the multiprobe and cutaneous cryosurgery

    NASA Astrophysics Data System (ADS)

    Shilnikov, K. E.; Kudryashov, N. A.; Gaiur, I. Y.

    2017-12-01

    In this paper we consider the problem of planning and optimization of cutaneous and multiprobe cryosurgery operations. An explicit scheme based on a finite volume approximation of the phase-averaged Pennes bioheat transfer model is applied. The flux relaxation method is used to improve the stability of the scheme. Skin tissue is treated as a strongly inhomogeneous medium. The computerized planning tool is tested on model cryotip-based and cutaneous cryosurgery problems. For the cutaneous case, mounting an additional freezing element is studied as an approach to optimizing the propagation of the cellular necrosis front.

  10. Pilot-optimal augmentation synthesis

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.

    1978-01-01

    An augmentation synthesis method usable in the absence of quantitative handling qualities specifications, and yet explicitly including design objectives based on pilot-rating concepts, is presented. The algorithm involves the unique approach of simultaneously solving for the stability augmentation system (SAS) gains, pilot equalization and pilot rating prediction via optimal control techniques. Simultaneous solution is required in this case since the pilot model (gains, etc.) depends upon the augmented plant dynamics, and the augmentation is obviously not a priori known. Another special feature is the use of the pilot's objective function (from which the pilot model evolves) to design the SAS.

  11. Maximizing Exposure Therapy: An Inhibitory Learning Approach

    PubMed Central

    Craske, Michelle G.; Treanor, Michael; Conway, Chris; Zbozinek, Tomislav; Vervliet, Bram

    2014-01-01

    Exposure therapy is an effective approach for treating anxiety disorders, although a substantial number of individuals fail to benefit or experience a return of fear after treatment. Research suggests that anxious individuals show deficits in the mechanisms believed to underlie exposure therapy, such as inhibitory learning. Targeting these processes may help improve the efficacy of exposure-based procedures. Although evidence supports an inhibitory learning model of extinction, there has been little discussion of how to implement this model in clinical practice. The primary aim of this paper is to provide examples to clinicians for how to apply this model to optimize exposure therapy with anxious clients, in ways that distinguish it from a ‘fear habituation’ approach and ‘belief disconfirmation’ approach within standard cognitive-behavior therapy. Exposure optimization strategies include 1) expectancy violation, 2) deepened extinction, 3) occasional reinforced extinction, 4) removal of safety signals, 5) variability, 6) retrieval cues, 7) multiple contexts, and 8) affect labeling. Case studies illustrate methods of applying these techniques with a variety of anxiety disorders, including obsessive-compulsive disorder, posttraumatic stress disorder, social phobia, specific phobia, and panic disorder. PMID:24864005

  12. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
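    For a discrete toy version of the design criterion above, the expected relative entropy (expected KL divergence from prior to posterior) of a candidate measurement can be computed directly. The two hypothetical "wells" below are illustrative, not from the study:

```python
import math

def expected_info_gain(prior, likelihoods):
    """Expected KL divergence D(posterior || prior), averaged over
    measurement outcomes. prior[i] = P(theta_i);
    likelihoods[y][i] = P(y | theta_i)."""
    gain = 0.0
    for lik in likelihoods:                       # loop over outcomes y
        p_y = sum(p * l for p, l in zip(prior, lik))
        if p_y == 0.0:
            continue
        post = [p * l / p_y for p, l in zip(prior, lik)]
        gain += p_y * sum(q * math.log(q / p)
                          for q, p in zip(post, prior) if q > 0)
    return gain

# Two hypothetical wells observing a binary tracer: well A discriminates
# the two source hypotheses, well B is uninformative.
prior = [0.5, 0.5]
well_A = [[0.9, 0.1], [0.1, 0.9]]   # rows: P(y | theta) for y in {0, 1}
well_B = [[0.5, 0.5], [0.5, 0.5]]
print(expected_info_gain(prior, well_A) > expected_info_gain(prior, well_B))  # True
```

    The design step then amounts to evaluating this expectation for every candidate sampling location (via the transport surrogate) and picking the maximizer.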

  13. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  15. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications namely, gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulation such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. 
The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.

  16. Ultimate open pit stochastic optimization

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Caron, Josiane

    2013-02-01

    Classical open pit optimization (maximum closure problem) is made on block estimates, without directly considering the block grades uncertainty. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than the classical or simulated pit. The main factor controlling the relative gain of stochastic optimization compared to classical approach and simulated pit is shown to be the information level as measured by the boreholes spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase both with the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
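    The gap between optimizing on expected grades and optimizing on expected profits, noted above, comes from the nonlinearity of the block-value function. A one-block sketch (all prices, costs, and grades hypothetical):

```python
# Per-block value when the process/waste decision is made after the
# grade is known: value(g) = max(price*g - proc_cost, 0) - mine_cost.
# Because value() is convex in grade, E[value(g)] >= value(E[g])
# (Jensen), so the pit computed on simulated profits can differ from
# the pit computed on estimated grades.

price, proc_cost, mine_cost = 10.0, 6.0, 1.0

def block_value(grade):
    return max(price * grade - proc_cost, 0.0) - mine_cost

simulated_grades = [0.2, 0.4, 0.9, 1.3]     # conditional simulations
expected_grade = sum(simulated_grades) / len(simulated_grades)

classical = block_value(expected_grade)      # value at the expected grade
stochastic = sum(block_value(g) for g in simulated_grades) / len(simulated_grades)
print(round(classical, 9), stochastic)       # → 0.0 1.5
```

    Scaling this comparison to a full block model, with the maximum-closure constraint on which blocks can be reached, reproduces the stochastic-versus-classical contrast described in the abstract.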

  17. Sensitivity Analysis in Sequential Decision Models.

    PubMed

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
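    A stripped-down version of the probabilistic sensitivity idea can be shown on a one-step decision rather than a full MDP. The payoffs and the uncertainty range below are hypothetical:

```python
import random

# Toy probabilistic sensitivity analysis: draw the uncertain parameter
# from its distribution, find the optimal action under each draw, and
# report how often the base-case optimal action stays optimal (its
# "acceptability"). All numbers are illustrative.

rng = random.Random(42)

def optimal_action(p_success):
    ev_treat = p_success * 10.0 - 2.0   # hypothetical benefit minus cost
    ev_wait = 3.0
    return "treat" if ev_treat > ev_wait else "wait"

base_case = optimal_action(0.6)         # "treat": 0.6*10 - 2 = 4 > 3
draws = [optimal_action(rng.uniform(0.3, 0.9)) for _ in range(10_000)]
acceptability = draws.count(base_case) / len(draws)
print(base_case, round(acceptability, 2))   # acceptability near 2/3
```

    Sweeping a stakeholder's willingness-to-accept threshold over such acceptability values, for each uncertain parameter jointly sampled, traces out the policy acceptability curve described above.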

  18. Portfolio Optimization of Nanomaterial Use in Clean Energy Technologies.

    PubMed

    Moore, Elizabeth A; Babbitt, Callie W; Gaustad, Gabrielle; Moore, Sean T

    2018-04-03

    While engineered nanomaterials (ENMs) are increasingly incorporated in diverse applications, risks of ENM adoption remain difficult to predict and mitigate proactively. Current decision-making tools do not adequately account for ENM uncertainties including varying functional forms, unique environmental behavior, economic costs, unknown supply and demand, and upstream emissions. The complexity of the ENM system necessitates a novel approach: in this study, the adaptation of an investment portfolio optimization model is demonstrated for optimization of ENM use in renewable energy technologies. Where a traditional investment portfolio optimization model maximizes return on investment through optimal selection of stock, ENM portfolio optimization maximizes the performance of energy technology systems by optimizing selective use of ENMs. Cumulative impacts of multiple ENM material portfolios are evaluated in two case studies: organic photovoltaic cells (OPVs) for renewable energy and lithium-ion batteries (LIBs) for electric vehicles. Results indicate ENM adoption is dependent on overall performance and variance of the material, resource use, environmental impact, and economic trade-offs. From a sustainability perspective, improved clean energy applications can help extend product lifespans, reduce fossil energy consumption, and substitute ENMs for scarce incumbent materials.

  19. Merging spatially variant physical process models under an optimized systems dynamics framework.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cain, William O.; Lowry, Thomas Stephen; Pierce, Suzanne A.

    The complexity of water resource issues, its interconnectedness to other systems, and the involvement of competing stakeholders often overwhelm decision-makers and inhibit the creation of clear management strategies. While a range of modeling tools and procedures exist to address these problems, they tend to be case specific and generally emphasize either a quantitative and overly analytic approach or present a qualitative dialogue-based approach lacking the ability to fully explore consequences of different policy decisions. The integration of these two approaches is needed to drive toward final decisions and engender effective outcomes. Given these limitations, the Computer Assisted Dispute Resolution system (CADRe) was developed to aid in stakeholder inclusive resource planning. This modeling and negotiation system uniquely addresses resource concerns by developing a spatially varying system dynamics model as well as innovative global optimization search techniques to maximize outcomes from participatory dialogues. Ultimately, the core system architecture of CADRe also serves as the cornerstone upon which key scientific innovation and challenges can be addressed.

  20. Optimized operation of dielectric laser accelerators: Single bunch

    NASA Astrophysics Data System (ADS)

    Hanuka, Adi; Schächter, Levi

    2018-05-01

    We introduce a general approach to determine the optimal charge, efficiency, and gradient for laser-driven accelerators in a self-consistent way. We propose a way to enhance the operational gradient of dielectric laser accelerators by leveraging the beam-loading effect. While the latter may be detrimental from the perspective of the effective gradient experienced by the particles, it can be beneficial because the effective field experienced by the accelerating structure is weaker. As a result, the constraint imposed by the damage threshold fluence is accordingly weakened, and our self-consistent approach predicts permissible gradients of ~10 GV/m, one order of magnitude higher than previously reported experimental results with an unbunched pulse of electrons. Our approach leads to maximum efficiency occurring at higher gradients as compared with a scenario in which the beam-loading effect on the material is ignored. In any case, the maximum gradient does not occur under the same conditions as the maximum efficiency; a trade-off set of parameters is suggested.

  1. The German emergency and disaster medicine and management system-history and present.

    PubMed

    Hecker, Norman; Domres, Bernd Dieter

    2018-04-01

    Both for optimized emergency management in individual cases and for optimized mass-casualty medicine in disaster management, the principle of medical doctors approaching the patient directly and in a timely manner, even close to the site of the incident, is a long-standing marker of quality of care and patient survival in Germany. Professional rescue and emergency forces, including medical services, are the "Golden Standard" of emergency management systems. Regulative laws, proper organization of resources, equipment, training, and adequate delivery of medical measures are key factors in systematic approaches to managing emergencies and disasters alike and thus saving lives. During disasters, command, communication, coordination, and cooperation are essential to cope with extreme situations, even more so in a globalized world. In this article, we describe the major historical milestones and the current state of the German emergency and disaster management system and its integration into the broader European approach. Copyright © 2018. Production and hosting by Elsevier B.V.

  2. Optimal savings and the value of population.

    PubMed

    Arrow, Kenneth J; Bensoussan, Alain; Feng, Qi; Sethi, Suresh P

    2007-11-20

    We study a model of economic growth in which an exogenously changing population enters in the objective function under total utilitarianism and into the state dynamics as the labor input to the production function. We consider an arbitrary population growth until it reaches a critical level (resp. saturation level) at which point it starts growing exponentially (resp. it stops growing altogether). This requires population as well as capital as state variables. By letting the population variable serve as the surrogate of time, we are still able to depict the optimal path and its convergence to the long-run equilibrium on a two-dimensional phase diagram. The phase diagram consists of a transient curve that reaches the classical curve associated with a positive exponential growth at the time the population reaches the critical level. In the case of an asymptotic population saturation, we expect the transient curve to approach the equilibrium as the population approaches its saturation level. Finally, we characterize the approaches to the classical curve and to the equilibrium.

  3. Optimal savings and the value of population

    PubMed Central

    Arrow, Kenneth J.; Bensoussan, Alain; Feng, Qi; Sethi, Suresh P.

    2007-01-01

    We study a model of economic growth in which an exogenously changing population enters in the objective function under total utilitarianism and into the state dynamics as the labor input to the production function. We consider an arbitrary population growth until it reaches a critical level (resp. saturation level) at which point it starts growing exponentially (resp. it stops growing altogether). This requires population as well as capital as state variables. By letting the population variable serve as the surrogate of time, we are still able to depict the optimal path and its convergence to the long-run equilibrium on a two-dimensional phase diagram. The phase diagram consists of a transient curve that reaches the classical curve associated with a positive exponential growth at the time the population reaches the critical level. In the case of an asymptotic population saturation, we expect the transient curve to approach the equilibrium as the population approaches its saturation level. Finally, we characterize the approaches to the classical curve and to the equilibrium. PMID:17984059

  4. A general approach for sample size calculation for the three-arm 'gold standard' non-inferiority design.

    PubMed

    Stucke, Kathrin; Kieser, Meinhard

    2012-12-10

    In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.
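    The paper's correlation-adjusted three-arm calculation is not reproduced here, but the way an allocation ratio enters a standard normal-approximation sample-size formula can be sketched for a single two-arm comparison of means. Note that for a single two-arm comparison equal allocation minimizes the total sample size; the point of the abstract is that with three arms and correlated test statistics the optimum shifts toward larger active groups:

```python
import math
from statistics import NormalDist

def two_arm_n(delta, sigma, alpha=0.025, power=0.8, k=1.0):
    """Normal-approximation sample size for comparing two means
    (one-sided alpha); k = n_experimental / n_control allocation ratio.
    Returns (n_experimental, n_control)."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha), z(power)
    n_ctrl = (za + zb) ** 2 * sigma ** 2 * (1 + 1 / k) / delta ** 2
    return math.ceil(k * n_ctrl), math.ceil(n_ctrl)

# Effect of the allocation ratio for the same effect size:
for k in (0.5, 1.0, 2.0):
    ne, nc = two_arm_n(delta=0.5, sigma=1.0, k=k)
    print(f"k={k}: n_exp={ne}, n_ctrl={nc}, total={ne + nc}")
```

    The same mechanics (power equation solved for group sizes under an allocation constraint) underlie the three-arm derivation, with the correlation structure of the joint tests providing the additional savings.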

  5. Optimal filter design with progressive genetic algorithm for local damage detection in rolling bearings

    NASA Astrophysics Data System (ADS)

    Wodecki, Jacek; Michalak, Anna; Zimroz, Radoslaw

    2018-03-01

    Harsh industrial conditions present in underground mining cause many difficulties for local damage detection in heavy-duty machinery. For vibration signals, one of the most intuitive approaches to obtaining a signal with the expected properties, such as clearly visible informative features, is prefiltration with an appropriately prepared filter. The design of such a filter is a very broad field of research in its own right. In this paper, the authors propose a novel approach to dedicated optimal filter design using a progressive genetic algorithm. The presented method is fully data-driven and requires no prior knowledge of the signal. It has been tested against a set of real and simulated data, and its effectiveness has been proven for both healthy and damaged cases. A termination criterion for the evolution process was developed, and a diagnostic decision-making feature is proposed for determining the final result.
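    As a rough illustration of the idea, the sketch below evolves FIR filter taps with a plain genetic algorithm using a kurtosis-based fitness (a common impulsiveness criterion in local damage detection). This is not the authors' progressive variant, and the signal model and all settings are invented:

```python
import random
import statistics

random.seed(1)

# Synthetic vibration signal: Gaussian noise plus periodic impulses,
# a crude stand-in for a local bearing-fault signature (invented data).
N = 512
signal = [random.gauss(0.0, 1.0) for _ in range(N)]
for i in range(0, N, 64):            # one impulse per "fault period"
    signal[i] += 6.0

def kurtosis(x):
    m = statistics.fmean(x)
    s2 = statistics.fmean([(v - m) ** 2 for v in x])
    return statistics.fmean([(v - m) ** 4 for v in x]) / (s2 * s2)

def fitness(taps):
    """Kurtosis of the FIR-filtered signal: impulsive output scores high."""
    L = len(taps)
    out = [sum(taps[j] * signal[i - j] for j in range(L)) for i in range(L, N)]
    return kurtosis(out)

TAPS, POP, ELITE, GENS = 8, 16, 6, 12
pop = [[random.uniform(-1, 1) for _ in range(TAPS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:ELITE]
    children = []
    while len(children) < POP - ELITE:
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, TAPS)            # one-point crossover
        child = [t + random.gauss(0.0, 0.1)        # Gaussian mutation
                 for t in a[:cut] + b[cut:]]
        children.append(child)
    pop = elite + children

best = max(pop, key=fitness)
print(f"kurtosis raw: {kurtosis(signal):.2f}, filtered: {fitness(best):.2f}")
```

    A filter whose output kurtosis stays near 3 (Gaussian) would indicate a healthy case; strongly super-Gaussian output indicates preserved impulsive (damage-related) content.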

  6. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1987-01-01

    The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  7. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1988-01-01

    The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  8. SU-E-T-367: Optimization of DLG Using TG-119 Test Cases and a Weighted Mean Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sintay, B; Vanderstraeten, C; Terrell, J

    2014-06-01

    Purpose: Optimization of the dosimetric leaf gap (DLG) is an important step in commissioning the Eclipse treatment planning system for sliding-window intensity-modulated radiation therapy (SW-IMRT) and RapidArc. Often the values needed for optimal dose delivery differ markedly from those measured at commissioning. We present a method to optimize this value using the AAPM TG-119 test cases. Methods: For SW-IMRT and RapidArc, TG-119-based test plans were created using a water-equivalent phantom. Dose distributions measured on film and ion chamber (IC) readings taken in low-gradient regions within the targets were analyzed separately. Since the DLG is a single value per energy, SW-IMRT and RapidArc must be considered simultaneously. Plans were recalculated using a linear sweep from 0.02 cm (the minimum DLG) to 0.3 cm. The calculated point doses were compared to the measured doses for each plan, and based on these comparisons an optimal DLG value was computed for each plan. TG-119 cases are designed to push the system in various ways; thus, a weighted mean of the DLG was computed in which the relative importance of each type of plan was given a score from 0.0 to 1.0. Finally, SW-IMRT and RapidArc were assigned an overall weight based on clinical utilization. Our routine patient-QA (PQA) process was performed as independent validation. Results: For a Varian TrueBeam, the optimized DLG varied with σ = 0.044 cm for SW-IMRT and σ = 0.035 cm for RapidArc. The difference between the weighted mean SW-IMRT and RapidArc values was 0.038 cm. We predicted utilization of 25% SW-IMRT and 75% RapidArc. The resulting DLG was ~1 mm different from that found at commissioning and produced an average error of <1% for SW-IMRT and RapidArc PQA test cases separately. Conclusion: The weighted mean method presented is a useful tool for determining an optimal DLG value for commissioning Eclipse.
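    The weighted-mean step itself is straightforward. A sketch with invented per-plan DLG optima and importance weights (only the 25%/75% utilization split comes from the abstract; every other number is hypothetical):

```python
# Hypothetical per-plan optimal DLG values (cm) and importance weights;
# plan names follow the TG-119 test suite, values are invented.
sw_imrt = {"multi_target": (0.14, 0.8), "prostate": (0.15, 1.0),
           "head_neck": (0.19, 0.9), "c_shape": (0.21, 0.6)}
rapidarc = {"multi_target": (0.10, 0.8), "prostate": (0.11, 1.0),
            "head_neck": (0.15, 0.9), "c_shape": (0.17, 0.6)}

def weighted_mean(plans):
    """Importance-weighted mean of the per-plan optimal DLG values."""
    total_w = sum(w for _, w in plans.values())
    return sum(d * w for d, w in plans.values()) / total_w

# The overall DLG blends the two techniques by clinical utilization.
dlg = 0.25 * weighted_mean(sw_imrt) + 0.75 * weighted_mean(rapidarc)
print(f"commissioned DLG: {dlg:.3f} cm")
```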

  9. Anesthetic approach to high-risk patients and prolonged awake craniotomy using dexmedetomidine and scalp block.

    PubMed

    Garavaglia, Marco M; Das, Sunit; Cusimano, Michael D; Crescini, Charmagne; Mazer, C David; Hare, Gregory M T; Rigamonti, Andrea

    2014-07-01

    Awake craniotomy with intraoperative speech or motor testing is relatively contraindicated in cases requiring prolonged operative times and in patients with severe medical comorbidities including anxiety, anticipated difficult airway, obesity, large tumors, and intracranial hypertension. The anesthetic management of neurosurgical patients who have these contraindications but would be optimally treated by an awake procedure remains unclear. We describe a new anesthetic approach for awake craniotomy that did not require any airway manipulation, utilizing a bupivacaine-based scalp nerve block and dexmedetomidine as the primary hypnotic-sedative agent. Using this technique, we provided optimal operative conditions for awake craniotomy, facilitating safe tumor resection while utilizing intraoperative electrocorticography for motor and speech mapping, in a cohort of 10 patients at high risk for airway compromise and complications associated with patient comorbidities. All patients underwent successful awake craniotomy, intraoperative mapping, and tumor resection with adequate sedation for up to 9 hours (median 3.5 h, range 3 to 9 h) without any loss of neurological function, airway competency, or the need for active rescue airway management. We report 4 of these cases that highlight our experience: 1 case required prolonged surgery because of the complexity of tumor resection, and 3 patients had important medical comorbidities and/or a relative contraindication for an awake procedure. Dexmedetomidine, with a concurrent scalp block, is an effective and safe anesthetic approach for awake craniotomy. Dexmedetomidine facilitates extending procedure complexity and duration in patients who might traditionally not be considered candidates for this procedure.

  10. Optimization model using Markowitz model approach for reducing the number of dengue cases in Bandung

    NASA Astrophysics Data System (ADS)

    Yong, Benny; Chin, Liem

    2017-05-01

    Dengue fever is one of the most serious diseases, and it can cause death. Currently, Indonesia is the country with the highest number of dengue cases in Southeast Asia. Bandung is one of the cities in Indonesia that is vulnerable to dengue. The sub-districts in Bandung have different levels of relative risk of dengue. Dengue is transmitted to people by the bite of an Aedes aegypti mosquito infected with a dengue virus. Prevention of dengue relies on controlling the vector mosquito, which can be done by various methods, one of which is fogging. The fogging efforts made by the Health Department of Bandung are constrained by limited funds. This forces the Health Department to be selective in fogging, which is only done for certain locations. As a result, many sub-districts are not handled properly by the Health Department because of the unequal distribution of activities to prevent the spread of dengue. Thus, a proper allocation of funds to each sub-district in Bandung is needed to prevent dengue transmission optimally. In this research, an optimization model using the Markowitz model approach is applied to determine the allocation of funds that should be given to each sub-district in Bandung. Some constraints are added to this model, and the numerical solution is obtained with the generalized reduced gradient method using Solver software. The expected result of this research is a proportion of funds given to each sub-district in Bandung that corresponds to its level of dengue risk, so that the number of dengue cases in the city can be reduced significantly.
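    A minimal mean-variance sketch of this kind of allocation problem, using a coarse grid search over the allocation simplex in place of the generalized reduced gradient solver mentioned above. All sub-district figures are invented; sub-districts play the role of assets, expected case reduction per unit of funds plays the role of return, and risk-level variance plays the role of volatility:

```python
from itertools import product

# Hypothetical sub-district data (illustrative, not Bandung's figures):
# expected case reduction per unit of fogging funds, and the variance
# reflecting each sub-district's fluctuating risk level.
names = ["A", "B", "C"]
exp_reduction = [8.0, 5.0, 3.0]
variance = [9.0, 4.0, 1.0]
MIN_REDUCTION = 5.5   # required expected reduction for the whole allocation

def portfolio(w):
    mean = sum(wi * r for wi, r in zip(w, exp_reduction))
    # Assume independent sub-districts, so covariances are zero.
    var = sum(wi * wi * v for wi, v in zip(w, variance))
    return mean, var

# Markowitz-style problem: minimize variance subject to a minimum
# expected reduction, solved here by brute-force grid search (step 0.01).
best_w, best_var = None, float("inf")
steps = range(0, 101)
for a, b in product(steps, steps):
    if a + b > 100:
        continue
    w = (a / 100, b / 100, (100 - a - b) / 100)
    mean, var = portfolio(w)
    if mean >= MIN_REDUCTION and var < best_var:
        best_w, best_var = w, var

print("allocation:", dict(zip(names, best_w)), "variance:", round(best_var, 3))
```

    The grid search is only practical for a handful of sub-districts; with Bandung's full set, a gradient-based solver such as GRG (as in the paper) is the appropriate tool.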

  11. An Exploration of Trainer Filtering Approaches

    NASA Technical Reports Server (NTRS)

    Hester, Patrick; Tolk, Andreas; Gadi, Sandeep; Carver, Quinn; Roland, Philippe

    2011-01-01

    Simulator operators face a twofold entity management problem during Live-Virtual-Constructive (LVC) training events. They first must filter potentially hundreds of thousands of simulation entities in order to determine which elements are necessary for optimal trainee comprehension. Secondly, they must manage the number of entities entering the simulation from those present in the object model, in order to limit the computational burden on the simulation system and prevent unnecessary entities from entering the simulation. This paper focuses on the first filtering stage and describes a novel approach to entity filtering undertaken to maximize trainee awareness and learning. The feasibility of this approach is demonstrated in a case study, and limitations of the proposed approach and future work are discussed.

  12. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm under added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows that pilot-induced oscillations on the straight-in approach were less prevalent than with the optimal algorithm. The augmented turbulence cues increased workload on the offset approach, which the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  13. Evaluation of Different Normalization and Analysis Procedures for Illumina Gene Expression Microarray Data Involving Small Changes

    PubMed Central

    Johnstone, Daniel M.; Riveros, Carlos; Heidari, Moones; Graham, Ross M.; Trinder, Debbie; Berretta, Regina; Olynyk, John K.; Scott, Rodney J.; Moscato, Pablo; Milward, Elizabeth A.

    2013-01-01

    While Illumina microarrays can be used successfully for detecting small gene expression changes due to their high degree of technical replicability, there is little information on how different normalization and differential expression analysis strategies affect outcomes. To evaluate this, we assessed concordance across gene lists generated by applying different combinations of normalization strategy and analytical approach to two Illumina datasets with modest expression changes. In addition to using traditional statistical approaches, we also tested an approach based on combinatorial optimization. We found that the choice of both normalization strategy and analytical approach considerably affected outcomes, in some cases leading to substantial differences in gene lists and subsequent pathway analysis results. Our findings suggest that important biological phenomena may be overlooked when there is a routine practice of using only one approach to investigate all microarray datasets. Analytical artefacts of this kind are likely to be especially relevant for datasets involving small fold changes, where inherent technical variation—if not adequately minimized by effective normalization—may overshadow true biological variation. This report provides some basic guidelines for optimizing outcomes when working with Illumina datasets involving small expression changes. PMID:27605185

  14. Include dispersion in quantum chemical modeling of enzymatic reactions: the case of isoaspartyl dipeptidase.

    PubMed

    Zhang, Hai-Mei; Chen, Shi-Lu

    2015-06-09

    The lack of dispersion in the B3LYP functional has been proposed to be the main origin of large errors in quantum chemical modeling of a few enzymes and transition metal complexes. In this work, the essential dispersion effects that affect quantum chemical modeling are investigated. With binuclear zinc isoaspartyl dipeptidase (IAD) as an example, dispersion is included in the modeling of enzymatic reactions by two different procedures, i.e., (i) geometry optimizations followed by single-point calculations of dispersion (approach I) and (ii) the inclusion of dispersion throughout geometry optimization and energy evaluation (approach II). Based on a 169-atom chemical model, the calculations show a qualitative consistency between approaches I and II in energetics and most key geometries, demonstrating that both approaches are viable, with the latter preferable since both geometry and energy are dispersion-corrected in approach II. When a smaller model without Arg233 (147 atoms) was used, an inconsistency was observed, indicating that the missing dispersion interactions are essential in determining equilibrium geometries. Other technical issues and mechanistic characteristics of IAD are also discussed, in particular with respect to the effects of Arg233.

  15. Landmark-based elastic registration using approximating thin-plate splines.

    PubMed

    Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H

    2001-06-01

    We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and makes it possible to take landmark localization errors into account. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks, we use a semi-automatic approach based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.

  16. Limited sinus tarsi approach for intra-articular calcaneus fractures.

    PubMed

    Kikuchi, Christian; Charlton, Timothy P; Thordarson, David B

    2013-12-01

    Operative treatment of calcaneal fractures has a historically high rate of wound complications, so the optimal operative approach has been a topic of investigation. This study reviews the radiographic and clinical outcomes of the sinus tarsi approach for operative fixation of these fractures, with attention to the rate of infection and restoration of angular measurements. The radiographs and charts of 20 patients with 22 calcaneal fractures were reviewed to assess restoration of angular and linear dimensions of the calcaneus as well as time to radiographic union. Secondary outcome measures included the rates of postoperative infection, osteomyelitis, revision surgery, and nonunion. We found a statistically significant restoration of Böhler's angle and calcaneal width. Three of the 22 cases had a superficial wound infection. One patient had revision surgery for symptomatic hardware removal. There were no events of osteomyelitis, deep infection, malunion, or nonunion. We found that the sinus tarsi approach yielded outcomes similar to those reported in the literature. Level IV, retrospective case series.

  17. Multi-parameter optimization of piezoelectric actuators for multi-mode active vibration control of cylindrical shells

    NASA Astrophysics Data System (ADS)

    Hu, K. M.; Li, Hua

    2018-07-01

    A novel technique for the multi-parameter optimization of distributed piezoelectric actuators is presented in this paper. The proposed method is designed to improve the performance of multi-mode vibration control in cylindrical shells. The optimization parameters of the actuator patch configuration include position, size, and tilt angle. The modal control force of tilted orthotropic piezoelectric actuators is derived, and the multi-parameter cylindrical shell optimization model is established. The linear quadratic energy index is employed as the optimization criterion. A geometric constraint is proposed to prevent overlap between tilted actuators, which is plugged into a genetic algorithm to search for the optimal configuration parameters. A simply supported closed cylindrical shell with two actuators serves as a case study. The vibration control efficiencies of various parameter sets are evaluated via frequency response and transient response simulations. The results show that the linear quadratic energy indexes of position and size optimization decreased by 14.0% compared to position optimization alone; those of position and tilt angle optimization decreased by 16.8%; and those of position, size, and tilt angle optimization decreased by 25.9%. This indicates that adding configuration optimization parameters is an efficient approach to improving the vibration control performance of piezoelectric actuators on shells.

  18. Artificial intelligence framework for simulating clinical decision-making: a Markov decision process approach.

    PubMed

    Bennett, Casey C; Hauser, Kris

    2013-01-01

    In the modern healthcare system, rapidly expanding costs and complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. The goal of this paper is to develop a general-purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. This framework serves two potential functions: (1) a simulation environment for exploring various healthcare policies, payment methodologies, etc., and (2) the basis for clinical artificial intelligence, an AI that can "think like a doctor". This approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths, while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status, and it functions as an online agent that plans and re-plans as actions are performed and new observations are obtained. This framework was evaluated using real patient data from an electronic health record. The results demonstrate the feasibility of this approach; such an AI framework easily outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service models of healthcare. The cost per unit of outcome change (CPUC) was $189 vs. $497 for AI vs. TAU (where lower is considered optimal), while at the same time the AI approach could obtain a 30-35% increase in patient outcomes. Tweaking certain AI model parameters could further enhance this advantage, obtaining approximately 50% more improvement (outcome change) for roughly half the costs. Given careful design and problem formulation, an AI simulation framework can approximate optimal decisions even in complex and uncertain environments. Future work is described that outlines potential lines of research and the integration of machine learning algorithms for personalized medicine. Copyright © 2012 Elsevier B.V. All rights reserved.
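    In the fully observable case, the planning core of such a framework is ordinary MDP value iteration. A toy treatment MDP with invented states, transitions, and rewards (the paper's framework adds dynamic decision networks and belief states on top of this):

```python
# Toy treatment MDP solved by value iteration: states are health levels,
# actions are "treat" (costly, likely to improve) or "wait" (cheap, may
# worsen). All numbers are hypothetical, for illustration only.
STATES = ["sick", "stable", "healthy"]
ACTIONS = ["treat", "wait"]
GAMMA = 0.95

# T[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
T = {
    "sick":    {"treat": [("stable", 0.6), ("sick", 0.4)],
                "wait":  [("sick", 0.9), ("stable", 0.1)]},
    "stable":  {"treat": [("healthy", 0.7), ("stable", 0.3)],
                "wait":  [("stable", 0.6), ("sick", 0.4)]},
    "healthy": {"treat": [("healthy", 1.0)],
                "wait":  [("healthy", 0.8), ("stable", 0.2)]},
}
R = {"sick":    {"treat": -3.0, "wait": -2.0},
     "stable":  {"treat": -2.0, "wait":  0.0},
     "healthy": {"treat": -1.0, "wait":  1.0}}

def q(s, a, V):
    return R[s][a] + GAMMA * sum(p * V[s2] for s2, p in T[s][a])

V = {s: 0.0 for s in STATES}
for _ in range(500):                      # value iteration to convergence
    V = {s: max(q(s, a, V) for a in ACTIONS) for s in STATES}

policy = {s: max(ACTIONS, key=lambda a: q(s, a, V)) for s in STATES}
print(policy)
```

    With these numbers the converged policy treats while sick or stable and waits while healthy, the kind of state-dependent plan the simulation framework searches over.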

  19. Capacity improvement using simulation optimization approaches: A case study in the thermotechnology industry

    NASA Astrophysics Data System (ADS)

    Yelkenci Köse, Simge; Demir, Leyla; Tunalı, Semra; Türsel Eliiyi, Deniz

    2015-02-01

    In manufacturing systems, optimal buffer allocation has a considerable impact on capacity improvement. This study presents a simulation optimization procedure to solve the buffer allocation problem in a heat exchanger production plant so as to improve the capacity of the system. For optimization, three metaheuristic-based search algorithms, i.e. a binary genetic algorithm (B-GA), a binary simulated annealing algorithm (B-SA), and a binary tabu search algorithm (B-TS), are proposed. These algorithms are integrated with the simulation model of the production line. The simulation model, which captures the stochastic and dynamic nature of the production line, is used as the evaluation function for the proposed metaheuristics. The experimental study, with benchmark problem instances from the literature and the real-life problem, shows that the proposed B-TS algorithm outperforms B-GA and B-SA in terms of solution quality.
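    A compact illustration of simulation optimization for buffer allocation: a deterministic recursion simulates a small serial line with blocking-after-service, and the tiny search space is enumerated exhaustively in place of the paper's metaheuristics. All processing-time parameters are invented, and common random numbers are used so candidates are compared on identical workloads:

```python
import random

random.seed(7)

M, N = 3, 400                      # machines in series, jobs simulated
# Pre-drawn processing times (common random numbers across candidates);
# the middle machine is the bottleneck in this illustrative line.
MEANS = [1.0, 1.4, 1.0]
t = [[random.expovariate(1 / MEANS[m]) for _ in range(N)] for m in range(M)]

def throughput(buffers):
    """Simulate the line with the given inter-machine buffer sizes
    (blocking-after-service) and return jobs per unit time."""
    f = [[0.0] * N for _ in range(M)]   # processing-completion times
    d = [[0.0] * N for _ in range(M)]   # departure times
    for i in range(N):
        for m in range(M):
            arrive = d[m - 1][i] if m > 0 else 0.0
            free = d[m][i - 1] if i > 0 else 0.0
            f[m][i] = max(arrive, free) + t[m][i]
            if m < M - 1:
                j = i - buffers[m] - 1   # job whose departure frees a slot
                block = d[m + 1][j] if j >= 0 else float("-inf")
                d[m][i] = max(f[m][i], block)
            else:
                d[m][i] = f[m][i]
    return N / d[M - 1][N - 1]

# Exhaustive search over allocations of TOTAL buffer slots to the two
# inter-machine buffers (a stand-in for B-GA/B-SA/B-TS on this tiny case).
TOTAL = 6
best = max(((b, TOTAL - b) for b in range(TOTAL + 1)), key=throughput)
print("best allocation:", best, "throughput:", round(throughput(best), 3))
```

    For realistic lines the allocation space grows combinatorially, which is why the paper resorts to metaheuristics with a discrete-event simulation as the evaluation function.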

  20. The Role of Nuclear Power in Reducing Risk of the Fossil Fuel Prices and Diversity of Electricity Generation in Tunisia: A Portfolio Approach

    NASA Astrophysics Data System (ADS)

    Abdelhamid, Mohamed Ben; Aloui, Chaker; Chaton, Corinne; Souissi, Jomâa

    2010-04-01

    This paper applies real options and mean-variance portfolio theories to analyze electricity generation planning in the presence of a nuclear power plant for the Tunisian case. First, we analyze the choice between fossil-fuel and nuclear production. A dynamic model is presented to illustrate the impact of fossil fuel cost uncertainty on the optimal timing of a switch from gas to nuclear. Next, we use portfolio theory to manage the risk of the electricity generation portfolio and to determine the optimal fuel mix with the nuclear alternative. Based on portfolio theory, the results show that, for the 2010-2020 horizon, there is an optimal mix other than the one currently fixed for Tunisia, with lower cost for the same degree of risk. In the presence of nuclear technology, we find that the optimal generating portfolio includes a 13% share of nuclear power.
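
    The mean-variance step can be sketched as follows. All cost and volatility figures are invented (not the paper's Tunisian data), and a simple grid scan over the nuclear share stands in for the full efficient-frontier computation.

```python
import math

def portfolio_cost_risk(w_nuclear, cost, sd, corr):
    """Mean-variance portfolio of two generating technologies.
    cost/sd: dicts for 'gas' and 'nuclear'; corr: cost correlation."""
    wg, wn = 1.0 - w_nuclear, w_nuclear
    mean = wg * cost["gas"] + wn * cost["nuclear"]
    var = (wg * sd["gas"]) ** 2 + (wn * sd["nuclear"]) ** 2 \
          + 2 * wg * wn * corr * sd["gas"] * sd["nuclear"]
    return mean, math.sqrt(var)

def cheapest_mix_at_risk(max_risk, cost, sd, corr, steps=1000):
    """Scan nuclear shares 0..1; return the lowest-cost mix within the risk cap."""
    best = None
    for i in range(steps + 1):
        wn = i / steps
        mean, risk = portfolio_cost_risk(wn, cost, sd, corr)
        if risk <= max_risk and (best is None or mean < best[1]):
            best = (wn, mean, risk)
    return best

# Hypothetical generation costs ($/MWh): gas is cheaper but far more volatile.
cost = {"gas": 60.0, "nuclear": 70.0}
sd = {"gas": 15.0, "nuclear": 5.0}
w, mean, risk = cheapest_mix_at_risk(12.0, cost, sd, corr=0.1)
```

    With these toy numbers the cheapest portfolio under the risk cap already carries a nonzero nuclear share, which is the qualitative effect the paper reports.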

  1. The optimization of total laboratory automation by simulation of a pull-strategy.

    PubMed

    Yang, Taho; Wang, Teng-Kuan; Li, Vincent C; Su, Chia-Lo

    2015-01-01

    Laboratory results are essential for physicians to diagnose medical conditions. Because of the critical role of medical laboratories, an increasing number of hospitals use total laboratory automation (TLA) to improve laboratory performance. Although the benefits of TLA are well documented, systems occasionally become congested, particularly when hospitals face peak demand. This study optimizes TLA operations. First, value stream mapping (VSM) is used to identify non-value-added time. Subsequently, batch processing control and parallel scheduling rules are devised, and a pull mechanism based on constant work-in-process (CONWIP) is proposed. Simulation optimization is then used to tune the design parameters so as to ensure a small inventory and a shorter average cycle time (CT). For empirical illustration, this approach is applied to a real case. The proposed methodology significantly improves the efficiency of laboratory work, leading to reduced patient waiting times and an increased service level.

  2. Dynamic programming methods for concurrent design and dynamic allocation of vehicles embedded in a system-of-systems

    NASA Astrophysics Data System (ADS)

    Nusawardhana

    2007-12-01

    Recent developments indicate a changing perspective on how systems or vehicles should be designed. This transition comes from the way decision makers in defense-related agencies address complex problems. Complex problems are now often posed in terms of the capabilities desired, rather than in terms of requirements for a single system. As a result, a set of capabilities is provided through a collection of several individual, independent systems. This collection of individual, independent systems is often referred to as a "System of Systems" (SoS). Because of the independent nature of the constituent systems in an SoS, approaches to designing an SoS, and more specifically to designing a new system as a member of an SoS, will likely differ from traditional design approaches for complex, monolithic systems (meaning the constituent parts have no ability for independent operation). Because a system of systems evolves over time, this simultaneous system design and resource allocation problem should be investigated in a dynamic context. Such dynamic optimization problems are similar to conventional control problems. However, this research considers problems which not only seek optimizing policies but also seek the proper system or vehicle to operate under these policies. This thesis presents a framework and a set of analytical tools to solve a class of SoS problems that involves the simultaneous design of a new system and allocation of the new system along with existing systems. This class of problems concerns the concurrent design and control of a new system, with solutions consisting of both an optimal system design and an optimal control strategy. Rigorous mathematical arguments show that the proposed framework solves the concurrent design and control problem. Many results exist for dynamic optimization problems of linear systems; in contrast, results for nonlinear dynamic optimization problems are rare.
The proposed framework is equipped with a set of analytical tools to solve several cases of nonlinear optimal control problems: continuous- and discrete-time nonlinear problems, with applications to both optimal regulation and tracking. These tools are useful when mathematical descriptions of the dynamic systems are available. In the absence of such a mathematical model, it is often necessary to derive a solution based on computer simulation. For this case, a set of parameterized decisions may constitute a solution. This thesis presents a method to adjust these parameters based on the principle of simultaneous perturbation stochastic approximation using continuous measurements. The set of tools developed here mostly employs the methods of exact dynamic programming. However, due to the complexity of SoS problems, this research also develops suboptimal solution approaches, collectively recognized as approximate dynamic programming solutions, for large-scale problems. The thesis presents, explores, and solves problems from the airline industry, in which a new aircraft is to be designed and allocated along with an existing fleet of aircraft. Because the life cycle of an aircraft is on the order of 10 to 20 years, this problem is addressed dynamically so that the new aircraft design is the best design for the fleet over a given time horizon.

  3. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of material-processing numerical simulation allows a trial-and-error strategy for improving virtual processes without incurring material costs or interrupting production, and therefore saves a lot of money, but it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. An Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners have been selected to cover the different areas of the mechanical forging industry and to provide different examples of forming simulation tools. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space from the exact function values at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization, where the set of solutions corresponding to the best possible compromises between the different objectives is computed in the same way. The population-based approach exploits the parallel capabilities of the computer with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time reduces the time needed dramatically.
The presented examples demonstrate the method versatility. They include billet shape optimization of a common rail, the cogging of a bar and a wire drawing problem.
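
    The master-point metamodel idea can be sketched as follows. This is an illustration only: an inverse-distance-weighted interpolant stands in for the Kriging model, and a cheap quadratic stands in for the expensive forging simulation.

```python
# Sketch of metamodel-assisted optimization: fit a cheap interpolant through a
# handful of "master point" evaluations, then search the interpolant instead
# of the simulator.  IDW replaces Kriging here purely for brevity.

def idw_predict(x, points, power=2):
    """points: list of (xi, f(xi)); inverse-distance-weighted interpolation."""
    num = den = 0.0
    for xi, fi in points:
        d = abs(x - xi)
        if d < 1e-12:
            return fi                 # exact hit on a master point
        w = 1.0 / d ** power
        num += w * fi
        den += w
    return num / den

def expensive(x):
    """Pretend each call is a long forming simulation."""
    return (x - 0.3) ** 2

masters = [(x / 4, expensive(x / 4)) for x in range(5)]   # 5 master points
grid = [i / 1000 for i in range(1001)]
x_best = min(grid, key=lambda x: idw_predict(x, masters))
```

    In the real workflow the predicted optimum is then evaluated with the expensive simulator, appended to the master points, and the loop repeated, so only a few tens of true simulations are ever run.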

  4. Cooperative inversion of magnetotelluric and seismic data sets

    NASA Astrophysics Data System (ADS)

    Markovic, M.; Santos, F.

    2012-04-01

    Inversion of a single geophysical data set has well-known limitations due to the non-linearity of the fields and the non-uniqueness of the model. There is a growing need, both in academia and industry, to use two or more different data sets to obtain the subsurface property distribution. In our case, we are dealing with magnetotelluric and seismic data sets. In our approach, we are developing an algorithm based on the fuzzy c-means clustering technique for pattern recognition of geophysical data. Separate inversions are performed at every step, and information is exchanged for model integration. Interrelationships between parameters from different models are not required in analytical form. We investigate how different numbers of clusters affect zonation and the spatial distribution of parameters. In our study, optimization in fuzzy c-means clustering (for magnetotelluric and seismic data) is compared for two cases: first alternating optimization, and then a hybrid method (alternating optimization + quasi-Newton). Acknowledgment: This work is supported by FCT Portugal.
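
    The alternating-optimization case mentioned above can be sketched as a minimal 1-D fuzzy c-means loop, alternating the membership update and the centroid update; the data points below are synthetic, and the quasi-Newton hybrid is not shown.

```python
# Minimal 1-D fuzzy c-means via alternating optimization (membership update,
# then centroid update).  Synthetic data; illustration only.

def fcm(data, centers, m=2.0, iters=100):
    c = len(centers)
    for _ in range(iters):
        # Membership update from the current centers.
        U = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]   # avoid division by zero
            U.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c))
                      for j in range(c)])
        # Centroid update as fuzzily weighted means.
        centers = [sum((U[i][j] ** m) * data[i] for i in range(len(data))) /
                   sum(U[i][j] ** m for i in range(len(data)))
                   for j in range(c)]
    return sorted(centers), U

# Two well-separated synthetic "parameter" clusters.
data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]
centers, U = fcm(data, centers=[min(data), max(data)])
```

    Each membership row sums to one, so the memberships can be read as a soft zonation of the model parameters.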

  5. Optimization of Micro Metal Injection Molding By Using Grey Relational Grade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, M. H. I.; Precision Process Research Group, Dept. of Mechanical and Materials Engineering, Faculty of Engineering, Universiti Kebangsaan Malaysia; Muhamad, N.

    2011-01-17

    Micro metal injection molding (µMIM), a variant of the MIM process, is a promising method for producing near-net-shape metallic micro components of complex geometry. In this paper, µMIM is applied to produce 316L stainless steel micro components. Owing to the highly stringent characteristics of µMIM properties, the study emphasizes optimization of the process parameters, where the Taguchi method associated with Grey Relational Analysis (GRA) is implemented as a novel approach to investigating multiple performance characteristics. The basic idea of GRA is to find a grey relational grade (GRG) that can be used to convert a multi-objective case (here density and strength) into a single-objective case. After considering the form "the larger the better", the results show that injection time (D) is the most significant parameter, followed by injection pressure (A), holding time (E), mold temperature (C) and injection temperature (B). Analysis of variance (ANOVA) is also employed to confirm the significance of each parameter involved in this study.
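
    The GRG computation itself is short enough to sketch: normalize each response as larger-the-better, take the deviation from the ideal, form grey relational coefficients with a distinguishing coefficient ζ = 0.5, and average them. The density/strength runs below are invented, not the paper's data.

```python
# Sketch of the grey relational grade (GRG) for a multi-response Taguchi
# study, "larger the better" for every response.  Runs are hypothetical.

def grey_relational_grade(responses, zeta=0.5):
    """responses: list of runs, each a list of response values."""
    n_resp = len(responses[0])
    cols = list(zip(*responses))
    # 1. Normalize each response to [0, 1] (larger-the-better).
    norm = [[(x - min(col)) / (max(col) - min(col)) for x in col] for col in cols]
    grades = []
    for run in range(len(responses)):
        # 2. Deviation from the ideal (1.0) and grey relational coefficients
        #    (with min deviation 0 and max deviation 1 after normalization).
        devs = [1.0 - norm[j][run] for j in range(n_resp)]
        coeffs = [zeta / (d + zeta) for d in devs]
        # 3. Average the coefficients into a single grade.
        grades.append(sum(coeffs) / n_resp)
    return grades

# Hypothetical runs: [density (g/cm^3), strength (MPa)]
runs = [[7.5, 400.0], [7.7, 420.0], [7.6, 450.0]]
grg = grey_relational_grade(runs)
best = max(range(len(runs)), key=lambda i: grg[i])
```

    Ranking the runs by their single GRG value is what converts the two-objective problem (density and strength) into a one-objective one.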

  6. A flowsheet model of a well-mixed fluidized bed dryer: Applications in controllability assessment and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langrish, T.A.G.; Harvey, A.C.

    2000-01-01

    A model of a well-mixed fluidized-bed dryer within a process flowsheeting package (SPEEDUP™) has been developed and applied to a parameter sensitivity study, a steady-state controllability analysis and an optimization study. This approach is more general, and would be more easily applied to a complex flowsheet, than one relying on stand-alone dryer modeling packages. The simulation has shown that industrial data may be fitted to the model outputs with sensible values of the unknown parameters. For this case study, the parameter sensitivity study found that the heat loss from the dryer and the critical moisture content of the material have the greatest impact on dryer operation at the current operating point. An optimization study demonstrated the dominant effect of the heat loss from the dryer on the current operating cost and operating conditions; substantial cost savings (around 50%) could be achieved with a well-insulated and airtight dryer for the specific case studied here.

  7. Predicting the optimal process window for the coating of single-crystalline organic films with mobilities exceeding 7 cm2/Vs.

    NASA Astrophysics Data System (ADS)

    Janneck, Robby; Vercesi, Federico; Heremans, Paul; Genoe, Jan; Rolin, Cedric

    2016-09-01

    Organic thin film transistors (OTFTs) based on single-crystalline thin films of organic semiconductors have seen considerable development in recent years. The most successful methods for the fabrication of single-crystalline films are solution-based meniscus-guided coating techniques such as dip-coating, solution shearing or zone casting. These upscalable methods enable rapid and efficient film formation without additional processing steps. The single-crystalline film quality is strongly dependent on solvent choice, substrate temperature and coating speed. So far, however, process optimization has been conducted by trial-and-error methods, involving, for example, the variation of coating speeds over several orders of magnitude. Through a systematic study of solvent phase-change dynamics in the meniscus region, we develop a theoretical framework that links the optimal coating speed to the solvent choice and the substrate temperature. In this way, we can accurately predict an optimal processing window, enabling fast process optimization. Our approach is verified through systematic OTFT fabrication based on films grown with different semiconductors, solvents and substrate temperatures. The use of the best predicted coating speeds delivers state-of-the-art devices. In the case of C8BTBT, OTFTs show well-behaved characteristics with mobilities up to 7 cm2/Vs and onset voltages close to 0 V. Our approach also accounts well for the optimal recipes published in the literature. This route considerably accelerates parameter screening for all meniscus-guided coating techniques and unveils the physics of single-crystalline film formation.

  8. Bi-directional evolutionary structural optimization for strut-and-tie modelling of three-dimensional structural concrete

    NASA Astrophysics Data System (ADS)

    Shobeiri, Vahid; Ahmadi-Nedushan, Behrouz

    2017-12-01

    This article presents a method for the automatic generation of optimal strut-and-tie models in reinforced concrete structures using a bi-directional evolutionary structural optimization method. The methodology presented is developed for compliance minimization relying on the Abaqus finite element software package. The proposed approach deals with the generation of truss-like designs in a three-dimensional environment, addressing the design of corbels and joints as well as bridge piers and pile caps. Several three-dimensional examples are provided to show the capabilities of the proposed framework in finding optimal strut-and-tie models in reinforced concrete structures and verifying its efficiency to cope with torsional actions. Several issues relating to the use of the topology optimization for strut-and-tie modelling of structural concrete, such as chequerboard patterns, mesh-dependency and multiple load cases, are studied. In the last example, a design procedure for detailing and dimensioning of the strut-and-tie models is given according to the American Concrete Institute (ACI) 318-08 provisions.

  9. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and of the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  10. Case Study on Optimal Routing in Logistics Network by Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoguang; Lin, Lin; Gen, Mitsuo; Shiota, Mitsushige

    Recently, research on logistics has attracted more and more attention. One of the important issues in logistics systems is finding optimal delivery routes with the least cost for product delivery. Numerous models have been developed for this purpose. However, due to the diversity and complexity of practical problems, the existing models are usually not satisfactory for finding solutions efficiently and conveniently. In this paper, we treat a real-world logistics case with a company named ABC Co., Ltd. in Kitakyusyu, Japan. First, based on the nature of this conveyance routing problem, we formulate it as a minimum cost flow (MCF) model, as an extension of the transportation problem (TP) and the fixed-charge transportation problem (fcTP). Due to the complexity of the fcTP, we propose a priority-based genetic algorithm (pGA) approach to find the most acceptable solution to this problem. In this pGA approach, a two-stage path decoding method is adopted to develop delivery paths from a chromosome. We apply the pGA approach to this problem, compare our results with the current logistics network situation, and calculate the improvement in logistics cost to help management make decisions. Finally, in order to check the effectiveness of the proposed method, the results acquired are compared with those obtained from two optimization packages, LINDO and CPLEX.

  11. Modern concepts in facial nerve reconstruction

    PubMed Central

    2010-01-01

    Background Reconstructive surgery of the facial nerve is not daily routine for most head and neck surgeons. The published experience on strategies to ensure optimal functional results for patients is based on small case series with a large variety of surgical techniques. Against this background, it is worthwhile to develop a standardized approach for the diagnosis and treatment of patients asking for facial rehabilitation. Conclusion A standardized approach is feasible: patients with chronic facial palsy first need an exact classification of the palsy's aetiology. A step-by-step clinical examination, if necessary with MRI imaging and electromyographic examination, allows a classification of the palsy's aetiology as well as a determination of the severity of the palsy and the functional deficits. Considering the patient's desire, age and life expectancy, an individual surgical concept is applicable using three main approaches: a) early extratemporal reconstruction; b) early reconstruction of proximal lesions if extratemporal reconstruction is not possible; c) late reconstruction, or reconstruction in cases of congenital palsy. Twelve to 24 months after the last step of surgical reconstruction, a standardized evaluation of the therapeutic results is recommended to assess the necessity of adjuvant surgical or other procedures, e.g. botulinum toxin application. Up to now, controlled trials on the value of physiotherapy and other adjuvant measures are missing, so no recommendation for their optimal application can be given. PMID:21040532

  12. Augmented design and analysis of computer experiments: a novel tolerance embedded global optimization approach applied to SWIR hyperspectral illumination design.

    PubMed

    Keresztes, Janos C; John Koshel, R; D'huys, Karlien; De Ketelaere, Bart; Audenaert, Jan; Goos, Peter; Saeys, Wouter

    2016-12-26

    A novel meta-heuristic approach for minimizing nonlinear constrained problems is proposed, which offers tolerance information during the search for the global optimum. The method is based on the concept of design and analysis of computer experiments combined with a novel two-phase design augmentation (DACEDA), which models the entire merit space using a Gaussian process, with iteratively increased resolution around the optimum. The algorithm is introduced through a series of case studies of increasing complexity for optimizing the uniformity of a short-wave infrared (SWIR) hyperspectral imaging (HSI) illumination system (IS). The method is first demonstrated for a two-dimensional problem consisting of the positioning of analytical isotropic point sources. It is then applied to two-dimensional (2D) and five-dimensional (5D) SWIR HSI IS versions using close- and far-field measured source models within the non-sequential ray-tracing software FRED, including inherent stochastic noise. The proposed method is compared to other heuristic approaches such as simplex and simulated annealing (SA). It is shown that DACEDA converges towards a minimum with a 1% improvement compared to simplex and SA and, more importantly, requires only half the number of simulations. Finally, a concurrent tolerance analysis is done within DACEDA for the five-dimensional case such that further simulations are not required.

  13. Convolutional neural network approach for enhanced capture of breast parenchymal complexity patterns associated with breast cancer risk

    NASA Astrophysics Data System (ADS)

    Oustimov, Andrew; Gastounioti, Aimilia; Hsieh, Meng-Kang; Pantalone, Lauren; Conant, Emily F.; Kontos, Despina

    2017-03-01

    We assess the feasibility of a parenchymal texture feature fusion approach, utilizing a convolutional neural network (ConvNet) architecture, to benefit breast cancer risk assessment. We hypothesize that, by capturing sparse, subtle interactions between localized motifs present in two-dimensional texture feature maps derived from mammographic images, a multitude of texture feature descriptors can be optimally reduced to five meta-features capable of serving as a basis on which a linear classifier, such as logistic regression, can efficiently assess breast cancer risk. We combine this methodology with our previously validated lattice-based strategy for parenchymal texture analysis and evaluate the feasibility of this approach in a case-control study with 424 digital mammograms. In a randomized split-sample setting, we optimize our framework in training/validation sets (N=300) and evaluate its discriminatory performance in an independent test set (N=124). The discriminatory capacity is assessed in terms of the area under the curve (AUC) of the receiver operating characteristic (ROC). The resulting meta-features exhibited strong classification capability in the test dataset (AUC = 0.90), outperforming conventional, non-fused texture analysis, which previously resulted in an AUC of 0.85 on the same case-control dataset. Our results suggest that informative interactions between localized motifs exist and can be extracted and summarized via a fairly simple ConvNet architecture.

  14. Stemflow estimation in a redwood forest using model-based stratified random sampling

    Treesearch

    Jack Lewis

    2003-01-01

    Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...
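
    The design idea (stratify on the auxiliary variable, then apply a ratio estimator) can be sketched with synthetic numbers; the real study used measured tree diameters and stemflow volumes.

```python
# Sketch of model-assisted stratified sampling with a ratio estimator:
# stratum boundaries come from the auxiliary variable (diameter), and the
# known auxiliary total scales the sampled response/auxiliary ratio.

def ratio_estimate_total(sample, aux_total):
    """sample: list of (aux x, response y) pairs; aux_total: known sum of x."""
    sx = sum(x for x, _ in sample)
    sy = sum(y for _, y in sample)
    return aux_total * sy / sx      # ratio estimator: X_total * (sum y / sum x)

# Synthetic population where stemflow is proportional to diameter.
population = [(d, 2.0 * d) for d in range(10, 60, 5)]
aux_total = sum(d for d, _ in population)          # known for every tree
small = [p for p in population if p[0] < 30]       # stratum boundary from aux
large = [p for p in population if p[0] >= 30]
sample = small[:2] + large[:2]                     # 2 trees per stratum
est = ratio_estimate_total(sample, aux_total)
true_total = sum(y for _, y in population)
```

    Because the synthetic response is exactly proportional to the auxiliary variable, the ratio estimator recovers the true total from just four sampled trees; with real data the proportionality is only approximate, which is where the stratification earns its keep.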

  15. Low altitude remote sensing technologies for crop stress monitoring: a case study on spatial and temporal monitoring of irrigated pinto bean

    USDA-ARS?s Scientific Manuscript database

    Site-specific crop management is a promising approach to maximize crop yield with optimal use of rapidly depleting natural resources. Availability of high resolution crop data at critical growth stages is a key for real-time data-driven decisions during the production season. The goal of this study ...

  16. Is That What Bayesians Believe? Reply to Griffiths, Chater, Norris, and Pouget (2012)

    ERIC Educational Resources Information Center

    Bowers, Jeffrey S.; Davis, Colin J.

    2012-01-01

    Griffiths, Chater, Norris, and Pouget (2012) argue that we have misunderstood the Bayesian approach. In their view, it is rarely the case that researchers are making claims that performance in a given task is near optimal, and few, if any, researchers adopt the theoretical Bayesian perspective according to which the mind or brain is actually…

  17. Dose-mass inverse optimization for minimally moving thoracic lesions

    NASA Astrophysics Data System (ADS)

    Mihaylov, I. B.; Moros, E. G.

    2015-05-01

    In the past decade, several different radiotherapy treatment plan evaluation and optimization schemes have been proposed as viable approaches, aiming for dose escalation or increased healthy tissue sparing. In particular, it has been argued that dose-mass plan evaluation and treatment plan optimization might be viable alternatives to the standard of care, which is realized through dose-volume evaluation and optimization. The purpose of this investigation is to apply dose-mass optimization to a cohort of lung cancer patients and compare the achievable healthy tissue sparing to that achievable through dose-volume optimization. Fourteen non-small cell lung cancer (NSCLC) patient plans were studied retrospectively. The range of tumor motion was less than 0.5 cm, and motion management was not considered in the treatment planning process. For each case, dose-volume (DV)-based and dose-mass (DM)-based optimization was performed. Nine-field step-and-shoot IMRT was used, with all optimization parameters kept the same between DV and DM optimizations. Commonly used dosimetric indices (DIs), such as dose to 1% of the spinal cord volume, dose to 50% of the esophageal volume, and doses to 20 and 30% of healthy lung volumes, were used for cross-comparison. Similarly, mass-based indices (MIs), such as doses to 20 and 30% of healthy lung mass, 1% of spinal cord mass, and 33% of heart mass, were also tallied. Statistical equivalence tests were performed to quantify the findings for the entire patient cohort. Both DV and DM plans for each case were normalized such that 95% of the planning target volume received the prescribed dose. DM optimization resulted in more organ-at-risk (OAR) sparing than DV optimization. The average sparing of cord, heart, and esophagus was 23, 4, and 6%, respectively. For the majority of the DIs, DM optimization resulted in lower lung doses.
On average, the doses to 20 and 30% of healthy lung were lower by approximately 3 and 4%, whereas lung volumes receiving 2000 and 3000 cGy were lower by 3 and 2%, respectively. The behavior of MIs was very similar. The statistical analyses of the results again indicated better healthy anatomical structure sparing with DM optimization. The presented findings indicate that dose-mass-based optimization results in statistically significant OAR sparing as compared to dose-volume-based optimization for NSCLC. However, the sparing is case-dependent and it is not observed for all tallied dosimetric endpoints.

  18. LMD Based Features for the Automatic Seizure Detection of EEG Signals Using SVM.

    PubMed

    Zhang, Tao; Chen, Wanzhong

    2017-08-01

    Achieving the goal of detecting seizure activity automatically using electroencephalogram (EEG) signals is of great importance and significance for the treatment of epileptic seizures. To realize this aim, a newly developed time-frequency analytical algorithm, namely local mean decomposition (LMD), is employed in the presented study. LMD is able to decompose an arbitrary signal into a series of product functions (PFs). First, the raw EEG signal is decomposed into several PFs, and then the temporal statistical and non-linear features of the first five PFs are calculated. The features of each PF are fed into five classifiers, including a back propagation neural network (BPNN), K-nearest neighbor (KNN), linear discriminant analysis (LDA), an un-optimized support vector machine (SVM) and an SVM optimized by a genetic algorithm (GA-SVM), for five classification cases, respectively. Confluent features of all PFs and the raw EEG are further passed into the high-performance GA-SVM for the same classification tasks. Experimental results on the international public Bonn epilepsy EEG dataset show that the average classification accuracy of the presented approach is equal to or higher than 98.10% in all five cases, which indicates the effectiveness of the proposed approach for automated seizure detection.

  19. Review of smoothing methods for enhancement of noisy data from heavy-duty LHD mining machines

    NASA Astrophysics Data System (ADS)

    Wodecki, Jacek; Michalak, Anna; Stefaniak, Paweł

    2018-01-01

    Appropriate analysis of data measured on heavy-duty mining machines is essential for process monitoring, management and optimization. Some particular classes of machines, for example LHD (load-haul-dump) machines, hauling trucks and drilling/bolting machines, are characterized by cyclic operation. In those cases, identification of cycles and their segments (in other words, data segmentation) is key to evaluating their performance, which may be very useful from the management point of view, for example by enabling process optimization. However, in many cases the raw signals are contaminated with various artifacts and are in general very noisy, which makes the segmentation task very difficult or even impossible. To deal with this problem, efficient smoothing methods are needed that retain the informative trends in the signals while disregarding noise and other undesired non-deterministic components. In this paper the authors present a review of various approaches to diagnostic data smoothing. The described methods can be applied quickly and efficiently, cleaning the signals while preserving the informative deterministic behaviour that is crucial for precise segmentation and other approaches to industrial data analysis.
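
    As one concrete example of the class of methods surveyed, a moving median suppresses spike artifacts while preserving the step-like cycle structure that segmentation relies on; the signal below is synthetic, not LHD telemetry.

```python
# Sketch: moving-median smoothing of a noisy cyclic signal.  Unlike a moving
# average, the median rejects isolated spikes without blurring step edges.

def moving_median(signal, window=5):
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sorted(signal[lo:hi])[(hi - lo) // 2])
    return out

# Synthetic duty cycle (load -> haul -> dump) with two spike artifacts.
signal = [1.0] * 10 + [5.0] * 10 + [2.0] * 10
signal[4] = 40.0
signal[15] = -30.0
smooth = moving_median(signal)
```

    After smoothing, both spikes are gone while the three plateau levels and their transition points survive, which is exactly the property cycle segmentation needs.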

  20. Optimal decentralized feedback control for a truss structure

    NASA Technical Reports Server (NTRS)

    Cagle, A.; Ozguner, U.

    1989-01-01

    One approach to the decentralized control of large flexible space structures involves the design of controllers for the substructures of large systems and their subsequent application to the entire coupled system. This approach is developed here for the case of active vibration damping on an experimental large truss structure. The isolated boundary loading method is used to define component models by FEM; component controllers are designed using an interlocking control concept which minimizes the motion of the boundary nodes, thereby reducing the exchange of mechanical disturbances among components.

  1. Periodic Application of Stochastic Cost Optimization Methodology to Achieve Remediation Objectives with Minimized Life Cycle Cost

    NASA Astrophysics Data System (ADS)

    Kim, U.; Parker, J.

    2016-12-01

    Many dense non-aqueous phase liquid (DNAPL) contaminated sites in the U.S. are reported as "remediation in progress" (RIP). However, the cost to complete (CTC) remediation at these sites is highly uncertain, and in many cases the current remediation plan may need to be modified or replaced to achieve remediation objectives. This study evaluates the effectiveness of iterative stochastic cost optimization that incorporates new field data for periodic parameter recalibration, to incrementally reduce prediction uncertainty and implement remediation design modifications as needed to minimize the life cycle cost (i.e., CTC). This systematic approach, using the Stochastic Cost Optimization Toolkit (SCOToolkit), enables early identification and correction of problems to stay on track for completion while minimizing the expected (i.e., probability-weighted average) CTC. The study considers a hypothetical site involving multiple DNAPL sources in an unconfined aquifer, using thermal treatment for source reduction and electron donor injection for dissolved plume control. The initial design is based on stochastic optimization using model parameters and their joint uncertainty obtained by calibration to site characterization data. The model is periodically recalibrated using new monitoring data and performance data for the operating remediation systems. Projected future performance under the current remediation plan is assessed and, depending on the results, operational variables of the current system are re-optimized or alternative designs are considered. We compare remediation duration and cost for the stepwise re-optimization approach with single-stage optimization as well as with a non-optimized design based on typical engineering practice.

  2. Optimization of LC-Orbitrap-HRMS acquisition and MZmine 2 data processing for nontarget screening of environmental samples using design of experiments.

    PubMed

    Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias

    2016-11-01

Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim to fill this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS2 experiments is suggested for nontarget screening. MZmine 2 detected 75-100% of the peaks compared to manual peak detection at an intensity level of 10⁵ in a validation dataset on both spiked and real water samples under optimal parameter settings. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable to environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.
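    The DoE idea of screening parameter combinations against a detection response can be sketched as a toy full-factorial search. The parameter levels and the response function below are made up for illustration; in the study the response was the number of spiked target peaks MZmine 2 actually detected.

    ```python
    from itertools import product

    cycle_times = [0.5, 1.0, 2.0]     # s, hypothetical MS cycle-time levels
    noise_levels = [1e3, 1e4, 1e5]    # hypothetical intensity-threshold levels

    def detected_peaks(ct, noise):
        """Made-up response surface: shorter cycle time helps, and a moderate
        noise threshold beats one that is too low or too high."""
        ct_factor = 1.0 / (1.0 + ct)
        noise_factor = {1e3: 0.8, 1e4: 1.0, 1e5: 0.6}[noise]
        return 78 * ct_factor * noise_factor

    # Evaluate every factor combination and keep the best one.
    best = max(product(cycle_times, noise_levels),
               key=lambda p: detected_peaks(*p))
    ```

    A real DoE workflow would use a fractional design and fit a response model rather than exhaustively evaluating the full grid, but the selection principle is the same.
    
    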

  3. Generalization of the optimized-effective-potential model to include electron correlation: A variational derivation of the Sham-Schlueter equation for the exact exchange-correlation potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casida, M.E.

    1995-03-01

The now classic optimized-effective-potential (OEP) approach of Sharp and Horton [Phys. Rev. 90, 317 (1953)] and Talman and Shadwick [Phys. Rev. A 14, 36 (1976)] seeks the local potential that is variationally optimized to best approximate the Hartree-Fock exchange operator. The resulting OEP can be identified as the exchange potential of Kohn-Sham density-functional theory. The present work generalizes this OEP approach to treat the correlated case, and shows that the Kohn-Sham exchange-correlation potential is the variationally best local approximation to the exchange-correlation self-energy. This provides a variational derivation of the equation for the exact exchange-correlation potential that was derived by Sham and Schlueter using a density condition. Implications for an approximate physical interpretation of the Kohn-Sham orbitals are discussed. A correlated generalization of the Sharp-Horton/Krieger-Li-Iafrate [Phys. Lett. A 146, 256 (1990)] approximation of the exchange potential is introduced in the quasiparticle limit.

  4. Ridge filter design and optimization for the broad-beam three-dimensional irradiation system for heavy-ion radiotherapy.

    PubMed

    Schaffner, B; Kanai, T; Futami, Y; Shimbo, M; Urakabe, E

    2000-04-01

The broad-beam three-dimensional irradiation system under development at the National Institute of Radiological Sciences (NIRS) requires a small ridge filter to spread the initially monoenergetic heavy-ion beam into a small spread-out Bragg peak (SOBP). A large SOBP covering the target volume is then achieved by a superposition of differently weighted and displaced small SOBPs. Two approaches were studied for the definition of a suitable ridge filter, and experimental verifications were performed. Both approaches show good agreement between the calculated and measured dose and lead to good homogeneity of the biological dose in the target. However, the ridge filter design that produces a Gaussian-shaped spectrum of the particle ranges was found to be more robust to small errors and uncertainties in the beam application. Furthermore, an optimization procedure for two fields was applied to compensate for the missing dose from the fragmentation tail for the case of a simple-geometry target. The optimized biological dose distributions show that very good homogeneity is achievable in the target.

  5. Fairness in optimizing bus-crew scheduling process.

    PubMed

    Ma, Jihui; Song, Cuiying; Ceder, Avishai Avi; Liu, Tao; Guan, Wei

    2017-01-01

This work proposes a model that considers fairness in the crew scheduling problem for bus drivers (CSP-BD), together with a hybrid ant-colony optimization (HACO) algorithm to solve it. The main contributions of this work are: (a) a valid approach for cases with a special cost structure and constraints considering the fairness of working time and idle time; (b) an improved algorithm incorporating a Gamma heuristic function and selection rules. The relationships among the costs are examined with ten bus lines collected from the Beijing Public Transport Holdings (Group) Co., Ltd., one of the largest bus transit companies in the world. The results show that the unfairness cost is indirectly related to the common, fixed, and extra costs, and that it approaches the common and fixed costs when its coefficient is twice the common-cost coefficient. Furthermore, the longest computation time for the tested bus line, with 1108 pieces and 74 blocks, is less than 30 minutes. The results indicate that the HACO-based algorithm can be a feasible and efficient optimization technique for the CSP-BD, especially for large-scale problems.
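    The selection step at the heart of any ant-colony scheduler can be sketched with the standard ACO transition rule: an ant picks the next duty piece with probability proportional to pheromone^alpha * heuristic^beta. This is the generic rule, not the paper's exact HACO variant, and the trail and desirability values are invented.

    ```python
    import random

    random.seed(1)

    def select(pheromone, heuristic, alpha=1.0, beta=2.0):
        """Roulette-wheel selection over candidate duty pieces using the
        classic ACO rule: weight_i = tau_i**alpha * eta_i**beta."""
        weights = [(t ** alpha) * (h ** beta) for t, h in zip(pheromone, heuristic)]
        r = random.uniform(0, sum(weights))
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                return i
        return len(weights) - 1

    tau = [1.0, 1.0, 1.0]   # pheromone trails (uniform at the start)
    eta = [0.2, 0.9, 0.4]   # heuristic desirability, e.g. low idle/unfairness cost
    counts = [0, 0, 0]
    for _ in range(10000):
        counts[select(tau, eta)] += 1
    ```

    With uniform pheromone, the pieces with higher heuristic desirability are chosen far more often; as trails update over iterations, the pheromone term takes over.
    
    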

  6. Collectives for Multiple Resource Job Scheduling Across Heterogeneous Servers

    NASA Technical Reports Server (NTRS)

    Tumer, K.; Lawson, J.

    2003-01-01

Efficient management of large-scale, distributed data storage and processing systems is a major challenge for many computational applications. Many of these systems are characterized by multi-resource tasks processed across a heterogeneous network. Conventional approaches, such as load balancing, work well for centralized, single-resource problems, but break down in the more general case. In addition, most approaches are based on heuristics which do not directly attempt to optimize the world utility. In this paper, we propose an agent-based control system using the theory of collectives. We configure the servers of our network with agents who make local job scheduling decisions. These decisions are based on local goals which are constructed to be aligned with the objective of optimizing the overall efficiency of the system. We demonstrate that multi-agent systems in which all the agents attempt to optimize the same global utility function (team game) only marginally outperform conventional load balancing. On the other hand, agents configured using collectives outperform both team games and load balancing (by up to four times for the latter), despite their distributed nature and their limited access to information.

  7. Evaluation of load flow and grid expansion in a unit-commitment and expansion optimization model SciGRID International Conference on Power Grid Modelling

    NASA Astrophysics Data System (ADS)

    Senkpiel, Charlotte; Biener, Wolfgang; Shammugam, Shivenes; Längle, Sven

    2018-02-01

Energy system models serve as a basis for long-term system planning. Joint optimization of electricity generating technologies, storage systems and the electricity grid leads to lower total system cost compared to an approach in which the grid expansion follows a given technology portfolio and its distribution. Modelers often face the problem of finding a good tradeoff between computational time and the level of detail that can be modeled. This paper analyses the differences between a transport model and a DC load flow model to evaluate the validity of using a simple but faster transport model within the system optimization model in terms of system reliability. The main findings are that a higher regional resolution of a system leads to better results compared to an approach in which regions are clustered, as more overloads can be detected. An aggregation of lines between two model regions, compared to a line-sharp representation, has little influence on grid expansion within a system optimizer. In a DC load flow model, overloads can be detected in the line-sharp case, which is therefore preferred. Overall, the regions that need to reinforce the grid are identified within the system optimizer. Finally, the paper recommends using a load-flow model to test the validity of the model results.
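    The DC load flow model referred to above linearizes the power flow equations: injections satisfy P = B'·theta with the slack-bus angle fixed at zero, and each line flow is b_ij·(theta_i - theta_j). A minimal 3-bus example (susceptances and injections chosen arbitrarily for the sketch, solved by hand rather than with a linear-algebra library):

    ```python
    # Line susceptances (p.u.) for lines 0-1, 1-2, 0-2 of a 3-bus network.
    b01, b12, b02 = 10.0, 10.0, 10.0
    # Net injections at buses 1 and 2; bus 0 is the slack bus (theta_0 = 0).
    P1, P2 = 0.5, -1.0

    # Reduced nodal susceptance matrix B' for buses 1 and 2, solved via
    # Cramer's rule for the 2x2 system B' * theta = P.
    B11, B12 = b01 + b12, -b12
    B21, B22 = -b12, b12 + b02
    det = B11 * B22 - B12 * B21
    th1 = (P1 * B22 - B12 * P2) / det
    th2 = (B11 * P2 - P1 * B21) / det

    # Line flows follow from the angle differences; overloads are flagged by
    # comparing each flow against the line's thermal limit.
    flow_01 = b01 * (0.0 - th1)
    flow_12 = b12 * (th1 - th2)
    flow_02 = b02 * (0.0 - th2)
    ```

    A transport model, by contrast, treats flows as free decision variables limited only by line capacities, which is why it can miss overloads that the angle-constrained DC model detects.
    
    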

  8. Improving the Flexibility of Optimization-Based Decision Aiding Frameworks for Integrated Water Resource Management

    NASA Astrophysics Data System (ADS)

    Guillaume, J. H.; Kasprzyk, J. R.

    2013-12-01

Deep uncertainty refers to situations in which stakeholders cannot agree on the full suite of risks to their system or their probabilities. Additionally, systems are often managed for multiple, conflicting objectives such as minimizing cost, maximizing environmental quality, and maximizing hydropower revenues. Many-objective analysis (MOA) uses a quantitative model combined with evolutionary optimization to provide a tradeoff set of potential solutions to a planning problem. However, MOA is often performed using a single, fixed problem conceptualization. Focus on the development of a single formulation can introduce an "inertia" into the problem solution, such that issues outside the initial formulation are less likely to ever be addressed. This study uses the Iterative Closed Question Methodology (ICQM) to continuously reframe the optimization problem, providing iterative definition and reflection for stakeholders. By using a series of directed questions to look beyond a problem's existing modeling representation, ICQM seeks to provide a working environment within which it is easy to modify the motivating question, assumptions, and model identification in optimization problems. The new approach helps identify and reduce bottlenecks, introduced by properties of both the simulation model and the optimization approach, that reduce flexibility in the generation and evaluation of alternatives. It can therefore help introduce new perspectives on the resolution of conflicts between objectives. The Lower Rio Grande Valley portfolio planning problem is used as a case study.

  9. Policy Tree Optimization for Adaptive Management of Water Resources Systems

    NASA Astrophysics Data System (ADS)

    Herman, J. D.; Giuliani, M.

    2016-12-01

    Water resources systems must cope with irreducible uncertainty in supply and demand, requiring policy alternatives capable of adapting to a range of possible future scenarios. Recent studies have developed adaptive policies based on "signposts" or "tipping points", which are threshold values of indicator variables that signal a change in policy. However, there remains a need for a general method to optimize the choice of indicators and their threshold values in a way that is easily interpretable for decision makers. Here we propose a conceptual framework and computational algorithm to design adaptive policies as a tree structure (i.e., a hierarchical set of logical rules) using a simulation-optimization approach based on genetic programming. We demonstrate the approach using Folsom Reservoir, California as a case study, in which operating policies must balance the risk of both floods and droughts. Given a set of feature variables, such as reservoir level, inflow observations and forecasts, and time of year, the resulting policy defines the conditions under which flood control and water supply hedging operations should be triggered. Importantly, the tree-based rule sets are easy to interpret for decision making, and can be compared to historical operating policies to understand the adaptations needed under possible climate change scenarios. Several remaining challenges are discussed, including the empirical convergence properties of the method, and extensions to irreversible decisions such as infrastructure. Policy tree optimization, and corresponding open-source software, provide a generalizable, interpretable approach to designing adaptive policies under uncertainty for water resources systems.
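    A policy tree of the kind described above is simply a hierarchy of threshold rules on indicator variables. A hand-written example (the indicators, thresholds, and action names below are invented for illustration; the genetic-programming search learns both the indicators and the cut points):

    ```python
    def policy(storage_frac, inflow_forecast, day_of_year):
        """Map system state to an operating action via nested threshold rules,
        mimicking the tree structures evolved by genetic programming."""
        if storage_frac > 0.85:
            if inflow_forecast > 1.2:          # high storage and a wet forecast
                return "flood_control_release"
            return "normal_release"
        if storage_frac < 0.40:                # low storage: drought hedging
            return "hedge_supply"
        if 150 <= day_of_year <= 270:          # mid-range storage, dry season
            return "normal_release"
        return "conservative_release"
    ```

    Because the policy is just readable if/else rules over named indicators, decision makers can inspect it directly and compare it against historical operating rules, which is the interpretability advantage the abstract emphasizes.
    
    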

  10. On-the-fly data assessment for high-throughput x-ray diffraction measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Fang; Pandolfi, Ronald; Van Campen, Douglas

Investment in brighter sources and larger and faster detectors has accelerated the speed of data acquisition at national user facilities. The accelerated data acquisition offers many opportunities for the discovery of new materials, but it also presents a daunting challenge. The rate of data acquisition far exceeds the current speed of data quality assessment, resulting in less than optimal data and data coverage, which in extreme cases forces recollection of data. Herein, we show how this challenge can be addressed through the development of an approach that makes routine data assessment automatic and instantaneous. By extracting and visualizing customized attributes in real time, data quality and coverage, as well as other scientifically relevant information contained in large data sets, are highlighted. Deployment of such an approach not only improves the quality of data but also helps optimize the usage of expensive characterization resources by prioritizing measurements of the highest scientific impact. We anticipate our approach will become a starting point for a sophisticated decision-tree that optimizes data quality and maximizes scientific content in real time through automation. Finally, with these efforts to integrate more automation in data collection and analysis, we can truly take advantage of the accelerating speed of data acquisition.

  11. Superlattice design for optimal thermoelectric generator performance

    NASA Astrophysics Data System (ADS)

    Priyadarshi, Pankaj; Sharma, Abhishek; Mukherjee, Swarnadip; Muralidharan, Bhaskaran

    2018-05-01

We consider the design of an optimal superlattice thermoelectric generator via the energy bandpass filter approach. Various configurations of superlattice structures are explored to obtain a bandpass transmission spectrum that approaches the ideal ‘boxcar’ form, which is now well known to manifest the largest efficiency at a given output power in the ballistic limit. Using the coherent non-equilibrium Green’s function formalism coupled self-consistently with Poisson’s equation, we identify such an ideal structure and also demonstrate that it is almost immune to the deleterious effects of self-consistent charging and device variability. Analyzing various superlattice designs, we conclude that a superlattice with a Gaussian distribution of the barrier thickness offers the best thermoelectric efficiency at maximum power. It is observed that the best operating regime of this device design provides a maximum power in the range of 0.32–0.46 MW/m² at efficiencies between 43% and 54% of the Carnot efficiency. We also analyze our device designs with the conventional figure-of-merit approach to corroborate the results obtained. We note a high zT_el = 6 value in the case of a Gaussian distribution of the barrier thickness. With the existing advanced thin-film growth technology, the suggested superlattice structures can be achieved, and such optimized thermoelectric performances can be realized.

  12. On-the-fly data assessment for high-throughput x-ray diffraction measurements

    DOE PAGES

    Ren, Fang; Pandolfi, Ronald; Van Campen, Douglas; ...

    2017-05-02

Investment in brighter sources and larger and faster detectors has accelerated the speed of data acquisition at national user facilities. The accelerated data acquisition offers many opportunities for the discovery of new materials, but it also presents a daunting challenge. The rate of data acquisition far exceeds the current speed of data quality assessment, resulting in less than optimal data and data coverage, which in extreme cases forces recollection of data. Herein, we show how this challenge can be addressed through the development of an approach that makes routine data assessment automatic and instantaneous. By extracting and visualizing customized attributes in real time, data quality and coverage, as well as other scientifically relevant information contained in large data sets, are highlighted. Deployment of such an approach not only improves the quality of data but also helps optimize the usage of expensive characterization resources by prioritizing measurements of the highest scientific impact. We anticipate our approach will become a starting point for a sophisticated decision-tree that optimizes data quality and maximizes scientific content in real time through automation. Finally, with these efforts to integrate more automation in data collection and analysis, we can truly take advantage of the accelerating speed of data acquisition.

  13. Illumination system development using design and analysis of computer experiments

    NASA Astrophysics Data System (ADS)

    Keresztes, Janos C.; De Ketelaere, Bart; Audenaert, Jan; Koshel, R. J.; Saeys, Wouter

    2015-09-01

Computer-assisted optimal illumination design is crucial when developing cost-effective machine vision systems. Standard local optimization methods, such as downhill simplex optimization (DHSO), often converge to a local minimum influenced by the starting point, especially when dealing with high-dimensional illumination designs or nonlinear merit spaces. This work presents a novel nonlinear optimization approach based on design and analysis of computer experiments (DACE). The methodology is first illustrated with a 2D case study of four light sources symmetrically positioned along a fixed arc in order to obtain optimal irradiance uniformity on a flat Lambertian reflecting target at the arc center. The first step consists of choosing angular positions with no overlap between sources using a fast, flexible space-filling design. Ray-tracing simulations are then performed at the design points and a merit function is used for each configuration to quantify the homogeneity of the irradiance at the target. The homogeneities obtained at the design points are then used as input to a Gaussian process (GP), which develops a preliminary distribution for the expected merit space. Global optimization is then performed on the GP, yielding parameters that are more likely to be optimal. Next, the light positioning case study is further investigated by varying the radius of the arc, and by adding two spots symmetrically positioned along an arc diametrically opposed to the first one. In terms of convergence, the DACE approach was 6 times faster than the standard simplex method at an equal uniformity of 97%. The obtained results were successfully validated experimentally, with 10% relative error, using a short-wavelength infrared (SWIR) hyperspectral imager monitoring a Spectralon panel illuminated by tungsten halogen sources.
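    The DACE loop (sample a space-filling design, fit a cheap surrogate to the expensive simulations, then optimize the surrogate) can be sketched in drastically simplified form. Here inverse-distance interpolation stands in for the Gaussian process, and the merit function is a made-up stand-in for the ray-traced irradiance non-uniformity:

    ```python
    def merit(angle):
        """Hypothetical non-uniformity of irradiance vs. source angle (deg);
        in DACE this would be an expensive ray-tracing simulation."""
        return (angle - 37.0) ** 2 / 100.0 + 1.0

    design = [0.0, 20.0, 40.0, 60.0, 80.0]       # space-filling design points
    observed = [(x, merit(x)) for x in design]   # run the "simulations" once

    def surrogate(angle):
        """Cheap interpolator over the observed points (IDW, standing in for
        the Gaussian-process model used in DACE)."""
        num = den = 0.0
        for x, y in observed:
            if abs(angle - x) < 1e-9:
                return y
            w = 1.0 / (angle - x) ** 2
            num += w * y
            den += w
        return num / den

    # Optimize the surrogate on a dense grid instead of the expensive model.
    grid = [i * 0.5 for i in range(161)]          # 0..80 degrees
    best = min(grid, key=surrogate)
    ```

    A real DACE iteration would then add the surrogate's most promising point to the design, rerun the simulation there, and refit, homing in on the true optimum with far fewer expensive evaluations than a simplex search.
    
    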

  14. A reflective qualitative appreciative inquiry approach to restoring compassionate care deficits at one United Kingdom health care site.

    PubMed

    McSherry, Robert; Timmins, Fiona; de Vries, Jan M A; McSherry, Wilfred

    2018-06-22

Following declining health care practices at one UK health care site, the subsequent and much-publicized Francis Report made several far-reaching recommendations aimed at recovering optimal levels of care, including stringent monitoring of practice. The aftermath of these deliberations has had resounding consequences for quality care both nationally and internationally. A reflective qualitative appreciative inquiry, using a hybrid approach combining case study and thematic analysis, outlines the development and analysis of a solution-focused intervention aimed at restoring staff confidence and optimal care levels at one key UK hospital site. Personal diaries were used to collect data. Data were analysed using descriptive thematic analysis. The implications of the five emerging themes and the 10-step approach used are discussed in the context of understanding care erosion and ways to effect organisational change. A novel bottom-up approach to addressing care deficits, initiated by health care policy makers, is suggested for use in other health care settings when concerns about care arise. It is anticipated this approach will prove useful for nurse managers, particularly in relation to finding positive solutions to addressing problems that surround potentially failing standards of care in hospitals. © 2018 John Wiley & Sons Ltd.

  15. Pathway-based predictive approaches for non-animal assessment of acute inhalation toxicity.

    PubMed

    Clippinger, Amy J; Allen, David; Behrsing, Holger; BéruBé, Kelly A; Bolger, Michael B; Casey, Warren; DeLorme, Michael; Gaça, Marianna; Gehen, Sean C; Glover, Kyle; Hayden, Patrick; Hinderliter, Paul; Hotchkiss, Jon A; Iskandar, Anita; Keyser, Brian; Luettich, Karsta; Ma-Hock, Lan; Maione, Anna G; Makena, Patrudu; Melbourne, Jodie; Milchak, Lawrence; Ng, Sheung P; Paini, Alicia; Page, Kathryn; Patlewicz, Grace; Prieto, Pilar; Raabe, Hans; Reinke, Emily N; Roper, Clive; Rose, Jane; Sharma, Monita; Spoo, Wayne; Thorne, Peter S; Wilson, Daniel M; Jarabek, Annie M

    2018-06-20

New approaches are needed to assess the effects of inhaled substances on human health. These approaches will be based on mechanisms of toxicity, an understanding of dosimetry, and the use of in silico modeling and in vitro test methods. In order to accelerate wider implementation of such approaches, development of adverse outcome pathways (AOPs) can help identify and address gaps in our understanding of relevant parameters for model input and mechanisms, and optimize non-animal approaches that can be used to investigate key events of toxicity. This paper describes the AOPs and the toolbox of in vitro and in silico models that can be used to assess the key events leading to toxicity following inhalation exposure. Because the optimal testing strategy will vary depending on the substance of interest, here we present a decision tree approach to identify an appropriate non-animal integrated testing strategy that incorporates consideration of a substance's physicochemical properties, relevant mechanisms of toxicity, and available in silico models and in vitro test methods. This decision tree can facilitate standardization of the testing approaches. Case study examples are presented to provide a basis for proof-of-concept testing to illustrate the utility of non-animal approaches to inform hazard identification and risk assessment of humans exposed to inhaled substances. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.

  16. Therapists' perspectives on optimal treatment for pathological narcissism.

    PubMed

    Kealy, David; Goodman, Geoff; Rasmussen, Brian; Weideman, Rene; Ogrodniczuk, John S

    2017-01-01

    This study used Q methodology to explore clinicians' perspectives regarding optimal psychotherapy process in the treatment of pathological narcissism, a syndrome of impaired self-regulation. Participants were 34 psychotherapists of various disciplines and theoretical orientations who reviewed 3 clinical vignettes portraying hypothetical cases of grandiose narcissism, vulnerable narcissism, and panic disorder without pathological narcissism. Participants then used the Psychotherapy Process Q set, a 100-item Q-sort instrument, to indicate their views regarding optimal therapy process for each hypothetical case. By-person principal components analysis with varimax rotation was conducted on all 102 Q-sorts, revealing 4 components representing clinicians' perspectives on ideal therapy processes for narcissistic and non-narcissistic patients. These perspectives were then analyzed regarding their relationship to established therapy models. The first component represented an introspective, relationally oriented therapy process and was strongly correlated with established psychodynamic treatments. The second component, most frequently endorsed for the panic disorder vignette, consisted of a cognitive and alliance-building approach that correlated strongly with expert-rated cognitive-behavioral therapy. The third and fourth components involved therapy processes focused on the challenging interpersonal behaviors associated with narcissistic vulnerability and grandiosity, respectively. The perspectives on therapy processes that emerged in this study reflect different points of emphasis in the treatment of pathological narcissism, and may serve as prototypes of therapist-generated approaches to patients suffering from this issue. The findings suggest several areas for further empirical inquiry regarding psychotherapy with this population. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Optimal analytic method for the nonlinear Hasegawa-Mima equation

    NASA Astrophysics Data System (ADS)

    Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle

    2014-05-01

The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10⁻¹⁵ are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.
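    The two quantities being tuned here enter through the standard zeroth-order deformation equation of the homotopy analysis method, which in Liao's usual notation (with convergence-control parameter $c_0$ and auxiliary linear operator $\mathcal{L}$) reads

    ```latex
    (1-q)\,\mathcal{L}\bigl[\phi(x,t;q) - u_0(x,t)\bigr]
      = q\,c_0\,\mathcal{N}\bigl[\phi(x,t;q)\bigr],
    \qquad q \in [0,1],
    ```

    where $q$ is the embedding parameter deforming the initial guess $u_0$ (recovered at $q=0$) into a solution of the nonlinear problem $\mathcal{N}[u]=0$ (at $q=1$). The paper's two-stage optimization selects both $c_0$ and a one-parameter family of operators $\mathcal{L}$ so as to minimize the residual of the truncated series.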

  18. On meeting capital requirements with a chance-constrained optimization model.

    PubMed

    Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan

    2016-01-01

This paper deals with a capital-to-risk-asset-ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital-to-risk-asset-ratio chance constraint. We analyze our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures, in order to derive valuable insights from a financial outlook. Our results suggest that our capital-to-risk-asset-ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95%, irrespective of changes in the future market value of assets.
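    The deterministic-counterpart idea can be illustrated with the textbook case of a normally distributed uncertain quantity: a chance constraint P(ratio ≥ k) ≥ 1 − ε reduces to the deterministic condition μ − z₁₋ε·σ ≥ k, where z₁₋ε is the normal quantile. This generic sketch is not the paper's CreditMetrics-based formulation; the numbers in the usage test are arbitrary.

    ```python
    from statistics import NormalDist

    def meets_requirement(mu, sigma, k, eps=0.05):
        """Deterministic counterpart of P(ratio >= k) >= 1 - eps for a
        normally distributed capital-to-risk-asset ratio with mean mu and
        standard deviation sigma."""
        z = NormalDist().inv_cdf(1 - eps)   # e.g. ~1.645 for eps = 0.05
        return mu - z * sigma >= k
    ```

    The convexity of such counterparts is what makes the chance-constrained program tractable with standard optimization solvers.
    
    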

  19. Optimal mix of renewable power generation in the MENA region as a basis for an efficient electricity supply to europe

    NASA Astrophysics Data System (ADS)

    Alhamwi, Alaa; Kleinhans, David; Weitemeyer, Stefan; Vogt, Thomas

    2014-12-01

Renewable energy sources are gaining importance in the Middle East and North Africa (MENA) region. The purpose of this study is to quantify the optimal mix of renewable power generation in the MENA region, taking Morocco as a case study. Based on hourly meteorological data and load data, a 100% solar-plus-wind-only scenario for Morocco is investigated. For the optimal-mix analyses, a mismatch energy modelling approach is adopted with the objective of minimising the required storage capacities. For a hypothetical Moroccan energy supply system entirely based on renewable energy sources, our results show that the minimum storage capacity is achieved at a share of 63% solar and 37% wind power generation.
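    The mismatch-energy approach can be sketched as a grid search over the solar share, where the storage requirement for each share is taken as the spread of the cumulative generation-minus-load mismatch. The hourly profiles below are crude synthetic stand-ins for the study's meteorological and load data:

    ```python
    import math

    HOURS = 24 * 14  # two synthetic weeks of hourly data
    load  = [1.0 + 0.2 * math.sin(2 * math.pi * h / 24) for h in range(HOURS)]
    solar = [max(0.0, math.sin(2 * math.pi * (h % 24 - 6) / 24)) for h in range(HOURS)]
    wind  = [0.6 + 0.4 * math.sin(2 * math.pi * h / 100) for h in range(HOURS)]

    def normalize(gen):
        """Scale a generation profile so its total energy matches total load."""
        scale = sum(load) / sum(gen)
        return [g * scale for g in gen]

    solar, wind = normalize(solar), normalize(wind)

    def storage_need(share_solar):
        """Required storage = range of the cumulative mismatch time series."""
        level, cumulative = 0.0, []
        for h in range(HOURS):
            mismatch = share_solar * solar[h] + (1 - share_solar) * wind[h] - load[h]
            level += mismatch
            cumulative.append(level)
        return max(cumulative) - min(cumulative)

    shares = [i / 100 for i in range(101)]
    best_share = min(shares, key=storage_need)
    ```

    With real Moroccan data this kind of sweep is what identifies the storage-minimizing mix (63% solar / 37% wind in the study); the synthetic profiles here will of course yield a different share.
    
    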

  20. Assessment, design and control strategy development of a fuel cell hybrid electric vehicle for CSU's EcoCAR

    NASA Astrophysics Data System (ADS)

    Fox, Matthew D.

Advanced automotive technology assessment and powertrain design are increasingly performed through modeling, simulation, and optimization. But technology assessments usually target many competing criteria, making any individual optimization challenging and arbitrary. Further, independent design simulations and optimizations take considerable time to execute, and design constraints and objectives change throughout the design process. Changes in design considerations usually require re-processing of simulations and more time. In this thesis, these challenges are confronted through CSU's participation in the EcoCAR2 hybrid vehicle design competition. The complexity of the competition's design objectives leveraged development of a decision support system tool to aid in multi-criteria decision making across technologies and to perform powertrain optimization. To make the decision support system interactive, and bypass the problem of long simulation times, a new approach was taken. The result of this research is CSU's architecture selection and component sizing, which optimizes a composite objective function representing the competition score. The selected architecture is an electric vehicle with an onboard range-extending hydrogen fuel cell system. The vehicle has a 145 kW traction motor, 18.9 kWh of lithium-ion battery, a 15 kW fuel cell system, and 5 kg of hydrogen storage capacity. Finally, a control strategy was developed that improves the vehicle's performance throughout the driving range under variable driving conditions. In conclusion, the design process used in this research is reviewed and evaluated against other common design methodologies. I conclude, through the highlighted case studies, that the approach is more comprehensive than other popular design methodologies and is likely to lead to a higher quality product. The upfront modeling work and decision support system formulation will pay off in superior and timely knowledge transfer and more informed design decisions. The hypothesis is supported by the three case studies examined in this thesis.

  1. A divide-and-conquer approach to determine the Pareto frontier for optimization of protein engineering experiments.

    PubMed

    He, Lu; Friedman, Alan M; Bailey-Kellogg, Chris

    2012-03-01

    In developing improved protein variants by site-directed mutagenesis or recombination, there are often competing objectives that must be considered in designing an experiment (selecting mutations or breakpoints): stability versus novelty, affinity versus specificity, activity versus immunogenicity, and so forth. Pareto optimal experimental designs make the best trade-offs between competing objectives. Such designs are not "dominated"; that is, no other design is better than a Pareto optimal design for one objective without being worse for another objective. Our goal is to produce all the Pareto optimal designs (the Pareto frontier), to characterize the trade-offs and suggest designs most worth considering, but to avoid explicitly considering the large number of dominated designs. To do so, we develop a divide-and-conquer algorithm, Protein Engineering Pareto FRontier (PEPFR), that hierarchically subdivides the objective space, using appropriate dynamic programming or integer programming methods to optimize designs in different regions. This divide-and-conquer approach is efficient in that the number of divisions (and thus calls to the optimizer) is directly proportional to the number of Pareto optimal designs. We demonstrate PEPFR with three protein engineering case studies: site-directed recombination for stability and diversity via dynamic programming, site-directed mutagenesis of interacting proteins for affinity and specificity via integer programming, and site-directed mutagenesis of a therapeutic protein for activity and immunogenicity via integer programming. We show that PEPFR is able to effectively produce all the Pareto optimal designs, discovering many more designs than previous methods. The characterization of the Pareto frontier provides additional insights into the local stability of design choices as well as global trends leading to trade-offs between competing criteria. Copyright © 2011 Wiley Periodicals, Inc.
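    The Pareto-dominance notion the abstract defines can be made concrete with a brute-force frontier filter. This is the naive baseline that PEPFR's divide-and-conquer avoids (the naive method examines every design, dominated or not); the objective pairs below are hypothetical:

    ```python
    def dominates(a, b):
        """a dominates b when a is no worse in every objective and strictly
        better in at least one. Objectives here are losses to be minimized."""
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))

    def pareto_frontier(designs):
        """Keep exactly the non-dominated designs."""
        return [d for d in designs
                if not any(dominates(other, d) for other in designs)]

    # (stability_loss, novelty_loss) for some hypothetical protein variants
    designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
    front = pareto_frontier(designs)
    ```

    The filter is O(n²) in the number of candidate designs; PEPFR instead subdivides the objective space and calls an exact optimizer per region, so its work scales with the (typically far smaller) number of Pareto optimal designs.
    
    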

  2. Dynamic optimization of metabolic networks coupled with gene expression.

    PubMed

    Waldherr, Steffen; Oyarzún, Diego A; Bockmayr, Alexander

    2015-01-21

    The regulation of metabolic activity by tuning enzyme expression levels is crucial to sustain cellular growth in changing environments. Metabolic networks are often studied at steady state using constraint-based models and optimization techniques. However, metabolic adaptations driven by changes in gene expression cannot be analyzed by steady state models, as these do not account for temporal changes in biomass composition. Here we present a dynamic optimization framework that integrates the metabolic network with the dynamics of biomass production and composition. An approximation by a timescale separation leads to a coupled model of quasi-steady state constraints on the metabolic reactions, and differential equations for the substrate concentrations and biomass composition. We propose a dynamic optimization approach to determine reaction fluxes for this model, explicitly taking into account enzyme production costs and enzymatic capacity. In contrast to the established dynamic flux balance analysis, our approach allows predicting dynamic changes in both the metabolic fluxes and the biomass composition during metabolic adaptations. Discretization of the optimization problems leads to a linear program that can be efficiently solved. We applied our algorithm in two case studies: a minimal nutrient uptake network, and an abstraction of core metabolic processes in bacteria. In the minimal model, we show that the optimized uptake rates reproduce the empirical Monod growth for bacterial cultures. For the network of core metabolic processes, the dynamic optimization algorithm predicted commonly observed metabolic adaptations, such as a diauxic switch with a preference ranking for different nutrients, re-utilization of waste products after depletion of the original substrate, and metabolic adaptation to an impending nutrient depletion. These examples illustrate how dynamic adaptations of enzyme expression can be predicted solely from an optimization principle. 
Copyright © 2014 Elsevier Ltd. All rights reserved.
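    The abstract notes that the optimized uptake rates reproduce empirical Monod growth. A minimal forward-Euler simulation of Monod-type uptake (illustrative function and parameter names, not the authors' model) shows the qualitative behavior of substrate depletion and biomass accumulation:

```python
def simulate_monod(s0, x0, vmax, km, yield_coeff, dt, steps):
    """Forward-Euler integration of Monod-type growth:
    uptake v = vmax * s / (km + s); ds/dt = -v * x; dx/dt = yield * v * x.
    Returns the trajectory as (time, substrate, biomass) tuples."""
    s, x = s0, x0
    traj = [(0.0, s, x)]
    for k in range(1, steps + 1):
        v = vmax * s / (km + s)          # Monod uptake kinetics
        s = max(s - v * x * dt, 0.0)     # substrate cannot go negative
        x = x + yield_coeff * v * x * dt
        traj.append((k * dt, s, x))
    return traj
```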

  3. Comparative analysis of Pareto surfaces in multi-criteria IMRT planning

    NASA Astrophysics Data System (ADS)

    Teichert, K.; Süss, P.; Serna, J. I.; Monz, M.; Küfer, K. H.; Thieke, C.

    2011-06-01

    In the multi-criteria optimization approach to IMRT planning, a given dose distribution is evaluated by a number of convex objective functions that measure tumor coverage and sparing of the different organs at risk. Within this context, optimizing the intensity profiles for any fixed set of beams yields a convex Pareto set in the objective space. However, if the number of beam directions and irradiation angles are included as free parameters in the formulation of the optimization problem, the resulting Pareto set becomes more intricate. In this work, a method is presented that allows for the comparison of two convex Pareto sets emerging from two distinct beam configuration choices. For the two competing beam settings, the non-dominated and the dominated points of the corresponding Pareto sets are identified, and the distance between the two sets in the objective space is calculated and subsequently plotted. The obtained information enables the planner to decide whether, for a given compromise, the current beam setup is optimal, and to re-adjust the choice accordingly during navigation. The method is applied to an artificial case and two clinical head-and-neck cases. In all cases, no configuration dominates its competitor over the whole Pareto set. For example, in one of the head-and-neck cases a seven-beam configuration turns out to be superior to a nine-beam configuration if the highest priority is the sparing of the spinal cord. The presented method of comparing Pareto sets is not restricted to comparing different beam angle configurations, but will allow for more comprehensive comparisons of competing treatment techniques (e.g. photons versus protons) than the classical method of comparing single treatment plans.
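    The core comparison step, determining which points of one Pareto set are dominated by the other set and how far apart the sets lie in objective space, can be sketched as follows (`dominates` and `set_comparison` are hypothetical helper names, not the authors' code):

```python
import math

def dominates(a, b):
    """True if point a dominates point b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def set_comparison(pareto_a, pareto_b):
    """For each point of pareto_a, report whether some point of pareto_b
    dominates it, and its Euclidean distance to the nearest point of pareto_b."""
    report = []
    for p in pareto_a:
        dom = any(dominates(q, p) for q in pareto_b)
        dist = min(math.dist(p, q) for q in pareto_b)
        report.append((p, dom, dist))
    return report
```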

  4. Management of envenomations during pregnancy.

    PubMed

    Brown, S A; Seifert, S A; Rayburn, W F

    2013-01-01

    Envenomations during pregnancy pose all the problems of envenomation in the nonpregnant state with additional complexity related to maternal physiologic changes, medication use during pregnancy, and the well-being of the fetus. We review the obstetric literature and management options available to prevent maternal morbidity and mortality while limiting adverse obstetric outcomes after envenomation in pregnancy. In January 2012, we searched the U.S. National Library of Medicine Medline/PubMed, Toxline, Reprotox, Google Scholar and Micromedex databases, core surgery and internal medicine textbooks, and references of retrieved articles for the years 1966 through 2011. Search terms included "envenomation in pregnancy," "stings in pregnancy," "antivenom use in pregnancy," "anaphylaxis in pregnancy," and variants of these with known venomous animals. Reference lists generated further case reports and articles. We included English language articles and abstracts. Levels of Evidence (LOE) for the reports cited and Grades of Recommendations (GOR) based on LOE for our recommendations use the National Guidelines Clearinghouse metric of the US DHHS. Recommendations for the management of envenomation in pregnancy are guided primarily by studies on nonpregnant persons and case reports of pregnancy. Clinically significant envenomations in pregnancy are reported for snakes, spiders, scorpions, jellyfish, and hymenoptera (bees, wasps, hornets, and ants). Adverse obstetric outcomes including miscarriage, preterm birth, placental abruption, and stillbirth are associated with envenomation in pregnancy. The limited available literature suggests that adverse outcomes are primarily related to venom effects on the mother. Optimization of maternal health such as management of anaphylaxis and antivenom administration is likely the best approach to improve fetal outcomes despite potential risks to the fetus of medication administration during pregnancy. 
Obstetric evaluation and fetal monitoring are imperative in cases of severe envenomation. The medical literature regarding envenomation in pregnancy includes primarily retrospective reviews and case series. The limited available evidence suggests that optimal management includes a venom-specific approach, including supportive care, antivenom administration in appropriate cases, treatment of anaphylaxis if present, and fetal assessment. The current available evidence suggests that antivenom use is safe in pregnancy and that what is good for the mother is good for the fetus. Further research is needed to clarify the optimal management schema for envenomation in pregnancy.

  5. Numerical approach of collision avoidance and optimal control on robotic manipulators

    NASA Technical Reports Server (NTRS)

    Wang, Jyhshing Jack

    1990-01-01

    Collision-free optimal motion and trajectory planning for robotic manipulators is solved by the sequential gradient restoration algorithm. Numerical examples for a two-degree-of-freedom (DOF) robotic manipulator demonstrate the effectiveness of the optimization technique and the obstacle avoidance scheme. The obstacle is deliberately placed on, or even inward of, the midpoint of the previous obstacle-free optimal trajectory. For the minimum-time objective, the trajectory grazes the obstacle, and the minimum-time motion successfully avoids it. The minimum time is longer for the obstacle-avoidance cases than for the case without an obstacle. The obstacle avoidance scheme can handle multiple obstacles of any ellipsoidal form by using artificial potential fields as penalty functions via distance functions. The method is promising for solving collision-free optimal control problems in robotics and can be applied to robotic manipulators of any DOF with any performance index, as well as to mobile robots. Since this method generates the optimal solution based on the Pontryagin extremum principle rather than on heuristic assumptions, the results provide a benchmark against which other optimization techniques can be measured.
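    An artificial-potential penalty for an ellipsoidal obstacle, built from the ellipsoid's distance function as described in the abstract, can be sketched like this (hypothetical names and functional form, for illustration only):

```python
def ellipsoid_penalty(q, center, radii, weight):
    """Artificial-potential penalty for an ellipsoidal obstacle.

    d(q) = sum(((q_i - c_i) / r_i)^2) - 1 is negative inside the obstacle
    and positive outside; configurations inside incur a quadratic penalty.
    """
    d = sum(((qi - ci) / ri) ** 2
            for qi, ci, ri in zip(q, center, radii)) - 1.0
    return weight * max(0.0, -d) ** 2
```

    Adding such terms to the performance index steers the optimizer away from penetrating the obstacles while leaving the rest of the problem unchanged.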

  6. Molecular taxonomy of phytopathogenic fungi: a case study in Peronospora.

    PubMed

    Göker, Markus; García-Blázquez, Gema; Voglmayr, Hermann; Tellería, M Teresa; Martín, María P

    2009-07-29

    Inappropriate taxon definitions may have severe consequences in many areas. For instance, biologically sensible species delimitation of plant pathogens is crucial for measures such as plant protection or biological control and for comparative studies involving model organisms. However, delimiting species is challenging in the case of organisms for which often only molecular data are available, such as prokaryotes, fungi, and many unicellular eukaryotes. Even in the case of organisms with well-established morphological characteristics, molecular taxonomy is often necessary to emend current taxonomic concepts and to analyze DNA sequences directly sampled from the environment. Typically, for this purpose clustering approaches to delineate molecular operational taxonomic units have been applied using arbitrary choices regarding the distance threshold values, and the clustering algorithms. Here, we report on a clustering optimization method to establish a molecular taxonomy of Peronospora based on ITS nrDNA sequences. Peronospora is the largest genus within the downy mildews, which are obligate parasites of higher plants, and includes various economically important pathogens. The method determines the distance function and clustering setting that result in an optimal agreement with selected reference data. Optimization was based on both taxonomy-based and host-based reference information, yielding the same outcome. Resampling and permutation methods indicate that the method is robust regarding taxon sampling and errors in the reference data. Tests with newly obtained ITS sequences demonstrate the use of the re-classified dataset in molecular identification of downy mildews. A corrected taxonomy is provided for all Peronospora ITS sequences contained in public databases. Clustering optimization appears to be broadly applicable in automated, sequence-based taxonomy. 
The method connects traditional and modern taxonomic disciplines by specifically addressing the issue of how to optimally account for both traditional species concepts and genetic divergence.
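    The clustering-optimization idea, picking the distance threshold whose clustering best agrees with reference data, can be illustrated with single-linkage threshold clustering scored by the Rand index. All function names here are illustrative, not the authors' implementation:

```python
from itertools import combinations

def threshold_clusters(points, dist, t):
    """Single-linkage clustering: connect all pairs with dist <= t (union-find)."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i, j in combinations(range(len(points)), 2):
        if dist(points[i], points[j]) <= t:
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(points))]

def rand_index(labels_a, labels_b):
    """Fraction of point pairs on which the two partitions agree."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum((labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
                for i, j in pairs)
    return agree / len(pairs)

def best_threshold(points, dist, reference, candidates):
    """Pick the threshold whose clustering best agrees with the reference."""
    return max(candidates,
               key=lambda t: rand_index(threshold_clusters(points, dist, t),
                                        reference))
```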

  7. Molecular Taxonomy of Phytopathogenic Fungi: A Case Study in Peronospora

    PubMed Central

    Göker, Markus; García-Blázquez, Gema; Voglmayr, Hermann; Tellería, M. Teresa; Martín, María P.

    2009-01-01

    Background Inappropriate taxon definitions may have severe consequences in many areas. For instance, biologically sensible species delimitation of plant pathogens is crucial for measures such as plant protection or biological control and for comparative studies involving model organisms. However, delimiting species is challenging in the case of organisms for which often only molecular data are available, such as prokaryotes, fungi, and many unicellular eukaryotes. Even in the case of organisms with well-established morphological characteristics, molecular taxonomy is often necessary to emend current taxonomic concepts and to analyze DNA sequences directly sampled from the environment. Typically, for this purpose clustering approaches to delineate molecular operational taxonomic units have been applied using arbitrary choices regarding the distance threshold values, and the clustering algorithms. Methodology Here, we report on a clustering optimization method to establish a molecular taxonomy of Peronospora based on ITS nrDNA sequences. Peronospora is the largest genus within the downy mildews, which are obligate parasites of higher plants, and includes various economically important pathogens. The method determines the distance function and clustering setting that result in an optimal agreement with selected reference data. Optimization was based on both taxonomy-based and host-based reference information, yielding the same outcome. Resampling and permutation methods indicate that the method is robust regarding taxon sampling and errors in the reference data. Tests with newly obtained ITS sequences demonstrate the use of the re-classified dataset in molecular identification of downy mildews. Conclusions A corrected taxonomy is provided for all Peronospora ITS sequences contained in public databases. Clustering optimization appears to be broadly applicable in automated, sequence-based taxonomy. 
The method connects traditional and modern taxonomic disciplines by specifically addressing the issue of how to optimally account for both traditional species concepts and genetic divergence. PMID:19641601

  8. Mammographic phenotypes of breast cancer risk driven by breast anatomy

    NASA Astrophysics Data System (ADS)

    Gastounioti, Aimilia; Oustimov, Andrew; Hsieh, Meng-Kang; Pantalone, Lauren; Conant, Emily F.; Kontos, Despina

    2017-03-01

    Image-derived features of breast parenchymal texture patterns have emerged as promising risk factors for breast cancer, paving the way towards personalized recommendations regarding women's cancer risk evaluation and screening. The main steps to extract texture features of the breast parenchyma are the selection of regions of interest (ROIs) where texture analysis is performed, the texture feature calculation and the texture feature summarization in case of multiple ROIs. In this study, we incorporate breast anatomy in these three key steps by (a) introducing breast anatomical sampling for the definition of ROIs, (b) texture feature calculation aligned with the structure of the breast and (c) weighted texture feature summarization considering the spatial position and the underlying tissue composition of each ROI. We systematically optimize this novel framework for parenchymal tissue characterization in a case-control study with digital mammograms from 424 women. We also compare the proposed approach with a conventional methodology, not considering breast anatomy, recently shown to enhance the case-control discriminatory capacity of parenchymal texture analysis. The case-control classification performance is assessed using elastic-net regression with 5-fold cross validation, where the evaluation measure is the area under the curve (AUC) of the receiver operating characteristic. Upon optimization, the proposed breast-anatomy-driven approach demonstrated a promising case-control classification performance (AUC=0.87). In the same dataset, the performance of conventional texture characterization was found to be significantly lower (AUC=0.80, DeLong's test p-value<0.05). Our results suggest that breast anatomy may further leverage the associations of parenchymal texture features with breast cancer, and may therefore be a valuable addition in pipelines aiming to elucidate quantitative mammographic phenotypes of breast cancer risk.
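    The weighted texture-feature summarization step over multiple ROIs reduces, in its simplest form, to a weighted average of per-ROI feature vectors, with each ROI's weight reflecting, e.g., its spatial position or tissue composition. A toy sketch with hypothetical names:

```python
def summarize_features(roi_features, roi_weights):
    """Weighted summarization of per-ROI feature vectors into one
    breast-level feature vector (toy illustration)."""
    total = sum(roi_weights)
    n_features = len(roi_features[0])
    return [sum(w * f[k] for f, w in zip(roi_features, roi_weights)) / total
            for k in range(n_features)]
```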

  9. Fine-Scale Structure Design for 3D Printing

    NASA Astrophysics Data System (ADS)

    Panetta, Francis Julian

    Modern additive fabrication technologies can manufacture shapes whose geometric complexities far exceed what existing computational design tools can analyze or optimize. At the same time, falling costs have placed these fabrication technologies within the average consumer's reach. Especially for inexpert designers, new software tools are needed to take full advantage of 3D printing technology. This thesis develops such tools and demonstrates the exciting possibilities enabled by fine-tuning objects at the small scales achievable by 3D printing. The thesis applies two high-level ideas to invent these tools: two-scale design and worst-case analysis. The two-scale design approach addresses the problem that accurately simulating--let alone optimizing--the full-resolution geometry sent to the printer requires orders of magnitude more computational power than currently available. However, we can decompose the design problem into a small-scale problem (designing tileable structures achieving a particular deformation behavior) and a macro-scale problem (deciding where to place these structures in the larger object). This separation is particularly effective, since structures for every useful behavior can be designed once, stored in a database, then reused for many different macroscale problems. Worst-case analysis refers to determining how likely an object is to fracture by studying the worst possible scenario: the forces most efficiently breaking it. This analysis is needed when the designer has insufficient knowledge or experience to predict what forces an object will undergo, or when the design is intended for use in many different scenarios unknown a priori. The thesis begins by summarizing the physics and mathematics necessary to rigorously approach these design and analysis problems. Specifically, the second chapter introduces linear elasticity and periodic homogenization. 
The third chapter presents a pipeline to design microstructures achieving a wide range of effective isotropic elastic material properties on a single-material 3D printer. It also proposes a macroscale optimization algorithm placing these microstructures to achieve deformation goals under prescribed loads. The thesis then turns to worst-case analysis, first considering the macroscale problem: given a user's design, the fourth chapter aims to determine the distribution of pressures over the surface creating the highest stress at any point in the shape. Solving this problem exactly is difficult, so we introduce two heuristics: one to focus our efforts on only regions likely to concentrate stresses and another converting the pressure optimization into an efficient linear program. Finally, the fifth chapter introduces worst-case analysis at the microscopic scale, leveraging the insight that the structure of periodic homogenization enables us to solve the problem exactly and efficiently. Then we use this worst-case analysis to guide a shape optimization, designing structures with prescribed deformation behavior that experience minimal stresses in generic use.

  10. Two Reconfigurable Flight-Control Design Methods: Robust Servomechanism and Control Allocation

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Lu, Ping; Wu, Zheng-Lu; Bahm, Cathy

    2001-01-01

    Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.
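    A minimal sketch of constrained control allocation as a quadratic program, solved here by a projected-gradient fixed-point iteration. This is illustrative only, not the globally convergent algorithm developed for the X-33; B, d, and the bounds are hypothetical:

```python
def allocate_controls(B, d, lo, hi, step=0.2, iters=500):
    """Fixed-point iteration for the allocation problem
    min ||B u - d||^2 subject to lo <= u <= hi,
    where B (m x n) maps actuator deflections u to commanded moments d."""
    m, n = len(B), len(B[0])
    u = [0.0] * n
    for _ in range(iters):
        # residual r = B u - d, gradient g = B^T r
        r = [sum(B[i][j] * u[j] for j in range(n)) - d[i] for i in range(m)]
        g = [sum(B[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, then projection onto the actuator limits
        u = [min(max(u[j] - step * g[j], lo[j]), hi[j]) for j in range(n)]
    return u
```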

  11. Reconfigurable Flight Control Designs With Application to the X-33 Vehicle

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Lu, Ping; Wu, Zhenglu

    1999-01-01

    Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.

  12. Endovascular Approach to Glomus Jugulare Tumors

    PubMed Central

    Kocur, Damian; Ślusarczyk, Wojciech; Przybyłko, Nikodem; Hofman, Mariusz; Jamróz, Tomasz; Suszyński, Krzysztof; Baron, Jan; Kwiek, Stanisław

    2017-01-01

    Summary Background Paragangliomas are benign neuroendocrine tumors derived from the glomus cells of the vegetative nervous system. Typically, they are located in the region of the jugular bulb and middle ear. The optimal management is controversial and can include surgical excision, stereotactic radiosurgery and embolization. Case Report We report the endovascular approach to three patients harboring glomus jugulare paragangliomas. In all cases incomplete occlusion of the lesions was achieved and recanalization in the follow-up period was revealed. Two patients presented no clinical improvement and the remaining one experienced a transient withdrawal of tinnitus. Conclusions It is technically difficult to achieve complete obliteration of glomus jugulare tumors with the use of embolization and the subtotal occlusion poses a high risk of revascularization and is not beneficial in terms of alleviating clinical symptoms. PMID:28685005

  13. Local phase method for designing and optimizing metasurface devices.

    PubMed

    Hsu, Liyi; Dupré, Matthieu; Ndao, Abdoulaye; Yellowhair, Julius; Kanté, Boubacar

    2017-10-16

    Metasurfaces have attracted significant attention due to their novel designs for flat optics. However, the approach usually used to engineer metasurface devices either assumes that neighboring elements are identical, extracting the phase information from simulations with periodic boundaries, or assumes that near-field coupling between particles is negligible, extracting the phase from single-particle simulations. Most of the time this is not the case, and the approach thus precludes further optimization of devices that operate away from their optimum. Here, we propose a versatile numerical method to obtain the phase of each element within the metasurface (meta-atoms) while accounting for near-field coupling. Quantifying the phase error of each element of the metasurface with the proposed local phase method paves the way to the design of highly efficient metasurface devices including, but not limited to, deflectors, high-numerical-aperture metasurface concentrators, lenses, cloaks, and modulators.

  14. SLFP: a stochastic linear fractional programming approach for sustainable waste management.

    PubMed

    Zhu, H; Huang, G H

    2011-12-01

    A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
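    The structure of SLFP, a ratio objective optimized under a chance constraint, can be illustrated with a toy Monte-Carlo version over a finite candidate set (all names are hypothetical; the real method solves a continuous linear fractional program):

```python
def slfp_toy(candidates, benefit, cost, capacity_samples, reliability):
    """Pick the candidate maximizing the ratio benefit/cost among those
    whose random capacity constraint (x <= capacity) holds with estimated
    probability >= reliability. Toy stand-in for SLFP's ratio objective
    plus chance constraint."""
    feasible = []
    for x in candidates:
        ok = sum(1 for s in capacity_samples if x <= s) / len(capacity_samples)
        if ok >= reliability:
            feasible.append(x)
    return max(feasible, key=lambda x: benefit(x) / cost(x))
```

    Tightening the reliability level shrinks the feasible set, tracing out exactly the trade-off between system efficiency and constraint-violation risk that the abstract describes.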

  15. Hidradenocarcinoma of the finger.

    PubMed

    Nazerali, Rahim S; Tan, Cynthia; Fung, Maxwell A; Chen, Steven L; Wong, Michael S

    2013-04-01

    Hidradenocarcinoma is a rare adnexal neoplasm representing the malignant counterpart of hidradenoma, derived from eccrine sweat glands. Misdiagnosis is common owing to the wide variety of histological patterns and the rarity of this malignancy. We report an 87-year-old man presenting with a rare case of biopsy-proven hidradenocarcinoma of the finger. There is no standard of care for the treatment of hidradenocarcinoma, especially for tumors in rare locations such as the fingers, given its rarity and its variable behavior and histology. Although limited treatment strategies exist, detailed data including TNM stage, location, histologic type and grade, and patient age should be gathered to devise an optimal treatment strategy. The literature supports a 3-fold approach to these malignancies involving margin-free resection, sentinel lymph node biopsy to evaluate possible metastasis, and long-term follow-up given the high risk of recurrence. Our treatment strategy involved a 4-fold, multidisciplinary approach that added reconstruction to optimize tumor-free remission and hand function.

  16. An analytic approach to optimize tidal turbine fields

    NASA Astrophysics Data System (ADS)

    Pelz, P.; Metzler, M.

    2013-12-01

    Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources have been developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to generate electrical energy. Since the available power of a hydrokinetic turbine is proportional to its projected cross-sectional area, fields of turbines are installed to scale up shaft power. Each hydrokinetic turbine of a field can be considered a disk actuator. In [1], the first author derives the optimal operating point for hydropower in an open channel. The present paper concerns a 0-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation, and the momentum balance, an analytical approach is developed to calculate the coefficient of performance for hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.
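    For the unbounded-flow limiting case (no channel blockage), the classic actuator-disk result gives the power coefficient C_P = 4a(1-a)^2 in terms of the axial induction factor a, with the optimum a = 1/3 yielding the Betz limit 16/27; the paper's open-channel model with bypass generalizes this. A quick numerical check of the classic result:

```python
def betz_cp(a):
    """Power coefficient of an ideal actuator disk in unbounded flow,
    C_P = 4 a (1 - a)^2, with axial induction factor a."""
    return 4.0 * a * (1.0 - a) ** 2

# scan the induction factor for the optimum
best_a = max((i / 1000.0 for i in range(1001)), key=betz_cp)
```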

  17. Multi-Objective Hybrid Optimal Control for Interplanetary Mission Planning

    NASA Technical Reports Server (NTRS)

    Englander, Jacob; Vavrina, Matthew; Ghosh, Alexander

    2015-01-01

    Preliminary design of low-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys, the bodies at which those flybys are performed, and in some cases the final destination. In addition, a time history of control variables must be chosen that defines the trajectory. There are often many thousands, if not millions, of possible trajectories to be evaluated. The customer who commissions a trajectory design is not usually interested in a point solution, but rather in an exploration of the trade space of trajectories between several different objective functions. This can be a very expensive process in terms of the number of human analyst hours required, so an automated approach is very desirable. This work presents such an approach by posing the mission design problem as a multi-objective hybrid optimal control problem. The method is demonstrated on a hypothetical mission to the main asteroid belt.

  18. Optimization of an angle-beam ultrasonic approach for characterization of impact damage in composites

    NASA Astrophysics Data System (ADS)

    Henry, Christine; Kramb, Victoria; Welter, John T.; Wertz, John N.; Lindgren, Eric A.; Aldrin, John C.; Zainey, David

    2018-04-01

    Advances in NDE method development are greatly improved through model-guided experimentation. In the case of ultrasonic inspections, models which provide insight into complex mode conversion processes and sound propagation paths are essential for understanding the experimental data and inverting the experimental data into relevant information. However, models must also be verified using experimental data obtained under well-documented and understood conditions. Ideally, researchers would utilize the model simulations and experimental approach to efficiently converge on the optimal solution. However, variability in experimental parameters introduces extraneous signals that are difficult to differentiate from the anticipated response. This paper discusses the results of an ultrasonic experiment designed to evaluate the effect of controllable variables on the anticipated signal, and the effect of unaccounted-for experimental variables on the uncertainty in those results. Controlled experimental parameters include the transducer frequency, incidence beam angle and focal depth.

  19. A new approach on the upgrade of energetic system based on green energy. A complex comparative analysis of the EEDI and EEOI

    NASA Astrophysics Data System (ADS)

    Faitar, C.; Novac, I.

    2016-08-01

    In recent years, many environmental organizations have been interested in optimizing energy consumption, which has become one of the main concerns worldwide. From this point of view, the maritime industry has striven to optimize the fuel consumption of ships through the development of engines and propulsion systems, improvements to hull design, and the use of alternative energies, thereby reducing the amount of CO2 released to the atmosphere. The main idea of this paper is to present a comparative analysis of the Energy Efficiency Design Index and the Energy Efficiency Operational Indicator, calculated in two cases: first, in a classical approach for a crude oil supertanker, and second, after the energy performance of this ship has been improved by introducing alternative energy sources on board.

  20. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.

    2013-12-01

    In this study, an efficient full Bayesian approach is developed for the optimal design of sampling well locations and the identification of groundwater contaminant source parameters. An information measure, the relative entropy, is employed to quantify the information gained from indirect concentration measurements when identifying unknown source parameters such as the release time, strength, and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is used to construct a surrogate for the contaminant transport model. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of the approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and in the identification of unknown contaminant sources. (Figures: contours of the expected information gain, with the optimal observation location at the maximum; posterior marginal probability densities of the unknown parameters for the designed location versus seven randomly chosen locations, with true values marked, showing that the parameters are estimated better with the designed location.)
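    The expected-information-gain criterion for ranking candidate sampling locations can be sketched for a discrete toy problem: average, over the possible data outcomes at that location, the KL divergence from prior to posterior. This is illustrative, not the authors' code:

```python
import math

def expected_information_gain(prior, likelihoods):
    """Expected KL divergence from prior to posterior for one candidate
    sampling location, with discrete parameter values and data outcomes.
    likelihoods[d][theta] = p(data = d | theta) at this location."""
    eig = 0.0
    for lik in likelihoods:                      # loop over data outcomes
        evidence = sum(p * l for p, l in zip(prior, lik))
        if evidence == 0:
            continue
        post = [p * l / evidence for p, l in zip(prior, lik)]
        kl = sum(q * math.log(q / p) for q, p in zip(post, prior) if q > 0)
        eig += evidence * kl                     # weight by p(data = d)
    return eig
```

    A location whose measurements carry no information about the parameters has zero expected gain; the design picks the location with the maximum.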

  1. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty.

    PubMed

    Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E

    2015-09-01

    This model-based design of experiments (MBDOE) method determines the magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.
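    The greedy construction of the input sequence can be illustrated in miniature: given representative parameters that are all consistent with the data, choose the stimulus whose predicted responses spread the ensemble the most. The scalar response model and numbers here are hypothetical, not the paper's Hes1 or T-cell ODEs:

```python
import numpy as np

# Toy stand-in for the greedy MBDOE step: among candidate stimulus
# magnitudes, pick the one whose predicted responses best separate the
# data-consistent representative parameters.
def simulate(theta, u):
    # hypothetical saturating 1-state response (not the Hes1/T-cell models)
    return theta * u / (1.0 + u)

def greedy_input(thetas, candidates):
    """Return the input maximizing the spread (variance) of predictions."""
    spreads = [np.var([simulate(t, u) for t in thetas]) for u in candidates]
    return candidates[int(np.argmax(spreads))]

thetas = [0.5, 1.0, 1.5, 2.0]          # representative parameters
candidates = [0.1, 0.5, 1.0, 2.0, 5.0]
best_u = greedy_input(thetas, candidates)  # largest stimulus separates most here
```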

  2. A Low Cost Structurally Optimized Design for Diverse Filter Types

    PubMed Central

    Kazmi, Majida; Aziz, Arshad; Akhtar, Pervez; Ikram, Nassar

    2016-01-01

    A wide range of image processing applications deploys two-dimensional (2D) filters for performing diversified tasks such as image enhancement, edge detection, noise suppression, multi-scale decomposition, and compression. All of these tasks require multiple types of 2D filters simultaneously to acquire the desired results. The resource-hungry conventional approach is not a viable option for implementing these computationally intensive 2D filters, especially in a resource-constrained environment; thus it calls for optimized solutions. Mostly, the optimization of these filters is based on exploiting structural properties. A common shortcoming of all previously reported optimized approaches is their restricted applicability to a specific filter type. These narrow-scoped solutions completely disregard the versatility attribute of advanced image processing applications and in turn offset their effectiveness when implementing a complete application. This paper presents an efficient framework which exploits the structural properties of 2D filters to effectually reduce their computational cost, with the added advantage of versatility in supporting diverse filter types. A composite symmetric filter structure is introduced which exploits the identities of quadrant and circular T-symmetries in two distinct filter regions simultaneously. These T-symmetries effectually reduce the number of filter coefficients and consequently the multiplier count. The proposed framework at the same time empowers this composite filter structure with the additional capability of realizing all of its Ψ-symmetry-based subtypes and also its special asymmetric-filter case.
    The two-fold optimized framework thus reduces the filter computational cost by up to 75% compared with the conventional approach, while its versatility attribute not only supports diverse filter types but also offers further cost reduction via resource sharing for the sequential implementation of diversified image processing applications, especially in a constrained environment. PMID:27832133
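    The quadrant-symmetry half of the coefficient reduction can be sketched as follows: the full kernel is reconstructed from one unique quadrant, so pixels sharing a coefficient can be pre-added before a single multiply. This toy covers only fourfold symmetry, not the paper's circular T-symmetries or Ψ-subtypes:

```python
import numpy as np

# Quadrant (fourfold) symmetry sketch: with h[i, j] = h[N-1-i, j] = h[i, N-1-j],
# only one quadrant of coefficients is unique, so pixel values sharing a
# coefficient can be summed before the single multiply -- a ~4x multiplier saving.
def quadrant_symmetric_kernel(quadrant):
    q = np.asarray(quadrant, dtype=float)
    top = np.hstack([q, np.fliplr(q)])
    return np.vstack([top, np.flipud(top)])

def multiplier_count(n):
    """Unique coefficients of an n x n quadrant-symmetric filter (n even)."""
    return (n // 2) ** 2

h = quadrant_symmetric_kernel([[1, 2], [3, 4]])   # full 4x4 kernel from 4 taps
saving = 1 - multiplier_count(4) / 16             # 0.75, i.e. the 75% figure
```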

  3. Choosing the best image processing method for masticatory performance assessment when using two-coloured specimens.

    PubMed

    Vaccaro, G; Pelaez, J I; Gil, J A

    2016-07-01

    Objective masticatory performance assessment using two-coloured specimens relies on image processing techniques; however, only a few approaches have been tested and no comparative studies have been reported. The aim of this study was to present a procedure for selecting the optimal image analysis method for masticatory performance assessment with a given two-coloured chewing gum. Dentate participants (n = 250; 25 ± 6·3 years) chewed red-white chewing gums for 3, 6, 9, 12, 15, 18, 21 and 25 cycles (2000 samples). Digitalised images of retrieved specimens were analysed using 122 image processing methods (IPMs) based on feature extraction algorithms (pixel values and histogram analysis). All IPMs were tested against the criteria of: normality of measurements (Kolmogorov-Smirnov), ability to detect differences among mixing states (anova corrected with post hoc Bonferroni) and moderate-to-high correlation with the number of cycles (Spearman's rho). The optimal IPM was chosen using multiple-criteria decision analysis (MCDA). Measurements provided by all IPMs proved to be normally distributed (P < 0·05), 116 proved sensitive to mixing states (P < 0·05), and 35 showed moderate-to-high correlation with the number of cycles (|ρ| > 0·5; P < 0·05). The variance of the histogram of the hue showed the highest correlation with the number of cycles (ρ = 0·792; P < 0·0001) and the highest MCDA score (optimal). The proposed procedure proved to be reliable and able to select the optimal approach among multiple IPMs. This experiment may be reproduced to identify the optimal approach for each case of locally available test foods. © 2016 John Wiley & Sons Ltd.
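    A minimal sketch of the winning IPM — the variance of the hue histogram — on synthetic pixel lists standing in for specimen photographs. The study's gums were red-white; red/blue is used here only so the two hues are well separated, and the expected behaviour (unmixed gives concentrated hue peaks, hence higher histogram variance) is the same:

```python
import colorsys
import numpy as np

# Variance of the hue histogram of a two-coloured specimen image: an unmixed
# specimen yields two concentrated hue peaks (high variance across bins),
# a well-mixed one spreads hue over many bins (low variance).
def hue_histogram_variance(rgb_pixels, bins=64):
    hues = [colorsys.rgb_to_hsv(r, g, b)[0] for r, g, b in rgb_pixels]
    hist, _ = np.histogram(hues, bins=bins, range=(0.0, 1.0), density=True)
    return float(np.var(hist))

unmixed = [(1.0, 0.0, 0.0)] * 50 + [(0.0, 0.0, 1.0)] * 50        # red + blue
mixed = [(1.0 - t, 0.0, t) for t in np.linspace(0.0, 1.0, 100)]  # blend
v_unmixed = hue_histogram_variance(unmixed)
v_mixed = hue_histogram_variance(mixed)
```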

  4. An effective and comprehensive model for optimal rehabilitation of separate sanitary sewer systems.

    PubMed

    Diogo, António Freire; Barros, Luís Tiago; Santos, Joana; Temido, Jorge Santos

    2018-01-15

    In the field of rehabilitation of separate sanitary sewer systems, a large number of technical, environmental, and economic aspects are often relevant in the decision-making process, which may be modelled as a multi-objective optimization problem. Examples are those related to the operation and assessment of networks, optimization of structural, hydraulic, sanitary, and environmental performance, rehabilitation programmes, and execution works. In particular, the cost of the investment, operation, and maintenance needed to reduce or eliminate Infiltration from the underground water table and Inflows of storm water surface runoff (I/I) using rehabilitation techniques or related methods can be significantly lower than the cost of transporting and treating these flows throughout the lifespan of the systems or period studied. This paper presents a comprehensive I/I cost-benefit approach for rehabilitation that explicitly considers all elements of the systems and shows how the approximation is incorporated as an objective function in a general evolutionary multi-objective optimization model. It takes into account network performance and wastewater treatment costs, average values of several input variables, and rates that can reflect the adoption of different predictable or limiting scenarios. The approach can be used as a practical and fast tool to support decision-making in sewer network rehabilitation in any phase of a project. The fundamental aspects, modelling, implementation details, and preliminary results of a two-objective optimization rehabilitation model using a genetic algorithm, with a second objective function related to the structural condition of the network and the service failure risk, are presented. The basic approach is applied to three real-world case studies of sanitary sewerage systems in Coimbra, and the results show the simplicity, suitability, effectiveness, and usefulness of the approximation implemented and of the objective function proposed.
Copyright © 2017 Elsevier B.V. All rights reserved.
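    The I/I cost-benefit idea — rehabilitation pays off when the discounted cost of transporting and treating infiltration/inflow over the study horizon exceeds the rehabilitation cost — can be sketched as a single function. All rates and prices below are hypothetical:

```python
# Net benefit of rehabilitating a sewer element: discounted avoided
# transport/treatment cost of removed I/I minus the rehabilitation cost.
# All figures are invented for illustration.
def net_benefit(ii_volume_m3_per_yr, unit_cost_per_m3, rehab_cost,
                years=30, discount=0.03, removal_fraction=0.8):
    avoided = sum(
        removal_fraction * ii_volume_m3_per_yr * unit_cost_per_m3
        / (1.0 + discount) ** t
        for t in range(1, years + 1)
    )
    return avoided - rehab_cost

worthwhile = net_benefit(100_000, 0.5, 200_000)   # large I/I flow: positive
marginal = net_benefit(1_000, 0.5, 200_000)       # small I/I flow: negative
```

    In the paper this trade-off is embedded as one objective of the evolutionary model rather than evaluated element by element.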

  5. Use of different sample temperatures in a single extraction procedure for the screening of the aroma profile of plant matrices by headspace solid-phase microextraction.

    PubMed

    Martendal, Edmar; de Souza Silveira, Cristine Durante; Nardini, Giuliana Stael; Carasek, Eduardo

    2011-06-17

    This study proposes a new approach to the optimization of the extraction of the volatile fraction of plant matrices using the headspace solid-phase microextraction (HS-SPME) technique. The optimization focused on the extraction time and temperature using a CAR/DVB/PDMS 50/30 μm SPME fiber and 100 mg of a mixture of plants as the sample in a 15-mL vial. The extraction time (10-60 min) and temperature (5-60 °C) were optimized by means of a central composite design. The chromatogram was divided into four groups of peaks based on elution temperature to provide a better understanding of the influence of the extraction parameters on the extraction efficiency for compounds with different volatilities/polarities. In view of the different optimum extraction time and temperature conditions obtained for each group, a new approach based on the use of two extraction temperatures in a single procedure is proposed. The optimum conditions were achieved by extracting for 30 min with a sample temperature of 60 °C followed by a further 15 min at 5 °C. The proposed method was compared with the optimized conventional method based on a single extraction temperature (45 min of extraction at 50 °C) by submitting five samples to both procedures. The proposed method led to better results in all cases, considering both peak area and the number of identified peaks as responses. The newly proposed optimization approach provides an excellent alternative for extracting analytes with quite different volatilities in a single procedure. Copyright © 2011 Elsevier B.V. All rights reserved.
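    The central composite design used for the two factors (extraction time and sample temperature) can be written down in coded units. This is a generic two-factor CCD layout, not the study's exact run order:

```python
import itertools
import numpy as np

# Generic two-factor central composite design in coded units: 4 factorial
# corners, 4 axial points at +/-alpha, and a centre point -- the layout used
# to fit a response surface over extraction time and sample temperature.
def central_composite_design(alpha=np.sqrt(2.0)):
    corners = list(itertools.product([-1.0, 1.0], repeat=2))
    axials = [(-alpha, 0.0), (alpha, 0.0), (0.0, -alpha), (0.0, alpha)]
    centre = [(0.0, 0.0)]
    return np.array(corners + axials + centre)

design = central_composite_design()   # 9 coded runs
```

    Each coded run is then mapped linearly onto the real ranges (10-60 min, 5-60 °C) before the experiments are performed.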

  6. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty

    PubMed Central

    Mdluli, Thembi; Buzzard, Gregery T.; Rundell, Ann E.

    2015-01-01

    This model-based design of experiments (MBDOE) method determines the input magnitudes of an experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm’s scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states were reduced by as much as 86% and 99% respectively after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements. PMID:26379275

  7. A multi-objective programming model for assessment the GHG emissions in MSW management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavrotas, George, E-mail: mavrotas@chemeng.ntua.gr; Skoulaxinou, Sotiria; Gakis, Nikos

    2013-09-15

    Highlights: • The multi-objective multi-period optimization model. • The solution approach for the generation of the Pareto front with mathematical programming. • The very detailed description of the model (decision variables, parameters, equations). • The use of IPCC 2006 guidelines for landfill emissions (first order decay model) in the mathematical programming formulation. - Abstract: In this study a multi-objective mathematical programming model is developed for taking GHG emissions into account in Municipal Solid Waste (MSW) management. Mathematical programming models are often used for structural, design, and operational optimization of various systems (energy, supply chain, processes, etc.). Over the last twenty years they have been used increasingly often in Municipal Solid Waste (MSW) management to provide optimal solutions, with cost being the usual driver of the optimization. In our work we consider GHG emissions as an additional criterion, aiming at a multi-objective approach. The Pareto front (cost vs. GHG emissions) of the system is generated using an appropriate multi-objective method. This information is essential to the decision maker, who can explore the trade-offs along the Pareto curve and select the most preferred among the Pareto-optimal solutions. In the present work a detailed multi-objective, multi-period mathematical programming model is developed to describe the waste management problem. Apart from the bi-objective approach, the major innovations of the model are (1) the detailed modeling considering 34 materials and 42 technologies, (2) the detailed calculation of the energy content of the various streams based on the detailed material balances, and (3) the incorporation of the IPCC guidelines for the CH₄ generated in landfills (first order decay model). The equations of the model are described in full detail.
    Finally, the whole approach is illustrated with a case study referring to the application of the model in a Greek region.
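    Generating a Pareto front of cost versus GHG emissions can be sketched with the epsilon-constraint idea: minimize cost subject to an emissions cap that is tightened step by step. The three toy "technology" options are hypothetical:

```python
# Epsilon-constraint sketch for a cost vs. GHG Pareto front: for each
# emissions cap, pick the cheapest feasible option. The (cost, GHG) pairs
# per unit of waste are invented for illustration.
def pareto_front(options, caps):
    front = []
    for cap in caps:
        feasible = [(cost, ghg) for cost, ghg in options if ghg <= cap]
        if feasible:
            front.append((cap, min(feasible)[0]))   # cheapest feasible cost
    return front

options = [(100, 10), (140, 6), (200, 3)]
front = pareto_front(options, caps=[10, 6, 3])   # cost rises as the cap tightens
```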

  8. Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.

    PubMed

    Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone

    2017-12-26

    Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments", which occurs when exploring new fragments for a defined compound series, and (2) "new static core and transformations", which resembles, for instance, the identification of a new compound series. Very good results were achieved by all employed machine learning methods, especially for the new fragments case, but overall deep neural network models performed best, allowing reliable predictions also for the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have higher generalizability than models trained on focused series and can extend beyond the chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach for making high-quality predictions on various data sets and in different compound optimization scenarios.
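    The MMP step — enumerating pairs of compounds that share a core and differ in one fragment, and recording the activity change of each transformation — can be sketched as follows. The cores, fragments, and activities are invented for illustration:

```python
import itertools
from collections import defaultdict

# MMP sketch: group compounds sharing a common core and record the activity
# delta of each fragment swap as a transformation "frag_a>>frag_b".
def matched_pairs(records):
    """records: (core, fragment, activity) tuples."""
    by_core = defaultdict(list)
    for core, frag, act in records:
        by_core[core].append((frag, act))
    pairs = []
    for core, members in by_core.items():
        for (fa, aa), (fb, ab) in itertools.combinations(members, 2):
            pairs.append((core, f"{fa}>>{fb}", ab - aa))
    return pairs

data = [("c1ccccc1", "F", 6.1), ("c1ccccc1", "Cl", 6.8), ("CCO", "F", 5.0)]
pairs = matched_pairs(data)   # one pair: F>>Cl on the benzene core
```

    In the MMP/ML setting these transformation/delta records become the training examples for the downstream learners.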

  9. Fast and Efficient Stochastic Optimization for Analytic Continuation

    DOE PAGES

    Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...

    2016-09-28

    The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra against those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is high, we find that FESOM is able to resolve fine structure in more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, whereas the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. We therefore believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.

  10. Separation Potential for Multicomponent Mixtures: State-of-the-Art of the Problem

    NASA Astrophysics Data System (ADS)

    Sulaberidze, G. A.; Borisevich, V. D.; Smirnov, A. Yu.

    2017-03-01

    Various approaches used in introducing a separation potential (value function) for multicomponent mixtures have been analyzed. It has been shown that all known potentials do not satisfy the Dirac-Peierls axioms for a binary mixture of uranium isotopes, which makes their practical application difficult. This is mainly due to the impossibility of constructing a "standard" cascade, whose role in the case of separation of binary mixtures is played by the ideal cascade. As a result, the only universal search method for optimal parameters of the separation cascade is their numerical optimization by the criterion of the minimum number of separation elements in it.

  11. Memetic Approaches for Optimizing Hidden Markov Models: A Case Study in Time Series Prediction

    NASA Astrophysics Data System (ADS)

    Bui, Lam Thu; Barlow, Michael

    We propose a methodology for employing memetics (local search) within the framework of evolutionary algorithms to optimize the parameters of hidden Markov models. With this proposal, the rate and frequency of using local search are automatically changed over time, at either the population or the individual level. At the population level, we allow the rate of using local search to decay over time to zero (at the final generation). At the individual level, each individual is equipped with information about when it will perform local search and for how long. This information evolves over time alongside the main elements of the chromosome representing the individual.
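    The population-level schedule described above — a local-search rate that decays linearly to zero by the final generation — can be sketched as follows; the hill-climbing step itself is a hypothetical stand-in:

```python
import random

random.seed(1)

# Population-level schedule sketch: the probability of applying local search
# decays linearly over the generations, reaching zero at the final one.
def local_search_rate(gen, max_gen, initial_rate=0.5):
    return initial_rate * (1.0 - gen / max_gen)

def maybe_local_search(fitness, gen, max_gen):
    """With the decayed probability, apply one (hypothetical) hill-climb step."""
    if random.random() < local_search_rate(gen, max_gen):
        return fitness + 0.1     # stand-in for a local improvement
    return fitness
```

    The individual-level variant instead encodes each individual's own local-search timing on the chromosome and lets it evolve with the rest of the genome.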

  12. Simulation and Modeling Capability for Standard Modular Hydropower Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Kevin M.; Smith, Brennan T.; Witt, Adam M.

    Grounded in the stakeholder-validated framework established in Oak Ridge National Laboratory’s SMH Exemplary Design Envelope Specification, this report on Simulation and Modeling Capability for Standard Modular Hydropower (SMH) Technology provides insight into the concepts, use cases, needs, gaps, and challenges associated with modeling and simulating SMH technologies. The SMH concept envisions a network of generation, passage, and foundation modules that achieve environmentally compatible, cost-optimized hydropower using standardization and modularity. The development of standardized modeling approaches and simulation techniques for SMH (as described in this report) will pave the way for reliable, cost-effective methods for technology evaluation, optimization, and verification.

  13. Improving performance of breast cancer risk prediction using a new CAD-based region segmentation scheme

    NASA Astrophysics Data System (ADS)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin

    2018-02-01

    The objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation, combined with an image feature extraction framework, to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into high and low risk groups for having cancer detected in the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove useless areas of the mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to the frequency characteristics of the ROIs was computed from the discrete cosine transform and the spatial domain of the images. Third, a support vector machine (SVM) based machine learning classifier was used to classify the selected optimal image features and build a CAD-based risk prediction model. The classifier was trained using a leave-one-case-out cross-validation method. Applying this improved CAD scheme to the testing dataset yielded an area under the ROC curve of AUC = 0.70+/-0.04, significantly higher than that obtained by extracting features directly from the dataset without the improved ROI segmentation step (AUC = 0.63+/-0.04). This study demonstrated that the proposed approach can improve accuracy in predicting short-term breast cancer risk, which may eventually play an important role in helping establish an optimal personalized breast cancer screening paradigm.
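    The leave-one-case-out protocol can be sketched with a nearest-mean classifier standing in for the paper's SVM, on synthetic two-class features:

```python
import numpy as np

rng = np.random.default_rng(3)

# Leave-one-case-out sketch: each case is held out in turn, the classifier
# (here a nearest-mean rule, not the paper's SVM) is fit on the rest, and
# the held-out case is scored. The two-class features are synthetic.
def loo_accuracy(X, y):
    correct = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        m0 = X[keep & (y == 0)].mean(axis=0)
        m1 = X[keep & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - m1) < np.linalg.norm(X[i] - m0))
        correct += pred == y[i]
    return correct / len(y)

X = np.vstack([rng.normal(0.0, 1.0, (20, 5)), rng.normal(2.0, 1.0, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
acc = loo_accuracy(X, y)   # well-separated classes, so accuracy is high
```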

  14. A Left-Sided Approach for Resection of Hepatic Caudate Lobe Hemangioma: Two Case Reports and a Literature Review.

    PubMed

    Feng, Xielin; Hu, Yong; Peng, Junping; Liu, Aixiang; Tian, Lang; Zhang, Hui

    2015-06-01

    Resection of a hemangioma located in the caudate lobe is a major challenge in current liver surgery. This study aimed to present our surgical technique for this condition. Two consecutive patients with symptomatic hepatic hemangioma undergoing caudate lobectomy were investigated retrospectively. First, all the blood inflow to the hemangioma from the portal vein and the hepatic artery at the base of the umbilical fissure was dissected. After the tumors became soft and tender, the short hepatic veins and the ligaments between the secondary porta hepatis were severed. Finally, the tumors were resected from the right lobe of the liver. The whole process was completed by a left-sided approach. Blood loss was 1650 mL in Case 1, because of ligature failure of one short hepatic vein, and 210 mL in the other case. Operation time was 236 minutes and 130 minutes, respectively. Postoperative hospital stays were 11 and 5 days, respectively. The tumor diameters were 9.0 cm and 6.5 cm. Case 1 required blood transfusion during surgery. No complications such as biliary fistula, postoperative bleeding, or liver failure occurred. The left-sided approach produced the best results for caudate lobe resection in our cases. Both patients recovered, are living well, and are asymptomatic. Caudate lobectomy can be performed safely and quickly by a left-sided approach, carried out with optimized perioperative management and innovative surgical technique.

  15. A Simulation-Optimization Model for the Management of Seawater Intrusion

    NASA Astrophysics Data System (ADS)

    Stanko, Z.; Nishikawa, T.

    2012-12-01

    Seawater intrusion is a common problem in coastal aquifers, where excessive groundwater pumping can lead to chloride contamination of a freshwater resource. Simulation-optimization techniques have been developed to determine optimal management strategies while mitigating seawater intrusion. The simulation models are often density-independent groundwater-flow models that may assume a sharp interface and/or use equivalent freshwater heads. The optimization methods are often linear-programming (LP) based techniques that require simplifications of the real-world system. However, seawater intrusion is a highly nonlinear, density-dependent flow and transport problem, which requires the use of nonlinear-programming (NLP) or global-optimization (GO) techniques. NLP approaches are difficult because of the need for gradient information; therefore, we have chosen a GO technique for this study. Specifically, we have coupled a multi-objective genetic algorithm (GA) with a density-dependent groundwater-flow and transport model to simulate and identify strategies that optimally manage seawater intrusion. GA is a heuristic approach, often chosen when seeking optimal solutions to highly complex and nonlinear problems where LP or NLP methods cannot be applied. The GA utilized in this study is the Epsilon-Nondominated Sorted Genetic Algorithm II (ɛ-NSGAII), which can approximate a Pareto-optimal front between competing objectives. This algorithm has several key features: real and/or binary variable capabilities; an efficient sorting scheme; preservation and diversity of good solutions; dynamic population sizing; constraint handling; parallelizable implementation; and user-controlled precision for each objective. The simulation model is SEAWAT, the USGS model that couples MODFLOW with MT3DMS for variable-density flow and transport. ɛ-NSGAII and SEAWAT were efficiently linked together through a C-Fortran interface.
    The simulation-optimization model was first tested using a published density-independent flow model test case that was originally solved with a sequential LP method using the USGS's Ground-Water Management Process (GWM). In the problem formulation, the objective is to maximize net groundwater extraction, subject to head and head-gradient constraints. The decision variables are pumping rates at fixed wells, and the system's state is represented by freshwater hydraulic head. The results of the proposed algorithm were similar to the published results (within 1%); discrepancies may be attributed to differences in the simulators and inherent differences between LP and GA. The GWM test case was then extended to a density-dependent flow and transport version. As formulated, the optimization problem is infeasible because of the density effects on hydraulic head. Therefore, the sum of the squared constraint violations (SSC) was used as a second objective. The result is a Pareto curve showing optimal pumping rates versus the SSC. Analysis of this curve indicates that a net-extraction rate similar to that of the test case can be obtained with a minor violation of the vertical head-gradient constraints. This study shows that a coupled ɛ-NSGAII/SEAWAT model can be used for the management of seawater intrusion. In the future, the proposed methodology will be applied to a real-world seawater intrusion and resource management problem in Santa Barbara, CA.
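    Turning the infeasible formulation into a second objective — the sum of squared violations (SSC) of the head-gradient limits — can be sketched as follows; the gradient values and limit are hypothetical:

```python
# SSC sketch: penalize only constraint violations, squared, so feasible
# solutions score exactly zero. Gradient values and the limit are invented.
def ssc(head_gradients, limit=0.001):
    return sum(max(0.0, g - limit) ** 2 for g in head_gradients)

# Each candidate pumping plan trades net extraction against SSC:
candidates = [
    {"extraction": 120.0, "ssc": ssc([0.0012, 0.0008])},   # minor violation
    {"extraction": 100.0, "ssc": ssc([0.0009, 0.0008])},   # feasible
]
```

    A multi-objective GA such as ɛ-NSGAII then traces the front between these two quantities instead of rejecting infeasible plans outright.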

  16. Advanced EUV mask and imaging modeling

    NASA Astrophysics Data System (ADS)

    Evanschitzky, Peter; Erdmann, Andreas

    2017-10-01

    The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.

  17. Selective robust optimization: A new intensity-modulated proton therapy optimization strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yupeng; Niemela, Perttu; Siljamaki, Sami

    2015-08-15

    Purpose: To develop a new robust optimization strategy for intensity-modulated proton therapy as an important step in translating robust proton treatment planning from research to clinical applications. Methods: In selective robust optimization, a worst-case-based robust optimization algorithm is extended, and terms of the objective function are selectively computed from either the worst-case dose or the nominal dose. Two lung cancer cases and one head and neck cancer case were used to demonstrate the practical significance of the proposed robust planning strategy. The lung cancer cases had minimal tumor motion (less than 5 mm) and, for the demonstration of the methodology, are assumed to be static. Results: Selective robust optimization achieved robust clinical target volume (CTV) coverage and at the same time increased nominal planning target volume coverage to 95.8%, compared with the 84.6% coverage achieved with CTV-based robust optimization, in one of the lung cases. In the other lung case, the maximum dose was lowered from 131.3% in the CTV-based robust optimization to 113.6% in selective robust optimization. Selective robust optimization provided robust CTV coverage in the head and neck case, and at the same time improved control over the isodose distribution so that clinical requirements may be readily met. Conclusions: Selective robust optimization may provide the flexibility and capability necessary for meeting various clinical requirements in addition to achieving the required plan robustness in practical proton treatment planning settings.
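    The selective idea — computing each objective term from either the worst-case or the nominal dose — can be sketched with scalar doses per structure; the structure keys, prescriptions, and scenario doses are hypothetical:

```python
# Selective robust objective sketch: robust terms (e.g. CTV coverage) draw
# from the worst case over error scenarios, other terms from the nominal
# dose. Doses are scalars per structure for brevity.
def selective_objective(nominal_dose, scenario_doses, terms):
    """terms: (structure, prescription, weight, robust_flag) tuples."""
    total = 0.0
    for struct, presc, weight, robust in terms:
        if robust:
            dose = min(d[struct] for d in scenario_doses)  # worst-case underdose
        else:
            dose = nominal_dose[struct]
        total += weight * (dose - presc) ** 2
    return total

nominal = {"ctv": 60.0}
scenarios = [{"ctv": 58.0}, {"ctv": 61.0}]
robust_term = selective_objective(nominal, scenarios, [("ctv", 60.0, 1.0, True)])
nominal_term = selective_objective(nominal, scenarios, [("ctv", 60.0, 1.0, False)])
```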

  18. Sub-optimal control of unsteady boundary layer separation and optimal control of Saltzman-Lorenz model

    NASA Astrophysics Data System (ADS)

    Sardesai, Chetan R.

    The primary objective of this research is to explore the application of optimal control theory in nonlinear, unsteady, fluid dynamical settings. Two problems are considered: (1) control of unsteady boundary-layer separation, and (2) control of the Saltzman-Lorenz model. The unsteady boundary-layer equations are nonlinear partial differential equations that govern the eruptive events that arise when an adverse pressure gradient acts on a boundary layer at high Reynolds numbers. The Saltzman-Lorenz model consists of a coupled set of three nonlinear ordinary differential equations that govern the time-dependent coefficients in truncated Fourier expansions of Rayleigh-Bénard convection and exhibit deterministic chaos. Variational methods are used to derive the nonlinear optimal control formulations based on cost functionals that define the control objective through a performance measure and a penalty function that penalizes the cost of control. The resulting formulation consists of the nonlinear state equations, which must be integrated forward in time, and the nonlinear control (adjoint) equations, which are integrated backward in time. Such coupled forward-backward time integrations are computationally demanding; therefore, the full optimal control problem for the Saltzman-Lorenz model is carried out, while the more complex unsteady boundary-layer case is solved using a sub-optimal approach. The latter is a quasi-steady technique in which the unsteady boundary-layer equations are integrated forward in time, and the steady control equation is solved at each time step. Both sub-optimal control of the unsteady boundary-layer equations and optimal control of the Saltzman-Lorenz model are found to be successful in meeting the control objectives for each problem. In the case of boundary-layer separation, the control results indicate that it is necessary to eliminate the recirculation region that is a precursor to the unsteady boundary-layer eruptions.
In the case of the Saltzman-Lorenz model, it is possible to control the system about either of the two unstable equilibrium points, representing clockwise and counterclockwise rotation of the convection rolls, in a parameter regime for which the uncontrolled solution would exhibit deterministic chaos.
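    The forward-backward structure described in the abstract can be sketched for the Saltzman-Lorenz case. The following is a minimal illustration, not the dissertation's implementation: the quadratic cost weights, the horizon, the explicit Euler discretization, the backtracking gradient descent, and the choice of applying the scalar control to the y-equation are all assumptions.

```python
import numpy as np

# Standard Lorenz parameters in the chaotic regime (assumed).
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
ALPHA = 0.1          # control-cost penalty weight (assumed)
DT, N = 0.002, 500   # Euler step and horizon length (assumed)

# One of the two unstable equilibria (one rotation sense of the rolls).
xs = np.sqrt(BETA * (RHO - 1.0))
S_STAR = np.array([xs, xs, RHO - 1.0])

def forward(s0, u):
    """State equations: integrate the controlled Lorenz system forward."""
    s = np.empty((N + 1, 3))
    s[0] = s0
    for k in range(N):
        x, y, z = s[k]
        f = np.array([SIGMA * (y - x),
                      x * (RHO - z) - y + u[k],   # control enters here (assumed)
                      x * y - BETA * z])
        s[k + 1] = s[k] + DT * f
    return s

def cost(s, u):
    """Performance measure plus penalty on the cost of control."""
    return DT * (np.sum((s[:-1] - S_STAR) ** 2) + ALPHA * np.sum(u ** 2))

def adjoint(s):
    """Control (adjoint) equations: integrate backward in time."""
    lam = np.zeros((N + 1, 3))                   # terminal condition lam(T) = 0
    for k in range(N - 1, -1, -1):
        x, y, z = s[k]
        A = np.array([[-SIGMA, SIGMA, 0.0],
                      [RHO - z, -1.0, -x],
                      [y, x, -BETA]])
        lam[k] = lam[k + 1] + DT * (A.T @ lam[k + 1] + 2.0 * (s[k] - S_STAR))
    return lam

s0 = S_STAR + np.array([1.0, -1.0, 1.0])         # perturbed initial state
u = np.zeros(N)
history = [cost(forward(s0, u), u)]
for _ in range(20):                              # gradient descent iterations
    s = forward(s0, u)
    grad = 2.0 * ALPHA * u + adjoint(s)[:-1, 1]  # dH/du for this formulation
    step = 1e-2
    while step > 1e-12:                          # backtracking line search
        u_try = u - step * grad
        c_try = cost(forward(s0, u_try), u_try)
        if c_try < history[-1]:
            u = u_try
            history.append(c_try)
            break
        step *= 0.5
```

    Each outer iteration performs one forward state integration and one backward adjoint integration, which is the coupled forward-backward structure that makes the full optimal control problem computationally demanding.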

  19. A simple method for EEG guided transcranial electrical stimulation without models

    NASA Astrophysics Data System (ADS)

    Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q.; Dmochowski, Jacek; Bikson, Marom

    2016-06-01

    Objective. There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES), but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and makes strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. Approach. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects the locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG, with no need for head models or source localization. Cortical dipoles represent idealized brain targets. The EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangentially or radially to the scalp surface, followed by simulation of the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a ‘gold standard’ numerically optimized tES dose that assumes perfect knowledge of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as the optimization criterion. Main results. The model-free approaches evaluated are (1) voltage-to-voltage; (2) voltage-to-current; (3) Laplacian; and two ad hoc techniques, (4) dipole sink-to-sink and (5) sink-to-concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head.
Significance. Our approach is verified directly only for a theoretically localized source, but it may in principle be applied to an arbitrary EEG topography. Owing to its simplicity and linearity, our recipe for model-free EEG-guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
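    The two-step recipe can be illustrated with a hypothetical "voltage-to-current"-style mapping. The electrode-selection rule (largest EEG magnitude), the zero-sum constraint on injected currents, and the 2 mA anodal scaling below are assumptions for illustration, not the paper's exact formulas.

```python
import numpy as np

def select_electrodes(v, k=4):
    """Step 1: keep the k electrodes with the largest EEG magnitude (assumed rule)."""
    v = np.asarray(v, dtype=float)
    return np.argsort(np.abs(v))[-k:]

def eeg_to_currents(v, k=4, total_ma=2.0):
    """Step 2: stimulation currents proportional to the EEG voltages at the
    selected sites, mean-subtracted so the injected currents sum to zero
    (current conservation across the montage), then scaled so the total
    anodal current equals total_ma."""
    v = np.asarray(v, dtype=float)
    idx = select_electrodes(v, k)
    i = np.zeros_like(v)
    i[idx] = v[idx] - v[idx].mean()      # enforce zero net injected current
    pos = i[i > 0].sum()
    if pos > 0:
        i *= total_ma / pos              # safety-limit scaling (assumed)
    return i

# Example: a dipolar-looking topography over 8 electrodes (invented values).
topo = np.array([0.1, 2.0, 0.3, -1.5, 0.2, -0.4, 1.0, -0.1])
I = eeg_to_currents(topo, k=4)
```

    Note that both steps read only the EEG vector itself; no head model, MRI, or source localization enters the computation, which is the point of the model-free recipe.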

  20. Design optimization of a vaneless ``fish-friendly'' swirl injector for small water turbines

    NASA Astrophysics Data System (ADS)

    Airody, Ajith; Peterson, Sean D.

    2015-11-01

    Small-scale hydro-electric plants are attractive options for powering remote sites, as they draw energy from local bodies of water. However, the environmental impact on the aquatic life drawn into the water turbine is a concern. To mitigate adverse consequences on the local fauna, small-scale water turbine design efforts have focused on developing ``fish-friendly'' facilities. The components of these turbines tend to have wider passages between the blades when compared to traditional turbines, and the rotors are designed to spin at much lower angular velocities, thus allowing fish to pass through safely. Galt Green Energy has proposed a vaneless casing that provides the swirl component to the flow approaching the rotor, eliminating the need for inlet guide vanes. We numerically model the flow through the casing using ANSYS CFX to assess the evolution of the axial and circumferential velocity symmetry and uniformity in various cross-sections within and downstream of the injector. The velocity distributions, as well as the pressure loss through the injector, are functions of the pitch angle and number of revolutions of the casing. Optimization of the casing design is discussed via an objective function consisting of the velocity and pressure performance measures.
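    The design optimization over the two casing parameters can be sketched as a weighted objective combining a velocity non-uniformity measure and the pressure loss, minimized over pitch angle and number of revolutions. The surrogate response surfaces below are invented stand-ins; the real values would come from the ANSYS CFX simulations, and the functional forms and weight are purely illustrative.

```python
import numpy as np

def nonuniformity(pitch_deg, revs):
    # assumed surrogate: more revolutions smooth the swirl profile,
    # while extreme pitch angles degrade uniformity
    return np.exp(-revs) + 0.01 * (pitch_deg - 30.0) ** 2

def pressure_loss(pitch_deg, revs):
    # assumed surrogate: loss grows with swirl path length and pitch
    return 0.05 * revs + 0.002 * pitch_deg

def objective(pitch_deg, revs, w=0.5):
    """Weighted objective over the velocity and pressure performance measures."""
    return w * nonuniformity(pitch_deg, revs) + (1.0 - w) * pressure_loss(pitch_deg, revs)

# Simple grid search over the two casing design parameters (assumed ranges).
pitches = np.linspace(10.0, 60.0, 51)        # pitch angle, degrees
revolutions = np.linspace(0.5, 3.0, 26)      # number of casing revolutions
P, R = np.meshgrid(pitches, revolutions)
J = objective(P, R)
k = np.unravel_index(np.argmin(J), J.shape)
best = (P[k], R[k])                          # best (pitch, revolutions) pair
```

    In practice each grid point would require a CFD run, so a response-surface or gradient-based method would replace the exhaustive search; the sketch only shows how the two performance measures combine into a single design objective.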
