Science.gov

Sample records for distributed optimization approach

  1. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    PubMed

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2017-02-01

    This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for a discretized approximation of the Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.
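
    As a minimal illustration of the objective-weighting idea described above (a plain weighted-sum scalarization solved with SciPy, not the neurodynamic solver itself; the two quadratic objectives are invented for the example), sweeping the weight traces out points on a Pareto front:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Two toy convex objectives standing in for the paper's objective functions.
    f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2
    f2 = lambda x: x[0] ** 2 + (x[1] - 2) ** 2

    front = []
    for w in np.linspace(0.05, 0.95, 10):
        # Weighted-sum scalarization: each weight yields one Pareto-optimal point.
        res = minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=np.zeros(2))
        front.append((f1(res.x), f2(res.x)))
    for p in front:
        print("Pareto point: f1=%.3f  f2=%.3f" % p)
    ```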

  2. Distributed Cooperative Optimal Control for Multiagent Systems on Directed Graphs: An Inverse Optimal Approach.

    PubMed

    Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing

    2015-07-01

    In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.

  3. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.
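
    The following sketch conveys the flavor of such product-distribution annealing under loose assumptions (the objective, the damping factor, the annealing schedule, and all other parameters are invented stand-ins, not the authors' Lagrangian or k-sat setup): each agent holds an independent distribution over its own binary variable and updates it with a Boltzmann rule on Monte Carlo cost estimates while a temperature anneals.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 12  # number of agents, one binary variable each

    def objective(x):
        # Toy shared cost standing in for a k-sat penalty or NK landscape.
        return np.sum((x[:-1] - x[1:]) ** 2) + 0.1 * np.sum(x)

    q = np.full((n, 2), 0.5)  # q[i, v] = P(agent i plays value v)
    T = 1.0
    for it in range(200):
        samples = (rng.random((64, n)) < q[:, 1]).astype(int)
        costs = np.array([objective(s) for s in samples])
        for i in range(n):
            # Conditional expected cost for each value of agent i's variable.
            e = np.array([costs[samples[:, i] == v].mean()
                          if np.any(samples[:, i] == v) else costs.mean()
                          for v in (0, 1)])
            b = np.exp(-(e - e.min()) / T)          # shift for numerical stability
            q[i] = 0.9 * q[i] + 0.1 * b / b.sum()   # damped Boltzmann update
        T *= 0.98                                    # annealing schedule

    best = (q[:, 1] > 0.5).astype(int)
    print("most probable joint move:", best, "cost:", objective(best))
    ```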

  4. Mathematical optimization approach for estimating the quantum yield distribution of a photochromic reaction in a polymer

    NASA Astrophysics Data System (ADS)

    Tanaka, Mirai; Yamashita, Takashi; Sano, Natsuki; Ishigaki, Aya; Suzuki, Tomomichi

    2017-01-01

    The convolution of a series of events is often observed for a variety of phenomena such as the oscillation of a string. A photochemical reaction of a molecule is characterized by a time constant, but materials in the real world contain several molecules with different time constants. Therefore, the kinetics of photochemical reactions of the materials are usually observed with a complexity comparable to that of theoretical kinetic equations. Analysis of the components of the kinetics is quite important for the development of advanced materials. However, with a limited number of exceptions, deconvolution of the observed kinetics has not yet been mathematically solved. In this study, we propose a mathematical optimization approach for estimating the quantum yield distribution of a photochromic reaction in a polymer. In the proposed approach, time-series data of absorbances are acquired and an estimate of the quantum yield distribution is obtained. To estimate the distribution, we solve a mathematical optimization problem to minimize the difference between the input data and a model. This optimization problem involves a differential equation constrained on a functional space as the variable lies in the space of probability distribution functions and the constraints arise from reaction rate equations. This problem can be reformulated as a convex quadratic optimization problem and can be efficiently solved by discretization. Numerical results are also reported here, and they verify the effectiveness of our approach.
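
    As a rough sketch of the discretization idea (our own toy model, not the authors' code): represent the observed decay as a mixture of exponentials over a grid of rate constants and recover the weight distribution by a constrained linear least-squares fit, with non-negativity from bounds and the sum-to-one constraint from a heavily weighted appended equation.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 200)           # observation times
    k_grid = np.linspace(0.1, 3.0, 40)    # discretized rate constants

    # Synthetic "measured" absorbance from a two-component ground truth.
    y = 0.6 * np.exp(-0.5 * t) + 0.4 * np.exp(-2.0 * t)
    y = y + 0.005 * rng.standard_normal(t.size)

    A = np.exp(-np.outer(t, k_grid))      # design matrix: A[i, j] = exp(-k_j * t_i)

    # w >= 0 via bounds; sum(w) = 1 via an appended, heavily weighted row.
    rho = 1e3
    A_aug = np.vstack([A, rho * np.ones((1, k_grid.size))])
    y_aug = np.concatenate([y, [rho]])
    w = lsq_linear(A_aug, y_aug, bounds=(0, np.inf)).x

    print("sum of recovered weights:", w.sum())
    print("weight near k=0.5:", w[np.argmin(abs(k_grid - 0.5))])
    print("weight near k=2.0:", w[np.argmin(abs(k_grid - 2.0))])
    ```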

  5. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    PubMed

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with high satisfaction of the assigned targets is obtained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.

  6. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation

    PubMed Central

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with high satisfaction of the assigned targets is obtained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783

  7. Product Distributions for Distributed Optimization. Chapter 1

    NASA Technical Reports Server (NTRS)

    Bieniawski, Stefan R.; Wolpert, David H.

    2004-01-01

    With connections to bounded rational game theory, information theory and statistical mechanics, Product Distribution (PD) theory provides a new framework for performing distributed optimization. Furthermore, PD theory extends and formalizes Collective Intelligence, thus connecting distributed optimization to distributed Reinforcement Learning (RL). This paper provides an overview of PD theory and details an algorithm for performing optimization derived from it. The approach is demonstrated on two unconstrained optimization problems, one with discrete variables and one with continuous variables. To highlight the connections between PD theory and distributed RL, the results are compared with those obtained using distributed reinforcement learning inspired optimization approaches. The inter-relationship of the techniques is discussed.

  8. A Scalable and Robust Multi-Agent Approach to Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan

    2005-01-01

    Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach in the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation) the system remains coordinated and still outperforms a failure-free and centralized optimization algorithm.

  9. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    NASA Astrophysics Data System (ADS)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
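
    A minimal harmony-search loop is sketched below (a generic HS implementation with invented parameters; the paper's EPANET-coupled hydraulic model is replaced by a placeholder objective with a penalty term standing in for the constraint checks the simulator would perform):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    dim, hms = 5, 10          # number of valves, harmony memory size
    hmcr, par, bw = 0.9, 0.3, 0.1
    lo, hi = 0.0, 10.0        # valve-setting bounds (arbitrary units)

    def cost(x):
        # Placeholder "leakage" objective plus a penalty standing in for the
        # hydraulic constraints that EPANET would verify in the real model.
        penalty = 1e3 * max(0.0, x.sum() - 15.0) ** 2
        return np.sum((x - 3.0) ** 2) + penalty

    memory = rng.uniform(lo, hi, (hms, dim))
    scores = np.array([cost(x) for x in memory])
    for it in range(2000):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                  # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:               # pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1)
            else:                                    # random selection
                new[j] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        c = cost(new)
        worst = int(np.argmax(scores))
        if c < scores[worst]:                        # replace worst harmony
            memory[worst], scores[worst] = new, c

    print("best cost found:", scores.min())
    ```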

  10. An efficient hybrid approach for multiobjective optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2014-05-01

    An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.

  11. A new systems approach to optimizing investments in gas production and distribution

    SciTech Connect

    Dougherty, E.L.

    1983-03-01

    This paper presents a new analytical approach for determining the optimal sequence of investments to make in each year of an extended planning horizon in each of a group of reservoirs producing gas and gas liquids through an interconnected trunkline network and a gas processing plant. The optimality criterion is to maximize net present value while satisfying fixed offtake requirements for dry gas, but with no limits on gas liquids production. The planning problem is broken into n + 2 separate but interrelated subproblems; gas reservoir development and production, gas flow in a trunkline gathering system, and plant separation activities to remove undesirable gas (CO/sub 2/) or to recover valuable liquid components. The optimal solution for each subproblem depends upon the optimal solutions for all of the other subproblems, so that the overall optimal solution is obtained iteratively. The iteration technique used is based upon a combination of heuristics and the decompostion algorithm of mathematical programming. Each subproblem is solved once during each overall iteration. In addition to presenting some mathematical details of the solution approach, this paper describes a computer system which has been developed to obtain solutions.

  12. Optimizing booster chlorination in water distribution networks: a water quality index approach.

    PubMed

    Islam, Nilufar; Sadiq, Rehan; Rodriguez, Manuel J

    2013-10-01

    The optimization of chlorine dosage and the number of booster locations is an important aspect of water quality management in distribution networks. Booster chlorination helps to maintain uniformity and adequacy of free residual chlorine concentration, essential for safeguarding against microbiological contamination. Higher chlorine dosages increase free residual chlorine concentration but generate harmful by-products, in addition to taste and odor complaints. It is possible to address these microbial, chemical, and aesthetic water quality issues through free residual chlorine concentration. Estimating a water quality index (WQI) based on regulatory chlorine thresholds for microbial, chemical, and aesthetics criteria can help engineers make intelligent decisions. An innovative scheme for maintaining adequate residual chlorine with optimal chlorine dosages and numbers of booster locations was established based on a proposed WQI. The City of Kelowna (BC, Canada) water distribution network served to demonstrate the application of the proposed scheme. Temporal free residual chlorine concentration predicted with EPANET software was used to estimate the WQI, later coupled with an optimization scheme. Preliminary temporal and spatial analyses identified critical zones (relatively poor water quality) in the distribution network. The model may also prove useful for small or rural communities where free residual chlorine is considered as the only water quality criterion.

  13. A distributed approach for optimizing cascaded classifier topologies in real-time stream mining systems.

    PubMed

    Foo, Brian; van der Schaar, Mihaela

    2010-11-01

    In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.

  14. Anthropogenic carbon estimates in the Weddell Sea using an optimized CFC based transit time distribution approach

    NASA Astrophysics Data System (ADS)

    Huhn, Oliver; Hauck, Judith; Hoppema, Mario; Rhein, Monika; Roether, Wolfgang

    2010-05-01

    We use a 20-year time series of chlorofluorocarbon (CFC) observations along the Prime Meridian to determine the temporal evolution of anthropogenic carbon (Cant) in the two deep boundary currents which enter the Weddell Basin in the south and leave it in the north. The Cant is inferred from transit time distributions (TTDs), with parameters (mean transit time and dispersion) adjusted to the observed mean CFC histories in these recently ventilated deep boundary currents. We optimize that "classic" TTD approach by accounting for water exchange of the boundary currents with an old but not CFC- and Cant-free interior reservoir. This reservoir, in turn, is replenished by the boundary currents, which we parameterize as first-order mixing. Furthermore, we account for the time-dependence of the CFC and Cant source water saturation. A conceptual model of an ideal saturated mixed layer and exchange with adjacent water is adjusted to observed CFC saturations in the source regions. The time-dependence for the CFC saturation appears to be much weaker than for Cant. We find a mean transit time of 14 years and an advection/dispersion ratio of 5 for the deep southern boundary current. For the northern boundary current we find a mean transit time of 8 years and a much larger advection/dispersion ratio of 140. The fractions directly supplied by the boundary currents are in both cases on the order of 10%, while 90% are admixed from the interior reservoirs, which are replenished with a renewal time of about 14 years. We determine Cant ~ 11 μmol/kg (reference year 2006) in the deep water entering the Weddell Sea in the south (~2.1 Sv), and 12 μmol/kg for the deep water leaving the Weddell Sea in the north (~2.7 Sv). These Cant estimates are, however, upper limits, considering that the Cant source water saturation is likely to be lower than that for the CFCs. Comparison with Cant intrusion estimates based on extended multiple linear regression (using potential temperature, salinity, oxygen, and
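
    For intuition, the classic TTD propagation step reads c(t) = ∫ G(τ) c_surface(t − τ) dτ, with G an inverse-Gaussian transit time distribution. A back-of-envelope version follows (the surface history and the width parameter are invented; only the 14-year mean transit time echoes the abstract, and the reservoir-exchange and saturation corrections described above are omitted):

    ```python
    import numpy as np

    years = np.arange(1940, 2007)
    surface = np.clip((years - 1950) * 0.02, 0, None)  # toy rising CFC history

    Gamma, Delta = 14.0, 7.0    # mean transit time (yr) and an invented width
    tau = np.arange(1, 200)
    # Inverse-Gaussian TTD: G(t) = sqrt(G^3/(4 pi D^2 t^3)) exp(-G(t-G)^2/(4 D^2 t))
    G = np.sqrt(Gamma**3 / (4 * np.pi * Delta**2 * tau**3)) * \
        np.exp(-Gamma * (tau - Gamma)**2 / (4 * Delta**2 * tau))
    G /= G.sum()                # normalize the discretized TTD

    def boundary_current(year):
        past = year - tau       # c(t) = sum_tau G(tau) * c_surface(t - tau)
        c0 = np.where(past >= years[0], np.interp(past, years, surface), 0.0)
        return float(np.sum(G * c0))

    print("propagated CFC in 2006:", boundary_current(2006))
    ```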

  15. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
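
    A hedged sketch of the hybrid idea (global evolutionary search seeding a truncated-Newton refinement) on a stand-in objective; SciPy's differential evolution substitutes here for the paper's genetic algorithm, and the groundwater head-misfit function with its prior-information terms is replaced by a toy multimodal function:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    def misfit(p):
        # Placeholder for the paper's performance function on hydraulic heads.
        return np.sum((p - np.array([2.0, -1.0, 0.5])) ** 2) \
               + 0.1 * np.sin(5 * p).sum() ** 2

    bounds = [(-5, 5)] * 3
    # Global stage: evolutionary search over the parameter box.
    coarse = differential_evolution(misfit, bounds, seed=3, maxiter=50)
    # Local stage: truncated-Newton (TNC) polishing from the evolutionary seed.
    fine = minimize(misfit, coarse.x, method="TNC", bounds=bounds)
    print("evolutionary seed:", coarse.x)
    print("TNC-refined estimate:", fine.x)
    ```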

  16. Dealing with noisy absences to optimize species distribution models: an iterative ensemble modelling approach.

    PubMed

    Lauzeral, Christine; Grenouillet, Gaël; Brosse, Sébastien

    2012-01-01

    Species distribution models (SDMs) are widespread in ecology and conservation biology, but their accuracy can be lowered by non-environmental (noisy) absences that are common in species occurrence data. Here we propose an iterative ensemble modelling (IEM) method to deal with noisy absences and hence improve the predictive reliability of ensemble modelling of species distributions. In the IEM approach, outputs of a classical ensemble model (EM) were used to update the raw occurrence data. The revised data was then used as input for a new EM run. This process was iterated until the predictions stabilized. The outputs of the iterative method were compared to those of the classical EM using virtual species. The IEM process tended to converge rapidly. It increased the consensus between predictions provided by the different methods as well as between those provided by different learning data sets. Comparing IEM and EM showed that for high levels of non-environmental absences, iterations significantly increased prediction reliability measured by the Kappa and TSS indices, as well as the percentage of well-predicted sites. Compared to EM, IEM also reduced biases in estimates of species prevalence. Compared to the classical EM method, IEM improves the reliability of species predictions. It particularly deals with noisy absences that are replaced in the data matrices by simulated presences during the iterative modelling process. IEM thus constitutes a promising way to increase the accuracy of EM predictions of difficult-to-detect species, as well as of species that are not in equilibrium with their environment.
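
    Schematically, the IEM loop can be written as below (model choice, vote threshold, and stopping rule are illustrative; scikit-learn is assumed available, and a random-forest ensemble stands in for the paper's multi-method SDM ensemble):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def iem(X, y, n_iter=10, vote_threshold=0.9, seed=4):
        """Iteratively relabel suspect absences as presences and refit."""
        y_work = y.copy()
        prev = None
        for _ in range(n_iter):
            ens = [RandomForestClassifier(n_estimators=50, random_state=seed + k)
                   .fit(X, y_work) for k in range(5)]
            proba = np.mean([m.predict_proba(X)[:, 1] for m in ens], axis=0)
            # Flip "noisy absences" the ensemble consistently predicts as
            # presences; recorded presences are never downgraded.
            flip = (y_work == 0) & (proba > vote_threshold)
            y_work = np.where(flip, 1, y_work)
            pred = (proba > 0.5).astype(int)
            if prev is not None and np.array_equal(pred, prev):
                break               # predictions have stabilized
            prev = pred
        return ens, y_work

    # Usage on synthetic occurrence data:
    X = np.random.default_rng(4).random((200, 4))
    y = (X[:, 0] > 0.5).astype(int)
    models, y_clean = iem(X, y)
    ```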

  17. Dealing with Noisy Absences to Optimize Species Distribution Models: An Iterative Ensemble Modelling Approach

    PubMed Central

    Lauzeral, Christine; Grenouillet, Gaël; Brosse, Sébastien

    2012-01-01

    Species distribution models (SDMs) are widespread in ecology and conservation biology, but their accuracy can be lowered by non-environmental (noisy) absences that are common in species occurrence data. Here we propose an iterative ensemble modelling (IEM) method to deal with noisy absences and hence improve the predictive reliability of ensemble modelling of species distributions. In the IEM approach, outputs of a classical ensemble model (EM) were used to update the raw occurrence data. The revised data was then used as input for a new EM run. This process was iterated until the predictions stabilized. The outputs of the iterative method were compared to those of the classical EM using virtual species. The IEM process tended to converge rapidly. It increased the consensus between predictions provided by the different methods as well as between those provided by different learning data sets. Comparing IEM and EM showed that for high levels of non-environmental absences, iterations significantly increased prediction reliability measured by the Kappa and TSS indices, as well as the percentage of well-predicted sites. Compared to EM, IEM also reduced biases in estimates of species prevalence. Compared to the classical EM method, IEM improves the reliability of species predictions. It particularly deals with noisy absences that are replaced in the data matrices by simulated presences during the iterative modelling process. IEM thus constitutes a promising way to increase the accuracy of EM predictions of difficult-to-detect species, as well as of species that are not in equilibrium with their environment. PMID:23166691

  18. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  19. Distributed Energy Resource Optimization Using a Software as Service (SaaS) Approach at the University of California, Davis Campus

    SciTech Connect

    Stadler, Michael; Marnay, Chris; Donadee, Jon; Lai, Judy; Megel, Olivier; Bhattacharya, Prajesh; Siddiqui, Afzal

    2011-02-06

    Together with OSIsoft LLC as its private sector partner and matching sponsor, the Lawrence Berkeley National Laboratory (Berkeley Lab) won an FY09 Technology Commercialization Fund (TCF) grant from the U.S. Department of Energy. The goal of the project is to commercialize Berkeley Lab's optimizing program, the Distributed Energy Resources Customer Adoption Model (DER-CAM) using a software as a service (SaaS) model with OSIsoft as its first non-scientific user. OSIsoft could in turn provide optimization capability to its software clients. In this way, energy efficiency and/or carbon minimizing strategies could be made readily available to commercial and industrial facilities. Specialized versions of DER-CAM dedicated to solving OSIsoft's customer problems have been set up on a server at Berkeley Lab. The objective of DER-CAM is to minimize the cost of technology adoption and operation or carbon emissions, or combinations thereof. DER-CAM determines which technologies should be installed and operated based on specific site load, price information, and performance data for available equipment options. An established user of OSIsoft's PI software suite, the University of California, Davis (UCD), was selected as a demonstration site for this project. UCD's participation in the project is driven by its motivation to reduce its carbon emissions. The campus currently buys electricity economically through the Western Area Power Administration (WAPA). The campus does not therefore face compelling cost incentives to improve the efficiency of its operations, but is nonetheless motivated to lower the carbon footprint of its buildings. Berkeley Lab attempted to demonstrate a scenario wherein UCD is forced to purchase electricity on a standard time-of-use tariff from Pacific Gas and Electric (PG&E), which is a concern to Facilities staff. Additionally, DER-CAM has been set up to consider the variability of carbon emissions throughout the day and seasons. Two distinct analyses of

  20. Optimizing influenza vaccine distribution.

    PubMed

    Medlock, Jan; Galvani, Alison P

    2009-09-25

    The criteria to assess public health policies are fundamental to policy optimization. Using a model parametrized with survey-based contact data and mortality data from influenza pandemics, we determined optimal vaccine allocation for five outcome measures: deaths, infections, years of life lost, contingent valuation, and economic costs. We find that optimal vaccination is achieved by prioritization of schoolchildren and adults aged 30 to 39 years. Schoolchildren are most responsible for transmission, and their parents serve as bridges to the rest of the population. Our results indicate that consideration of age-specific transmission dynamics is paramount to the optimal allocation of influenza vaccines. We also found that previous and new recommendations from the U.S. Centers for Disease Control and Prevention both for the novel swine-origin influenza and, particularly, for seasonal influenza, are suboptimal for all outcome measures.

  1. Retrieval of ice crystals' mass from ice water content and particle distribution measurements: a numerical optimization approach

    NASA Astrophysics Data System (ADS)

    Coutris, Pierre; Leroy, Delphine; Fontaine, Emmanuel; Schwarzenboeck, Alfons; Strapp, J. Walter

    2016-04-01

    A new method to retrieve cloud water content from in-situ measured 2D particle images from optical array probes (OAP) is presented. With the overall objective to build a statistical model of crystals' mass as a function of their size, environmental temperature, and crystal microphysical history, this study presents the methodology to retrieve the mass of crystals sorted by size from 2D images using a numerical optimization approach. The methodology is validated using two datasets of in-situ measurements gathered during two airborne field campaigns held in Darwin, Australia (2014), and Cayenne, French Guiana (2015), within the framework of the High Altitude Ice Crystals (HAIC) / High Ice Water Content (HIWC) projects. During these campaigns, a Falcon F-20 research aircraft equipped with state-of-the-art microphysical instrumentation sampled numerous mesoscale convective systems (MCS) in order to study dynamical and microphysical properties and processes of high ice water content areas. Experimentally, an isokinetic evaporator probe, referred to as IKP-2, provides a reference measurement of the total water content (TWC), which equals ice water content (IWC) when (supercooled) liquid water is absent. Two optical array probes, namely 2D-S and PIP, produce 2D images of individual crystals ranging from 50 μm to 12840 μm from which particle size distributions (PSD) are derived. Mathematically, the problem is formulated as an inverse problem in which the crystals' mass is assumed constant over a size class and is computed for each size class from IWC and PSD data: PSD · m = IWC. This problem is solved using a numerical optimization technique in which an objective function is minimized. The objective function is defined as J(m) = ‖PSD · m − IWC‖² + λ · R(m), where the regularization parameter λ and the regularization function R(m) are tuned based on data characteristics. The method is implemented in two steps. First, the method is developed on synthetic crystal populations in
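
    A toy version of this regularized inversion (the synthetic data, the smoothness regularizer, and all parameter values are ours, not the paper's size-dependent tuning) solves PSD · m = IWC for a non-negative per-size-class mass vector:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(5)
    n_obs, n_bins = 120, 15
    PSD = rng.gamma(2.0, 1.0, (n_obs, n_bins))         # counts per size class
    m_true = 1e-6 * np.linspace(1, 30, n_bins) ** 1.8  # mass grows with size
    IWC = PSD @ m_true * (1 + 0.05 * rng.standard_normal(n_obs))

    lam = 0.1
    D = np.diff(np.eye(n_bins), axis=0)   # first-difference operator for R(m)
    A = np.vstack([PSD, lam * D])         # least squares plus smoothness term
    b = np.concatenate([IWC, np.zeros(n_bins - 1)])
    m_hat, _ = nnls(A, b)                 # non-negativity enforced by NNLS

    err = np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true)
    print("relative reconstruction error:", err)
    ```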

  2. Comparison of linear and nonlinear programming approaches for "worst case dose" and "minmax" robust optimization of intensity-modulated proton therapy dose distributions.

    PubMed

    Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino

    2017-03-01

    Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance in terms of plan quality and robustness of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull base cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull base and head and neck cancer patients. Overall, LP-based methods were suitable for the less challenging cancer cases in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in more difficult cases in which most uncertainty scenarios were hard to meet

  3. Multicriteria optimization of the spatial dose distribution

    SciTech Connect

    Schlaefer, Alexander; Viulet, Tiberiu; Muacevic, Alexander; Fürweger, Christoph

    2013-12-15

    Purpose: Treatment planning for radiation therapy involves trade-offs with respect to different clinical goals. Typically, the dose distribution is evaluated based on few statistics and dose–volume histograms. Particularly for stereotactic treatments, the spatial dose distribution represents further criteria, e.g., when considering the gradient between subregions of volumes of interest. The authors have studied how to consider the spatial dose distribution using a multicriteria optimization approach. Methods: The authors have extended a stepwise multicriteria optimization approach to include criteria with respect to the local dose distribution. Based on a three-dimensional visualization of the dose, the authors use a software tool allowing interaction with the dose distribution to map objectives with respect to its shape to a constrained optimization problem. Similarly, conflicting criteria are highlighted and the planner decides if and where to relax the shape of the dose distribution. Results: To demonstrate the potential of spatial multicriteria optimization, the tool was applied to a prostate and meningioma case. For the prostate case, local sparing of the rectal wall and shaping of a boost volume are achieved through local relaxations while maintaining the remaining dose distribution. For the meningioma, target coverage is improved by compromising low dose conformality toward noncritical structures. A comparison of dose–volume histograms illustrates the importance of spatial information for achieving the trade-offs. Conclusions: The results show that it is possible to consider the location of conflicting criteria during treatment planning. Particularly, it is possible to conserve already achieved goals with respect to the dose distribution, to visualize potential trade-offs, and to relax constraints locally. Hence, the proposed approach facilitates a systematic exploration of the optimal shape of the dose distribution.

  4. Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids

    SciTech Connect

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    2016-11-21

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
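
    For intuition, the distribution-agnostic step can be reduced to a one-sided Chebyshev (Cantelli) tightening of each voltage constraint. The sketch below (the sensitivity vector and covariance values are invented) computes the deterministic margin that replaces P(v ≤ v_max) ≥ 1 − ε:

    ```python
    import numpy as np

    def chebyshev_margin(a, mu_err, Sigma_err, eps):
        """Margin such that v0 + margin <= v_max implies
        P(v0 + a @ e <= v_max) >= 1 - eps for ANY error distribution e
        with the given mean and covariance (one-sided Chebyshev/Cantelli)."""
        k = np.sqrt((1 - eps) / eps)
        return a @ mu_err + k * np.sqrt(a @ Sigma_err @ a)

    # Voltage sensitivity to 3 uncertain PV injections (numbers invented).
    a = np.array([0.02, 0.015, 0.01])   # p.u. voltage per kW of forecast error
    mu = np.zeros(3)                    # forecast errors assumed zero-mean here
    Sigma = np.diag([4.0, 2.5, 1.0])    # kW^2; updated online in the paper
    print("voltage margin (p.u.):", chebyshev_margin(a, mu, Sigma, eps=0.05))
    ```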

  5. Flocking in Distributed Control and Optimization

    DTIC Science & Technology

    2015-06-01

    of the agents are nonlinear, nonidentical, unknown, and subject to external disturbances. Distributed neural networks are used to approximate the...convex nutrient profiles. These results suggest that swarming-like approaches for the control of networked agents may provide an additional level of...

  6. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  7. Distributed optimization and flight control using collectives

    NASA Astrophysics Data System (ADS)

    Bieniawski, Stefan Richard

    The increasing complexity of aerospace systems demands new approaches for their design and control. Approaches are required to address the trend towards aerospace systems comprised of a large number of inherently distributed and highly nonlinear components with complex and sometimes competing interactions. This work introduces collectives to address these challenges. Although collectives have been used for distributed optimization problems in computer science, recent developments based upon Probability Collectives (PC) theory enhance their applicability to discrete, continuous, mixed, and constrained optimization problems. Further, they are naturally applied to distributed systems and those involving uncertainty, such as control in the presence of noise and disturbances. This work describes collectives theory and its implementation, including its connections to multi-agent systems, machine learning, statistics, and gradient-based optimization. To demonstrate the approach, two experiments were developed. These experiments built upon recent advances in actuator technology that resulted in small, simple flow control devices. Miniature-Trailing Edge Effectors (MiTE), consisting of a small, 1-5% chord, moveable surface mounted at the wing trailing edge, are used for the experiments. The high bandwidth, distributed placement, and good control authority make these ideal candidates for rigid and flexible mode control of flight vehicles. This is demonstrated in two experiments: flutter suppression of a flexible wing, and flight control of a remotely piloted aircraft. The first experiment successfully increased the flutter speed by over 25%. The second experiment included a novel distributed flight control system based upon the MiTEs that includes distributed sensing, logic, and actuation. Flight tests validated the control capability of the MiTEs and the associated flight control architecture. The collectives approach was used to design controllers for the distributed

  8. Wave based optimization of distributed vibration absorbers

    NASA Astrophysics Data System (ADS)

    Johnson, Marty; Batton, Brad

    2005-09-01

    The concept of distributed vibration absorbers or DVAs has been investigated in recent years as a method of vibration control and sound radiation control for large flexible structures. These devices are comprised of a distributed compliant layer with a distributed mass layer. When such a device is placed onto a structure it forms a sandwich panel configuration with a very soft core. With this configuration the main effect of the DVA is to create forces normal to the surface of the structure, and it can be used at low frequencies either to add damping, where constrained-layer damping treatments are not very effective, or to pin the structure over a narrow frequency bandwidth (i.e., a large input impedance/vibration absorber approach). This paper analyses the behavior of these devices using a wave based approach and finds an optimal damping level for the control of broadband disturbances in panels. The optimal design is calculated by solving the differential equations for waves propagating in coupled plates. It is shown that the optimal damping calculated using the infinite case acts as a good "rule of thumb" for designing DVAs to control the vibration of finite panels. This is borne out in both numerical simulations and experiments.

  9. Hybrid centralized pre-computing/local distributed optimization of shared disjoint-backup path approach to GMPLS optical mesh network intelligent restoration

    NASA Astrophysics Data System (ADS)

    Gong, Qian; Xu, Rong; Lin, Jintong

    2004-04-01

    Wavelength Division Multiplexed (WDM) networks that route optical connections using intelligent optical cross-connects (OXCs) are firmly established as the core constituent of next-generation networks. Rapid failure recovery is fundamental to building reliable transport networks. Mesh restoration promises cost-effective failure recovery compared with legacy ring networks, and is now seeing large-scale deployment. Many carriers are migrating away from SONET ring restoration for their core transport networks and replacing it with mesh restoration through "intelligent" O-E-O cross-connects (XC). The mesh restoration is typically provided via two fiber-disjoint paths: a service path and a restoration path. This scheme can restore any single link failure or node failure. With shared mesh restoration, although every service route is assigned a restoration route, no dedicated capacity needs to be reserved for the restoration route, resulting in capacity savings. The restoration approach we propose is Centralized Pre-computing, Local Distributed Optimization, and Shared Disjoint-backup Path. This approach combines the merits of centralized and distributed solutions. It avoids the scalability issues of centralized solutions by using a distributed control plane for disjoint service path computation and restoration path provisioning. Moreover, if the service routes of two demands are disjoint, no single failure will affect both demands simultaneously. This means that the restoration routes of these two demands can share link capacities, because these two routes will not be activated at the same time. Overall, this restoration capacity sharing approach achieves low restoration capacity requirements and fast restoration speed, while requiring few control plane changes.

  10. Coordinated Optimization of Distributed Energy Resources and Smart Loads in Distribution Systems: Preprint

    SciTech Connect

    Yang, Rui; Zhang, Yingchen

    2016-08-01

    Distributed energy resources (DERs) and smart loads have the potential to provide flexibility to the distribution system operation. A coordinated optimization approach is proposed in this paper to actively manage DERs and smart loads in distribution systems to achieve the optimal operation status. A three-phase unbalanced Optimal Power Flow (OPF) problem is developed to determine the output from DERs and smart loads with respect to the system operator's control objective. This paper focuses on coordinating PV systems and smart loads to improve the overall voltage profile in distribution systems. Simulations have been carried out in a 12-bus distribution feeder and results illustrate the superior control performance of the proposed approach.

  11. Coordinated Optimization of Distributed Energy Resources and Smart Loads in Distribution Systems

    SciTech Connect

    Yang, Rui; Zhang, Yingchen

    2016-11-14

    Distributed energy resources (DERs) and smart loads have the potential to provide flexibility to the distribution system operation. A coordinated optimization approach is proposed in this paper to actively manage DERs and smart loads in distribution systems to achieve the optimal operation status. A three-phase unbalanced Optimal Power Flow (OPF) problem is developed to determine the output from DERs and smart loads with respect to the system operator's control objective. This paper focuses on coordinating PV systems and smart loads to improve the overall voltage profile in distribution systems. Simulations have been carried out in a 12-bus distribution feeder and results illustrate the superior control performance of the proposed approach.

  12. Distributed Databases: The Adaptable Approach.

    ERIC Educational Resources Information Center

    Braniff, Thomas A.

    1978-01-01

    Distributed data bases in statewide and multi-institutional systems are discussed. It is suggested that traditional approaches to data processing and current data base software are inappropriate for a distributed data base system. (BH)

  13. Assessment of Optimal Interrogation Approaches

    DTIC Science & Technology

    2007-05-01

    In March 2006, the Department of Defense Polygraph Institute (DoDPI) [now the...Defense Academy for Credibility Assessment (DACA)] Research Division requested research to determine the optimal approaches or techniques used by an...interrogator. Specifically, DACA wanted the researchers to gather information from "expert" interrogators (referred to as "superior" interrogators

  14. Remediation Optimization: Definition, Scope and Approach

    EPA Pesticide Factsheets

    This document provides a general definition, scope and approach for conducting optimization reviews within the Superfund Program and includes the fundamental principles and themes common to optimization.

  15. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal data distributions. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each level of return as compared to the mean-variance approach.
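
    The comparison can be reproduced in miniature as follows (synthetic heavy-tailed returns instead of the Bursa Malaysia data; the risk-aversion value and asset count are invented, and the only change between the two portfolios is swapping the mean return vector for the median inside the same quadratic optimizer):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)
    # Heavy-tailed synthetic daily returns for 5 assets.
    R = rng.standard_t(df=3, size=(500, 5)) * 0.01 + 0.0005

    def solve(location, cov, risk_aversion=5.0):
        n = location.size
        obj = lambda w: risk_aversion * (w @ cov @ w) - location @ w
        cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
        res = minimize(obj, np.full(n, 1 / n),
                       bounds=[(0, 1)] * n, constraints=cons)
        return res.x

    cov = np.cov(R, rowvar=False)
    w_mean = solve(R.mean(axis=0), cov)          # mean-variance portfolio
    w_median = solve(np.median(R, axis=0), cov)  # median-variance portfolio
    print("mean-variance risk:  ", w_mean @ cov @ w_mean)
    print("median-variance risk:", w_median @ cov @ w_median)
    ```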

  16. Optimal forager against ideal free distributed prey.

    PubMed

    Garay, József; Cressman, Ross; Xu, Fei; Varga, Zoltan; Cabello, Tomás

    2015-07-01

    The introduced dispersal-foraging game is a combination of prey habitat selection between two patch types and optimal-foraging approaches. Prey's patch preference and forager behavior determine the prey's survival rate. The forager's energy gain depends on local prey density in both types of exhaustible patches and on leaving time. We introduce two game-solution concepts. The static solution combines the ideal free distribution of the prey with optimal-foraging theory. The dynamical solution is given by a game dynamics describing the behavioral changes of prey and forager. We show (1) that each stable equilibrium dynamical solution is always a static solution, but not conversely; (2) that at an equilibrium dynamical solution, the forager can stabilize prey mixed patch use strategy in cases where ideal free distribution theory predicts that prey will use only one patch type; and (3) that when the equilibrium dynamical solution is unstable at fixed prey density, stable behavior cycles occur where neither forager nor prey keep a fixed behavior.

  17. Privacy Preservation in Distributed Subgradient Optimization Algorithms.

    PubMed

    Lou, Youcheng; Yu, Lean; Wang, Shouyang; Yi, Peng

    2017-07-31

    In this paper, some privacy-preserving features for distributed subgradient optimization algorithms are considered. Most of the existing distributed algorithms focus mainly on the algorithm design and convergence analysis, but not on the protection of agents' privacy. Privacy is becoming an increasingly important issue in applications involving sensitive information. In this paper, we first show that the distributed subgradient synchronous homogeneous-stepsize algorithm is not privacy preserving in the sense that a malicious agent can asymptotically discover other agents' subgradients by transmitting untrue estimates to its neighbors. Then a distributed subgradient asynchronous heterogeneous-stepsize projection algorithm is proposed, and its convergence and optimality are accordingly established. In contrast to the synchronous homogeneous-stepsize algorithm, in the new algorithm agents make their optimization updates asynchronously with heterogeneous stepsizes. The two introduced mechanisms of projection operation and asynchronous heterogeneous-stepsize optimization can guarantee that agents' privacy is effectively protected.
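
    A stylized version of the mechanism (a generic consensus-plus-projected-subgradient loop with per-agent stepsizes and random activation; the objectives, mixing matrix, and activation probability are invented, and none of the paper's privacy analysis is reproduced) looks like this:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    targets = np.array([1.0, 3.0, 5.0, 7.0])  # agent i holds f_i(x) = |x - t_i|
    W = np.array([[.50, .50, .00, .00],       # row-stochastic mixing weights
                  [.25, .50, .25, .00],
                  [.00, .25, .50, .25],
                  [.00, .00, .50, .50]])
    x = rng.uniform(0, 10, 4)
    for k in range(1, 3000):
        x = W @ x                              # average with neighbors
        for i in range(4):
            if rng.random() < 0.7:             # asynchronous activation
                step = 1.0 / (k ** 0.7 * (i + 1))       # heterogeneous stepsize
                g = np.sign(x[i] - targets[i])          # subgradient of |x - t_i|
                x[i] = np.clip(x[i] - step * g, 0, 10)  # projection onto [0, 10]

    print("consensus estimates:", x)   # optimum of sum_i |x - t_i| is [3, 5]
    ```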

  18. Fidelity Optimization in Distributed Virtual Environments

    DTIC Science & Technology

    2000-06-01

    user experience possible. This dissertation shows that it is possible to increase the scalability of distributed virtual environments (DVEs), in a tractable fashion, through a novel application of optimization techniques. Fidelity is maximized by utilizing the given display and network capacity in an optimal fashion, individually tuned for multiple users, in a manner most appropriate to a specific DVE application. This optimization is accomplished using the QUICK framework for managing the display and request of representations for virtual objects. Ratings of representation

  19. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.

  20. Quantum optimal control of photoelectron spectra and angular distributions

    NASA Astrophysics Data System (ADS)

    Goetz, R. Esteban; Karamatskou, Antonia; Santra, Robin; Koch, Christiane P.

    2016-01-01

    Photoelectron spectra and photoelectron angular distributions obtained in photoionization reveal important information on, e.g., charge transfer or hole coherence in the parent ion. Here we show that optimal control of the underlying quantum dynamics can be used to enhance desired features in the photoelectron spectra and angular distributions. To this end, we combine Krotov's method for optimal control theory with the time-dependent configuration interaction singles formalism and a splitting approach to calculate photoelectron spectra and angular distributions. The optimization target can account for specific desired properties in the photoelectron angular distribution alone, in the photoelectron spectrum, or in both. We demonstrate the method for hydrogen and then apply it to argon under strong XUV radiation, maximizing the difference of emission into the upper and lower hemispheres, in order to realize directed electron emission in the XUV regime.

  1. Energy optimization of water distribution systems

    SciTech Connect

    1994-09-01

    Energy costs associated with pumping treated water into the distribution system and boosting water pressures where necessary are among the largest expenditures in the operating budget of a municipality. Due to the size and complexity of Detroit's water transmission system, an energy optimization project has been developed to better manage the flow of water in the distribution system in an attempt to reduce these costs.

  2. A distributed approach to the OPF problem

    NASA Astrophysics Data System (ADS)

    Erseghe, Tomaso

    2015-12-01

    This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resources and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that the penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to always ensure convergence, and a certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderate-sized networks (up to 300 nodes, and even in the presence of a severe partition of the network), the approach guarantees performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. In comparison with the literature, which is mostly focused on convex SDP approximations, the chosen approach guarantees adherence to the reference problem, and it also requires a smaller local computational effort.
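
    As a minimal sketch of the mechanism described above (not the paper's OPF solver), the following ADMM-style consensus iteration with a constantly increased penalty rho solves a toy problem in which three local agents must agree on a scalar x minimizing the sum of (x - a_i)^2; the rescaling of the scaled duals when rho changes is our addition to keep the iteration consistent.

      import numpy as np

      a = np.array([1.0, 2.0, 4.0])   # local data; optimum is mean(a)
      x = np.zeros_like(a)            # local copies of the decision variable
      z = 0.0                         # shared consensus variable
      u = np.zeros_like(a)            # scaled dual variables
      rho = 1.0                       # penalty parameter, increased every step

      for _ in range(60):
          x = (2 * a + rho * (z - u)) / (2 + rho)  # local subproblems, closed form
          z = np.mean(x + u)                       # consensus update
          u = u + x - z                            # dual update
          rho_new = rho * 1.05                     # constantly increasing penalty
          u *= rho / rho_new                       # rescale scaled duals
          rho = rho_new

      print(z)                                     # -> 7/3, the global optimum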

  3. Probabilistic-based approach to optimal filtering

    PubMed

    Hannachi

    2000-04-01

    The signal-to-noise ratio maximizing approach in optimal filtering provides a robust tool to detect signals in the presence of colored noise. The method fails, however, when the data present a regimelike behavior. An approach is developed in this manuscript to recover local (in phase space) behavior in an intermittent, regimelike behaving system. The method is first formulated in its general form within a Gaussian framework, given an estimate of the noise covariance, and requires that the signal minimize the noise probability distribution for any given value, i.e., on isosurfaces, of the data probability distribution. The extension to the non-Gaussian case is provided through the use of finite mixture models for data that show regimelike behavior. The method yields the correct signal when applied in a simplified manner to synthetic time series with and without regimes, compared to the signal-to-noise ratio approach, and helps identify the right frequency of the oscillation spells in the classical Lorenz system and its variants.

  4. Optimal Power Flow for Distribution Systems under Uncertain Forecasts: Preprint

    SciTech Connect

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    2016-12-01

    The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.

  6. Optimization of the materials distribution in composite systems

    NASA Astrophysics Data System (ADS)

    Poteralski, A.; Szczepanik, M.

    2016-11-01

    The optimization of structures at the macro scale is widely used nowadays. The goal of this paper is to apply optimization techniques to obtain better performance at the micro level, which opens new possibilities: structures built from materials with an optimal microstructure can achieve the best performance, and the microstructure can be optimized taking into account the loads on the macro structure. Although microstructure optimization is not easy at present, in future applications where structural performance is critical the presented approach may be used with success. A bio-inspired method based on the artificial immune system (AIS) is used to solve the optimization problem; immune computing provides a high probability of finding the global optimum. The optimal topology is generated by the level set approach. The topology and the mass-density distribution of a microstructure composed of two materials are optimized (identified) by minimizing a fitness function that depends on the coefficients of the stiffness matrices. The paper presents the methodology, the optimization algorithm, and numerical examples.

  7. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  8. An artificial immune system algorithm approach for reconfiguring distribution network

    NASA Astrophysics Data System (ADS)

    Syahputra, Ramadoni; Soesanti, Indah

    2017-08-01

    This paper proposes an artificial immune system (AIS) algorithm approach for reconfiguring distribution networks in the presence of distributed generators (DG). A high-performance distribution network is one with low power loss, a good voltage profile, and balanced loading among feeders. Improving the performance of the distribution network is therefore a task of network-configuration optimization, a study that becomes necessary once DG is present throughout the network. In this work, optimization of the network configuration is based on an AIS algorithm. The methodology has been tested on a model of the 33-bus IEEE radial distribution network, with and without DG integration. The results show that the optimal configuration of the distribution network is able to reduce power loss and to improve the voltage profile of the distribution network significantly.

  9. Optimal Device Independent Quantum Key Distribution

    PubMed Central

    Kamaruddin, S.; Shaari, J. S.

    2016-01-01

    We consider an optimal quantum key distribution setup based on a minimal number of measurement bases with binary yields, used by parties against an eavesdropper limited only by the no-signaling principle. We note that, in general, the maximal key rate can be achieved by determining the optimal tradeoff between measurements that attain the maximal Bell violation and those that maximize the bit correlation between the parties. We show that higher correlation between shared raw keys, at the expense of maximal Bell violation, provides better key rates for low channel disturbance. PMID:27485160

  10. Distributed Method to Optimal Profile Descent

    NASA Astrophysics Data System (ADS)

    Kim, Geun I.

    Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed-profile changes to maintain proper merging and spacing requirements in high-traffic terminal areas. However, low predictability of an aircraft's vertical profile and path deviation during descent add uncertainty to computing the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure based on a constant flight-path angle to increase the predictability of the vertical profile, and defines an OPD optimization problem that uses both path stretching and speed-profile changes while largely maintaining the original OPD procedure. The problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to each pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques over an inter-aircraft ADS-B mechanism. This method divides the optimization problem into more manageable subproblems, which are then distributed to the group of aircraft. Each aircraft solves its assigned subproblem and communicates the solutions to the other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.
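
    The dual decomposition step can be pictured with a toy coupling constraint (our illustration; the paper's aircraft cost and separation functions are abstracted away): each aircraft minimizes its own cost plus a price term, and the shared price is updated from the residual of the coupling constraint that the local solutions violate.

      import numpy as np

      t = np.array([3.0, 5.0, 9.0])   # hypothetical preferred schedule adjustments
      C = 12.0                        # total adjustment the airspace can absorb
      lam = 0.0                       # dual price, exchanged between aircraft
      step = 0.2

      for _ in range(100):
          # local subproblems: argmin_x (x - t_i)^2 + lam * x  ->  x = t_i - lam/2
          x = t - lam / 2.0
          lam += step * (x.sum() - C)  # price update from the coupling residual

      print(x, x.sum())                # local solutions now sum to ~C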

  11. Optimal Operation of Energy Storage in Power Transmission and Distribution

    NASA Astrophysics Data System (ADS)

    Akhavan Hejazi, Seyed Hossein

    In this thesis, we investigate the optimal operation of energy storage units in power transmission and distribution grids. At the transmission level, we investigate the problem where an investor-owned, independently operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At the distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with a focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at the distribution level, where the distributions of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems, yet it does not involve any restricting assumption on the distribution of random parameters, and hence results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids. In the proposed stochastic optimization, we consider

  12. Optimal loss reduction of distribution networks

    SciTech Connect

    Glamocanin, V. )

    1990-08-01

    A new algorithm for network reconfiguration of power distribution systems is presented. Optimal loss reduction is accomplished while maintaining acceptable voltage at customer loads and assuring sufficient conductor and substation current capacity to handle load requirements. The success of the algorithm depends directly upon the straightforward and highly efficient solution of a quadratic-cost transshipment problem. The new algorithm, described in this paper, completely eliminates the need for matrix operations and executes all operations directly on the graph of the distribution system.

  13. Optimal but unequitable prophylactic distribution of vaccine

    PubMed Central

    Keeling, Matt J.; Shattock, Andrew

    2012-01-01

    The final epidemic size (R∞) remains one of the fundamental outcomes of an epidemic, and measures the total number of individuals infected during a “free-fall” epidemic when no additional control action is taken. As such, it provides an idealised measure for optimising control policies before an epidemic arises. Although the generality of formulae for calculating the final epidemic size has been discussed previously, we offer an alternative probabilistic argument and then use this formula to consider the optimal deployment of vaccine in spatially segregated populations that minimises the total number of cases. We show that for a limited stockpile of vaccine, the optimal policy is often to immunise one population to the exclusion of others. However, as greater realism is included, this extreme, and arguably unethical, policy is replaced by an optimal strategy in which vaccine supply is more evenly spatially distributed. PMID:22664066
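
    For orientation, the textbook final-size relation for a homogeneously mixed SIR population, of which the paper offers a probabilistic derivation in a more general setting, links the attack rate z = R_\infty / N to the basic reproduction number R_0:

      z = 1 - e^{-R_0 z},

    a fixed-point equation solved numerically for z whenever R_0 > 1.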

  14. A flexible approach to distributed data anonymization.

    PubMed

    Kohlmayer, Florian; Prasser, Fabian; Eckert, Claudia; Kuhn, Klaus A

    2014-08-01

    Sensitive biomedical data is often collected from distributed sources, involving different information systems and different organizational units. Local autonomy and legal reasons lead to the need of privacy preserving integration concepts. In this article, we focus on anonymization, which plays an important role for the re-use of clinical data and for the sharing of research data. We present a flexible solution for anonymizing distributed data in the semi-honest model. Prior to the anonymization procedure, an encrypted global view of the dataset is constructed by means of a secure multi-party computing (SMC) protocol. This global representation can then be anonymized. Our approach is not limited to specific anonymization algorithms but provides pre- and postprocessing for a broad spectrum of algorithms and many privacy criteria. We present an extensive analytical and experimental evaluation and discuss which types of methods and criteria are supported. Our prototype demonstrates the approach by implementing k-anonymity, ℓ-diversity, t-closeness and δ-presence with a globally optimal de-identification method in horizontally and vertically distributed setups. The experiments show that our method provides highly competitive performance and offers a practical and flexible solution for anonymizing distributed biomedical datasets.
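
    As a small, self-contained illustration of one criterion the prototype enforces (our sketch, not the authors' globally optimal de-identification algorithm): a table is k-anonymous with respect to its quasi-identifiers when every combination of quasi-identifier values occurs at least k times.

      from collections import Counter

      def is_k_anonymous(rows, quasi_ids, k):
          """True if every quasi-identifier combination occurs >= k times."""
          counts = Counter(tuple(r[c] for c in quasi_ids) for r in rows)
          return all(c >= k for c in counts.values())

      rows = [
          {"age": "30-39", "zip": "537**", "diagnosis": "flu"},
          {"age": "30-39", "zip": "537**", "diagnosis": "cold"},
          {"age": "40-49", "zip": "538**", "diagnosis": "flu"},
      ]
      print(is_k_anonymous(rows, ["age", "zip"], 2))  # False: one group of size 1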

  15. Optimized approach to retrieve information on the tropospheric and stratospheric carbonyl sulfide (OCS) vertical distributions above Jungfraujoch from high-resolution FTIR solar spectra.

    NASA Astrophysics Data System (ADS)

    Lejeune, Bernard; Mahieu, Emmanuel; Servais, Christian; Duchatelet, Pierre; Demoulin, Philippe

    2010-05-01

    Carbonyl sulfide (OCS), which is produced in the troposphere from both biogenic and anthropogenic sources, is the most abundant gaseous sulfur species in the unpolluted atmosphere. Due to its low chemical reactivity and water solubility, a significant fraction of OCS is able to reach the stratosphere, where it is converted to SO2 and ultimately to H2SO4 aerosols (Junge layer). These aerosols have the potential to amplify stratospheric ozone destruction on a global scale and may influence Earth's radiation budget and climate through increased solar scattering. The transport of OCS from troposphere to stratosphere is thought to be the primary mechanism by which the Junge layer is sustained during nonvolcanic periods. Because of this, long-term trends in atmospheric OCS concentration, not only in the troposphere but also in the stratosphere, are of great interest. A new approach has been developed and optimized to retrieve the atmospheric abundance of OCS from high-resolution ground-based infrared solar spectra using the SFIT-2 (v3.91) algorithm, including a new model for solar-line simulation (solar lines often produce significant interferences in the OCS microwindows). The strongest lines of the ν3 fundamental band of OCS at 2062 cm-1 have been systematically evaluated with objective criteria to select a new set of microwindows, assuming the HITRAN 2004 spectroscopic parameters with an increase in the OCS line intensities of the ν3 band main isotopologue 16O12C32S by 15.79% as compared to HITRAN 2000 (Rothman et al., 2008, and references therein). Two regularization schemes have further been compared (derived from ATMOS and ACE-FTS measurements or based on a Tikhonov approach), in order to select the one that optimizes the information content while minimizing the error budget. The selected approach has allowed us to determine an updated OCS long-term trend from 1988 to 2009 in both the troposphere and the stratosphere, using spectra recorded on a regular basis with

  16. Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids: Preprint

    SciTech Connect

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    2016-09-01

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
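
    The Chebyshev-based tightening mentioned above is plausibly of the following standard Cantelli form, stated here for orientation rather than as the paper's exact bound: for any forecast error \xi with mean \mu and variance \sigma^2, \Pr\{\xi > \mu + k\sigma\} \le 1/(1 + k^2), so enforcing

      v(x) + \mu + \sigma \sqrt{(1 - \epsilon)/\epsilon} \le v^{max}

    guarantees \Pr\{v(x) + \xi \le v^{max}\} \ge 1 - \epsilon for every error distribution sharing those first two moments, which is exactly the distributionally robust property claimed.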

  17. A Unified Approach to Optimization

    DTIC Science & Technology

    2014-10-02

    and dynamic programming, logic-based Benders decomposition, and unification of exact and heuristic methods. The publications associated with this... Logic-based Benders decomposition (LBBD) has been used for some years to combine CP and MIP, usually by solving the... classical Benders decomposition, but can be any optimization problem. Benders cuts are generated by solving the inference dual of the subproblem

  18. Numerical approach for unstructured quantum key distribution

    PubMed Central

    Coles, Patrick J.; Metodiev, Eric M.; Lütkenhaus, Norbert

    2016-01-01

    Quantum key distribution (QKD) allows for communication with security guaranteed by quantum theory. The main theoretical problem in QKD is to calculate the secret key rate for a given protocol. Analytical formulas are known for protocols with symmetries, since symmetry simplifies the analysis. However, experimental imperfections break symmetries, hence the effect of imperfections on key rates is difficult to estimate. Furthermore, it is an interesting question whether (intentionally) asymmetric protocols could outperform symmetric ones. Here we develop a robust numerical approach for calculating the key rate for arbitrary discrete-variable QKD protocols. Ultimately this will allow researchers to study ‘unstructured' protocols, that is, those that lack symmetry. Our approach relies on transforming the key rate calculation to the dual optimization problem, which markedly reduces the number of parameters and hence the calculation time. We illustrate our method by investigating some unstructured protocols for which the key rate was previously unknown. PMID:27198739

  19. Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-10-01

    Calibration of distributed hydrologic models must typically deal with a large number of distributed parameters and with optimization problems whose multiple objectives often conflict. This study presents a multiobjective sensitivity-analysis and optimization approach for handling these problems in the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, combining two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) algorithm ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). The approach was implemented to calibrate MOBIDIC in an application to the Davidson watershed, North Carolina, with three objective functions: the standardized root mean square error (SRMSE) of the logarithmically transformed discharge, the water balance index, and the mean absolute error of the logarithmically transformed flow duration curve. Its results were compared with those of a single-objective optimization (SOO) using the traditional Nelder-Mead simplex algorithm of MOBIDIC, taking as objective function the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff are of similar importance for the watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more

  20. Optimal smoothing of site-energy distributions from adsorption isotherms

    SciTech Connect

    Brown, L.F.; Travis, B.J.

    1983-01-01

    The equation for the adsorption isotherm on a heterogeneous surface is a Fredholm integral equation. In solving it for the site-energy distribution (SED), some form of smoothing must be carried out. The optimal amount of smoothing gives the most information possible without introducing nonexistent structure into the SED. Recently, Butler, Reeds, and Dawson proposed a criterion (the BRD criterion) for choosing the optimal smoothing parameter when using regularization to solve Fredholm equations. The BRD criterion is tested here for its suitability in obtaining optimal SEDs and is found to be too conservative: while using it never introduces nonexistent structure into the SED, significant information is often lost. At present, no simple criterion for choosing the optimal smoothing parameter exists, and a modeling approach is recommended.

  1. Scalable Optimization Methods for Distribution Networks With High PV Integration

    SciTech Connect

    Guggilam, Swaroop S.; Dall'Anese, Emiliano; Chen, Yu Christine; Dhople, Sairaj V.; Giannakis, Georgios B.

    2016-07-01

    This paper proposes a suite of algorithms to determine the active- and reactive-power setpoints for photovoltaic (PV) inverters in distribution networks. The objective is to optimize the operation of the distribution feeder according to a variety of performance objectives and to ensure voltage regulation. In general, these algorithms take the form of the widely studied ac optimal power flow (OPF) problem. For the envisioned application domain, nonlinear power-flow constraints render the pertinent OPF problems nonconvex and computationally intensive for large systems. To address these concerns, we formulate a quadratically constrained quadratic program (QCQP) by leveraging a linear approximation of the algebraic power-flow equations. Furthermore, a simplification from QCQP to a linearly constrained quadratic program is provided under certain conditions. The merits of the proposed approach are demonstrated with simulation results that utilize realistic PV-generation and load-profile data for illustrative distribution-system test feeders.

  2. Optimal design of spatial distribution networks

    NASA Astrophysics Data System (ADS)

    Gastner, Michael T.; Newman, M. E. J.

    2006-07-01

    We consider the problem of constructing facilities such as hospitals, airports, or malls in a country with a nonuniform population density, such that the average distance from a person’s home to the nearest facility is minimized. We review some previous approximate treatments of this problem that indicate that the optimal distribution of facilities should have a density that increases with population density, but does so slower than linearly, as the two-thirds power. We confirm this result numerically for the particular case of the United States with recent population data using two independent methods, one a straightforward regression analysis, the other based on density-dependent map projections. We also consider strategies for linking the facilities to form a spatial network, such as a network of flights between airports, so that the combined cost of maintenance of and travel on the network is minimized. We show specific examples of such optimal networks for the case of the United States.
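
    In symbols, writing \rho(\mathbf{r}) for the population density and D(\mathbf{r}) for the optimal facility density, the scaling reviewed above reads

      D(\mathbf{r}) \propto \rho(\mathbf{r})^{2/3},

    i.e., facilities concentrate where people do, but sublinearly.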

  3. Automatic Distribution Network Reconfiguration: An Event-Driven Approach

    SciTech Connect

    Ding, Fei; Jiang, Huaiguang; Tan, Jin

    2016-11-14

    This paper proposes an event-driven approach for reconfiguring distribution systems automatically. Specifically, optimal synchrophasor sensor placement (OSSP) is used to reduce the number of synchrophasor sensors while keeping the whole system observable. Then, a wavelet-based event detection and location approach is used to detect and locate an event, which serves as a trigger for network reconfiguration. With the detected information, the system is then reconfigured using a hierarchical decentralized approach to seek the new optimal topology. In this manner, whenever an event happens, the distribution network can be reconfigured automatically based on real-time information that is observable and detectable.

  4. Distributed Optimization Design of Continuous-Time Multiagent Systems With Unknown-Frequency Disturbances.

    PubMed

    Wang, Xinghu; Hong, Yiguang; Yi, Peng; Ji, Haibo; Kang, Yu

    2017-05-24

    In this paper, a distributed optimization problem is studied for continuous-time multiagent systems with unknown-frequency disturbances. A distributed gradient-based control is proposed for the agents to achieve the optimal consensus, in the semi-global sense, while estimating the unknown frequencies and rejecting the bounded disturbances. Based on convex optimization analysis and an adaptive internal model approach, the exact optimal solution can be obtained for multiagent systems disturbed by exogenous disturbances with uncertain parameters.
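
    For intuition, the gradient-plus-consensus backbone on which such designs build can be sketched as follows (our Euler-discretized toy, omitting the paper's internal-model terms for disturbance rejection and frequency estimation): the integral state v forces exact consensus at the minimizer of the sum of local costs.

      import numpy as np

      A = np.array([[0., 1., 1.],
                    [1., 0., 1.],
                    [1., 1., 0.]])        # communication graph (3 agents)
      L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
      c = np.array([1.0, 3.0, 8.0])       # local costs f_i(x) = (x - c_i)^2

      x = np.zeros(3)                     # agents' estimates
      v = np.zeros(3)                     # integral (consensus-enforcing) state
      dt = 0.02
      for _ in range(8000):
          grad = 2.0 * (x - c)            # local gradients
          x, v = x + dt * (-grad - L @ x - L @ v), v + dt * (L @ x)

      print(x)                            # all agents near mean(c) = 4.0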

  5. Optimal online learning: a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Solla, Sara A.; Winther, Ole

    1999-09-01

    A recently proposed Bayesian approach to online learning is applied to learning a rule defined as a noisy single-layer perceptron. In the Bayesian online approach, the exact posterior distribution is approximated by a simple parametric posterior that is updated online as new examples are incorporated into the dataset. In the case of binary weights, the approximate posterior is chosen to be a biased binary distribution. The resulting online algorithm is shown to outperform several other online approaches to this problem.

  6. Optimizing Distribution of Pandemic Influenza Antiviral Drugs

    PubMed Central

    Huang, Hsin-Chan; Morton, David P.; Johnson, Gregory P.; Gutfraind, Alexander; Galvani, Alison P.; Clements, Bruce; Meyers, Lauren A.

    2015-01-01

    We provide a data-driven method for optimizing pharmacy-based distribution of antiviral drugs during an influenza pandemic in terms of overall access for a target population and apply it to the state of Texas, USA. We found that during the 2009 influenza pandemic, the Texas Department of State Health Services achieved an estimated statewide access of 88% (proportion of population willing to travel to the nearest dispensing point). However, access reached only 34.5% of US postal code (ZIP code) areas containing <1,000 underinsured persons. Optimized distribution networks increased expected access to 91% overall and 60% in hard-to-reach regions, and 2 or 3 major pharmacy chains achieved near maximal coverage in well-populated areas. Independent pharmacies were essential for reaching ZIP code areas containing <1,000 underinsured persons. This model was developed during a collaboration between academic researchers and public health officials and is available as a decision support tool for Texas Department of State Health Services at a Web-based interface. PMID:25625858

  7. Material Distribution Optimization for the Shell Aircraft Composite Structure

    NASA Astrophysics Data System (ADS)

    Shevtsov, S.; Zhilyaev, I.; Oganesyan, P.; Axenov, V.

    2016-09-01

    One of the main goals in designing aircraft structures is decreasing weight and increasing stiffness. Composite structures have recently become popular in aircraft because of their mechanical properties and wide range of optimization possibilities; weight distribution and lay-up are keys to creating lightweight, stiff structures. In this paper we discuss optimization of a specific structure that undergoes non-uniform air pressure at different flight conditions, reducing the level of noise caused by airflow-induced vibrations at a constrained weight of the part. The initial model was created with the CAD tool Siemens NX; finite element analysis and post-processing were performed with COMSOL Multiphysics and MATLAB. Numerical solutions of the Reynolds-averaged Navier-Stokes (RANS) equations supplemented by a k-ω turbulence model provide the spatial distributions of air pressure applied to the shell surface. In the formulation of the optimization problem, the global strain energy calculated within the optimized shell was taken as the objective. Wall thickness was changed using a parametric approach, by introducing an auxiliary sphere with varying radius and center coordinates as the design variables. To avoid local stress concentration, the wall-thickness increment was defined as a smooth function on the shell surface dependent on the auxiliary sphere's position and size. Our study consists of multiple steps: CAD/CAE transformation of the model, determining wind pressure for different flow angles, optimizing the wall-thickness distribution for specific flow angles, and designing a lay-up for the optimal material distribution. The studied structure was improved in terms of maximum and average strain energy at a constrained expense of weight growth. The developed methods and tools can be applied to a wide range of shell-like structures made of multilayered quasi-isotropic laminates.

  8. Optimizing the Distribution of Leg Muscles for Vertical Jumping.

    PubMed

    Wong, Jeremy D; Bobbert, Maarten F; van Soest, Arthur J; Gribble, Paul L; Kistemaker, Dinant A

    2016-01-01

    A goal of biomechanics and motor control is to understand the design of the human musculoskeletal system. Here we investigated human functional morphology by making predictions about the muscle volume distribution that is optimal for a specific motor task. We examined a well-studied and relatively simple human movement, vertical jumping. We investigated how high a human could jump if muscle volume were optimized for jumping, and determined how the optimal parameters improve performance. We used a four-link inverted pendulum model of human vertical jumping actuated by Hill-type muscles that well approximates skilled human performance. We optimized muscle volume by allowing the cross-sectional area and muscle fiber optimum length to be changed for each muscle, while maintaining constant total muscle volume. We observed, perhaps surprisingly, that the reference model, based on human anthropometric data, is relatively good for vertical jumping; it achieves 90% of the jump height predicted by a model with muscles designed specifically for jumping. Alteration of cross-sectional areas, which determine the maximum force deliverable by the muscles, constitutes the majority of the improvement to jump height. The optimal distribution results in large vastus, gastrocnemius and hamstrings muscles that deliver more work, while producing a kinematic pattern essentially identical to the reference model. Work output is increased by removing muscle from rectus femoris, which cannot do work on the skeleton given its moment arm at the hip and the joint excursions during push-off. The gluteus composes a disproportionate amount of muscle volume and jump height is improved by moving it to other muscles. This approach represents a way to test hypotheses about optimal human functional morphology. Future studies may extend this approach to address other morphological questions in ethological tasks such as locomotion, and feature other sets of parameters such as properties of the skeletal

  10. Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems

    SciTech Connect

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.; Giannakis, Georgios B.

    2015-10-05

    Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.

  11. Electricity distribution networks: Changing regulatory approaches

    NASA Astrophysics Data System (ADS)

    Cambini, Carlo

    2016-09-01

    Increasing the penetration of distributed generation and smart grid technologies requires substantial investments. A study proposes an innovative approach that combines four regulatory tools to provide economic incentives for distribution system operators to facilitate these innovative practices.

  12. Optimal rigid and porous material distributions for noise barrier by acoustic topology optimization

    NASA Astrophysics Data System (ADS)

    Kim, Ki Hyun; Yoon, Gil Ho

    2015-03-01

    This research applies acoustic topology optimization (ATO) to noise barrier design with rigid and porous materials. Many researchers have investigated the pressure attenuation phenomena of noise barriers under various geometric, material, and boundary conditions. To improve the pressure attenuation performance of noise barriers, size and shape optimization have been applied, and ATO methods have been proposed that allow concurrent size, shape, and topological changes of rigid walls and cavities. Nevertheless, it is unusual to optimize the topologies of noise barriers by considering the pressure attenuation effect of a porous material. The present research develops a new ATO considering both porous and rigid materials and applies it to the discovery of optimal topologies of noise barriers composed of both materials. In the present approach, the noise absorption characteristics of porous materials are numerically modeled using the Delany-Bazley empirical material model, and we also investigate the effects of several interpolation functions on the optimal material distributions. Applying the present ATO approach, we found novel noise barriers optimized for various geometric and environmental conditions.

  13. The Optimal Treatment Approach to Needs Assessment.

    ERIC Educational Resources Information Center

    Cox, Gary B.; And Others

    1979-01-01

    The Optimal Treatment approach to needs assessment consists of comparing the most desirable set of services for a client with the services actually received. Discrepancies due to unavailable resources are noted and aggregated across clients. Advantages and disadvantages of this and other needs assessment procedures are considered. (Author/RL)

  14. Quantum Resonance Approach to Combinatorial Optimization

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1997-01-01

    It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is the independence of the computing time from the dimensionality of the problem. As an example, the solution to a constraint satisfaction problem of exponential complexity is demonstrated.

  15. Steam distribution and energy delivery optimization using wireless sensors

    SciTech Connect

    Olama, Mohammed M; Allgood, Glenn O; Kuruganti, Phani Teja; Sukumar, Sreenivas R; Djouadi, Seddik M; Lake, Joe E

    2011-01-01

    The Extreme Measurement Communications Center at Oak Ridge National Laboratory (ORNL) explores the deployment of a wireless sensor system with a real-time, measurement-based energy-efficiency optimization framework on the ORNL campus. With particular focus on the campus's 12-mile-long steam distribution network, we propose an integrated system-level approach to optimize energy delivery within the steam distribution system. We address the goal of achieving significant energy savings in steam lines by monitoring and acting on leaking steam valves and traps. Our approach leverages integrated wireless sensing and real-time monitoring capabilities: we assess the real-time status of the distribution system by mounting acoustic sensors on the steam pipes, traps, and valves and observing the state measurements of these sensors. Our assessments are based on analysis of the wireless sensor measurements. We describe Fourier-spectrum-based algorithms that interpret acoustic vibration sensor data to characterize flows and classify the steam system status. We present the sensor readings, steam flow, steam trap status, and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. We believe our demonstration serves as an instantiation of a platform whose implementation can be extended to include newer modalities to manage water flow, sewage, and energy consumption.
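
    A minimal sketch of the kind of Fourier-spectrum check described above (the band edges and threshold below are our assumptions, not ORNL's published values): compute the fraction of acoustic energy falling in a presumed leak band and flag the trap when it is too high.

      import numpy as np

      fs = 10_000                                   # sample rate in Hz (assumed)
      t = np.arange(fs) / fs                        # one second of samples
      rng = np.random.default_rng(1)
      signal = np.sin(2*np.pi*60*t) + 0.8*rng.standard_normal(fs)

      spectrum = np.abs(np.fft.rfft(signal))**2     # power spectrum
      freqs = np.fft.rfftfreq(len(signal), 1/fs)

      band = (freqs > 2000) & (freqs < 4000)        # presumed "leak" band
      ratio = spectrum[band].sum() / spectrum.sum() # energy fraction in band
      print("suspect leak" if ratio > 0.3 else "normal", round(ratio, 3))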

  16. Optimization of b-value distribution for biexponential diffusion-weighted MR imaging of normal prostate.

    PubMed

    Jambor, Ivan; Merisaari, Harri; Aronen, Hannu J; Järvinen, Jukka; Saunavaara, Jani; Kauko, Tommi; Borra, Ronald; Pesola, Marko

    2014-05-01

    The aim was to determine the optimal b-value distribution for biexponential diffusion-weighted imaging (DWI) of normal prostate using both a computer modeling approach and in vivo measurements. Optimal b-value distributions for the fit of three parameters (fast diffusion Df, slow diffusion Ds, and fraction of fast diffusion f) were determined using Monte Carlo simulations, with the optimal b-value distribution calculated by four individual optimization methods. Eight healthy volunteers underwent four repeated 3 Tesla prostate DWI scans using both 16 equally distributed b-values and an optimized b-value distribution obtained from the simulations. The b-value distributions were compared in terms of measurement reliability and repeatability using Shrout-Fleiss analysis. At low noise levels, the optimal b-value distribution formed three separate clusters at low (0-400 s/mm2), mid-range (650-1200 s/mm2), and high b-values (1700-2000 s/mm2); higher noise levels resulted in less pronounced clustering of b-values. The clustered, optimized b-value distribution demonstrated better measurement reliability and repeatability in the Shrout-Fleiss analysis compared with 16 equally distributed b-values. The optimal b-value distribution was found to be a clustered distribution with b-values concentrated in the low, mid, and high ranges and was shown to improve the estimation quality of the biexponential DWI parameters in the in vivo experiments. Copyright © 2013 Wiley Periodicals, Inc.
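
    For context, the biexponential signal model whose three parameters (D_f, D_s, f) are being estimated is conventionally written as

      S(b) = S_0 \left[ f\,e^{-b D_f} + (1 - f)\,e^{-b D_s} \right],

    where b is the diffusion weighting; the optimized b-value distribution selects the points on this decay curve at which the signal is sampled.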

  17. Optimizing Mexico’s Water Distribution Services

    DTIC Science & Technology

    2011-10-28

    distribution of federal subsidies to the states and municipalities. The principal financial lender for Mexican infrastructure projects is the... ...and projected needs. Central to the current problem is insufficient financial capital to fully implement strategic modernization plans. This

  18. Multidisciplinary Approach to Linear Aerospike Nozzle Optimization

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Salas, A. O.; Dunn, H. J.; Alexandrov, N. M.; Follett, W. W.; Orient, G. E.; Hadid, A. H.

    1997-01-01

    A model of a linear aerospike rocket nozzle that consists of coupled aerodynamic and structural analyses has been developed. A nonlinear computational fluid dynamics code is used to calculate the aerodynamic thrust, and a three-dimensional finite-element model is used to determine the structural response and weight. The model will be used to demonstrate multidisciplinary design optimization (MDO) capabilities for relevant engine concepts, assess the performance of various MDO approaches, and provide a guide for future application development. In this study, the MDO problem is formulated using the multidisciplinary feasible (MDF) strategy. The results for the MDF formulation are presented with comparisons against sequentially optimized aerodynamic and structural designs. Significant improvements are demonstrated by using a multidisciplinary approach in comparison with the single-discipline design strategy.

  19. Cancer Behavior: An Optimal Control Approach

    PubMed Central

    Gutiérrez, Pedro J.; Russo, Irma H.; Russo, J.

    2009-01-01

    With special attention to cancer, this essay explains how Optimal Control Theory, mainly used in Economics, can be applied to the analysis of biological behaviors, and illustrates the ability of this mathematical branch to describe biological phenomena and biological interrelationships. Two examples are provided to show the capability and versatility of this powerful mathematical approach in the study of biological questions. The first describes a process of organogenesis, and the second the development of tumors. PMID:22247736

  20. Distributed-Computer System Optimizes SRB Joints

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1991-01-01

    Initial calculations of redesign of joint on solid rocket booster (SRB) that failed during Space Shuttle tragedy showed redesign increased weight. Optimization techniques applied to determine whether weight could be reduced while keeping joint closed and limiting stresses. Analysis system developed by use of existing software coupling structural analysis with optimization computations. Software designed executable on network of computer workstations. Took advantage of parallelism offered by finite-difference technique of computing gradients to enable several workstations to contribute simultaneously to solution of problem. Key features, effective use of redundancies in hardware and flexible software, enabling optimization to proceed with minimal delay and decreased overall time to completion.

  1. Distributed Adaptive Particle Swarm Optimizer in Dynamic Environment

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E

    2007-01-01

    In the real world, we have to frequently deal with searching and tracking an optimal solution in a dynamical and noisy environment. This demands that the algorithm not only find the optimal solution but also track the trajectory of the changing solution. Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique, which can find an optimal, or near optimal, solution to a numerical and qualitative problem. In PSO algorithm, the problem solution emerges from the interactions between many simple individual agents called particles, which make PSO an inherently distributed algorithm. However, the traditional PSO algorithm lacks the ability to track the optimal solution in a dynamic and noisy environment. In this paper, we present a distributed adaptive PSO (DAPSO) algorithm that can be used for tracking a non-stationary optimal solution in a dynamically changing and noisy environment.
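
    For reference, the canonical PSO update on which the entry builds (the adaptive, dynamic-environment machinery of DAPSO is not shown) mixes inertia, a pull toward each particle's personal best, and a pull toward the swarm best:

      import numpy as np

      rng = np.random.default_rng(42)

      def sphere(x):                       # toy objective to minimize
          return np.sum(x**2, axis=1)

      n, dim = 30, 5
      x = rng.uniform(-5, 5, (n, dim))     # particle positions
      v = np.zeros((n, dim))               # particle velocities
      pbest, pval = x.copy(), sphere(x)    # personal bests
      g = pbest[pval.argmin()].copy()      # swarm (global) best

      w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration weights
      for _ in range(200):
          r1, r2 = rng.random((n, dim)), rng.random((n, dim))
          v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
          x = x + v
          val = sphere(x)
          better = val < pval
          pbest[better], pval[better] = x[better], val[better]
          g = pbest[pval.argmin()].copy()

      print(sphere(g[None])[0])            # near 0 for the sphere function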

  2. Optimization approaches for planning external beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Gozbasi, Halil Ozan

    Cancer begins when cells grow out of control as a result of damage to their DNA. These abnormal cells can invade healthy tissue and form tumors in various parts of the body. Chemotherapy, immunotherapy, surgery, and radiotherapy are the most common treatment methods for cancer. According to the American Cancer Society, about half of cancer patients receive a form of radiation therapy at some stage. External beam radiotherapy is delivered from outside the body and aimed at cancer cells to damage their DNA, making them unable to divide and reproduce. The beams travel through the body and may damage nearby healthy tissue unless carefully planned. Therefore, the goal of treatment plan optimization is to find the best system parameters to deliver sufficient dose to target structures while avoiding damage to healthy tissue. This thesis investigates optimization approaches for two external beam radiation therapy techniques: Intensity-Modulated Radiation Therapy (IMRT) and Volumetric-Modulated Arc Therapy (VMAT). We develop automated treatment planning technology for IMRT that produces several high-quality treatment plans satisfying provided clinical requirements in a single invocation and without human guidance. A novel bi-criteria, scoring-based beam selection algorithm is part of the planning system and produces better plans than those produced using a well-known scoring-based algorithm. Our algorithm is very efficient and finds the beam configuration at least ten times faster than an exact integer programming approach, with solution times ranging from 2 minutes to 15 minutes, which is clinically acceptable. With certain cancers, especially lung cancer, a patient's anatomy changes during treatment. These anatomical changes need to be considered in treatment planning. Fortunately, recent advances in imaging technology can provide multiple images of the treatment region taken at different points of the breathing cycle, and deformable image registration algorithms can

  3. LP based approach to optimal stable matchings

    SciTech Connect

    Teo, Chung-Piaw; Sethuraman, J.

    1997-06-01

    We study the classical stable marriage and stable roommates problems using a polyhedral approach. We propose a new LP formulation for the stable roommates problem, whose feasible region is non-empty if and only if the underlying roommates problem has a stable matching. Furthermore, for certain special weight functions on the edges, we construct a 2-approximation algorithm for the optimal stable roommates problem. Our technique exploits a crucial geometric property of the fractional solutions of this formulation. For the stable marriage problem, we show that a related geometry allows us to express any fractional solution in the stable marriage polytope as a convex combination of stable marriage solutions, which leads to a genuinely simple proof of the integrality of the stable marriage polytope. Based on these ideas, we devise a heuristic to solve the optimal stable roommates problem that combines the power of rounding and cutting-plane methods. We present some computational results based on preliminary implementations of this heuristic.
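
    For orientation only, the classical Gale-Shapley deferred-acceptance algorithm below shows what a stable matching is; the paper's contribution is the LP/polyhedral treatment and rounding heuristic, not this combinatorial method.

      def gale_shapley(men_prefs, women_prefs):
          """men_prefs/women_prefs: dict name -> list, most preferred first."""
          rank = {w: {m: i for i, m in enumerate(p)}
                  for w, p in women_prefs.items()}
          free = list(men_prefs)
          next_choice = {m: 0 for m in men_prefs}
          engaged = {}                          # woman -> man
          while free:
              m = free.pop()
              w = men_prefs[m][next_choice[m]]  # m proposes to next choice
              next_choice[m] += 1
              if w not in engaged:
                  engaged[w] = m
              elif rank[w][m] < rank[w][engaged[w]]:
                  free.append(engaged[w])       # w trades up; old partner freed
                  engaged[w] = m
              else:
                  free.append(m)                # w rejects m
          return {m: w for w, m in engaged.items()}

      men = {"a": ["x", "y"], "b": ["y", "x"]}
      women = {"x": ["b", "a"], "y": ["a", "b"]}
      print(gale_shapley(men, women))           # a-x and b-y: a stable matching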

  4. Distributed Optimization for a Class of Nonlinear Multiagent Systems With Disturbance Rejection.

    PubMed

    Wang, Xinghu; Hong, Yiguang; Ji, Haibo

    2016-07-01

    The paper studies the distributed optimization problem for a class of nonlinear multiagent systems in the presence of external disturbances. To solve the problem, we need to achieve the optimal multiagent consensus based on local cost function information and neighboring information and meanwhile to reject local disturbance signals modeled by an exogenous system. With convex analysis and the internal model approach, we propose a distributed optimization controller for heterogeneous and nonlinear agents in the form of continuous-time minimum-phase systems with unity relative degree. We prove that the proposed design can solve the exact optimization problem with rejecting disturbances.

  5. Hybrid swarm intelligence optimization approach for optimal data storage position identification in wireless sensor networks.

    PubMed

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate regarding data storage and its growth has made storage a strategic task in the world of networking. Storage mainly depends on the sensor nodes, called producers, on base stations, and on the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is finding an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm-intelligence-based optimization approaches to find optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node: a hybrid particle swarm optimization algorithm finds suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, solving the clustering problem with the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm-intelligence-based ODS strategy is more effective than earlier approaches.

  7. Collocation points distributions for optimal spacecraft trajectories

    NASA Astrophysics Data System (ADS)

    Fumenti, Federico; Circi, Christian; Romagnoli, Daniele

    2013-03-01

    The method of direct collocation with nonlinear programming (DCNLP) is a powerful tool to solve optimal control problems (OCP). In this method the solution time history is approximated with piecewise polynomials, which are constructed using interpolation points derived from the Jacobi polynomials. Within the Jacobi polynomial family, Legendre and Chebyshev polynomials are the most used, but there is no evidence that they offer the best performance with respect to other family members. By solving different OCPs with interpolation points drawn from across the Jacobi family, the behavior of the Jacobi polynomials in the optimization problems is discussed. This paper focuses on spacecraft trajectory optimization problems. In particular, orbit transfers, interplanetary transfers and station keeping are considered.
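
    As a concrete illustration of the interpolation points involved, the short sketch below computes two common choices from the Jacobi family, Chebyshev-Gauss-Lobatto points (closed form) and Legendre-Gauss points (via NumPy). It is a generic numerical illustration, not code from the paper.

    ```python
    import numpy as np

    def chebyshev_gauss_lobatto(n):
        """n+1 Chebyshev-Gauss-Lobatto points on [-1, 1] (closed form)."""
        return -np.cos(np.pi * np.arange(n + 1) / n)

    def legendre_gauss(n):
        """n Legendre-Gauss points on [-1, 1] (roots of the Legendre P_n)."""
        points, _ = np.polynomial.legendre.leggauss(n)
        return points

    print(chebyshev_gauss_lobatto(6))
    print(legendre_gauss(6))
    ```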

  8. Optimization of Power Distribution Networks in Megacities

    NASA Astrophysics Data System (ADS)

    Manusov, V. Z.; Matrenin, P. V.; Ahyoev, J. S.; Atabaeva, L. Sh

    2017-06-01

    The study deals with the problem of optimizing city electrical networks in large towns and megacities to increase electrical energy quality and decrease active and reactive power losses both in the networks and for domestic consumers. The optimization selects locations for separate reactive power sources in 10 kV networks using swarm intelligence algorithms, in particular the particle swarm algorithm. The problem solved by the particle swarm algorithm involves discrete decision variables and several local minima (troughs) among which a global minimum must be found. It is shown that optimizing the city power supply system by installing additional reactive power sources at consumer locations reduces reactive power flow, thereby increasing power supply quality and decreasing power losses in city networks.

  9. Optimizing IT Infrastructure by Virtualization Approach

    NASA Astrophysics Data System (ADS)

    Budiman, Thomas; Suroso, Jarot S.

    2017-04-01

    The goal of this paper is to determine the best potential configuration that can be applied to a physical server without compromising service performance for the clients. Data were compiled by direct observation in the data center under study and then analyzed using a hermeneutic approach to interpret the textual data gathered. The result is the best configuration for a physical server hosting several virtual machines logically separated by function. It can be concluded that one physical server machine can indeed be optimized using virtualization so that it delivers the peak performance of the machine itself, with impact throughout the organization.

  10. Optimization of an interactive distributive computer network

    NASA Technical Reports Server (NTRS)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  11. Optimal distributions for multiplex logistic networks.

    PubMed

    Solá Conde, Luis E; Used, Javier; Romance, Miguel

    2016-06-01

    This paper presents some mathematical models for the distribution of goods in logistic networks based on spectral analysis of complex networks. Given a steady distribution of a finished product, numerical algorithms are presented for computing the weights in a multiplex logistic network that reach the equilibrium dynamics with a high convergence rate. As an application, the logistic networks of Germany and Spain are analyzed in terms of their convergence rates.

  12. A Simulation Optimization Approach to Epidemic Forecasting

    PubMed Central

    Nsoesie, Elaine O.; Beckman, Richard J.; Shashaani, Sara; Nagaraj, Kalyani S.; Marathe, Madhav V.

    2013-01-01

    Reliable forecasts of influenza can aid in the control of both seasonal and pandemic outbreaks. We introduce a simulation optimization (SIMOP) approach for forecasting the influenza epidemic curve. This study represents the final step of a project aimed at using a combination of simulation, classification, statistical and optimization techniques to forecast the epidemic curve and infer underlying model parameters during an influenza outbreak. The SIMOP procedure combines an individual-based model and the Nelder-Mead simplex optimization method. The method is used to forecast epidemics simulated over synthetic social networks representing Montgomery County in Virginia, Miami, Seattle and surrounding metropolitan regions. The results are presented for the first four weeks. Depending on the synthetic network, the peak time could be predicted within a 95% CI as early as seven weeks before the actual peak. The peak infected and total infected were also accurately forecasted for Montgomery County in Virginia within the forecasting period. Forecasting of the epidemic curve for both seasonal and pandemic influenza outbreaks is a complex problem; however, this is a preliminary step, and the results suggest that more can be achieved in this area. PMID:23826222
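
    The Nelder-Mead step of such a SIMOP loop can be sketched with scipy.optimize.minimize. The logistic curve, the observed counts, and the starting point below are illustrative stand-ins for the individual-based model and real surveillance data.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    weeks = np.arange(1, 5)                          # first four weeks of data
    observed = np.array([12.0, 30.0, 70.0, 150.0])   # illustrative case counts

    def logistic(t, k, r, t0):
        """Logistic epidemic curve: final size k, growth rate r, midpoint t0."""
        return k / (1.0 + np.exp(-r * (t - t0)))

    def sse(params):
        k, r, t0 = params
        return np.sum((logistic(weeks, k, r, t0) - observed) ** 2)

    res = minimize(sse, x0=[500.0, 0.8, 6.0], method="Nelder-Mead")
    k, r, t0 = res.x
    print(f"projected final size ~ {k:.0f}, peak-growth week ~ {t0:.1f}")
    ```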

  13. Optimality of nitrogen distribution among leaves in plant canopies.

    PubMed

    Hikosaka, Kouki

    2016-05-01

    The vertical gradient of the leaf nitrogen content in a plant canopy is one of the determinants of vegetation productivity. The ecological significance of the nitrogen distribution in plant canopies has been discussed in relation to its optimality; nitrogen distribution in actual plant canopies is close to but always less steep than the optimal distribution that maximizes canopy photosynthesis. In this paper, I review the optimality of nitrogen distribution within canopies focusing on recent advancements. Although the optimal nitrogen distribution has been believed to be proportional to the light gradient in the canopy, this rule holds only when diffuse light is considered; the optimal distribution is steeper when the direct light is considered. A recent meta-analysis has shown that the nitrogen gradient is similar between herbaceous and tree canopies when it is expressed as the function of the light gradient. Various hypotheses have been proposed to explain why nitrogen distribution is suboptimal. However, hypotheses explain patterns observed in some specific stands but not in others; there seems to be no general hypothesis that can explain the nitrogen distributions under different conditions. Therefore, how the nitrogen distribution in canopies is determined remains open for future studies; its understanding should contribute to the correct prediction and improvement of plant productivity under changing environments.

  14. Inversion of generalized relaxation time distributions with optimized damping parameter

    NASA Astrophysics Data System (ADS)

    Florsch, Nicolas; Revil, André; Camerlynck, Christian

    2014-10-01

    Retrieving the relaxation time distribution (RTD), the grain size distribution (GSD) or the pore size distribution (PSD) from low-frequency impedance spectra is a major goal in geophysics. The “generalized RTD” generalizes parametric models like Cole-Cole and many others, but remains tricky to invert since this inverse problem is ill-posed. We propose to use generalized relaxation basis functions (for instance, decomposing the spectra on a basis of generalized Cole-Cole relaxation elements instead of the classical Debye basis) and to use the L-curve approach to optimize the damping parameter required to obtain smooth and realistic inverse solutions. We apply our algorithm to three examples, one synthetic and two real data sets, and the program includes the possibility of converting the RTD into a GSD or PSD by choosing the value of the constant connecting the relaxation time to the characteristic polarization size of interest. At high frequencies (typically above 1 kHz), a dielectric term is taken into account in the model. The code is provided as an open Matlab source as a supplementary file associated with this paper.
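
    A generic version of the L-curve selection of the damping parameter, applied here to plain Tikhonov regularization rather than the authors' Matlab code, can be sketched as follows; the test operator and noise level are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((80, 40))            # arbitrary test operator
    x_true = np.zeros(40); x_true[10:15] = 1.0
    b = A @ x_true + 0.05 * rng.standard_normal(80)
    I = np.eye(40)

    lams = np.logspace(-4, 2, 60)
    log_rho, log_eta = [], []                    # log residual / solution norms
    for lam in lams:
        x = np.linalg.solve(A.T @ A + lam ** 2 * I, A.T @ b)
        log_rho.append(np.log(np.linalg.norm(A @ x - b)))
        log_eta.append(np.log(np.linalg.norm(x)))
    log_rho, log_eta = np.array(log_rho), np.array(log_eta)

    # Pick the L-curve corner as the point of maximum curvature.
    dr, de = np.gradient(log_rho), np.gradient(log_eta)
    d2r, d2e = np.gradient(dr), np.gradient(de)
    curvature = np.abs(dr * d2e - de * d2r) / (dr ** 2 + de ** 2) ** 1.5
    print("selected damping parameter:", lams[np.argmax(curvature)])
    ```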

  15. Parallel Harmony Search Based Distributed Energy Resource Optimization

    SciTech Connect

    Ceylan, Oguzhan; Liu, Guodong; Tomsovic, Kevin

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize the active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout the day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that, by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution system operation.

  16. A Decentralized Variable Ordering Method for Distributed Constraint Optimization

    DTIC Science & Technology

    2005-05-01

    Either approach can be used depending on how densely the nodes are connected inside blocks. If inside-block connectivity is sparse, the latter method ... (Anton Chechetka and Katia Sycara, "A Decentralized Variable Ordering Method for Distributed Constraint Optimization," CMU-RI-TR-05-18, Robotics Institute, May 2005.)

  17. Energy optimization of water distribution system

    SciTech Connect

    Not Available

    1993-02-01

    In order to analyze pump operating scenarios for the system with the computer model, information on existing pumping equipment and the distribution system was collected. The information includes the following: component description and design criteria for line booster stations, booster stations with reservoirs, and high lift pumps at the water treatment plants; daily operations data for 1988; annual reports from fiscal year 1987/1988 to fiscal year 1991/1992; and a 1985 calibrated KYPIPE computer model of DWSD's water distribution system which included input data for the maximum hour and average day demands on the system for that year. This information has been used to produce the inventory database of the system and will be used to develop the computer program to analyze the system.

  18. Optimal Power Schedule for Distributed MIMO Links

    DTIC Science & Technology

    2006-11-01

    A multiple-input multiple-output (MIMO) wireless link is well known to provide a much higher capacity than a single-input single-output link. However, for a wireless network of multiple distributed MIMO links, such as a network in the future combat systems, there are new research issues ... The existing MIMO theory is not sufficient for such a wireless network where multiple MIMO links cause mutual interference to each other.

  19. A flow path model for regional water distribution optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Wei-Chen; Hsu, Nien-Sheng; Cheng, Wen-Ming; Yeh, William W.-G.

    2009-09-01

    We develop a flow path model for the optimization of a regional water distribution system. The model simultaneously describes a water distribution system in two parts: (1) the water delivery relationship between suppliers and receivers and (2) the physical water delivery network. In the first part, the model considers waters from different suppliers as multiple commodities. This helps the model clearly describe water deliveries by identifying the relationship between suppliers and receivers. The physical part characterizes a physical water distribution network by all possible flow paths. The flow path model can be used to optimize not only the suppliers to each receiver but also their associated flow paths for supplying water. This characteristic leads to the optimum solution that contains the optimal scheduling results and detailed information concerning water distribution in the physical system. That is, the water rights owner, water quantity, water location, and associated flow path of each delivery action are represented explicitly in the results rather than merely as an optimized total flow quantity in each arc of a distribution network. We first verify the proposed methodology on a hypothetical water distribution system. Then we apply the methodology to the water distribution system associated with the Tou-Qian River basin in northern Taiwan. The results show that the flow path model can be used to optimize the quantity of each water delivery, the associated flow path, and the water trade and transfer strategy.

  20. Optimal design of distributed wastewater treatment networks

    SciTech Connect

    Galan, B.; Grossmann, I.E.

    1998-10-01

    This paper deals with the optimum design of a distributed wastewater network where multicomponent streams are considered that are to be processed by units for reducing the concentration of several contaminants. The proposed model gives rise to a nonconvex nonlinear problem which often exhibits local minima and causes convergence difficulties. A search procedure is proposed in this paper that is based on the successive solution of a relaxed linear model and the original nonconvex nonlinear problem. Several examples are presented to illustrate that the proposed method often yields global or near global optimum solutions. The model is also extended for selecting different treatment technologies and for handling membrane separation modules.

  1. A generalized flow path model for water distribution optimization

    NASA Astrophysics Data System (ADS)

    Hsu, N.; Cheng, W.; Yeh, W. W.

    2008-12-01

    A generalized flow path model is developed for optimizing a water distribution system. The model simultaneously describes a water distribution system in two parts: (1) the water delivery relationships between suppliers and receivers and (2) the physical water delivery system. In the first part, the model considers waters from different suppliers as multiple commodities. This helps the model to clearly describe water deliveries by identifying the relationships between suppliers and receivers. The second part characterizes a physical water distribution network by all possible flow paths. The advantages of the proposed model are that: (1) it is a generalized methodology to optimize water distribution, delivery scheduling, water trade, water transfer, and water exchange under existing reservoir operation rules, contracts, and agreements; (2) it can consider water as multiple commodities if needed; and (3) no simplifications are made for either the physical system or the delivery relationships. The model can be used as a tool for decision making for scheduling optimization. The model optimizes not only the suppliers to each receiver but also their associated flow paths for supplying water. This characteristic leads to the optimum solution that contains the optimal scheduling results and detailed information on water distribution in the physical system. That is, the water rights owner, water quantity, and associated flow path of each delivery action are represented explicitly in the results rather than merely as an optimized total flow quantity in each arc of a distribution network. The proposed model is first verified on a hypothetical water distribution system. Then, the model is applied to the water distribution system of the Tou-Qian River Basin in northern Taiwan. The results show that the flow path model has the ability to optimize the quantity of each water delivery, the associated flow paths of the delivery, and the strategies of water transfer while considering existing reservoir operation rules, contracts, and agreements.

  2. Optimal Reward Functions in Distributed Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Tumer, Kagan

    2000-01-01

    We consider the design of multi-agent systems so as to optimize an overall world utility function when (1) those systems lack centralized communication and control, and (2) each agent runs a distinct Reinforcement Learning (RL) algorithm. A crucial issue in such design problems is to initialize/update each agent's private utility function, so as to induce the best possible world utility. Traditional 'team game' solutions to this problem sidestep this issue and simply assign to each agent the world utility as its private utility function. In previous work we used the 'Collective Intelligence' framework to derive a better choice of private utility functions, one that results in world utility performance up to orders of magnitude superior to that ensuing from use of the team game utility. In this paper we extend these results. We derive the general class of private utility functions that both are easy for the individual agents to learn and that, if learned well, result in high world utility. We demonstrate experimentally that using these new utility functions can result in significantly improved performance over that of our previously proposed utility, over and above that previous utility's superiority to the conventional team game utility.

  3. Distributed optimization of multi-class SVMs

    PubMed Central

    Dogan, Urun; Kloft, Marius

    2017-01-01

    Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straightforward way. Given enough computational resources, one-vs.-rest SVMs can thus be trained on data involving a large number of classes. The same cannot be stated, however, for the so-called all-in-one SVMs, which require solving a quadratic program whose size is quadratic in the number of classes. We develop distributed algorithms for two all-in-one SVM formulations (Lee et al. and Weston and Watkins) that parallelize the computation evenly over the number of classes. This allows us to compare these models to one-vs.-rest SVMs at an unprecedented scale. The results indicate superior accuracy on text classification data. PMID:28570703
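
    The embarrassingly parallel structure of one-vs.-rest training can be sketched in a few lines with scikit-learn and joblib. The synthetic dataset and solver settings are illustrative; this is of course not the authors' distributed all-in-one solver.

    ```python
    import numpy as np
    from joblib import Parallel, delayed
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    # Synthetic multi-class data standing in for a text classification corpus.
    X, y = make_classification(n_samples=2000, n_features=50, n_informative=30,
                               n_classes=5, random_state=0)

    def train_one(c):
        """Train the binary subproblem: class c vs. the rest."""
        clf = LinearSVC(dual=False)
        clf.fit(X, (y == c).astype(int))
        return clf

    # The one-vs.-rest subproblems are independent, so they parallelize trivially.
    models = Parallel(n_jobs=-1)(delayed(train_one)(c) for c in np.unique(y))
    scores = np.column_stack([m.decision_function(X) for m in models])
    print("train accuracy:", (scores.argmax(axis=1) == y).mean())
    ```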

  4. Distribution system power flow analysis; A rigid approach

    SciTech Connect

    Chen, T.H.; Chen, M.S.; Hwang, K.J. . Energy Systems Research Center); Kotas, P.; Chebli, E.A. )

    1991-07-01

    This paper introduces a rigid approach to three-phase distribution power flow analysis for large-scale distribution systems. This approach is oriented toward applications in distribution system operational analysis rather than planning analysis. This difference should be properly emphasized; otherwise, the misuse of a planning-type method to analyze the operational behavior of the system will distort the explanation of the calculated results and lead to incorrect conclusions. The solution method is the optimally ordered triangular factorization Y_Bus method (implicit Z_Bus Gauss method), which not only takes advantage of the sparsity of the system equations but also has very good convergence characteristics on distribution problems. Detailed component models and suitable solution techniques are the essence of an accurate simulation. Detailed component models, therefore, are needed for all system components in the simulation. Utilizing the phase frame representation for all network elements, a program entitled Generalized Distribution Analysis Systems (GDAS), with a number of features and capabilities not found in existing packages, has been developed for large-scale distribution system simulations. The system being analyzed can be balanced or unbalanced and can be a radial, network, or mixed type distribution system. Furthermore, because the individual phase representation is employed for both system and component models, the system can comprise single-, double-, and three-phase subsystems simultaneously. Additionally, with detailed component models, the program can also perform system loss and contingency analyses.

  5. Optimal placement and sizing of wind / solar based DG sources in distribution system

    NASA Astrophysics Data System (ADS)

    Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng

    2017-06-01

    Proper placement and sizing of Distributed Generation (DG) in a distribution system can obtain the maximum potential benefits. This paper proposes a quantum particle swarm optimization (QPSO) based wind turbine generation unit (WTGU) and photovoltaic (PV) array placement and sizing approach for real power loss reduction and voltage stability improvement of distribution systems. Performance models of wind and solar generation systems are described and classified into PQ, PQ(V), and PI type models in power flow. Since WTGU- and PV-based DGs in a distribution system are geographically restricted, the optimal area and the DG capacity limits of each bus in the setting area need to be fixed before optimization; an area optimization method is therefore proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate the performance and effectiveness of the proposed method.

  6. Factorization and the synthesis of optimal feedback gains for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Scheid, Robert E.

    1990-01-01

    An approach based on Volterra factorization leads to a new methodology for the analysis and synthesis of the optimal feedback gain in the finite-time linear quadratic control problem for distributed parameter systems. The approach circumvents the need for solving and analyzing Riccati equations and provides a more transparent connection between the system dynamics and the optimal gain. The general results are further extended and specialized for the case where the underlying state is characterized by autonomous differential-delay dynamics. Numerical examples are given to illustrate the second-order convergence rate that is derived for an approximation scheme for the optimal feedback gain in the differential-delay problem.

  7. A two-stage sequential linear programming approach to IMRT dose optimization

    PubMed Central

    Zhang, Hao H; Meyer, Robert R; Wu, Jianzhou; Naqvi, Shahid A; Shi, Leyuan; D’Souza, Warren D

    2010-01-01

    The conventional IMRT planning process involves two stages: the first stage consists of fast but approximate idealized pencil beam dose calculations and dose optimization, and the second stage consists of discretization of the intensity maps followed by intensity map segmentation and a more accurate final dose calculation corresponding to physical beam apertures. Consequently, there can be differences between the presumed dose distribution corresponding to pencil beam calculations and optimization and a more accurately computed dose distribution corresponding to beam segments that takes into account collimator-specific effects. IMRT optimization is computationally expensive and has therefore led to the use of heuristic approaches (e.g., simulated annealing and genetic algorithms) that do not encompass a global view of the solution space. We modify the traditional two-stage IMRT optimization process by augmenting the second stage with accurate Monte Carlo-based kernel-superposition dose calculations corresponding to beam apertures, combined with an exact mathematical programming based sequential optimization approach that uses linear programming (SLP). Our approach was tested on three challenging clinical test cases with multileaf collimator constraints corresponding to two vendors. We compared our approach to the conventional IMRT planning approach, a direct-aperture approach and a segment weight optimization approach. Our results in all three cases indicate that the SLP approach outperformed the other approaches, achieving superior critical structure sparing. Convergence of our approach is also demonstrated. Finally, our approach has also been integrated with a commercial treatment planning system and may be utilized clinically. PMID:20071764

  8. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
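
    The essence of a sequential linear optimization loop (linearize, solve a bounded linear program, accept the step or shrink the move limits) can be sketched with scipy.optimize.linprog. The quadratic toy objective below is an assumption standing in for the eigenvalue-based design cost.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def f(x):     # toy nonlinear design objective (stands in for the real cost)
        return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

    def grad(x):  # first-order sensitivities, as in the abstract
        return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

    x = np.zeros(2)
    step = 1.0                                  # move limit (box trust region)
    for _ in range(40):
        # Linearized subproblem: minimize grad(x).dx subject to |dx_i| <= step.
        res = linprog(c=grad(x), bounds=[(-step, step)] * 2, method="highs")
        dx = res.x
        if f(x + dx) < f(x):
            x = x + dx                          # accept the improving step
        else:
            step *= 0.5                         # otherwise shrink the move limits
    print(x)  # approaches the unconstrained optimum at (1, 2)
    ```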

  9. Distributed Coordination for Optimal Energy Generation and Distribution in Cyber-Physical Energy Networks.

    PubMed

    Ahn, Hyo-Sung; Kim, Byeong-Yeon; Lim, Young-Hun; Lee, Byung-Hun; Oh, Kwang-Kyo

    2017-02-23

    This paper proposes three coordination laws for optimal energy generation and distribution in an energy network composed of a physical flow layer and a cyber communication layer. Energy flows through the physical layer, but its generation and flow are coordinated by distributed coordination algorithms on the basis of communication information. First, distributed energy generation and energy distribution laws are proposed in a decoupled manner, without considering the interactive characteristics between energy generation and energy distribution. Second, a joint coordination law is designed that treats energy generation and energy distribution in a coupled manner, taking account of the interactive characteristics. Third, to handle over- or under-generation cases, an energy distribution law for networks with batteries is designed. The coordination laws proposed in this paper are fully distributed in the sense that they are decided optimally using only relative information among neighboring nodes. Through numerical simulations, the validity of the proposed distributed coordination laws is illustrated.
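
    The "relative information among neighboring nodes" ingredient is essentially a consensus iteration. Below is a minimal sketch of such an update on a five-node ring; the topology, step size, and initial values are illustrative assumptions, not the paper's coordination laws.

    ```python
    import numpy as np

    # Five agents on a ring; each one only sees its two neighbours' states.
    neighbors = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
    x = np.array([3.0, 7.0, 1.0, 9.0, 5.0])   # local quantities to be agreed on
    eps = 0.2                                  # step size below 1/(max degree)

    for _ in range(100):
        # Each agent moves toward its neighbours using only relative differences.
        x = x + eps * np.array([sum(x[j] - x[i] for j in neighbors[i])
                                for i in range(5)])

    print(x)  # every entry approaches the network average, 5.0
    ```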

  10. Nearly optimal quantum control: an analytical approach

    NASA Astrophysics Data System (ADS)

    Sun, Chen; Saxena, Avadh; Sinitsyn, Nikolai A.

    2017-09-01

    We propose nearly optimal control strategies for changing the states of a quantum system. We argue that quantum control optimization can be studied analytically within some protocol families that depend on a small set of parameters for optimization. This optimization strategy can be preferred in practice because it is physically transparent and does not lead to combinatorial complexity in multistate problems. As a demonstration, we design optimized control protocols that achieve switching between orthogonal states of a naturally biased quantum two-level system.

  11. Optimizing pharmaceutical reimbursement: one institution's approach.

    PubMed

    Loyd, Laurel M

    2006-11-01

    The importance of understanding the revenue cycle, reviewing the billing system for errors, and collaborating with other health system departments in maximizing pharmaceutical reimbursement, and the approach used at a large academic medical center to justify a reimbursement specialist and achieve this goal are discussed. Understanding the revenue cycle may enable pharmacy departments to make wise decisions about programs and services that maximize revenue recovery and meet patient needs. Parts of the revenue cycle that pharmacists can have a favorable effect on include claim denials/payment variances, regulatory changes, compliance, contracting, and price setting. Pharmaceutical reimbursement was increased substantially at one institution through a collaborative effort involving multiple departments and a reimbursement specialist who analyzed the revenue cycle, reviewed billing systems, and took steps to avoid or correct billing errors. Collaborating with members of key health system departments can help identify and resolve billing system errors that diminish revenue. Documenting efforts to increase revenue recovery can help justify adding personnel dedicated to reimbursement matters. Analyzing the revenue cycle can contribute to wise decision-making that optimizes pharmaceutical reimbursement.

  12. An Optimization Framework for Dynamic, Distributed Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara

    2003-01-01

    This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments and utility and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model for producing feasible, optimal resource allocations.

  13. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  14. Multi-objective optimal dispatch of distributed energy resources

    NASA Astrophysics Data System (ADS)

    Longe, Ayomide

    This thesis is composed of two papers which investigate optimal dispatch for distributed energy resources. In the first paper, an economic dispatch problem for a community microgrid is studied. In this microgrid, each agent pursues an economic dispatch for its personal resources. In addition, each agent is capable of trading electricity with other agents through a local energy market. In this paper, a simple market structure is introduced as a framework for energy trades in a small community microgrid such as the Solar Village. It was found that both sellers and buyers benefited by participating in this market. In the second paper, semidefinite programming (SDP) for convex relaxation of the power flow equations is used for optimal active and reactive dispatch of Distributed Energy Resources (DER). Various objective functions, including voltage regulation, reduced transmission line power losses, and minimized reactive power charges for a microgrid, are introduced. Combinations of these goals are attained by solving a multiobjective optimization for the proposed ORPD problem. Both centralized and distributed versions of this optimal dispatch are investigated. It was found that SDP made the optimal dispatch faster and the distributed solution allowed for scalability.
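
    A minimal SDP in the same spirit (minimize a linear function of a positive semidefinite matrix subject to linear constraints) can be written with the cvxpy package. The data below are a toy example of the problem class, not the power flow relaxation from the thesis.

    ```python
    import numpy as np
    import cvxpy as cp

    # Toy SDP: minimize trace(C X) over PSD matrices X with linear constraints.
    C = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    X = cp.Variable((3, 3), PSD=True)            # X is constrained to be PSD
    constraints = [cp.trace(X) == 1.0, X[0, 1] >= 0.1]
    problem = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
    problem.solve()
    print("optimal value:", problem.value)
    ```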

  15. A Degree Distribution Optimization Algorithm for Image Transmission

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Yang, Junjie

    2016-09-01

    Luby Transform (LT) code is the first practical implementation of digital fountain codes. The coding behavior of an LT code is mainly decided by the degree distribution, which determines the relationship between the source data and the codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally in the case of a finite number of encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme of sparse degrees for LT codes is introduced. Then the probability distribution over the selected degrees is optimized. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause the loss of synchronization between the encoder and the decoder. Therefore the proposed algorithm is designed for the image transmission situation. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising. Compared with an LT code using the robust soliton distribution, the proposed algorithm clearly improves the final quality of the recovered images with the same overhead.
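
    For reference, the robust soliton distribution that the proposed algorithm is compared against can be computed directly. The sketch below follows Luby's standard construction; the parameter values c and delta are illustrative.

    ```python
    import numpy as np

    def robust_soliton(k, c=0.1, delta=0.5):
        """Luby's robust soliton degree distribution for k source symbols."""
        S = c * np.log(k / delta) * np.sqrt(k)
        d = np.arange(1, k + 1).astype(float)
        rho = np.zeros(k)
        rho[0] = 1.0 / k                          # ideal soliton, degree 1
        rho[1:] = 1.0 / (d[1:] * (d[1:] - 1.0))   # ideal soliton, degrees >= 2
        tau = np.zeros(k)
        pivot = int(round(k / S))                 # spike location k/S
        tau[:pivot - 1] = S / (d[:pivot - 1] * k)
        tau[pivot - 1] = S * np.log(S / delta) / k
        mu = rho + tau
        return mu / mu.sum()                      # normalize

    mu = robust_soliton(1000)
    print("mean degree:", np.sum(np.arange(1, 1001) * mu))
    ```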

  16. Resilience-based optimal design of water distribution network

    NASA Astrophysics Data System (ADS)

    Suribabu, C. R.

    2017-04-01

    Optimal design of a water distribution network generally aims to minimize the capital cost of investments in tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually considered the prime objective, as its proportion of the capital cost of a water distribution system project is very high. However, minimizing the capital cost of the pipeline alone may result in an economical network configuration, but it may not be a promising solution from a resilience point of view. Resilience of the water distribution network has been considered one of the popular surrogate measures to address the ability of a network to withstand failure scenarios. To improve the resilience of the network, the pipe network optimization can be performed with two objectives, namely minimizing the capital cost as the first objective and maximizing a resilience measure of the configuration as the secondary objective. In the present work, these two objectives are combined into a single objective and the optimization problem is solved by the differential evolution technique. The paper illustrates the procedure for normalizing objective functions having distinct metrics. Two of the existing resilience indices and power efficiency are considered for the optimal design of the water distribution network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results of the design indicate the importance of sizing pipes telescopically along the shortest flow path to obtain enhanced resilience indices.
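
    The normalization-plus-weighting idea can be sketched with SciPy's differential evolution solver. The stand-in cost and resilience functions and the normalization bounds below are illustrative assumptions, not the hydraulic model of the paper.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Illustrative normalization bounds for the two metrics (not hydraulic data).
    COST_MIN, COST_MAX = 10.0, 100.0
    RES_MIN, RES_MAX = 0.0, 1.0

    def cost(d):            # stand-in for pipe capital cost
        return 10.0 + 9.0 * np.sum(d ** 2)

    def resilience(d):      # stand-in for a resilience index (higher is better)
        return float(np.clip(np.mean(d), 0.0, 1.0))

    def combined(d, w=0.6):
        """Weighted sum of the normalized cost and the normalized resilience
        deficit, so both terms are minimized on a common [0, 1] scale."""
        c_norm = (cost(d) - COST_MIN) / (COST_MAX - COST_MIN)
        r_norm = (resilience(d) - RES_MIN) / (RES_MAX - RES_MIN)
        return w * c_norm + (1.0 - w) * (1.0 - r_norm)

    result = differential_evolution(combined, bounds=[(0.0, 1.0)] * 5, seed=0)
    print(result.x, result.fun)
    ```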

  17. Intensity distribution angular shaping - Practical approach for 3D optical beamforming

    NASA Astrophysics Data System (ADS)

    Wojtanowski, Jacek; Traczyk, Maciej; Zygmunt, Marek; Mierczyk, Zygmunt; Knysak, Piotr; Drozd, Tadeusz

    2014-12-01

    We present an optical design approach that makes it possible to obtain an aspheric lens shape optimized to provide a specific light power density distribution in space. The proposed method is based on evaluating the corresponding angular intensity distribution, which can be obtained by decomposing the desired spatial distribution into a set of virtual light cones and collapsing it to an equivalent angular fingerprint. Rigorous formulas are derived to relate the refractive aspheric shape to the corresponding intensity distribution used for lens optimization. The modeling and optimization algorithms were implemented in MATLAB, and the calculated designs were successfully tested in the Zemax environment.

  18. H-Infinity-Optimal Control for Distributed Parameter Systems

    DTIC Science & Technology

    1991-02-28

    F. Callier and C.A. Desoer, "An Algebra of Transfer Functions for Distributed Linear Time-Invariant Systems," IEEE Trans. Circuits Syst., Sept. 1978. ... This report describes progress in the development and application of H-infinity-optimal control theory to distributed parameter systems. This research is intended to develop both theory and algorithms capable of providing realistic control systems for physical plants ...

  19. Distributed Optimal Power and Rate Control in Wireless Sensor Networks

    PubMed Central

    Tang, Meiqin; Bai, Jianyong; Li, Jing; Xin, Yalin

    2014-01-01

    With the rapid development of wireless sensor networks, reducing energy consumption is becoming one of the important factors in extending node lifetime, and it is necessary to adjust the launching power of each node because of the limited energy available to the sensor nodes in the network. This paper proposes a power and rate control model based on the network utility maximization (NUM) framework, where a weighting factor is used to reflect the degree of influence of the sending power and transmission rate on the utility function. In real networks, nodes interfere with each other while transmitting signals, which may lead to signal transmission failure and may negatively impact network throughput. Using dual decomposition techniques, the NUM problem is decomposed into two distributed subproblems, and then the conjugate gradient method is applied to solve the optimization problem, with the calculation of the Hessian matrix and its inverse guaranteeing fast convergence of the algorithm. The convergence proof is also provided in this paper. Numerical examples show that the proposed solution achieves significantly higher throughput than existing approaches. PMID:24895654

  1. Distributed Generation Planning using Peer Enhanced Multi-objective Teaching-Learning based Optimization in Distribution Networks

    NASA Astrophysics Data System (ADS)

    Selvam, Kayalvizhi; Vinod Kumar, D. M.; Siripuram, Ramakanth

    2016-06-01

    In this paper, an optimization technique called peer enhanced teaching-learning based optimization (PeTLBO) is used in a multi-objective problem domain. The PeTLBO algorithm is parameter-less, so it reduces the computational burden. The proposed peer enhanced multi-objective TLBO (PeMOTLBO) algorithm has been utilized to find a set of non-dominated optimal solutions [distributed generation (DG) location and sizing in a distribution network]. The objectives considered are real power loss and voltage deviation, subject to voltage limits and the maximum penetration level of DG in the distribution network. Since the DG considered is capable of injecting real and reactive power into the distribution network, the power factor is taken as 0.85 leading. The proposed peer enhanced multi-objective optimization technique provides different trade-off solutions; to find the best compromise solution, a fuzzy set theory approach has been used. The effectiveness of the proposed PeMOTLBO is tested on the IEEE 33-bus and Indian 85-bus distribution systems. The performance is validated with Pareto fronts and two performance metrics (C-metric and S-metric) by comparing with the robust multi-objective technique called non-dominated sorting genetic algorithm-II and also with the basic TLBO.

  3. Optimization of composite structures by estimation of distribution algorithms

    NASA Astrophysics Data System (ADS)

    Grosset, Laurent

    The design of high performance composite laminates, such as those used in aerospace structures, leads to complex combinatorial optimization problems that cannot be addressed by conventional methods. These problems are typically solved by stochastic algorithms, such as evolutionary algorithms. This dissertation proposes a new evolutionary algorithm for composite laminate optimization, named the Double-Distribution Optimization Algorithm (DDOA). DDOA belongs to the family of estimation of distribution algorithms (EDAs), which build a statistical model of promising regions of the design space based on sets of good points and use it to guide the search. A generic framework for introducing statistical variable dependencies by making use of the physics of the problem is proposed. The algorithm uses two distributions simultaneously: the marginal distributions of the design variables, complemented by the distribution of auxiliary variables. The combination of the two generates complex distributions at a low computational cost. The dissertation demonstrates the efficiency of DDOA for several laminate optimization problems where the design variables are the fiber angles and the auxiliary variables are the lamination parameters. The results show that its reliability in finding the optima is greater than that of a simple EDA and of a standard genetic algorithm, and that its advantage increases with the problem dimension. A continuous version of the algorithm is presented and applied to a constrained quadratic problem. Finally, a modification of the algorithm incorporating probabilistic and directional search mechanisms is proposed. The algorithm exhibits a faster convergence to the optimum and opens the way for a unified framework for stochastic and directional optimization.
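
    A minimal univariate EDA (UMDA-style), much simpler than DDOA but illustrating the sample/select/re-estimate loop over marginal distributions of fiber angles, might look like this; the candidate angles and the stiffness proxy are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    ANGLES = np.array([0, 45, -45, 90])           # candidate ply angles
    n_plies, pop, elite, gens = 8, 60, 15, 40

    def stiffness_proxy(laminate):
        """Toy fitness rewarding +/-45 plies outside and 0/90 plies inside
        (a stand-in for a real laminate analysis)."""
        outer = (np.isin(laminate[:2], (45, -45)).sum()
                 + np.isin(laminate[-2:], (45, -45)).sum())
        inner = np.isin(laminate[2:-2], (0, 90)).sum()
        return outer + inner

    # Marginal probability of each angle at each ply position.
    probs = np.full((n_plies, len(ANGLES)), 1.0 / len(ANGLES))
    for _ in range(gens):
        idx = np.array([[rng.choice(len(ANGLES), p=probs[i])
                         for i in range(n_plies)] for _ in range(pop)])
        fitness = np.array([stiffness_proxy(ANGLES[row]) for row in idx])
        best = idx[np.argsort(fitness)[-elite:]]          # select the elite
        for i in range(n_plies):                          # re-estimate marginals
            counts = np.bincount(best[:, i], minlength=len(ANGLES))
            probs[i] = counts / counts.sum()

    print(ANGLES[probs.argmax(axis=1)])   # most probable laminate stacking
    ```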

  4. A Collective Neurodynamic Approach to Constrained Global Optimization.

    PubMed

    Yan, Zheng; Fan, Jianchao; Wang, Jun

    2016-04-01

    Global optimization is a long-lasting research topic in the field of optimization, posing many challenging theoretical and computational issues. This paper presents a novel collective neurodynamic method for solving constrained global optimization problems. At first, a one-layer recurrent neural network (RNN) is presented for searching the Karush-Kuhn-Tucker points of the optimization problem under study. Next, a collective neurodynamic optimization approach is developed by emulating the paradigm of brainstorming. Multiple RNNs are exploited cooperatively to search for the global optimal solutions in a framework of particle swarm optimization. Each RNN carries out a precise local search and converges to a candidate solution according to its own neurodynamics. The neuronal state of each neural network is repetitively reset by exchanging historical information of each individual network and the entire group. Wavelet mutation is performed to avoid prematurity, add diversity, and promote global convergence. It is proved in the framework of stochastic optimization that the proposed collective neurodynamic approach is capable of computing the global optimal solutions with probability one provided that a sufficiently large number of neural networks are utilized. The essence of the collective neurodynamic optimization approach lies in its potential to solve constrained global optimization problems in real time. The effectiveness and characteristics of the proposed approach are illustrated by using benchmark optimization problems.

  5. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
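
    The Pareto-based selection at the heart of such an algorithm reduces to a dominance test. A minimal sketch (for minimization) follows, with an arbitrary toy population of objective vectors.

    ```python
    import numpy as np

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimization)."""
        return bool(np.all(a <= b) and np.any(a < b))

    def nondominated(points):
        """Return the nondominated subset of a population of objective vectors."""
        pts = np.asarray(points, dtype=float)
        keep = [i for i, p in enumerate(pts)
                if not any(dominates(q, p) for j, q in enumerate(pts) if j != i)]
        return pts[keep]

    population = np.array([[1.0, 5.0], [2.0, 2.0], [3.0, 4.0], [4.0, 1.0]])
    print(nondominated(population))  # [3, 4] is dominated by [2, 2] and drops out
    ```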

  6. Examining the Bernstein global optimization approach to optimal power flow problem

    NASA Astrophysics Data System (ADS)

    Patil, Bhagyesh V.; Sampath, L. P. M. I.; Krishnan, Ashok; Ling, K. V.; Gooi, H. B.

    2016-10-01

    This work addresses a nonconvex optimal power flow (OPF) problem. We introduce a `new approach' in the context of the OPF problem based on Bernstein polynomials. The applicability of the approach is studied on a real-world 3-bus power system. The numerical results obtained with this new approach for the 3-bus system reveal a satisfactory improvement in terms of optimality. The results are found to be competitive with those of the generic global optimization solvers BARON and COUENNE.

  7. Optimal cloning for finite distributions of coherent states

    SciTech Connect

    Cochrane, P.T.; Ralph, T.C.; Dolinska, A.

    2004-04-01

    We derive optimal cloning limits for finite Gaussian distributions of coherent states and describe techniques for achieving them. We discuss the relation of these limits to state estimation and the no-cloning limit in teleportation. A qualitatively different cloning limit is derived for a single-quadrature Gaussian quantum cloner.

  8. Stochastic Optimal Control and Linear Programming Approach

    SciTech Connect

    Buckdahn, R.; Goreac, D.; Quincampoix, M.

    2011-04-15

    We study a classical stochastic optimal control problem with constraints and discounted payoff in an infinite horizon setting. The main result of the present paper lies in the fact that this optimal control problem is shown to have the same value as a linear optimization problem stated on some appropriate space of probability measures. This enables one to derive a dual formulation that appears to be strongly connected to the notion of (viscosity sub) solution to a suitable Hamilton-Jacobi-Bellman equation. We also discuss the relation to long-time average problems.

  9. Making Big Data, Safe Data: A Test Optimization Approach

    DTIC Science & Technology

    2016-06-15

    Making Big Data, Safe Data: A Test Optimization Approach. Acquisition Research Program, Graduate School of Business & Public Policy, Naval Postgraduate School; Ricardo Valerdi, Associate Professor, and Eddie Enhelder, University of Arizona, 15 June 2016. The research presented ... potential knowledge gained about a complex system when performing robustness testing and faced with a set of constraints. In particular, this project was ...

  10. Group Counseling Optimization: A Novel Approach

    NASA Astrophysics Data System (ADS)

    Eita, M. A.; Fahmy, M. M.

    A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.

  11. Optimal irregular microphone distributions with enhanced beamforming performance in immersive environments.

    PubMed

    Yu, Jingjing; Donohue, Kevin D

    2013-09-01

    Complex relationships between array gain patterns and microphone distributions limit the application of optimization algorithms on irregular arrays. This paper proposes a Genetic Algorithm (GA) for microphone array optimization in immersive (near-field) environments. Geometric descriptors for irregular arrays are proposed for use as objective functions to reduce optimization time by circumventing the need for direct array gain computations. In addition, probabilistic descriptions of acoustic scenes are introduced for incorporating prior knowledge of the source distribution. To verify the effectiveness of the proposed optimization, signal-to-noise ratios are compared for GA-optimized arrays, regular arrays, and arrays optimized through direct exhaustive simulations. Results show enhancements for GA-optimized arrays over arbitrary randomly generated arrays and regular arrays, especially at low microphone densities where placement becomes critical. Design parameters for the GA are identified for improving optimization robustness for different applications. The rapid convergence and acceptable processing times observed during the experiments establish the feasibility of this approach for optimizing array geometries in immersive environments where rapid deployment is required with limited knowledge of the acoustic scene, such as in mobile platforms and audio surveillance applications.
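
    A bare-bones genetic algorithm over microphone coordinates, using a cheap geometric descriptor as the objective in place of direct array gain computations, can be sketched as follows. The room size, descriptor, and GA settings are illustrative assumptions, not the paper's design.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_mics, pop_size, gens, room = 8, 40, 60, 5.0   # 5 m x 5 m region

    def geometric_score(mics):
        """Cheap geometric descriptor used instead of direct gain computation:
        spread the microphones apart, keep their centroid near the room centre."""
        d = np.linalg.norm(mics[:, None, :] - mics[None, :, :], axis=-1)
        spread = d[np.triu_indices(n_mics, 1)].min()
        centring = -np.linalg.norm(mics.mean(axis=0) - room / 2.0)
        return spread + centring

    pop = rng.uniform(0, room, size=(pop_size, n_mics, 2))
    for _ in range(gens):
        scores = np.array([geometric_score(m) for m in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
        kids = parents.copy()
        cut = n_mics // 2                                         # one-point crossover
        kids[::2, cut:], kids[1::2, cut:] = parents[1::2, cut:], parents[::2, cut:]
        kids += rng.normal(0.0, 0.2, kids.shape)                  # Gaussian mutation
        pop = np.clip(np.concatenate([parents, kids]), 0, room)

    best = max(pop, key=geometric_score)
    print(best)
    ```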

  12. Simulation based flow distribution network optimization for vacuum assisted resin transfer moulding process

    NASA Astrophysics Data System (ADS)

    Hsiao, Kuang-Ting; Devillard, Mathieu; Advani, Suresh G.

    2004-05-01

    In the vacuum assisted resin transfer moulding (VARTM) process, using a flow distribution network such as flow channels and high permeability fabrics can accelerate the resin infiltration of the fibre reinforcement during the manufacture of composite parts. The flow distribution network significantly influences the fill time and fill pattern and is essential for the process design. The current practice has been to cover the top surface of the fibre preform with the distribution media with the hope that the resin will flood the top surface immediately and penetrate through the thickness. However, this approach has some drawbacks. One is when the resin finds its way to the vent before it has penetrated the preform entirely, which results in a defective part or resin wastage. Also, if the composite structure contains ribs or inserts, this approach invariably results in dry spots. Instead of this intuitive approach, we propose a science-based approach to design the layout of the distribution network. Our approach uses flow simulation of the resin into the network and the preform and a genetic algorithm to optimize the flow distribution network. An experimental case study of a co-cured rib structure is conducted to demonstrate the design procedure and validate the optimized flow distribution network design. Good agreement between the flow simulations and the experimental results was observed. It was found that the proposed design algorithm effectively optimized the flow distribution network of the part considered in our case study and hence should prove to be a useful tool to extend the VARTM process to manufacture of complex structures with effective use of the distribution network layup.

  13. A linear programming approach for optimal contrast-tone mapping.

    PubMed

    Wu, Xiaolin

    2011-05-01

    This paper proposes a novel algorithmic approach of image enhancement via optimal contrast-tone mapping. In a fundamental departure from the current practice of histogram equalization for contrast enhancement, the proposed approach maximizes expected contrast gain subject to an upper limit on tone distortion and optionally to other constraints that suppress artifacts. The underlying contrast-tone optimization problem can be solved efficiently by linear programming. This new constrained optimization approach for image enhancement is general, and the user can add and fine tune the constraints to achieve desired visual effects. Experimental results demonstrate clearly superior performance of the new approach over histogram equalization and its variants.

  14. Consensus+Innovations Distributed Kalman Filter With Optimized Gains

    NASA Astrophysics Data System (ADS)

    Das, Subhro; Moura, Jose M. F.

    2017-01-01

    In this paper, we address the distributed filtering and prediction of time-varying random fields represented by linear time-invariant (LTI) dynamical systems. The field is observed by a sparsely connected network of agents/sensors collaborating among themselves. We develop a Kalman filter type consensus+innovations distributed linear estimator of the dynamic field, termed the Consensus+Innovations Kalman Filter. We analyze the convergence properties of this distributed estimator. We prove that the mean-squared error of the estimator asymptotically converges if the degree of instability of the field dynamics is within a pre-specified threshold, defined as the tracking capacity of the estimator. The tracking capacity is a function of the local observation models and the agent communication network. We design the optimal consensus and innovation gain matrices yielding distributed estimates with minimized mean-squared error. Through numerical evaluations, we show that the distributed estimator with optimal gains converges faster and with approximately 3 dB better mean-squared error performance than previous distributed estimators.

  15. Optimization of an Aeroservoelastic Wing with Distributed Multiple Control Surfaces

    NASA Technical Reports Server (NTRS)

    Stanford, Bret K.

    2015-01-01

    This paper considers the aeroelastic optimization of a subsonic transport wingbox under a variety of static and dynamic aeroelastic constraints. Three types of design variables are utilized: structural variables (skin thickness, stiffener details), the quasi-steady deflection scheduling of a series of control surfaces distributed along the trailing edge for maneuver load alleviation and trim attainment, and the design details of an LQR controller, which commands oscillatory hinge moments into those same control surfaces. Optimization problems are solved where a closed loop flutter constraint is forced to satisfy the required flight margin, and mass reduction benefits are realized by relaxing the open loop flutter requirements.

  16. Optimal load distribution between units in a power plant.

    PubMed

    Bortoni, Edson C; Bastos, Guilherme S; Souza, Luiz E

    2007-10-01

    This paper presents a strategy for load distribution between the generating units in hydro power plants. The objective is to reach the maximum energy conversion efficiency for a given dispatched power. The developed tool employs a heuristic-based combinatorial optimization technique in conjunction with a set of system variables measurement allowing real-time load sharing. The developed equipment is used to give online energy conversion efficiency from each unit of the power plant. No specific previous information about the efficiency of system components is required. Simulation results of the proposed optimization technique when applied to typical hydro power plant data are presented.

  17. Applications of the theory of optimal control of distributed-parameter systems to structural optimization

    NASA Technical Reports Server (NTRS)

    Armand, J. P.

    1972-01-01

    An extension of classical methods of optimal control theory for systems described by ordinary differential equations to distributed-parameter systems described by partial differential equations is presented. An application is given involving the minimum-mass design of a simply-supported shear plate with a fixed fundamental frequency of vibration. An optimal plate thickness distribution in analytical form is found. The case of a minimum-mass design of an elastic sandwich plate whose fundamental frequency of free vibration is fixed is also treated. Under the most general conditions, the optimization problem reduces to the solution of two simultaneous partial differential equations involving the optimal thickness distribution and the modal displacement. One equation is the uniform energy distribution expression which was found by Ashley and McIntosh for the optimal design of one-dimensional structures with frequency constraints, and by Prager and Taylor for various design criteria in one and two dimensions. The second equation requires dynamic equilibrium at the preassigned vibration frequency.

  18. New approaches to the design optimization of hydrofoils

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya; Meneghello, Gianluca; Bewley, Thomas

    2015-11-01

    Two simulation-based approaches are developed to optimize the design of hydrofoils for foiling catamarans, with the objective of maximizing efficiency (lift/drag). In the first, a simple hydrofoil model based on the vortex-lattice method is coupled with a hybrid global and local optimization algorithm that combines our Delaunay-based optimization algorithm with a Generalized Pattern Search. This optimization procedure is compared with the classical Newton-based optimization method. The accuracy of the vortex-lattice simulation of the optimized design is compared with a more accurate and computationally expensive LES-based simulation. In the second approach, the (expensive) LES model of the flow is used directly during the optimization. A modified Delaunay-based optimization algorithm is used to maximize the efficiency, which is measured as a finite-time averaged approximation of the infinite-time averaged value of an ergodic and stationary process. Since the optimization algorithm takes into account the uncertainty of the finite-time averaged approximation of the infinite-time averaged statistic of interest, the total computational time of the optimization is significantly reduced. Results from the two different approaches are compared.

  19. Russian Loanword Adaptation in Persian; Optimal Approach

    ERIC Educational Resources Information Center

    Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat

    2011-01-01

    In this paper we analyze some of the phonological rules of Russian loanword adaptation in Persian from the viewpoint of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of the phonological processes in Russian loanword adaptation in Persian. From a collection of about 50 current Russian loanwords, we selected a subset for analysis. We…

  20. Distributed design approach in persistent identifiers systems

    NASA Astrophysics Data System (ADS)

    Golodoniuc, Pavel; Car, Nicholas; Klump, Jens

    2017-04-01

    The need to identify both digital and physical objects is ubiquitous in our society. Past and present persistent identifier (PID) systems, of which there is a great variety in terms of technical and social implementations, have evolved with the advent of the Internet, which has allowed for globally unique and globally resolvable identifiers. PID systems have catered for identifier uniqueness, integrity, persistence, and trustworthiness, regardless of the identifier's application domain, the scope of which has expanded significantly in the past two decades. Since many PID systems have been largely conceived and developed by small communities, or even a single organisation, they have faced challenges in gaining widespread adoption and, most importantly, the ability to survive changes in technology. This has left a legacy of identifiers that still exist and are being used but which have lost their resolution service. We believe that one of the causes of the fading of once-successful PID systems is their reliance on a centralised technical infrastructure or a governing authority. Golodoniuc et al. (2016) proposed an approach to the development of PID systems that combines the use of (a) the Handle system, as a distributed system for the registration and first-degree resolution of persistent identifiers, and (b) the PID Service (Golodoniuc et al., 2015), to enable fine-grained resolution to different information object representations. The proposed approach solved the problem of guaranteed first-degree resolution of identifiers, but left fine-grained resolution and information delivery under the control of a single authoritative source, posing a risk to the long-term availability of information resources. Herein, we develop these approaches further and explore the potential of large-scale decentralisation at all levels: (i) persistent identifiers and information resources registration; (ii) identifier resolution; and (iii) data delivery. To achieve large-scale decentralisation

  1. Reliability Optimization of Radial Distribution Systems Employing Differential Evolution and Bare Bones Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Kela, K. B.; Arya, L. D.

    2014-09-01

    This paper describes a methodology for determination of optimum failure rate and repair time for each section of a radial distribution system. An objective function in terms of reliability indices and their target values is selected. These indices depend mainly on failure rate and repair time of a section present in a distribution network. A cost is associated with the modification of failure rate and repair time. Hence the objective function is optimized subject to failure rate and repair time of each section of the distribution network considering the total budget allocated to achieve the task. The problem has been solved using differential evolution and bare bones particle swarm optimization. The algorithm has been implemented on a sample radial distribution system.
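    As a flavor of the metaheuristic side, the sketch below wires a penalized version of such an objective into SciPy's differential evolution. The index formulas, cost model, target values, and budget are simplified placeholders for illustration, not the paper's formulation.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def objective(x, n, targets, budget):
        """Penalized objective over per-section failure rates lam and repair
        times r (placeholder index and cost models, for illustration only)."""
        lam, r = x[:n], x[n:]
        saifi = lam.sum()                            # interruptions per customer-year
        saidi = (lam * r).sum()                      # outage hours per customer-year
        cost = np.sum(1.0 / lam) + np.sum(1.0 / r)   # better indices cost more
        penalty = 1e3 * max(0.0, cost - budget)      # soft budget constraint
        return ((saifi / targets[0] - 1.0) ** 2
                + (saidi / targets[1] - 1.0) ** 2 + penalty)

    n = 6                                            # sections in the feeder
    bounds = [(0.05, 0.5)] * n + [(1.0, 8.0)] * n    # lam (1/yr), r (h)
    res = differential_evolution(objective, bounds,
                                 args=(n, (1.2, 4.0), 60.0), seed=0)
    ```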

  2. Universal scaling of optimal current distribution in transportation networks.

    PubMed

    Simini, Filippo; Rinaldo, Andrea; Maritan, Amos

    2009-04-01

    Transportation networks are inevitably selected with reference to their global cost, which depends on the strengths and the distribution of the embedded currents. We prove that optimal current distributions for a uniformly injected d-dimensional network exhibit robust scale-invariance properties, independently of the particular cost function considered, as long as it is convex. We find that, in the limit of large currents, the distribution decays as a power law with an exponent equal to (2d-1)/(d-1). The current distribution can be exactly calculated in d=2 for all values of the current. Numerical simulations further suggest that the scaling properties remain unchanged both for random injections and for randomized convex cost functions.
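    In display form, the reported large-current scaling of the current distribution is

    ```latex
    P(I) \;\sim\; I^{-\alpha}, \qquad \alpha = \frac{2d-1}{d-1},
    ```

    so planar networks (d = 2) give the exponent α = 3, while d = 3 gives α = 5/2.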

  3. Distributed memory approaches for robotic neural controllers

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1990-01-01

    The suitability is explored of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate gaussians centered around previously learned patterns. The second design involves variations of Adaptive Vector Quantizers or Self Organizing Maps. In these networks, random N dimensional points are given local connectivities. They are then exposed to training patterns and readjust their locations based on a nearest neighbor rule. Both approaches are tested on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest neighbor pattern recognition techniques.
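    The RBSDM recall step is simple enough to sketch: the output is a Gaussian-weighted average over stored pattern pairs. The kernel width sigma and the normalization below are assumptions of this sketch, not details from the paper; in the paper's task the inputs would be the two image-plane coordinate pairs and the outputs the joint coordinates.

    ```python
    import numpy as np

    class RBSDM:
        """Sketch of radial basis sparse distributed memory: recall is a
        normalized sum of Gaussians centered on previously stored patterns."""

        def __init__(self, sigma=0.5):
            self.sigma, self.X, self.Y = sigma, [], []

        def store(self, x, y):                    # learn one (input, target) pair
            self.X.append(np.asarray(x, float))
            self.Y.append(np.asarray(y, float))

        def recall(self, x):
            X, Y = np.array(self.X), np.array(self.Y)
            d2 = np.sum((X - np.asarray(x, float)) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * self.sigma ** 2))
            return (w @ Y) / np.sum(w)            # Gaussian-weighted average
    ```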

  4. Distributed Optimal Generation Control of Shipboard Power Systems

    DTIC Science & Technology

    2012-05-01

    address the needs of SPSs, a fully-distributed, multi-agent system (MAS)-based solution is proposed to optimize the control references of distributed... Transactions on Power Systems, Vol. 27, No. 1, pp. 233-242, Feb. 2012. [4] J. M. Solanki and N. N. Schulz, “Using intelligent multi-agent systems for shipboard... D 2005/2006, pp. 562-567, May 21-24, 2006. [11] J. A. Momoh, K. Alfred and Y. Xia, “Framework for Multi-Agent System (MAS) Detection and Control

  6. Optimal eigenvalue computation on distributed-memory MIMD multiprocessors

    SciTech Connect

    Crivelli, S.; Jessup, E. R.

    1992-10-01

    Simon proves that bisection is not the optimal method for computing an eigenvalue on a single vector processor. In this paper, we show that his analysis does not extend in a straightforward way to the computation of an eigenvalue on a distributed-memory MIMD multiprocessor. In particular, we show how the optimal number of sections (and processors) to use for multisection depends on variables such as the matrix size and certain parameters inherent to the machine. We also show that parallel multisection outperforms the variant of parallel bisection proposed by Swarztrauber for this problem on a distributed-memory MIMD multiprocessor. We present the results of experiments on the 64-processor Intel iPSC/2 hypercube and the 512-processor Intel Touchstone Delta mesh multiprocessor.

  7. Distribution function approach to redshift space distortions

    SciTech Connect

    Seljak, Uroš; McDonald, Patrick E-mail: pvmcdonald@lbl.gov

    2011-11-01

    We develop a phase space distribution function approach to redshift space distortions (RSD), in which the redshift space density can be written as a sum over velocity moments of the distribution function. These moments are density weighted and have well defined physical interpretation: their lowest orders are density, momentum density, and stress energy density. The series expansion is convergent if kμu/aH < 1, where k is the wavevector, H the Hubble parameter, u the typical gravitational velocity and μ = cos θ, with θ being the angle between the Fourier mode and the line of sight. We perform an expansion of these velocity moments into helicity modes, which are eigenmodes under rotation around the axis of Fourier mode direction, generalizing the scalar, vector, tensor decomposition of perturbations to an arbitrary order. We show that only equal helicity moments correlate and derive the angular dependence of the individual contributions to the redshift space power spectrum. We show that the dominant term of μ² dependence on large scales is the cross-correlation between the density and scalar part of momentum density, which can be related to the time derivative of the matter power spectrum. Additional terms contributing to μ² and dominating on small scales are the vector part of momentum density-momentum density correlations, the energy density-density correlations, and the scalar part of anisotropic stress density-density correlations. The second term is what is usually associated with the small scale Fingers-of-God damping and always suppresses power, but the first term comes with the opposite sign and always adds power. Similarly, we identify 7 terms contributing to μ⁴ dependence. Some of the advantages of the distribution function approach are that the series expansion converges on large scales and remains valid in multi-stream situations. We finish with a brief discussion of implications for RSD in galaxies relative to dark matter
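    Schematically, the expansion described above can be written as follows (a shorthand restatement; the precise definitions of the density-weighted moments are those given in the paper):

    ```latex
    \delta_s(\mathbf{k}) \;=\; \sum_{L=0}^{\infty} \frac{1}{L!}
    \left( \frac{i k \mu}{aH} \right)^{\!L} T_{\parallel}^{L}(\mathbf{k}),
    \qquad \text{convergent for} \quad \frac{k \mu u}{aH} < 1 ,
    ```

    where the lowest-order moments are the density, momentum density, and stress energy density mentioned in the abstract.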

  8. Power optimization of random distributed feedback fiber lasers.

    PubMed

    Vatnik, Ilya D; Churkin, Dmitry V; Babin, Sergey A

    2012-12-17

    We present a comprehensive study of the power output characteristics of random distributed feedback Raman fiber lasers. The calculated optimal slope efficiency of the backward wave generation in the one-arm configuration is shown to be as high as ~90% for a 1 W threshold. Nevertheless, in real applications the presence of even a small reflection at the fiber ends can appreciably degrade the power performance. The developed numerical model describes the experimental data well.

  9. Molecular Approaches for Optimizing Vitamin D Supplementation.

    PubMed

    Carlberg, Carsten

    2016-01-01

    Vitamin D can be synthesized endogenously within UV-B exposed human skin. However, avoidance of sufficient sun exposure via predominant indoor activities, textile coverage, dark skin at higher latitude, and seasonal variations makes the intake of vitamin D fortified food or direct vitamin D supplementation necessary. Via its biologically most active metabolite 1α,25-dihydroxyvitamin D and the transcription factor vitamin D receptor, vitamin D has a direct effect on the epigenome and transcriptome of many human tissues and cell types. Differing interpretations of results from observational studies with vitamin D have led to some dispute in the field over the desired optimal vitamin D level and the recommended daily supplementation. This chapter will provide background on the epigenome- and transcriptome-wide functions of vitamin D and will outline how this insight may be used for determining the optimal vitamin D status of human individuals. These reflections will lead to the concept of a personal vitamin D index that may be a better guideline for optimized vitamin D supplementation than population-based recommendations. © 2016 Elsevier Inc. All rights reserved.

  10. Selection of Reserves for Woodland Caribou Using an Optimization Approach

    PubMed Central

    Schneider, Richard R.; Hauer, Grant; Dawe, Kimberly; Adamowicz, Wiktor; Boutin, Stan

    2012-01-01

    Habitat protection has been identified as an important strategy for the conservation of woodland caribou (Rangifer tarandus). However, because of the economic opportunity costs associated with protection it is unlikely that all caribou ranges can be protected in their entirety. We used an optimization approach to identify reserve designs for caribou in Alberta, Canada, across a range of potential protection targets. Our designs minimized costs as well as three demographic risk factors: current industrial footprint, presence of white-tailed deer (Odocoileus virginianus), and climate change. We found that, using optimization, 60% of current caribou range can be protected (including 17% in existing parks) while maintaining access to over 98% of the value of resources on public lands. The trade-off between minimizing cost and minimizing demographic risk factors was minimal because the spatial distributions of cost and risk were similar. The prospects for protection are much reduced if protection is directed towards the herds that are most at risk of near-term extirpation. PMID:22363702

  12. An optimal transportation approach for nuclear structure-based pathology

    PubMed Central

    Wang, Wei; Ozolek, John A.; Slepčev, Dejan; Lee, Ann B.; Chen, Cheng; Rohde, Gustavo K.

    2012-01-01

    Nuclear morphology and structure as visualized from histopathology microscopy images can yield important diagnostic clues in some benign and malignant tissue lesions. Precise quantitative information about nuclear structure and morphology, however, is currently not available for many diagnostic challenges. This is due, in part, to the lack of methods to quantify these differences from image data. We describe a method to characterize and contrast the distribution of nuclear structure in different tissue classes (normal, benign, cancer, etc.). The approach is based on quantifying chromatin morphology in different groups of cells using the optimal transportation (Kantorovich-Wasserstein) metric in combination with the Fisher discriminant analysis and multidimensional scaling techniques. We show that the optimal transportation metric is able to measure relevant biological information as it enables automatic determination of the class (e.g. normal vs. cancer) of a set of nuclei. We show that the classification accuracies obtained using this metric are, on average, as good or better than those obtained utilizing a set of previously described numerical features. We apply our methods to two diagnostic challenges for surgical pathology: one in the liver and one in the thyroid. Results automatically computed using this technique show potentially biologically relevant differences in nuclear structure in liver and thyroid cancers. PMID:20977984
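    A minimal flavor of the transport-based comparison is sketched below, simplified to one-dimensional intensity samples per nucleus; the paper works with the full Kantorovich-Wasserstein metric on chromatin images, so treat this as an illustration of the distance computation only.

    ```python
    import numpy as np
    from scipy.stats import wasserstein_distance

    def pairwise_w1(samples):
        """Pairwise 1-D optimal transport (W1) distances between per-nucleus
        intensity samples; the matrix can feed multidimensional scaling and
        Fisher discriminant analysis downstream, as in the described pipeline."""
        n = len(samples)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                D[i, j] = D[j, i] = wasserstein_distance(samples[i], samples[j])
        return D
    ```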

  13. Comparison of two spatial optimization techniques: a framework to solve multiobjective land use distribution problems.

    PubMed

    Meyer, Burghard Christian; Lescot, Jean-Marie; Laplana, Ramon

    2009-02-01

    Two spatial optimization approaches, developed from the opposing perspectives of ecological economics and landscape planning and aimed at the definition of new distributions of farming systems and of land use elements, are compared and integrated into a general framework. The first approach, applied to a small river catchment in southwestern France, uses SWAT (Soil and Water Assessment Tool) and a weighted goal programming model in combination with a geographical information system (GIS) for the determination of optimal farming system patterns, based on selected objective functions to minimize deviations from the goals of reducing nitrogen and maintaining income. The second approach, demonstrated in a suburban landscape near Leipzig, Germany, defines a GIS-based predictive habitat model for the search of unfragmented regions suitable for hare populations (Lepus europaeus), followed by compromise optimization with the aim of planning a new habitat structure distribution for the hare. The multifunctional problem is solved by the integration of the three landscape functions ("production of cereals," "resistance to soil erosion by water," and "landscape water retention"). Through the comparison, we propose a framework for the definition of optimal land use patterns based on optimization techniques. The framework includes the main aspects to solve land use distribution problems with the aim of finding the optimal or best land use decisions. It integrates indicators, goals of spatial developments and stakeholders, including weighting, and model tools for the prediction of objective functions and risk assessments. Methodological limits of the uncertainty of data and model outcomes are stressed. The framework clarifies the use of optimization techniques in spatial planning.

  15. Scalar and Multivariate Approaches for Optimal Network Design in Antarctica

    NASA Astrophysics Data System (ADS)

    Hryniw, Natalia

    Observations are crucial for weather and climate, not only for daily forecasts and logistical purposes, but for maintaining representative records and for tuning atmospheric models. Here, scalar theory for optimal network design is expanded into a multivariate framework to allow optimal station siting for full-field optimization. Ensemble sensitivity theory is extended to produce the covariance trace approach, which optimizes for the trace of the covariance matrix. Relative entropy is also used for multivariate optimization, as an information theory approach to finding optimal locations. Antarctic surface temperature data are used as a testbed for these methods. The two methods produce different results, which are tied to the fundamental physical parameters of the Antarctic temperature field.

  16. A Regression Design Approach to Optimal and Robust Spacing Selection.

    DTIC Science & Technology

    1981-07-01

    release and sale; its distribution is unlimited. ...such as the Cauchy, where A is a constant multiple of the identity. In fact, for the Cauchy distribution asymptotically optimal spacing sequences for

  17. A system approach to aircraft optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.

  18. Optimal Voltage Regulation for Unbalanced Distribution Networks Considering Distributed Energy Resources

    SciTech Connect

    Xu, Yan; Tomsovic, Kevin

    2015-01-01

    With increasing penetration of distributed generation in distribution networks (DN), the secure and optimal operation of DN has become an important concern. In this paper, an iterative quadratically constrained quadratic programming model to minimize voltage deviations and maximize distributed energy resource (DER) active power output in a three-phase unbalanced distribution system is developed. The optimization model is based on the linearized sensitivity coefficients between controlled variables (e.g., node voltages) and control variables (e.g., real and reactive power injections of DERs). To avoid oscillation of the solution as it approaches the optimum, a golden search method is introduced to control the step size. Numerical simulations on a modified IEEE 13-node test feeder show the efficiency of the proposed model. Compared to the results obtained by heuristic search (harmony algorithm), the proposed model converges quickly to the global optimum.
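    The step-size control mentioned above can be pictured with a standard golden-section line search; the sketch below is generic, and the names voltage_deviation, x, and dx in the usage comment are hypothetical stand-ins for the paper's objective and update direction.

    ```python
    import numpy as np

    def golden_section(f, a, b, tol=1e-4):
        """Standard golden-section search for the minimizer of a unimodal f
        on [a, b]; used here to damp oscillation by scaling the control step."""
        g = (np.sqrt(5.0) - 1.0) / 2.0            # golden ratio conjugate ~0.618
        c, d = b - g * (b - a), a + g * (b - a)   # two interior probe points
        while abs(b - a) > tol:
            if f(c) < f(d):                       # minimum lies in [a, d]
                b, d = d, c
                c = b - g * (b - a)
            else:                                 # minimum lies in [c, b]
                a, c = c, d
                d = a + g * (b - a)
        return 0.5 * (a + b)

    # usage (hypothetical names):
    # step = golden_section(lambda t: voltage_deviation(x + t * dx), 0.0, 1.0)
    ```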

  19. Constrained nonlinear optimization approaches to color-signal separation.

    PubMed

    Chang, P R; Hsieh, T H

    1995-01-01

    Separating a color signal into illumination and surface reflectance components is a fundamental issue in color reproduction and constancy. This can be carried out by minimizing the error in the least squares (LS) fit of the product of the illumination and the surface spectral reflectance to the actual color signal. When taking into account the physical realizability constraints on the surface reflectance and illumination, the feasible solutions to the nonlinear LS problem should satisfy a number of linear inequalities. Four distinct novel optimization algorithms are presented that employ these constraints to minimize the nonlinear LS fitting error. The first approach, which is based on Ritter's superlinearly convergent method (Luenberger, 1980), provides a computationally superior algorithm for finding the minimum solution to the nonlinear LS error problem subject to linear inequality constraints. Unfortunately, this gradient-like algorithm may sometimes be trapped at a local minimum or become unstable when the parameters involved in the algorithm are not tuned properly. The remaining three methods are based on the stable and promising global minimizer called simulated annealing. The annealing algorithm can always find the global minimum solution with probability one, but its convergence is slow. To tackle this, a cost-effective variable-separable formulation based on the concept of Golub and Pereyra (1973) is adopted to reduce the nonlinear LS problem to a small-scale one. The computational efficiency can be further improved when the original Boltzmann generating distribution of the classical annealing is replaced by the Cauchy distribution.
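    To illustrate the Cauchy-generating variant mentioned at the end, here is a generic fast-annealing sketch; the 1/k temperature schedule, the Metropolis-style acceptance rule, and handling the realizability constraints by rejection are assumptions of this sketch rather than details of the paper's four algorithms.

    ```python
    import numpy as np

    def fast_anneal(f, x0, feasible, T0=1.0, iters=5000, seed=0):
        """Simulated annealing with Cauchy-distributed moves; `feasible`
        enforces the linear inequality (realizability) constraints by rejection."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, float)
        fx = f(x)
        best, fbest = x.copy(), fx
        for k in range(1, iters + 1):
            T = T0 / k                             # fast-annealing cooling schedule
            cand = x + T * rng.standard_cauchy(x.size)
            if not feasible(cand):                 # reject unrealizable candidates
                continue
            fc = f(cand)
            if fc < fx or rng.random() < np.exp((fx - fc) / T):
                x, fx = cand, fc                   # Metropolis-style acceptance
                if fc < fbest:
                    best, fbest = x.copy(), fc
        return best, fbest
    ```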

  20. Nonlinear optimization approach for Fourier ptychographic microscopy.

    PubMed

    Zhang, Yongbing; Jiang, Weixin; Dai, Qionghai

    2015-12-28

    Fourier ptychographic microscopy (FPM) has recently been proposed as a computational imaging method to bypass the limitation of the space-bandwidth product of traditional optical systems. It employs a sequence of low-resolution images captured under angularly varying illumination and applies a phase retrieval algorithm to iteratively reconstruct a wide-field, high-resolution image. In current FPM imaging systems, system uncertainties, such as the pupil aberration of the employed optics, may significantly degrade the quality of the reconstruction. In this paper, we develop and test a nonlinear optimization algorithm to improve the robustness of the FPM imaging system by simultaneously considering the reconstruction and the system imperfections. Analytical expressions for the gradient of a squared-error metric with respect to the object and illumination allow joint optimization of the object and system parameters. The algorithm achieves superior reconstructions when the system parameters are inaccurately known or in the presence of noise, and corrects the pupil aberrations simultaneously. Experiments on both synthetic and real captured data validate the effectiveness of the proposed method.

  1. A Communication-Optimal Framework for Contracting Distributed Tensors

    SciTech Connect

    Rajbhandari, Samyam; NIkam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-11-16

    Tensor contractions are extremely compute-intensive generalized matrix multiplication operations encountered in many computational science fields, such as quantum chemistry and nuclear physics. Unlike distributed matrix multiplication, which has been extensively studied, limited work has been done in understanding distributed tensor contractions. In this paper, we characterize distributed tensor contraction algorithms on torus networks. We develop a framework with three fundamental communication operators to generate communication-efficient contraction algorithms for arbitrary tensor contractions. We show that for a given amount of memory per processor, our framework is communication optimal for all tensor contractions. We demonstrate the performance and scalability of our framework on up to 262,144 cores of a BG/Q supercomputer using five tensor contraction examples.
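    For readers unfamiliar with the term, a tensor contraction generalizes matrix multiplication by summing over shared indices of higher-order arrays; a typical example in NumPy (shapes illustrative):

    ```python
    import numpy as np

    # C[a, b, i, j] = sum over c, d of A[a, c, i, d] * B[d, b, c, j]
    A = np.random.rand(8, 6, 4, 5)
    B = np.random.rand(5, 7, 6, 3)
    C = np.einsum("acid,dbcj->abij", A, B)    # contraction over indices c and d
    print(C.shape)                            # (8, 7, 4, 3)
    ```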

  2. Optimization approaches to volumetric modulated arc therapy planning.

    PubMed

    Unkelbach, Jan; Bortfeld, Thomas; Craft, David; Alber, Markus; Bangert, Mark; Bokrantz, Rasmus; Chen, Danny; Li, Ruijiang; Xing, Lei; Men, Chunhua; Nill, Simeon; Papp, Dávid; Romeijn, Edwin; Salari, Ehsan

    2015-03-01

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  4. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DOE PAGES

    Ding, Tao; Li, Cheng; Huang, Can; ...

    2017-01-09

    Here, in order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impacts of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve the global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
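    The curve-fitting handoff can be sketched as follows; solve_slave_opf (the distribution network's optimal dispatch cost as a function of its feeder-bus injection) and the quadratic degree are hypothetical stand-ins for the paper's construction.

    ```python
    import numpy as np

    def slave_cost_curve(solve_slave_opf, p_grid, deg=2):
        """Sweep candidate feeder-bus injections, solve the slave (distribution)
        optimization at each, and fit a polynomial cost curve for the master."""
        costs = [solve_slave_opf(p) for p in p_grid]
        return np.polyfit(p_grid, costs, deg)     # coefficients, highest power first

    # The transmission-level master model then adds np.polyval(coeffs, p_feeder)
    # as a smooth surrogate for each distribution network's optimal cost.
    ```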

  5. An Optimality-Based Fully-Distributed Watershed Ecohydrological Model

    NASA Astrophysics Data System (ADS)

    Chen, L., Jr.

    2015-12-01

    Watershed ecohydrological models are essential tools to assess the impact of climate change and human activities on hydrological and ecological processes for watershed management. Existing models can be classified as empirically based, quasi-mechanistic, and mechanistic models. The empirically based and quasi-mechanistic models usually adopt empirical or quasi-empirical equations, which may be incapable of capturing non-stationary dynamics of target processes. Mechanistic models that are designed to represent process feedbacks may capture vegetation dynamics, but often have more demanding spatial and temporal parameterization requirements to represent vegetation physiological variables. In recent years, optimality-based ecohydrological models have been proposed, which have the advantage of reducing the need for model calibration by assuming critical aspects of system behavior. However, this work to date has been limited to the plot scale, considering only one-dimensional exchange of soil moisture, carbon, and nutrients in vegetation parameterization, without lateral hydrological transport. Conceptual isolation of individual ecosystem patches from upslope and downslope flow paths compromises the ability to represent and test the relationships between hydrology and vegetation in mountainous and hilly terrain. This work presents an optimality-based watershed ecohydrological model, which incorporates the influence of lateral hydrological processes on the hydrological flow-path patterns that emerge from the optimality assumption. The model has been tested in the Walnut Gulch watershed and shows good agreement with observed temporal and spatial patterns of evapotranspiration (ET) and gross primary productivity (GPP). The spatial variability of ET and GPP produced by the model matches the spatial distribution of TWI, SCA, and slope well over the area. Compared with the one-dimensional vegetation optimality model (VOM), we find that the distributed VOM (DisVOM) produces more reasonable spatial

  6. Principled negotiation and distributed optimization for advanced air traffic management

    NASA Astrophysics Data System (ADS)

    Wangermann, John Paul

    Today's aircraft/airspace system faces complex challenges. Congestion and delays are widespread as air traffic continues to grow. Airlines want to better optimize their operations, and general aviation wants easier access to the system. Additionally, the accident rate must decline just to keep the number of accidents each year constant. New technology provides an opportunity to rethink the air traffic management process. Faster computers, new sensors, and high-bandwidth communications can be used to create new operating models. The choice is no longer between "inflexible" strategic separation assurance and "flexible" tactical conflict resolution. With suitable operating procedures, it is possible to have strategic, four-dimensional separation assurance that is flexible and allows system users maximum freedom to optimize operations. This thesis describes an operating model based on principled negotiation between agents. Many multi-agent systems have agents with different, competing interests that nonetheless share an interest in coordinating their actions. Principled negotiation is a method of finding agreement between agents with different interests. By focusing on fundamental interests and searching for options for mutual gain, agents with different interests reach agreements that provide benefits for both sides. Using principled negotiation, distributed optimization by each agent can be coordinated, leading to iterative optimization of the system. Principled negotiation is well-suited to aircraft/airspace systems. It allows aircraft and operators to propose changes to air traffic control. Air traffic managers check that a proposal maintains required aircraft separation. If it does, the proposal is either accepted or passed for approval to the agents whose trajectories change as part of the proposal. Aircraft and operators can use all the data at hand to develop proposals that optimize their operations, while traffic managers can focus on their primary duty of ensuring

  7. Multiobjective sensitivity analysis and optimization of a distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-03-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems that naturally have multiple, often conflicting objectives. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the distributed hydrologic model MOBIDIC, which combines two sensitivity analysis techniques (the Morris method and the State Dependent Parameter method) with a multiobjective optimization (MOO) approach, ϵ-NSGAII. This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error of logarithmic transformed discharge, a water balance index, and the mean absolute error of the logarithmic transformed flow duration curve, and its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead Simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show: (1) the two sensitivity analysis techniques are effective and efficient in determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization; (2) both MOO and SOO lead to acceptable simulations, e.g., for MOO, the average Nash-Sutcliffe is 0.75 in the calibration period and 0.70 in the validation period; (3) evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored; (4) compared to SOO, which was dependent on the initial starting location, MOO provides more insight into parameter sensitivity and the conflicting characteristics of the objective functions. Multiobjective sensitivity analysis and optimization
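    As a flavor of the screening step, the sketch below computes elementary effects in a simplified one-at-a-time fashion (the full Morris method uses randomized trajectories); model and bounds are placeholders for MOBIDIC runs and parameter ranges.

    ```python
    import numpy as np

    def elementary_effects(model, bounds, r=20, delta=0.1, seed=0):
        """Simplified Morris-style screening: the mean absolute effect ranks
        parameter sensitivity; the std of effects flags interaction/nonlinearity."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        k = lo.size
        effects = np.zeros((r, k))
        for t in range(r):
            x = lo + rng.random(k) * (hi - lo) * (1.0 - delta)  # room for the step
            fx = model(x)
            for i in range(k):
                xp = x.copy()
                xp[i] += delta * (hi[i] - lo[i])                # perturb one input
                effects[t, i] = (model(xp) - fx) / delta
        return np.abs(effects).mean(axis=0), effects.std(axis=0)
    ```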

  8. A hybrid simulation-optimization approach for solving the areal groundwater pollution source identification problems

    NASA Astrophysics Data System (ADS)

    Ayvaz, M. Tamer

    2016-07-01

    In this study, a new simulation-optimization approach is proposed for solving areal groundwater pollution source identification problems, which are ill-posed inverse problems. In the simulation part of the proposed approach, groundwater flow and pollution transport processes are simulated by modeling the given aquifer system with the MODFLOW and MT3DMS models. The developed simulation model is then integrated into a newly proposed hybrid optimization model in which a binary genetic algorithm and a generalized reduced gradient method are used together. This is a novel approach, employed for the first time for areal pollution source identification problems. The objective of the proposed hybrid optimization approach is to simultaneously identify the spatial distributions and input concentrations of the unknown areal groundwater pollution sources by using a limited number of pollution concentration time series at the monitoring well locations. The applicability of the proposed simulation-optimization approach is evaluated on a hypothetical aquifer model for different pollution source distributions. Furthermore, model performance is evaluated for measurement error conditions, different genetic algorithm parameter combinations, different numbers and locations of the monitoring wells, and different heterogeneous hydraulic conductivity fields. The identified results indicate that the proposed simulation-optimization approach may be an effective way to solve areal groundwater pollution source identification problems.
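    The hybrid loop can be pictured as follows: an outer binary GA proposes which cells host sources, and an inner continuous solver fits their concentrations to the monitoring data. This sketch uses SciPy's SLSQP as a stand-in for the paper's generalized reduced gradient method; simulate (a wrapper around the MODFLOW/MT3DMS run) and the concentration bounds are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def candidate_fitness(mask, simulate, observed):
        """Inner solve for one GA candidate: `mask` is the binary source layout;
        fit the active sources' concentrations to the observed time series."""
        k = int(mask.sum())
        if k == 0:
            return np.inf                          # no sources: infeasible candidate

        def misfit(c):
            return float(np.sum((simulate(mask, c) - observed) ** 2))

        res = minimize(misfit, x0=np.full(k, 10.0), method="SLSQP",
                       bounds=[(0.0, 500.0)] * k)  # mg/L bounds (assumed)
        return res.fun                             # the GA minimizes this fitness
    ```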

  9. Multi-resolution imaging with an optimized number and distribution of sampling points.

    PubMed

    Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo

    2014-05-05

    We propose an approach of interest in Imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, of the number of sampling points therein, and their spatial distribution, in the case of single frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region in terms of a finite dimensional algebraic linear inverse problem. The dimension of the scanning region, the number and the positions of the sampling points are optimally determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via a numerical and experimental analysis.
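    One simple proxy for the singular-value-driven selection described above is a greedy maximin rule over candidate sampling points; the abstract's method optimizes the singular-value behavior of the operator more carefully, so treat this purely as an illustration.

    ```python
    import numpy as np

    def greedy_sampling_points(A, m):
        """Greedily pick m rows (sampling points) of the candidate measurement
        matrix A, each time maximizing the smallest retained singular value."""
        chosen = []
        for _ in range(m):
            best_i, best_s = None, -np.inf
            for i in range(A.shape[0]):
                if i in chosen:
                    continue
                s = np.linalg.svd(A[chosen + [i]], compute_uv=False)
                if s[-1] > best_s:                 # keep the spectrum well-behaved
                    best_i, best_s = i, s[-1]
            chosen.append(best_i)
        return chosen
    ```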

  10. Optimal Strategy of Efficiency Power Plant with Battery Electric Vehicle in Distribution Network

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Su, Su; Li, Shunxin; Wang, Wei; Yang, Tiantian; Li, Mengjuan; Ota, Yutaka

    2017-05-01

    With the growing popularity of electric vehicles (EVs), such as plug-in hybrid electric vehicles (PHEVs) and battery electric vehicles (BEVs), an optimal strategy for the coordination of BEV charging is proposed in this paper. The proposed approach incorporates the random behaviours and regular behaviours of BEV drivers in an urban environment; these behaviours lead to the stochastic nature of the charging demand. The optimal strategy is used to guide the coordinated charging at different times to maximize the efficiency of the virtual power plant (VPP). An innovative peer-to-peer system is used with BEVs to achieve these goals. The actual behaviour of vehicles on a campus is used to validate the proposed approach, and the simulation results show that the optimal strategy can not only maximize the utilization ratio of the efficiency power plant but also avoid drawing additional energy from the distribution grid.

  11. Optimal design of light distribution of LED luminaires for road lighting

    NASA Astrophysics Data System (ADS)

    Lai, Wei; Chen, Weimin; Liu, Xianming; Lei, Xiaohua

    2011-10-01

    Conventional road lighting luminaires are nowadays gradually being replaced by LED luminaires. An urgent problem is to design the light distribution of LED luminaires mounted at the existing luminaire positions so that they are both energy-saving and capable of meeting the road lighting requirements set by the International Commission on Illumination (CIE). In this paper, a nonlinear optimization approach is proposed for the light distribution design of LED road lighting luminaires, in which the average road surface luminance, overall road surface luminance uniformity, longitudinal road surface luminance uniformity, glare, and surround ratio specified by the CIE are set as constraint conditions to minimize the total luminous flux. The nonlinear problem can be transformed into a linear problem by applying a rational equivalent transformation to the constraint conditions. A polynomial of cosine functions for the illumination distribution on the road is used to make the problem solvable and to construct smooth light distribution curves. Taking the C2 class road with the five lighting classes M1 to M5 defined by the CIE as an example, the most energy-saving light distributions are obtained with the above method. Compared with a sample luminaire produced by a linear optimization method, the LED luminaire with the theoretically optimal light distribution in this paper can save at least 40% of the energy.

  12. Optimization of wind plant layouts using an adjoint approach

    DOE PAGES

    King, Ryan N.; Dykes, Katherine; Graf, Peter; ...

    2017-03-10

    Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.
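    The cost-independence claim rests on the standard adjoint identity: with a state equation R(u, m) = 0 (here the RANS model, with controls m the turbine coordinates), one extra adjoint solve yields the full gradient regardless of how many controls there are:

    ```latex
    \left( \frac{\partial R}{\partial u} \right)^{\!T} \lambda
      = \left( \frac{\partial J}{\partial u} \right)^{\!T},
    \qquad
    \frac{dJ}{dm} = \frac{\partial J}{\partial m} - \lambda^{T} \frac{\partial R}{\partial m}.
    ```

    The single adjoint solve for λ replaces the one-flow-solve-per-control-variable cost of finite differencing.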

  13. Distributed computer system enhances productivity for SRB joint optimization

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1987-01-01

    Initial calculations of a redesign of the solid rocket booster joint that failed during the shuttle tragedy showed that the design had a weight penalty associated with it. Optimization techniques were to be applied to determine if there was any way to reduce the weight while keeping the joint opening closed and limiting the stresses. To allow engineers to examine as many alternatives as possible, a system consisting of existing software that coupled structural analysis with optimization was developed to execute on a network of computer workstations. To speed turnaround, this system took advantage of the parallelism offered by the finite difference technique of computing gradients, allowing several workstations to contribute to the solution of the problem simultaneously. The resulting system reduced the time to complete one optimization cycle from two hours to one-half hour, with the potential of reducing it to 15 minutes. The current distributed system, which contains numerous extensions, requires one hour of turnaround per optimization cycle. This would take four hours for the sequential system.
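    The finite-difference parallelism is easy to picture in modern terms: each perturbed analysis is independent, so one analysis can be farmed out per worker. A hypothetical sketch follows (the original system distributed structural analyses across workstations; here a process pool stands in, and f is a placeholder for the coupled analysis).

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def fd_gradient_parallel(f, x, h=1e-6, workers=8):
        """Forward-difference gradient with each perturbed analysis farmed out
        to a separate worker, mirroring the one-analysis-per-workstation idea."""
        xs = [x.copy() for _ in range(len(x))]
        for i, xi in enumerate(xs):
            xi[i] += h                        # perturb one design variable each
        f0 = f(x)                             # baseline analysis
        with ProcessPoolExecutor(max_workers=workers) as pool:
            fs = list(pool.map(f, xs))        # perturbed analyses in parallel
        return (np.array(fs) - f0) / h
    ```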

  14. Approaches for Informing Optimal Dose of Behavioral Interventions

    PubMed Central

    King, Heather A.; Maciejewski, Matthew L.; Allen, Kelli D.; Yancy, William S.; Shaffer, Jonathan A.

    2015-01-01

    Background There is little guidance about how to select dose parameter values when designing behavioral interventions. Purpose The purpose of this study is to present approaches to inform intervention duration, frequency, and amount when (1) the investigator has no a priori expectation and is seeking a descriptive approach for identifying and narrowing the universe of dose values or (2) the investigator has an a priori expectation and is seeking validation of this expectation using an inferential approach. Methods Strengths and weaknesses of various approaches are described and illustrated with examples. Results Descriptive approaches include retrospective analysis of data from randomized trials, assessment of perceived optimal dose via prospective surveys or interviews of key stakeholders, and assessment of target patient behavior via prospective, longitudinal, observational studies. Inferential approaches include nonrandomized, early-phase trials and randomized designs. Conclusions By utilizing these approaches, researchers may more efficiently apply resources to identify the optimal values of dose parameters for behavioral interventions. PMID:24722964

  15. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.

    PubMed

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-09-21

    In order to utilize the distributed character of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids situations in which model training is disturbed by tasks unrelated to the sensors.
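    To make the idea concrete, here is a hypothetical illustration of a dynamically adjusted staleness bound in the spirit of DSP: fast workers may run ahead of the slowest by at most `bound` iterations, and the monitor tightens or loosens the bound as stragglers appear. The class, thresholds, and monitoring rule are this sketch's assumptions, not the paper's specification.

    ```python
    class DynamicSyncMonitor:
        """Hypothetical bounded-staleness controller for a parameter server."""

        def __init__(self, n_workers, bound=8):
            self.clock = [0] * n_workers   # per-worker iteration counters
            self.bound = bound             # max allowed lead over slowest worker

        def may_proceed(self, w):
            # a worker proceeds only while within `bound` steps of the slowest
            return self.clock[w] - min(self.clock) < self.bound

        def report(self, w, iter_time, avg_time):
            self.clock[w] += 1
            # performance monitoring: tighten synchronization for stragglers,
            # relax it again when all workers keep pace
            if iter_time > 1.5 * avg_time and self.bound > 1:
                self.bound -= 1
            elif iter_time < 0.8 * avg_time and self.bound < 8:
                self.bound += 1
    ```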

  16. Optimization of coupled systems: A critical overview of approaches

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A unified overview is given of problem formulation approaches for the optimization of multidisciplinary coupled systems. The overview includes six fundamental approaches upon which a large number of variations may be made. Consistent approach names and a compact approach notation are given. The approaches are formulated to apply to general nonhierarchic systems. The approaches are compared both from a computational viewpoint and a managerial viewpoint. Opportunities for parallelism of both computation and manpower resources are discussed. Recommendations regarding the need for future research are advanced.

  17. Multigrid methods for parabolic distributed optimal control problems

    NASA Astrophysics Data System (ADS)

    Borzì, Alfio

    2003-08-01

    Multigrid schemes that solve parabolic distributed optimality systems discretized by finite differences are investigated. Accuracy properties of the finite difference approximation are discussed and validated. Two multigrid methods are considered which are based on a robust relaxation technique and use two different coarsening strategies: semicoarsening and standard coarsening. The resulting multigrid algorithms are robust with respect to changes in the value of ν, the weight of the cost of the control, provided ν is sufficiently small. Fourier mode analysis is used to investigate the dependence of the linear two-grid convergence factor on ν and on the discretization parameters. Results of numerical experiments are reported that demonstrate the sharpness of the Fourier analysis estimates. A multigrid algorithm that solves optimal control problems with box constraints on the control is also considered.
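    A model problem of the type considered is the tracking formulation below, stated for orientation (sign conventions for the optimality system vary):

    ```latex
    \min_{u}\; \tfrac{1}{2}\,\|y - y_d\|_{L^2(Q)}^{2} + \tfrac{\nu}{2}\,\|u\|_{L^2(Q)}^{2}
    \quad \text{s.t.} \quad y_t - \Delta y = u \ \text{in } Q,
    ```

    whose first-order optimality system couples the state equation with the adjoint equation -p_t - Δp = y - y_d and the gradient condition νu + p = 0; the weight ν is the parameter whose smallness the robustness statement above refers to.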

  18. Strategic approaches to optimizing peptide ADME properties.

    PubMed

    Di, Li

    2015-01-01

    Development of peptide drugs is challenging but also quite rewarding. Five blockbuster peptide drugs are currently on the market, and six new peptides received first marketing approval as new molecular entities in 2012. Although peptides only represent 2% of the drug market, that market is growing twice as quickly and might soon occupy a larger niche. Natural peptides typically have poor absorption, distribution, metabolism, and excretion (ADME) properties, with rapid clearance, short half-life, low permeability, and sometimes low solubility. Strategies have been developed to improve peptide druggability through enhancing permeability, reducing proteolysis and renal clearance, and prolonging half-life. In vivo, in vitro, and in silico tools are available to evaluate the ADME properties of peptides, and structural modification strategies are in place to improve peptide developability.

  19. Stochastic Real-Time Optimal Control: A Pseudospectral Approach for Bearing-Only Trajectory Optimization

    DTIC Science & Technology

    2011-09-01

    York, NY, 1992. [5] A.V. Savkin, P.N. Pathirana, and F. Faruqi. The problem of precision missile guidance: LQR and H∞ control frameworks. IEEE... STOCHASTIC REAL-TIME OPTIMAL CONTROL: A PSEUDOSPECTRAL APPROACH FOR BEARING-ONLY TRAJECTORY OPTIMIZATION. DISSERTATION, Steven M. Ross, Lieutenant... the U.S. Government and is not subject to copyright protection in the United States. AFIT/DS/ENY/11-24

  20. Practical Approaches to Optimize Adolescent Immunization.

    PubMed

    Bernstein, Henry H; Bocchini, Joseph A

    2017-03-01

    With the expansion of the adolescent immunization schedule during the past decade, immunization rates notably vary by vaccine and by state. Addressing barriers to improving adolescent vaccination rates is a priority. Every visit can be viewed as an opportunity to update and complete an adolescent's immunizations. It is essential to continue to focus and refine the appropriate techniques in approaching the adolescent patient and parent in the office setting. Health care providers must continuously strive to educate their patients and develop skills that can help parents and adolescents overcome vaccine hesitancy. Research on strategies to achieve higher vaccination rates is ongoing, and it is important to increase the knowledge and implementation of these strategies. This clinical report focuses on increasing adherence to the universally recommended vaccines in the annual adolescent immunization schedule of the American Academy of Pediatrics, the American Academy of Family Physicians, the Centers for Disease Control and Prevention, and the American Congress of Obstetricians and Gynecologists. This will be accomplished by (1) examining strategies that heighten confidence in immunizations and address patient and parental concerns to promote adolescent immunization and (2) exploring how best to approach the adolescent and family to improve immunization rates. Copyright © 2017 by the American Academy of Pediatrics.

  1. Optimal-control theoretic methods for optimization and regulation of distributed parameter systems

    NASA Astrophysics Data System (ADS)

    Goss, Jennifer Dawn

    Optimal control and optimization of distributed parameter systems are discussed in the context of a common control framework. The adjoint method of optimization and the traditional linear quadratic regulator implementation of optimal control both employ adjoint or costate variables in the determination of control variable progression. Both theories also benefit from a reduced-order model approximation in their execution. This research aims to draw clear parallels between optimization and optimal control utilizing these similarities. Several applications are presented showing the use of adjoint/costate variables and reduced order models in optimization and optimal control problems. The adjoint method for shape optimization is derived and implemented for the quasi-one-dimensional duct and two variations of a two-dimensional double ramp inlet. All applications are governed by the Euler equations. The quasi-one-dimensional duct is solved first to test the adjoint method and to verify the results against an analytical solution. The method is then adapted to solve the shape optimization of the double ramp inlet. A finite volume solver is tested on the flow equations and then implemented for the corresponding adjoint equations. The gradient of the cost function with respect to the shape parameters is derived based on the computed adjoint variables. The same inlet shape optimization problem is then solved using a reduced order model. The basis functions in the reduced order model are computed using the method-of-snapshots form of proper orthogonal decomposition. The corresponding weights are derived using an optimization in the design parameter space to match the reduced order model to the original snapshots. A continuous map of these weights in terms of the design variables is obtained via response surface approximations and artificial neural networks. This map is then utilized in an optimization problem to determine the optimal inlet shape. As in the adjoint method
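    As a concrete illustration of the method-of-snapshots step described above, here is a minimal POD sketch (assuming snapshots are stored column-wise; the energy threshold and helper name are illustrative):

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, energy: float = 0.99):
    """Method of snapshots via thin SVD.

    snapshots: (n_dof, n_snap) array whose columns are flow solutions
    computed at sampled design points.
    Returns the first r POD modes capturing `energy` of the variance.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    captured = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(captured, energy)) + 1
    return U[:, :r], s[:r]

# A reduced-order approximation of a new solution q is then q ~ Phi @ w,
# with the weights w fitted, or mapped from the design variables by a
# response surface / neural network as in the dissertation.
```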

  2. Mathematical optimization of matter distribution for a planetary system configuration

    NASA Astrophysics Data System (ADS)

    Morozov, Yegor; Bukhtoyarov, Mikhail

    2016-07-01

    Planetary formation is mostly a random process. When humanity reaches the point at which it can transform planetary systems for the purpose of interstellar life expansion, the optimal distribution of matter in a planetary system will determine its population and expansive potential. Maximization of the planetary system carrying capacity and its potential for interstellar life expansion depends on planetary sizes, orbits, rotation, chemical composition and other vital parameters. The distribution of planetesimals to achieve maximal carrying capacity of the planets during their life cycle, and maximal potential to inhabit other planetary systems, must be calculated comprehensively. Moving much material from one planetary system to another is uneconomic because of the high amounts of energy and time required. Terraforming particular planets before the whole planetary system is configured might drastically decrease the potential habitability of the whole system. Thus a planetary system is the basic unit for calculations to sustain maximal overall population and expand further. The mathematical model of optimization of matter distribution for a planetary system configuration includes the input observed parameters: the map of material orbiting in the planetary system with specified orbits, masses, sizes, and the chemical composition of each, and the optimized output parameters. The optimized output parameters are sizes, masses, the number of planets, their chemical composition, and masses of the satellites required to produce tidal forces. The magnetic fields and planetary rotations are also crucial, but they will be considered in further versions of this model. The optimization criterion is the maximal carrying capacity plus maximal expansive potential of the planetary system. The maximal carrying capacity means the availability of essential life ingredients on the planetary surface, and the maximal expansive potential means availability of uranium and metals to build

  3. Comparative Properties of Collaborative Optimization and Other Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    1999-01-01

    We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.

  5. A collective neurodynamic optimization approach to bound-constrained nonconvex optimization.

    PubMed

    Yan, Zheng; Wang, Jun; Li, Guocheng

    2014-07-01

    This paper presents a novel collective neurodynamic optimization method for solving nonconvex optimization problems with bound constraints. First, it is proved that a one-layer projection neural network has the property that its equilibria are in one-to-one correspondence with the Karush-Kuhn-Tucker points of the constrained optimization problem. Next, a collective neurodynamic optimization approach is developed by utilizing a group of recurrent neural networks in the framework of particle swarm optimization, emulating the paradigm of brainstorming. Each recurrent neural network carries out a precise constrained local search according to its own neurodynamic equations. By iteratively improving the solution quality of each recurrent neural network using information about the locally and globally best known solutions, the group can obtain the global optimal solution to a nonconvex optimization problem. The advantages of the proposed collective neurodynamic optimization approach over evolutionary approaches lie in its constraint-handling ability and real-time computational efficiency. The effectiveness and characteristics of the proposed approach are illustrated using a set of multimodal benchmark functions.
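    A minimal sketch of the two-level idea, assuming discretized projected-gradient dynamics as the local neurodynamic search and a textbook PSO update coordinating the initial states (the paper's actual model is a one-layer projection neural network; the Rastrigin objective is only a stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi = -5.12, 5.12

def f(x):                      # example nonconvex objective (Rastrigin)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def grad_f(x):
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

def local_search(x0, steps=200, eta=0.01):
    # Discretized projection dynamics: projected gradient descent in the box.
    x = x0.copy()
    for _ in range(steps):
        x = np.clip(x - eta * grad_f(x), lo, hi)
    return x

# Collective level: PSO re-seeds the initial states of the local searches.
n, dim = 20, 5
pos = rng.uniform(lo, hi, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.array([f(local_search(p)) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(30):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    for i in range(n):
        xi = local_search(pos[i])          # precise constrained local search
        vi = f(xi)
        if vi < pbest_val[i]:
            pbest_val[i], pbest[i] = vi, xi
    gbest = pbest[pbest_val.argmin()].copy()
```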

  6. Distributed Combinatorial Optimization Using Privacy on Mobile Phones

    NASA Astrophysics Data System (ADS)

    Ono, Satoshi; Katayama, Kimihiro; Nakayama, Shigeru

    This paper proposes a method for distributed combinatorial optimization which uses mobile phones as computers. In the proposed method, an ordinary computer generates solution candidates and the mobile phones evaluate them with reference to private information and preferences. Users therefore do not have to send private data to any other computer and need not refrain from inputting their preferences, so they can obtain a satisfactory solution. Experimental results show that the proposed method solves room assignment problems without sending users' private information to a server.

  7. The optimal branching asymmetry of a bidirectional distribution tree

    NASA Astrophysics Data System (ADS)

    Florens, M.; Sapoval, B.; Filoche, M.

    2011-09-01

    Several transportation networks in living systems are pulsatile branching trees. Due to the alternating character of the flow, these trees have to simultaneously satisfy two constraints: they have to deliver the carried products in a limited time and they must exhibit a satisfactory hydrodynamic performance in both directions of the flow. We report here that introducing a systematic branching asymmetry into a distribution tree improves performance and robustness, both at inhalation and exhalation. Moreover, optimizing the asymmetry level for both phases leads to a value very close to the one measured in the human lung.

  8. Optimal design of one-dimensional photonic crystal filters using minimax optimization approach.

    PubMed

    Hassan, Abdel-Karim S O; Mohamed, Ahmed S A; Maghrabi, Mahmoud M T; Rafat, Nadia H

    2015-02-20

    In this paper, we introduce a simulation-driven optimization approach for achieving the optimal design of electromagnetic wave (EMW) filters consisting of one-dimensional (1D) multilayer photonic crystal (PC) structures. The PC layers' thicknesses and/or material types are considered as designable parameters. The optimal design problem is formulated as a minimax optimization problem that is entirely solved by making use of readily available software tools. The proposed approach allows for the consideration of problems of higher dimension than usually treated before. In addition, it can proceed starting from bad initial design points. The validity, flexibility, and efficiency of the proposed approach are demonstrated by applying it to obtain the optimal design of two practical examples. The first is a (SiC/Ag/SiO2)^N wide bandpass optical filter operating in the visible range. The second is an (Ag/SiO2)^N EMW low-pass spectral filter, working in the infrared range, which is used for enhancing the efficiency of thermophotovoltaic systems. The approach shows a good ability to converge to the optimal solution, for different design specifications, regardless of the starting design point. This ensures that the approach is robust and general enough to be applied to the optimal design of a wide range of promising 1D photonic crystal applications.
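    The minimax formulation can be reproduced generically with an epigraph reformulation solvable by off-the-shelf NLP software; the sketch below uses SciPy's SLSQP and a placeholder response function rather than a transfer-matrix model of the multilayer stack:

```python
import numpy as np
from scipy.optimize import minimize

# Minimax design: minimize the worst-case deviation of a response R(x, w)
# from a target T(w) over sampled frequencies w.  Epigraph trick:
#   min_{x,t} t   s.t.  |R(x, w_k) - T(w_k)| <= t  for all k.
freqs = np.linspace(0.0, 1.0, 41)
target = (freqs > 0.5).astype(float)          # idealized high-pass target

def response(x, w):
    # Placeholder smooth response; a real design would evaluate a
    # transfer-matrix model of the multilayer stack here.
    return 1.0 / (1.0 + np.exp(-(w - x[0]) / max(x[1], 1e-3)))

def objective(z):                             # z = [x..., t]
    return z[-1]

def constraints(z):                           # all entries must be >= 0
    x, t = z[:-1], z[-1]
    r = np.array([response(x, w) for w in freqs])
    return np.concatenate([t - (r - target), t - (target - r)])

z0 = np.array([0.4, 0.1, 1.0])                # initial design + slack
res = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": constraints}])
x_opt, worst_err = res.x[:-1], res.x[-1]
```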

  9. Multi-objective optimization with estimation of distribution algorithm in a noisy environment.

    PubMed

    Shim, Vui Ann; Tan, Kay Chen; Chia, Jun Yong; Al Mamun, Abdullah

    2013-01-01

    Many real-world optimization problems are subject to uncertainties that may be characterized by the presence of noise in the objective functions. The estimation of distribution algorithm (EDA), which models the global distribution of the population for searching tasks, is one of the evolutionary computation techniques that deal with noisy information. This paper studies the potential of EDAs, particularly an EDA based on restricted Boltzmann machines, for handling multi-objective optimization problems in a noisy environment. Noise is introduced to the objective functions in the form of a Gaussian distribution. In order to reduce the detrimental effect of noise, a likelihood correction feature is proposed to tune the marginal probability distribution of each decision variable. The EDA is subsequently hybridized with a particle swarm optimization algorithm in a discrete domain to improve its search ability. The effectiveness of the proposed algorithm is examined via eight benchmark instances with different characteristics and shapes of the Pareto optimal front. The scalability, hybridization, and computational time are rigorously studied. Comparative studies show that the proposed approach outperforms other state-of-the-art algorithms.

  10. Optimized velocity distributions for direct dark matter detection

    NASA Astrophysics Data System (ADS)

    Ibarra, Alejandro; Rappelt, Andreas

    2017-08-01

    We present a method to calculate, without making assumptions about the local dark matter velocity distribution, the maximal and minimal number of signal events in a direct detection experiment given a set of constraints from other direct detection experiments and/or neutrino telescopes. The method also allows one to determine the velocity distribution that optimizes the signal rates. We illustrate our method with three concrete applications: i) deriving a halo-independent upper limit on the cross section from a set of null results; ii) confronting, in a halo-independent way, a detection claim with a set of null results; and iii) assessing, in a halo-independent manner, the prospects for detection in a future experiment given a set of current null results.
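    One way to realize this numerically (our reading, not the paper's stated implementation) is to discretize f(v) on a speed grid, whereupon signal rates become linear in the unknowns and the extremization is a linear program; all kernels and limits below are synthetic stand-ins:

```python
import numpy as np
from scipy.optimize import linprog

# Discretize the local speed distribution f(v) on a grid; event rates are
# linear functionals of f, so extremizing them under null-result bounds is
# a linear program.  The response kernels below are illustrative stand-ins.
v = np.linspace(1.0, 760.0, 80)               # speed grid (km/s)
dv = v[1] - v[0]
signal = np.exp(-v / 300.0)                   # kernel of the future experiment
null_1 = np.exp(-v / 220.0)                   # kernels of constraining experiments
null_2 = np.exp(-v / 180.0)
limits = [0.8, 0.5]                           # upper limits from null results

A_ub = np.vstack([null_1, null_2]) * dv       # sum_j K_ij f_j dv <= b_i
A_eq = np.ones((1, v.size)) * dv              # normalization: integral of f = 1
res = linprog(c=-signal * dv,                 # maximize signal => minimize -c.f
              A_ub=A_ub, b_ub=limits,
              A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * v.size)
max_signal = -res.fun                         # halo-independent maximum
```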

  11. Approaching word length distribution via level spectra

    NASA Astrophysics Data System (ADS)

    Deng, Weibing; Pato, Mauricio Porto

    2017-09-01

    Treating a text, after the removal of paragraph breaks and punctuation, as a spectrum of blanks, the distributions of word length in ten languages are analyzed. Using models from the statistical theory of spectra, it is found that the ten languages can be classified into two families: one whose word lengths follow a Wigner-like distribution, while those of the other obey a Poisson-like distribution.
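    The two reference spacing distributions from the statistical theory of spectra (standard forms at unit mean spacing) are:

```latex
P_{\text{Wigner}}(s) \;=\; \frac{\pi s}{2}\, e^{-\pi s^{2}/4},
\qquad\qquad
P_{\text{Poisson}}(s) \;=\; e^{-s}.
```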

  12. Some New Approaches to Multivariate Probability Distributions.

    DTIC Science & Technology

    1986-12-01

    [Fragmentary record; the abstract text was garbled in extraction.] The report concerns new approaches to multivariate probability distributions, including characterizations such as the Marshall-Olkin bivariate distribution and Frechet's multivariate distribution with continuous marginals, and a uniqueness theorem in the bivariate case under certain assumptions. The recoverable fragments also cite work on mutual dependence of random variables and maximum discretized entropy (Forte, 1985) and Billingsley (1968).

  13. Optimizing Multicompression Approaches to Elasticity Imaging

    PubMed Central

    Du, Huini; Liu, Jie; Pellot-Barakat, Claire; Insana, Michael F.

    2009-01-01

    Breast lesion visibility in static strain imaging ultimately is noise limited. When correlation and related techniques are applied to estimate local displacements between two echo frames recorded before and after a small deformation, target contrast increases linearly with the amount of deformation applied. However, above some deformation threshold, decorrelation noise increases more than contrast such that lesion visibility is severely reduced. Multicompression methods avoid this problem by accumulating displacements from many small deformations to provide the same net increase in lesion contrast as one large deformation but with minimal decorrelation noise. Unfortunately, multicompression approaches accumulate echo noise (electronic and sampling) with each deformation step as contrast builds so that lesion visibility can be reduced again if the applied deformation increment is too small. This paper uses signal models and analysis techniques to develop multicompression strategies that minimize strain image noise. The analysis predicts that displacement variance is minimal in elastically homogeneous media when the applied strain increment is 0.0035. Predictions are verified experimentally with gelatin phantoms. For in vivo breast imaging, a strain increment as low as 0.0015 is recommended for minimum noise because of the greater elastic heterogeneity of breast tissue. PMID:16471435

  14. Topology optimization of magnetic source distributions for diamagnetic and superconducting levitation

    NASA Astrophysics Data System (ADS)

    Kuznetsov, Sergey; Guest, James K.

    2017-09-01

    Topology optimization is used to obtain a magnetic source distribution providing levitation of a diamagnetic body or type I superconductor with maximized thrust force. We show that this technique identifies non-trivial source distributions and may be useful for designing devices based on non-contact magnetic suspension and other magnetic devices, such as micro-magneto-mechanical devices and high-field magnets. Diamagnetic and superconducting suspensions are often used in physical experiments, and we therefore believe this approach will be of interest to the physics community, as it may generate non-trivial and often unexpected topologies and may be useful for creating new experiments and devices.

  15. A New Distributed Optimization for Community Microgrids Scheduling

    SciTech Connect

    Starke, Michael R; Tomsovic, Kevin

    2017-01-01

    This paper proposes a distributed optimization model for community microgrids considering the building thermal dynamics and customer comfort preference. The microgrid central controller (MCC) minimizes the total cost of operating the community microgrid, including fuel cost, purchasing cost, battery degradation cost and voluntary load shedding cost based on the customers' consumption, while the building energy management systems (BEMS) minimize their electricity bills as well as the cost associated with customer discomfort due to room temperature deviation from the set point. The BEMSs and the MCC exchange information on energy consumption and prices. When the optimization converges, the distributed generation scheduling, energy storage charging/discharging and customers' consumption as well as the energy prices are determined. In particular, we integrate the detailed thermal dynamic characteristics of buildings into the proposed model. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model.
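    The MCC-BEMS exchange of consumption and prices resembles a price-based (dual-ascent) coordination; the toy loop below illustrates that pattern under strong simplifying assumptions (closed-form quadratic BEMS subproblems, a single coupling constraint) and is not the paper's exact algorithm:

```python
import numpy as np

# Illustrative price coordination: the central controller adjusts an
# internal energy price until the building responses balance available
# generation; each building trades off its bill against a quadratic
# comfort penalty.  All numbers are assumed for illustration.
capacity = 90.0                      # schedulable generation (kW)
pref = np.array([30.0, 40.0, 50.0])  # comfort-preferred consumption (kW)
alpha = np.array([0.8, 1.0, 1.2])    # discomfort weights

def bems_response(price):
    # Each BEMS solves: min_d  price*d + alpha*(d - pref)^2  (closed form).
    return np.maximum(pref - price / (2 * alpha), 0.0)

price, step = 0.0, 0.02
for _ in range(500):                 # dual ascent on the coupling constraint
    demand = bems_response(price)
    mismatch = demand.sum() - capacity
    price = max(price + step * mismatch, 0.0)
    if abs(mismatch) < 1e-3:
        break                        # converged: demand matches capacity
```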

  16. Optimal pattern distributions in Rete-based production systems

    NASA Technical Reports Server (NTRS)

    Scott, Stephen L.

    1994-01-01

    Since its introduction into the AI community in the early 1980's, the Rete algorithm has been widely used. This algorithm has formed the basis for many AI tools, including NASA's CLIPS. One drawback of Rete-based implementations, however, is that the network structures used internally by the Rete algorithm make it sensitive to the arrangement of individual patterns within rules. Thus, while rules may be more or less arbitrarily placed within source files, the distribution of individual patterns within these rules can significantly affect overall system performance. Some heuristics have been proposed to optimize pattern placement; however, these suggestions can conflict. This paper describes a systematic effort to measure the effect of pattern distribution on production system performance. An overview of the Rete algorithm is presented to provide context. A description of the methods used to explore the pattern ordering problem is presented, using internal production system metrics such as the number of partial matches, and coarse-grained operating system data such as memory usage and time. The results of this study should be of interest to those developing and optimizing software for Rete-based production systems.

  17. Distributed Optimal Consensus Control for Multiagent Systems With Input Delay.

    PubMed

    Zhang, Huaipin; Yue, Dong; Zhao, Wei; Hu, Songlin; Dou, Chunxia

    2017-06-27

    This paper addresses the problem of distributed optimal consensus control for a continuous-time heterogeneous linear multiagent system subject to time-varying input delays. First, by discretization and model transformation, the continuous-time input-delayed system is converted into a discrete-time delay-free system. Two delicate performance index functions are defined for these two systems. It is shown that the performance index functions are equivalent and that the optimal consensus control problem of the input-delayed system can be cast into that of the delay-free system. Second, by virtue of the Hamilton-Jacobi-Bellman (HJB) equations, an optimal control policy for each agent is designed based on the delay-free system and a novel value iteration algorithm is proposed to learn the solutions to the HJB equations online. The proposed adaptive dynamic programming algorithm is implemented on the basis of a critic-action neural network (NN) structure. Third, it is proved that local consensus errors of the two systems and weight estimation errors of the critic-action NNs are uniformly ultimately bounded while the approximated control policies converge to their target values. Finally, two simulation examples are presented to illustrate the effectiveness of the developed method.

  18. A multidisciplinary approach to optimization of controlled space structures

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Padula, Sharon L.; Graves, Philip C.; James, Benjamin B.

    1990-01-01

    A fundamental problem facing controls-structures analysts is determining the trade-offs between structural design parameters and control design parameters in meeting some particular performance criteria. Developing a general optimization-based design methodology integrating the disciplines of structural dynamics and controls is a logical approach. The objective of this study is to develop such a method. Classical design methodology involves three phases. The first is structural optimization, wherein structural member sizes are varied to minimize structural mass, subject to open-loop frequency constraints. The next phase integrates control and structure design with control gains as additional design variables. The final phase is analysis of the 'optimal' integrated design, considering 'real' actuators and 'standard' member sizes. The control gains could be further optimized for the fixed structure, and actuator saturation constraints could be imposed. However, such an approach does not take full advantage of opportunities to tailor the structure and control system design as one system.

  19. A simple approach for predicting time-optimal slew capability

    NASA Astrophysics Data System (ADS)

    King, Jeffery T.; Karpenko, Mark

    2016-03-01

    The productivity of space-based imaging satellite sensors to collect images is directly related to the agility of the spacecraft. Increasing the satellite agility, without changing the attitude control hardware, can be accomplished by using optimal control to design shortest-time maneuvers. The performance improvement that can be obtained using optimal control is tied to the specific configuration of the satellite, e.g. mass properties and reaction wheel array geometry. Therefore, it is generally difficult to predict performance without an extensive simulation study. This paper presents a simple idea for estimating the agility enhancement that can be obtained using optimal control without the need to solve any optimal control problems. The approach is based on the concept of the agility envelope, which expresses the capability of a spacecraft in terms of a three-dimensional agility volume. Validation of this new approach is conducted using both simulation and on-orbit data.

  20. Reliability based structural optimization - A simplified safety index approach

    NASA Technical Reports Server (NTRS)

    Reddy, Mahidhar V.; Grandhi, Ramana V.; Hopkins, Dale A.

    1993-01-01

    A probabilistic optimal design methodology for complex structures modelled with finite element methods is presented. The main emphasis is on developing probabilistic analysis tools suitable for optimization. An advanced second-moment method is employed to evaluate the failure probability of the performance function. The safety indices are interpolated using the information at the mean and the most probable failure point. The minimum weight design with an improved safety index limit is achieved by using the extended interior penalty method of optimization. Numerical examples covering beam and plate structures are presented to illustrate the design approach. The results obtained by using the proposed approach are compared with those obtained using existing probabilistic optimization techniques.
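    For reference, the standard second-moment relations behind the safety index (our addition; the paper's advanced method refines these at the most probable failure point) are:

```latex
\beta \;=\; \frac{\mu_g}{\sigma_g},
\qquad
P_f \;\approx\; \Phi(-\beta),
```

    where g is the performance function (failure when g < 0), μ_g and σ_g are its mean and standard deviation, and Φ is the standard normal distribution function.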

  1. Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals

    PubMed Central

    2016-01-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081

  2. Dynamic optimization of distributed biological systems using robust and efficient numerical techniques

    PubMed Central

    2012-01-01

    Background: Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples would include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems, etc. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large-scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Results: Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced order model methodology is proposed. The capabilities of this strategy are illustrated considering the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. Conclusions: In the process of chemotaxis the objective was to efficiently compute the time-varying optimal concentration of chemoattractant in one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the
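    A minimal control vector parameterization (CVP) sketch, using a toy scalar ODE in place of the paper's PDE models; the interval count, bounds, and regularization weight are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# CVP: approximate u(t) as piecewise constant on N intervals and optimize
# the N levels with an NLP solver (toy dynamics stand in for the PDEs).
T, N = 4.0, 8
edges = np.linspace(0.0, T, N + 1)

def simulate(levels):
    def rhs(t, x):
        k = min(int(np.searchsorted(edges, t, side="right")) - 1, N - 1)
        return -x + levels[k]                 # toy first-order dynamics
    return solve_ivp(rhs, (0.0, T), [0.0], dense_output=True, max_step=0.05)

def cost(levels):
    sol = simulate(levels)
    ts = np.linspace(0.0, T, 201)
    x = sol.sol(ts)[0]
    # Track a desired profile x_ref = 1 plus a small control penalty.
    return float(np.mean((x - 1.0) ** 2) * T + 1e-3 * np.sum(levels ** 2))

res = minimize(cost, x0=np.zeros(N), method="L-BFGS-B",
               bounds=[(0.0, 2.0)] * N)       # bound-constrained controls
u_opt = res.x
```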

  3. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    NASA Astrophysics Data System (ADS)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    Permanently increasing complexity of products and their manufacturing processes combined with a shorter "time-to-market" leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information&Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can either comprise software systems, hardware systems, or communication networks. An appropriate IT-infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating expert's knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD-system CATIA is used which is coupled with the FEM-simulation system INDEED for simulation of sheet-metal forming processes and the problem solving environment OpTiX for distributed optimization.

  4. Optimal purchasing of raw materials: A data-driven approach

    SciTech Connect

    Muteki, K.; MacGregor, J.F.

    2008-06-15

    An approach to the optimal purchasing of raw materials that will achieve a desired product quality at minimum cost is presented. A PLS (Partial Least Squares) approach to formulation modeling is used to combine databases on raw material properties and on past process operations and to relate these to final product quality. These PLS latent variable models are then used in a sequential quadratic programming (SQP) or mixed integer nonlinear programming (MINLP) optimization to select the raw materials to purchase, among all those available on the market, the ratios in which to combine them, and the process conditions under which they should be processed. The approach is illustrated for the optimal purchasing of metallurgical coals for coke making in the steel industry.
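    A sketch of the pipeline as we read it: a PLS latent-variable quality model combined with an SQP solve over blend ratios. The data, target, and weights are synthetic, and the real formulation also handles mixed-integer supplier selection:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.cross_decomposition import PLSRegression

# Fit a PLS model mapping raw-material properties to product quality, then
# optimize blend ratios for minimum cost subject to a quality target.
rng = np.random.default_rng(1)
props = rng.normal(size=(12, 6))            # 12 candidate materials, 6 properties
quality = props @ rng.normal(size=6) + 0.1 * rng.normal(size=12)
price = rng.uniform(50, 120, size=12)       # cost per tonne (synthetic)

pls = PLSRegression(n_components=3).fit(props, quality)

def blend_quality(w):                       # blended properties -> quality
    return float(np.ravel(pls.predict((w @ props).reshape(1, -1)))[0])

target = 1.0
res = minimize(
    lambda w: price @ w,                    # minimize purchase cost
    x0=np.full(12, 1 / 12),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * 12,
    constraints=[
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},          # ratios sum to 1
        {"type": "ineq", "fun": lambda w: blend_quality(w) - target},
    ],
)
ratios = res.x
```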

  5. A Surrogate Approach to the Experimental Optimization of Multielement Airfoils

    NASA Technical Reports Server (NTRS)

    Otto, John C.; Landman, Drew; Patera, Anthony T.

    1996-01-01

    The incorporation of experimental test data into the optimization process is accomplished through the use of Bayesian-validated surrogates. In the surrogate approach, a surrogate for the experiment (e.g., a response surface) serves in the optimization process. The validation step of the framework provides a qualitative assessment of the surrogate quality, and bounds the surrogate-for-experiment error on designs "near" surrogate-predicted optimal designs. The utility of the framework is demonstrated through its application to the experimental selection of the trailing edge flap position to achieve a design lift coefficient for a three-element airfoil.

  6. A Collective Neurodynamic Optimization Approach to Nonnegative Matrix Factorization.

    PubMed

    Fan, Jianchao; Wang, Jun

    2017-10-01

    Nonnegative matrix factorization (NMF) is an advanced method for nonnegative feature extraction, with widespread applications. However, the NMF solution often entails solving a global optimization problem with a nonconvex objective function and nonnegativity constraints. This paper presents a collective neurodynamic optimization (CNO) approach to this challenging problem. The proposed collective neurodynamic system consists of a population of recurrent neural networks (RNNs) at the lower level and a particle swarm optimization (PSO) algorithm with wavelet mutation at the upper level. The RNNs act as search agents carrying out precise local searches according to their neurodynamics and initial conditions. The PSO algorithm coordinates and guides the RNNs with updated initial states toward global optimal solution(s). A wavelet mutation operator is added to enhance PSO exploration diversity. Through iterative interaction and improvement of the locally best solutions of the RNNs and the global best positions of the whole population, the population-based neurodynamic systems are almost surely able to achieve global optimality for the NMF problem. It is proved that the group-best state converges to the global optimal solution with probability one. The experimental results substantiate the efficacy and superiority of the CNO approach to bound-constrained global optimization with several benchmark nonconvex functions and NMF-based clustering with benchmark data sets in comparison with state-of-the-art algorithms.

  7. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed in generating wind-optimal routes for air traffic at the national or global level. They are: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing those same algorithms into NASA's Future ATM Concepts Evaluation Tool (FACET); these are compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers for potential computational enhancement through parallel processing on the computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates that with FACET to facilitate the use of the new features which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies and based on the potential application of optimized trajectories. The paper shows that in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  8. A Neurodynamic Optimization Approach to Bilevel Quadratic Programming.

    PubMed

    Qin, Sitian; Le, Xinyi; Wang, Jun

    2016-08-19

    This paper presents a neurodynamic optimization approach to bilevel quadratic programming (BQP). Based on the Karush-Kuhn-Tucker (KKT) theorem, the BQP problem is reduced to a one-level mathematical program with complementarity constraints (MPCC). It is proved that the global solution of the MPCC is the minimal one among the optimal solutions of multiple convex optimization subproblems. A recurrent neural network is developed for solving these convex optimization subproblems. From any initial state, the state of the proposed neural network converges to an equilibrium point, which is the optimal solution of the corresponding convex optimization subproblem. Compared with existing recurrent neural networks for BQP, the proposed neural network is guaranteed to deliver exact optimal solutions to any convex BQP problems. Moreover, it is proved that the proposed neural network for bilevel linear programming converges to an equilibrium point in finite time. Finally, three numerical examples are elaborated to substantiate the efficacy of the proposed approach.
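    In generic form (standard notation, our paraphrase of the KKT-based reduction):

```latex
% Bilevel program (generic form):
\min_{x,\,y}\; F(x,y)
\quad \text{s.t.} \quad
y \in \arg\min_{y'}\;\bigl\{\, f(x,y') \;:\; g(x,y') \le 0 \,\bigr\}.
% Replacing the lower level by its KKT conditions yields the MPCC:
\min_{x,\,y,\,\lambda}\; F(x,y)
\;\;\text{s.t.}\;\;
\nabla_y f(x,y) + \lambda^{\top}\nabla_y g(x,y) = 0,
\qquad
0 \le \lambda \;\perp\; -g(x,y) \ge 0 .
```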

  9. A continuous linear optimal transport approach for pattern analysis in image datasets

    PubMed Central

    Kolouri, Soheil; Tosun, Akif B.; Ozolek, John A.; Rohde, Gustavo K.

    2015-01-01

    We present a new approach to facilitate the application of the optimal transport metric to pattern recognition on image databases. The method is based on a linearized version of the optimal transport metric, which provides a linear embedding for the images. Hence, it enables shape and appearance modeling using linear geometric analysis techniques in the embedded space. In contrast to previous work, we use Monge's formulation of the optimal transport problem, which allows for reasonably fast computation of the linearized optimal transport embedding for large images. We demonstrate the application of the method to recover and visualize meaningful variations in a supervised-learning setting on several image datasets, including chromatin distribution in the nuclei of cells, galaxy morphologies, facial expressions, and bird species identification. We show that the new approach allows for high-resolution construction of modes of variations and discrimination and can enhance classification accuracy in a variety of image discrimination problems. PMID:26858466

  10. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function. The minimal objective function results in an optimal performance of the system. A spacecraft mounted science instrument line-of-sight pointing control is used to demonstrate results.

  11. An alternative approach to the optimal design of an LD50 bioassay.

    PubMed

    Markus, R A; Frank, J; Groshen, S; Azen, S P

    1995-04-30

    In this paper we propose an alternative approach to the optimal design of an LD50 bioassay. We adopt a Bayesian approach to make use of prior information about the location and scale parameters of the tolerance distribution function to select the design parameters (number of doses, total number of animals, centre of doses, space between doses), and we adopt a frequentist approach using the Spearman-Karber statistic to estimate the LD50. We define the optimal design as the one that produces the minimum expected mean squared error E(MSE) with respect to the joint prior distribution of the parameters of the tolerance distribution. For the design parameters investigated, we found: (i) the shape of the E(MSE) is relatively smooth and continuous, the magnitude of which is influenced by the underlying tolerance distribution; (ii) the amount of prior information about the location and scale parameters independently and jointly affect the optimal design; and (iii) as the amount of prior information decreases, one requires more doses and/or animals. Finally, we show the proposed method is robust for an incorrectly assumed tolerance distribution function.
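    The frequentist estimation step uses the Spearman-Karber statistic, which has a simple closed form on the log-dose scale (the sketch below assumes monotone mortality proportions spanning 0 to 1; helper name and example data are illustrative):

```python
import numpy as np

def spearman_karber_ld50(log_doses, n_tested, n_dead):
    """Spearman-Karber estimate of log LD50.

    Assumes log_doses is increasing and mortality proportions are
    monotone (pool adjacent violators first if they are not), with
    p = 0 at the lowest dose and p = 1 at the highest.
    """
    d = np.asarray(log_doses, dtype=float)
    p = np.asarray(n_dead, dtype=float) / np.asarray(n_tested, dtype=float)
    # log LD50 = sum over intervals of (p_{i+1} - p_i) * interval midpoint
    mids = 0.5 * (d[:-1] + d[1:])
    return float(np.sum(np.diff(p) * mids))

# Example: five log10 dose levels, 10 animals per dose.
log_ld50 = spearman_karber_ld50(
    log_doses=[1, 2, 3, 4, 5],
    n_tested=[10] * 5,
    n_dead=[0, 2, 5, 8, 10],
)
ld50 = 10 ** log_ld50   # back-transform to the dose scale
```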

  12. Flood frequency analysis using multi-objective optimization based interval estimation approach

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, K. S.; He, Jianxun; Tay, Joo-Hwa

    2017-02-01

    Flood frequency analysis (FFA) is a necessary tool for water resources management and water infrastructure design. Owing to the existence of variability in sample representation, distribution selection, and distribution parameter estimation, flood quantile estimation is subject to various levels of uncertainty, which is neither negligible nor avoidable. Hence, alternative methods to the conventional approach of FFA are desired for quantifying the uncertainty, such as in the form of a prediction interval. The primary focus of this paper is to develop a novel approach to quantify and optimize the prediction interval resulting from the non-stationarity of the data set, which is reflected in the estimated distribution parameters, in FFA. This paper proposes the combination of a multi-objective optimization approach and an ensemble simulation technique to determine the optimal perturbations of distribution parameters for constructing the prediction interval of flood quantiles in FFA. To demonstrate the proposed approach, annual maximum daily flow data collected from two gauge stations on the Bow River, Alberta, Canada, were used. The results suggest that the proposed method can successfully capture the uncertainty in quantile estimates qualitatively using the prediction interval, as the number of observations falling within the constructed prediction interval is approximately maximized while the prediction interval is minimized.

  13. An optimal torque distribution control strategy for four-independent wheel drive electric vehicles

    NASA Astrophysics Data System (ADS)

    Li, Bin; Goodarzi, Avesta; Khajepour, Amir; Chen, Shih-ken; Litkouhi, Baktiar

    2015-08-01

    In this paper, an optimal torque distribution approach is proposed for an electric vehicle equipped with four independent wheel motors to improve vehicle handling and stability performance. A novel objective function is formulated which works in a multifunctional way by considering the interference among different performance indices: force and moment errors at the centre of gravity of the vehicle, actuator control efforts, and tyre workload usage. To adapt to different driving conditions, a weighting factor tuning scheme is designed to adjust the relative weight of each performance index in the objective function. The effectiveness of the proposed optimal torque distribution is evaluated by simulations with CarSim and Matlab/Simulink. The simulation results under different driving scenarios indicate that the proposed control strategy can effectively improve vehicle handling and stability even in slippery road conditions.
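    In generic terms, such a multi-objective allocation can be written as follows (our paraphrase; the symbols are not taken from the paper):

```latex
\min_{\mathbf{u}}\;
\bigl\| W_{v}\,(B\,\mathbf{u}-\mathbf{v}_{d}) \bigr\|^{2}
\;+\; \bigl\| W_{u}\,\mathbf{u} \bigr\|^{2}
\;+\; \rho \sum_{i=1}^{4} \frac{F_{xi}^{2}+F_{yi}^{2}}{\bigl(\mu_{i} F_{zi}\bigr)^{2}},
```

    where u collects the wheel torques, B maps them to body-frame forces and yaw moment, v_d is the desired force/moment vector at the centre of gravity, the middle term penalizes actuator effort, and the last term is the tyre workload usage; the weights are retuned online per driving condition.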

  14. A global optimization approach to multi-polarity sentiment analysis.

    PubMed

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher-explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From

  15. A hybrid approach to near-optimal launch vehicle guidance

    NASA Technical Reports Server (NTRS)

    Leung, Martin S. K.; Calise, Anthony J.

    1992-01-01

    This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability are carried out through closed-loop simulation for a vertically launched 2-stage heavy-lift capacity vehicle to a low earth orbit. The solutions are compared with optimal solutions generated from a multiple shooting code. In the example the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.

  17. A robust optimization model for distribution and evacuation in the disaster response phase

    NASA Astrophysics Data System (ADS)

    Fereiduni, Meysam; Shahanaghi, Kamran

    2017-10-01

    Natural disasters, such as earthquakes, affect thousands of people and can cause enormous financial loss. Therefore, an efficient response immediately following a natural disaster is vital to minimize the aforementioned negative effects. This research paper presents a network design model for humanitarian logistics which will assist in location and allocation decisions for multiple disaster periods. At first, a single-objective optimization model is presented that addresses the response phase of disaster management. This model helps decision makers make optimal choices regarding location, allocation, and evacuation simultaneously. The proposed model also considers emergency tents as temporary medical centers. To cope with the uncertainty and dynamic nature of disasters, and their consequences, our multi-period robust model considers the values of critical input data over a set of scenarios. Second, because of probable disruption in the distribution infrastructure (such as bridges), Monte Carlo simulation is used for generating related random numbers and different scenarios, and the p-robust approach is utilized to formulate the new network. The p-robust approach can predict possible damage along pathways and among relief bases. We present a case study of our robust optimization approach for a plausible earthquake in region 1 of Tehran. Sensitivity analysis experiments explore the effects of various problem parameters; these experiments provide managerial insights and can guide decision makers under a variety of conditions. Finally, the performances of the "robust optimization" approach and the "p-robust optimization" approach are evaluated, and intriguing results and practical insights are demonstrated by our analysis of this comparison.

  19. Optimal sensor placement for leak location in water distribution networks using genetic algorithms.

    PubMed

    Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert

    2013-11-04

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion is to minimize the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that the GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method are proposed: using a time horizon analysis, a distance-based scoring, and considering different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.
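    A toy version of the GA-based placement is sketched below: signatures are binarized leak sensitivities restricted to the chosen sensors, and fitness counts leaks that share a signature (hence are non-isolable). The sensitivity matrix is random here; in practice it comes from hydraulic simulation of each candidate leak:

```python
import random

random.seed(7)
n_nodes, n_leaks, n_sensors = 30, 40, 5
# Binarized leak sensitivity: sens[leak][node] = "this node reacts".
sens = [[random.random() < 0.3 for _ in range(n_nodes)] for _ in range(n_leaks)]

def non_isolable(sensor_set):
    # Leaks sharing a signature on the chosen sensors cannot be isolated.
    seen = {}
    for row in sens:
        sig = tuple(row[j] for j in sensor_set)
        seen[sig] = seen.get(sig, 0) + 1
    return sum(c - 1 for c in seen.values() if c > 1)

def crossover(a, b):
    # Child draws its sensors from the union of both parents.
    return tuple(sorted(random.sample(sorted(set(a) | set(b)), n_sensors)))

def mutate(ind):
    ind = set(ind)
    ind.discard(random.choice(sorted(ind)))      # drop one sensor
    while len(ind) < n_sensors:                  # add a random replacement
        ind.add(random.randrange(n_nodes))
    return tuple(sorted(ind))

pop = [tuple(sorted(random.sample(range(n_nodes), n_sensors)))
       for _ in range(40)]
for _ in range(60):
    pop.sort(key=non_isolable)                   # elitist selection
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(30)]
best = min(pop, key=non_isolable)
```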

  20. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    PubMed Central

    Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert

    2013-01-01

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion is to minimize the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that the GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method are proposed: using a time horizon analysis, a distance-based scoring, and considering different leak sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach. PMID:24193099

  1. Finite-dimensional approximation for optimal fixed-order compensation of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Rosen, I. G.

    1988-01-01

    In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.

  2. Network-Oriented Approach to Distributed Generation Planning

    NASA Astrophysics Data System (ADS)

    Kochukov, O.; Mutule, A.

    2017-06-01

    The main objective of the paper is to present an innovative complex approach to distributed generation planning and show its advantages over existing methods. The approach is most suitable for DNOs and authorities and has specific calculation targets to support the decision-making process. The method can be used for complex distribution networks with different arrangements and legal bases.

  3. Distribution-dependent robust linear optimization with applications to inventory control

    PubMed Central

    Kang, Seong-Cheol; Brisimi, Theodora S.

    2014-01-01

    This paper tackles linear programming problems with data uncertainty and applies it to an important inventory control problem. Each element of the constraint matrix is subject to uncertainty and is modeled as a random variable with a bounded support. The classical robust optimization approach to this problem yields a solution with guaranteed feasibility. As this approach tends to be too conservative when applications can tolerate a small chance of infeasibility, one would be interested in obtaining a less conservative solution with a certain probabilistic guarantee of feasibility. A robust formulation in the literature produces such a solution, but it does not use any distributional information on the uncertain data. In this work, we show that the use of distributional information leads to an equally robust solution (i.e., under the same probabilistic guarantee of feasibility) but with a better objective value. In particular, by exploiting distributional information, we establish stronger upper bounds on the constraint violation probability of a solution. These bounds enable us to “inject” less conservatism into the formulation, which in turn yields a more cost-effective solution (by 50% or more in some numerical instances). To illustrate the effectiveness of our methodology, we consider a discrete-time stochastic inventory control problem with certain quality of service constraints. Numerical tests demonstrate that the use of distributional information in the robust optimization of the inventory control problem results in 36%–54% cost savings, compared to the case where such information is not used. PMID:26347579

  4. Distribution-dependent robust linear optimization with applications to inventory control.

    PubMed

    Kang, Seong-Cheol; Brisimi, Theodora S; Paschalidis, Ioannis Ch

    2015-08-01

    This paper tackles linear programming problems with data uncertainty and applies it to an important inventory control problem. Each element of the constraint matrix is subject to uncertainty and is modeled as a random variable with a bounded support. The classical robust optimization approach to this problem yields a solution with guaranteed feasibility. As this approach tends to be too conservative when applications can tolerate a small chance of infeasibility, one would be interested in obtaining a less conservative solution with a certain probabilistic guarantee of feasibility. A robust formulation in the literature produces such a solution, but it does not use any distributional information on the uncertain data. In this work, we show that the use of distributional information leads to an equally robust solution (i.e., under the same probabilistic guarantee of feasibility) but with a better objective value. In particular, by exploiting distributional information, we establish stronger upper bounds on the constraint violation probability of a solution. These bounds enable us to "inject" less conservatism into the formulation, which in turn yields a more cost-effective solution (by 50% or more in some numerical instances). To illustrate the effectiveness of our methodology, we consider a discrete-time stochastic inventory control problem with certain quality of service constraints. Numerical tests demonstrate that the use of distributional information in the robust optimization of the inventory control problem results in 36%-54% cost savings, compared to the case where such information is not used.

  5. Optimal Control of Distributed Energy Resources using Model Predictive Control

    SciTech Connect

    Mayhorn, Ebony T.; Kalsi, Karanjit; Elizondo, Marcelo A.; Zhang, Wei; Lu, Shuai; Samaan, Nader A.; Butler-Purry, Karen

    2012-07-22

    In an isolated power system (rural microgrid), Distributed Energy Resources (DERs) such as renewable energy resources (wind, solar), energy storage and demand response can be used to complement fossil-fueled generators. The uncertainty and variability due to high penetration of wind make reliable system operation and control challenging. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation. The problem is formulated as a multi-objective optimization problem with the goals of minimizing fuel costs and changes in power output of diesel generators, minimizing costs associated with low battery life of energy storage, and maintaining system frequency at the nominal operating value. Two control modes are considered for controlling the energy storage to compensate either net load variability or wind variability. Model predictive control (MPC) is used to solve the aforementioned problem and its performance is compared to an open-loop look-ahead dispatch problem. Simulation studies using high and low wind profiles, as well as different MPC prediction horizons, demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties in wind and demand.
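
    As a rough illustration of the receding-horizon idea described above, the sketch below dispatches a diesel generator and a battery against a forecast net load using cvxpy; all numbers, costs and limits are invented, and only the first move of each horizon is applied, as in MPC.

        # Toy MPC dispatch: diesel + battery track net load over a horizon.
        # Profiles, costs, and limits are made-up placeholders.
        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(1)
        T, H = 24, 6                                  # steps, MPC horizon
        net_load = 3.0 + rng.normal(0.0, 0.8, T + H)  # assumed forecast (MW)

        soc = 5.0                                     # battery state (MWh)
        for t in range(T):
            d = cp.Variable(H, nonneg=True)           # diesel output
            b = cp.Variable(H)                        # discharge (+) / charge (-)
            s = cp.Variable(H + 1)                    # state of charge
            cons = [s[0] == soc, s[1:] == s[:-1] - b, # 1 h SOC dynamics
                    s >= 0, s <= 10, cp.abs(b) <= 2, d <= 6,
                    d + b == net_load[t:t + H]]       # power balance
            cost = cp.sum(0.5 * d) + 0.05 * cp.sum_squares(b)  # fuel + wear
            cp.Problem(cp.Minimize(cost), cons).solve()
            soc -= b.value[0]                         # apply the first move only
            print(f"t={t:2d}  diesel={d.value[0]:5.2f} MW  soc={soc:5.2f} MWh")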

  6. The use of linear programming in optimization of HDR implant dose distributions.

    PubMed

    Jozsef, Gabor; Streeter, Oscar E; Astrahan, Melvin A

    2003-05-01

    The introduction of high dose rate brachytherapy enabled optimization of dose distributions to be used on a routine basis. The objective of optimization is to homogenize the dose distribution within the implant while simultaneously satisfying dose constraints on certain points. This is accomplished by varying the time the source dwells at different locations. As the dose at any point is a linear function of the dwell times, a linear programming approach seems to be a natural choice. The dose constraints are inherently linear inequalities. Homogeneity requirements are linearized by minimizing the maximum deviation of the doses at points inside the implant from a prescribed dose. The revised simplex method was applied for the solution of this linear programming problem. In the homogenization process the possible source locations were chosen as optimization points. To avoid the singularity of the dose at a source location due to the source itself, we define the "self-contribution" as the dose at a small distance from the source. The effect of varying this distance is discussed. Test cases were optimized for planar, biplanar and cylindrical implants. A semi-irregular, fan-like implant with diverging needles was also investigated. Mean central dose calculation based on 3D Delaunay triangulation of the source locations was used to evaluate the dose distributions. The optimization method resulted in homogeneous distributions (by brachytherapy standards). Additional dose constraints, when applied, were satisfied. The method is flexible enough to include other linear constraints, such as the inclusion of the centroids of the Delaunay triangulation for homogenization, or limiting the maximum allowable dwell time.
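
    The minimax homogenization described above maps naturally onto a linear program: minimize an auxiliary bound t on the deviation of point doses from the prescription, with dwell times w >= 0. The sketch below is an illustration only, using a simplified inverse-square kernel and random geometry in place of real implant data.

        # Minimax dwell-time LP: minimize t s.t. |D w - P| <= t, w >= 0.
        # Geometry and the dose kernel are simplified placeholders.
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(2)
        src = rng.uniform(0, 5, (8, 3))            # dwell positions (cm)
        pts = src + rng.normal(0, 0.3, src.shape)  # "self-contribution" offsets
        P = 100.0                                  # prescribed dose (a.u.)

        r = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
        D = 1.0 / np.maximum(r, 0.1) ** 2          # capped inverse-square kernel

        m, n = D.shape
        c = np.zeros(n + 1); c[-1] = 1.0           # objective: minimize t
        A = np.block([[D, -np.ones((m, 1))],       #  D w - t <= P
                      [-D, -np.ones((m, 1))]])     # -D w - t <= -P
        b = np.concatenate([np.full(m, P), np.full(m, -P)])
        res = linprog(c, A_ub=A, b_ub=b, method="highs")  # default bounds: >= 0
        print("dwell times:", np.round(res.x[:n], 2), "max deviation:", res.x[-1])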

  7. An optimal control approach to probabilistic Boolean networks

    NASA Astrophysics Data System (ADS)

    Liu, Qiuli

    2012-12-01

    External control of some genes in a genetic regulatory network is useful for avoiding undesirable states associated with some diseases. For this purpose, a number of stochastic optimal control approaches have been proposed. Probabilistic Boolean networks (PBNs), as powerful tools for modeling gene regulatory systems, have attracted considerable attention in systems biology. In this paper, we deal with the problem of optimal intervention in a PBN with the help of the theory of discrete-time Markov decision processes. Specifically, we first formulate a control model for a PBN as a first-passage model for discrete-time Markov decision processes and then find, using a value iteration algorithm, optimal effective treatments with the minimal expected first-passage time over the space of all possible treatments. In order to demonstrate the feasibility of our approach, an example is also presented.
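
    The first-passage formulation sketched in this abstract can be illustrated with a few lines of value iteration; the transition matrices below are random stand-ins for a PBN's controlled dynamics, and the set of desirable states is arbitrary.

        # Toy first-passage value iteration: pick, in each state, the treatment
        # minimizing the expected time to reach a desirable state set.
        import numpy as np

        rng = np.random.default_rng(3)
        n_states, n_actions = 8, 3                 # e.g. 2^3 gene profiles
        P = rng.random((n_actions, n_states, n_states))
        P /= P.sum(axis=2, keepdims=True)          # row-stochastic per action
        desirable = [0, 1]                         # assumed target states

        V = np.zeros(n_states)
        for _ in range(500):
            V_new = (1.0 + P @ V).min(axis=0)      # one-step lookahead
            V_new[desirable] = 0.0                 # targets are absorbing
            if np.max(np.abs(V_new - V)) < 1e-9:
                break
            V = V_new

        policy = (1.0 + P @ V).argmin(axis=0)
        print("expected first-passage times:", np.round(V, 2))
        print("optimal treatment per state:", policy)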

  8. Homotopy approach to optimal, linear quadratic, fixed architecture compensation

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1991-01-01

    Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained from the constrained linear quadratic Gaussian are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and full order. An alternative to the use of general parameter optimization methods for solving the problem is to use homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitation of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy on an example of an optimal decentralized compensator.

  9. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.

  10. Optimal vibration control of curved beams using distributed parameter models

    NASA Astrophysics Data System (ADS)

    Liu, Fushou; Jin, Dongping; Wen, Hao

    2016-12-01

    The design of a linear quadratic optimal controller using the spectral factorization method is studied for vibration suppression of curved beam structures modeled as distributed parameter models. The equations of motion for active control of the in-plane vibration of a curved beam are first developed, considering its shear deformation and rotary inertia, and the state space model of the curved beam is then established directly from the partial differential equations of motion. The functional gains for the distributed parameter model of the curved beam are calculated by extending the spectral factorization method. Moreover, the response of the closed-loop control system is derived explicitly in the frequency domain. Finally, the suppression of the vibration at the free end of a cantilevered curved beam by a point control moment is studied through numerical case studies, in which the benefit of the presented method is shown by comparison with a constant-gain velocity feedback control law, and the performance of the presented method on avoidance of control spillover is demonstrated.

  11. Optimal Solar PV Arrays Integration for Distributed Generation

    SciTech Connect

    Omitaomu, Olufemi A; Li, Xueping

    2012-01-01

    Solar photovoltaic (PV) systems hold great potential for distributed energy generation by installing PV panels on rooftops of residential and commercial buildings. Yet challenges arise from the variability and non-dispatchability of PV systems, which affect the stability of the grid and the economics of the PV system. This paper investigates the integration of PV arrays for distributed generation applications by identifying a combination of buildings that will maximize solar energy output and minimize system variability. In particular, we propose mean-variance optimization models to choose suitable rooftops for PV integration, based on the Markowitz mean-variance portfolio selection model. We further introduce quantity and cardinality constraints, which result in a mixed integer quadratic programming problem. Case studies based on real data are presented. An efficient frontier is obtained for sample data that allows decision makers to choose a desired solar energy generation level with a comfortable variability tolerance level. Sensitivity analysis is conducted to show the tradeoffs between solar PV energy generation potential and variability.
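
    A minimal version of the mean-variance selection can be written by brute force when the number of candidate rooftops is small; the sketch below (synthetic output data, arbitrary risk weight) enumerates fixed-cardinality subsets and scores each by expected output minus weighted variance, a crude stand-in for the paper's mixed integer program.

        # Toy Markowitz-style rooftop selection with a cardinality constraint.
        # Daily output samples are synthetic; lam sets the risk aversion.
        import itertools
        import numpy as np

        rng = np.random.default_rng(4)
        n_bld, k, lam = 10, 4, 0.5
        samples = rng.gamma(2.0, 1.0, (365, n_bld))   # assumed daily outputs
        mu, cov = samples.mean(axis=0), np.cov(samples.T)

        best, best_val = None, -np.inf
        for subset in itertools.combinations(range(n_bld), k):
            idx = list(subset)
            mean_out = mu[idx].sum()
            var_out = cov[np.ix_(idx, idx)].sum()     # Var of summed output
            val = mean_out - lam * var_out
            if val > best_val:
                best, best_val = subset, val
        print("chosen rooftops:", best, "objective:", round(best_val, 3))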

  12. Global dynamic optimization approach to predict activation in metabolic pathways.

    PubMed

    de Hijas-Liste, Gundián M; Klipp, Edda; Balsa-Canto, Eva; Banga, Julio R

    2014-01-06

    During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles considering simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques considering two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework results in more biologically meaningful results. The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions which were comparable or better than those reported in previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to metabolic networks with arbitrary

  13. Global dynamic optimization approach to predict activation in metabolic pathways

    PubMed Central

    2014-01-01

    Background During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles considering simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. Results In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques considering two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework results in more biologically meaningful results. Conclusions The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions which were comparable or better than those reported in previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to

  14. Applying Soft Arc Consistency to Distributed Constraint Optimization Problems

    NASA Astrophysics Data System (ADS)

    Matsui, Toshihiro; Silaghi, Marius C.; Hirayama, Katsutoshi; Yokoo, Makoto; Matsuo, Hiroshi

    The Distributed Constraint Optimization Problem (DCOP) is a fundamental framework of multi-agent systems. With DCOPs a multi-agent system is represented as a set of variables and a set of constraints/cost functions. Distributed task scheduling and distributed resource allocation can be formalized as DCOPs. In this paper, we propose an efficient method that applies directed soft arc consistency to a DCOP. In particular, we focus on DCOP solvers that employ pseudo-trees. A pseudo-tree is a graph structure for a constraint network that represents a partial ordering of variables. Some pseudo-tree-based search algorithms perform optimistic searches using explicit/implicit backtracking in parallel. However, for cost functions taking a wide range of cost values, such exact algorithms require many search iterations. Therefore additional improvements are necessary to reduce the number of search iterations. A previous study used a dynamic programming-based preprocessing technique that estimates the lower bound values of costs. However, there are opportunities for further improvements of efficiency. In addition, modifications of the search algorithm are necessary to use the estimated lower bounds. The proposed method applies soft arc consistency (soft AC) enforcement to DCOP. In the proposed method, directed soft AC is performed based on a pseudo-tree in a bottom up manner. Using the directed soft AC, the global lower bound value of cost functions is passed up to the root node of the pseudo-tree. It also totally reduces values of binary cost functions. As a result, the original problem is converted to an equivalent problem. The equivalent problem is efficiently solved using common search algorithms. Therefore, no major modifications are necessary in search algorithms. The performance of the proposed method is evaluated by experimentation. The results show that it is more efficient than previous methods.

  15. A benders decomposition approach to multiarea stochastic distributed utility planning

    NASA Astrophysics Data System (ADS)

    McCusker, Susan Ann

    Until recently, small, modular generation and storage options, known as distributed resources (DRs), have been installed principally in areas too remote for economic power grid connection and in sensitive applications requiring backup capacity. Recent regulatory changes and DR advances, however, have led utilities to reconsider the role of DRs. To a utility facing distribution capacity bottlenecks or uncertain load growth, DRs can be particularly valuable since they can be dispersed throughout the system and constructed relatively quickly. DR value is determined by comparing its costs to avoided central generation expenses (i.e., marginal costs) and distribution investments. This requires a comprehensive central and local planning and production model, since central system marginal costs result from system interactions over space and time. This dissertation develops and applies an iterative generalized Benders decomposition approach to coordinate models for optimal DR evaluation. Three coordinated models exchange investment, net power demand, and avoided cost information to minimize overall expansion costs. Local investment and production decisions are made by a local mixed integer linear program. Central system investment decisions are made by an LP, and production costs are estimated by a stochastic multi-area production costing model with Kirchhoff's Voltage and Current Law constraints. The nested decomposition is a new and unique method for distributed utility planning that partitions the variables twice to separate local and central investment and production variables, and provides upper and lower bounds on expected expansion costs. Kirchhoff's Voltage Law imposes nonlinear, nonconvex constraints that preclude the use of LP if transmission capacity is available in a looped transmission system. This dissertation develops KVL constraint approximations that permit the nested decomposition to consider new transmission resources, while maintaining linearity in the three

  16. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning.

    PubMed

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-01

    In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows promise in optimizing the number

  17. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning

    SciTech Connect

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-15

    Purpose: In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. Methods: pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. Results: pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD) -based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows

  18. Aftershock Energy Distribution by Statistical Mechanics Approach

    NASA Astrophysics Data System (ADS)

    Daminelli, R.; Marcellini, A.

    2015-12-01

    The aim of our work is to find the most probable distribution of the energy of aftershocks. We started by applying one of the fundamental principles of statistical mechanics, which, in the case of aftershock sequences, can be expressed as: the greater the number of different ways in which the energy of aftershocks can be arranged among the energy cells in phase space, the more probable the distribution. We assume that each cell in phase space has the same possibility of being occupied, and that more than one cell in the phase space can have the same energy. Since seismic energy is proportional to products of different parameters, a number of different combinations of parameters can produce the same energy (e.g., different combinations of stress drop and fault area can release the same seismic energy). Let us assume that there are gi cells in the aftershock phase space characterised by the same released energy ɛi. We can therefore assume that Maxwell-Boltzmann statistics can be applied to aftershock sequences, with the proviso that the judgment on the validity of this hypothesis is the agreement with the data. The aftershock energy distribution can therefore be written as follows: n(ɛ) = A g(ɛ) exp(-βɛ), where n(ɛ) is the number of aftershocks with energy ɛ, and A and β are constants. Considering the above hypothesis, we can assume g(ɛ) is proportional to ɛ. We selected and analysed different aftershock sequences (data extracted from earthquake catalogs of SCEC, of INGV-CNT and other institutions) with a minimum magnitude retained ML=2 (in some cases ML=2.6) and a time window of 35 days. The results of our model are in agreement with the data, except in the very low energy band, where our model resulted in a moderate overestimation.
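
    The stated form n(ɛ) = A g(ɛ) exp(-βɛ) with g(ɛ) proportional to ɛ can be fitted to a binned catalog by nonlinear least squares. The sketch below uses synthetic energies drawn from a gamma(2) distribution, whose density has exactly this shape, so the fitted β should come out near 1; it is an illustration of the functional form, not the authors' analysis.

        # Fit n(e) = A * e * exp(-beta * e) to a binned synthetic "catalog".
        import numpy as np
        from scipy.optimize import curve_fit

        def n_model(e, A, beta):
            return A * e * np.exp(-beta * e)

        rng = np.random.default_rng(5)
        energies = rng.gamma(shape=2.0, scale=1.0, size=5000)  # density ~ e*exp(-e)
        counts, edges = np.histogram(energies, bins=40)
        centers = 0.5 * (edges[:-1] + edges[1:])

        (A_hat, beta_hat), _ = curve_fit(n_model, centers, counts, p0=(100.0, 1.0))
        print(f"A = {A_hat:.1f}, beta = {beta_hat:.2f}")  # beta near 1 expected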

  19. Correction of linear-array lidar intensity data using an optimal beam shaping approach

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Wang, Yuanqing; Yang, Xingyu; Zhang, Bingqing; Li, Fenfang

    2016-08-01

    The linear-array lidar has recently been developed and applied for its advantages of vertically non-scanning operation, large field of view, high sensitivity and high precision. The beam shaper is the key component for linear-array detection. However, traditional beam shaping approaches can hardly satisfy our requirement for obtaining unbiased and complete backscattered intensity data. The required beam distribution should be roughly oblate U-shaped rather than Gaussian or uniform. Thus, an optimal beam shaping approach is proposed in this paper. By employing a pair of conical lenses and a cylindrical lens behind the beam expander, the expanded Gaussian laser was shaped into a line-shaped beam whose intensity distribution is more consistent with the required distribution. To provide a better fit to the requirement, an off-axis method is adopted. The design of the optimal beam shaping module is mathematically explained and experimental verification of the module performance is also presented in this paper. The experimental results indicate that the optimal beam shaping approach can effectively correct the intensity image and provide a ~30% gain in detection area over the traditional approach, thus improving the imaging quality of linear-array lidar.

  20. A common distributed language approach to software integration

    NASA Technical Reports Server (NTRS)

    Antonelli, Charles J.; Volz, Richard A.; Mudge, Trevor N.

    1989-01-01

    An important objective in software integration is the development of techniques to allow programs written in different languages to function together. Several approaches are discussed toward achieving this objective and the Common Distributed Language Approach is presented as the approach of choice.

  1. Effects of optimism on creativity under approach and avoidance motivation

    PubMed Central

    Icekson, Tamar; Roskes, Marieke; Moran, Simone

    2014-01-01

    Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement—barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences. PMID:24616690

  2. Effects of optimism on creativity under approach and avoidance motivation.

    PubMed

    Icekson, Tamar; Roskes, Marieke; Moran, Simone

    2014-01-01

    Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement, barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences.

  3. Distributed bees algorithm parameters optimization for a cost efficient target allocation in swarms of robots.

    PubMed

    Jevtić, Aleksandar; Gutiérrez, Alvaro

    2011-01-01

    Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the distributed bees algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA's control parameters by means of a genetic algorithm. Experimental results show that with the optimized set of parameters, the deployment cost measured as the average distance traveled by the robots is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robots' distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce.

  4. Distributed Bees Algorithm Parameters Optimization for a Cost Efficient Target Allocation in Swarms of Robots

    PubMed Central

    Jevtić, Aleksandar; Gutiérrez, Álvaro

    2011-01-01

    Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the Distributed Bees Algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA’s control parameters by means of a Genetic Algorithm. Experimental results show that with the optimized set of parameters, the deployment cost measured as the average distance traveled by the robots is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robots’ distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce. PMID:22346677

  5. Adaptive Wing Camber Optimization: A Periodic Perturbation Approach

    NASA Technical Reports Server (NTRS)

    Espana, Martin; Gilyard, Glenn

    1994-01-01

    Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.

  6. Optimal control of underactuated mechanical systems: A geometric approach

    NASA Astrophysics Data System (ADS)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  7. A hybrid optimization approach in non-isothermal glass molding

    NASA Astrophysics Data System (ADS)

    Vu, Anh-Tuan; Kreilkamp, Holger; Krishnamoorthi, Bharathwaj Janaki; Dambon, Olaf; Klocke, Fritz

    2016-10-01

    Intensively growing demands for complex yet low-cost precision glass optics from today's photonic market motivate the development of an efficient and economically viable manufacturing technology for complex-shaped optics. Compared with state-of-the-art replication-based methods, non-isothermal glass molding turns out to be a promising innovative technology for cost-efficient manufacturing because of increased mold lifetime, lower energy consumption and high throughput from a fast process chain. However, the selection of parameters for the molding process usually requires a huge effort to satisfy the precise requirements of the molded optics and to avoid negative effects on the expensive tool molds. Therefore, to reduce experimental work at the beginning, a coupled CFD/FEM numerical model was developed to study the molding process. This research focuses on the development of a hybrid optimization approach in non-isothermal glass molding. To this end, an optimal configuration with two optimization stages for multiple quality characteristics of the glass optics is addressed. A hybrid Back-Propagation Neural Network (BPNN)-Genetic Algorithm (GA) approach is first applied to find the optimal process parameters and ensure the stability of the process. The second stage continues with the optimization of the glass preform using those optimal parameters to guarantee the accuracy of the molded optics. Experiments are performed to evaluate the effectiveness and feasibility of the model for process development in non-isothermal glass molding.

  8. A split-optimization approach for obtaining multiple solutions in single-objective process parameter optimization.

    PubMed

    Rajora, Manik; Zou, Pan; Yang, Yao Guang; Fan, Zhi Wen; Chen, Hung Yi; Wu, Wen Chieh; Li, Beizhi; Liang, Steven Y

    2016-01-01

    It can be observed from experimental data of different processes that different process parameter combinations can lead to the same performance indicators; however, during the optimization of process parameters with current techniques, only one of these combinations can be found when a given objective function is specified. The combination of process parameters obtained after optimization may not always be applicable in actual production or may lead to undesired experimental conditions. In this paper, a split-optimization approach is proposed for obtaining multiple solutions in a single-objective process parameter optimization problem. This is accomplished by splitting the original search space into smaller sub-search spaces and using a genetic algorithm (GA) in each sub-search space to optimize the process parameters. Two different methods, i.e., cluster centers and a hill-and-valley splitting strategy, were used to split the original search space, and their efficiency was measured against a method in which the original search space is split into equal smaller sub-search spaces. The proposed approach was used to obtain multiple optimal process parameter combinations for electrochemical micro-machining. The results of the case study showed that the cluster-center and hill-and-valley splitting strategies were more efficient in splitting the original search space than the method in which the original search space is divided into smaller equal sub-search spaces.
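
    The core idea, that several equally good parameter settings can be recovered by searching sub-regions independently, can be illustrated with the simplest splitting rule (equal sub-ranges) and a crude random search standing in for the GA; the objective below is a toy function with several equal peaks.

        # Equal-range splitting: search each sub-range separately and keep
        # every solution whose objective ties the best one found.
        import numpy as np

        def f(x):                           # toy multimodal objective
            return np.sin(3 * x) ** 2       # six equal peaks on [0, 2*pi]

        rng = np.random.default_rng(6)
        lo, hi, n_splits = 0.0, 2 * np.pi, 6
        solutions = []
        for i in range(n_splits):
            a = lo + (hi - lo) * i / n_splits
            b = lo + (hi - lo) * (i + 1) / n_splits
            x = rng.uniform(a, b, 200)      # random search per sub-range
            solutions.append(x[np.argmax(f(x))])

        top = max(f(np.array(solutions)))
        ties = [round(s, 3) for s in solutions if f(s) > top - 1e-2]
        print("distinct near-optimal parameter settings:", ties)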

  9. Activity-Centric Approach to Distributed Programming

    NASA Technical Reports Server (NTRS)

    Levy, Renato; Satapathy, Goutam; Lang, Jun

    2004-01-01

    The first phase of an effort to develop a NASA version of the Cybele software system has been completed. To give meaning to even a highly abbreviated summary of the modifications to be embodied in the NASA version, it is necessary to present the following background information on Cybele: Cybele is a proprietary software infrastructure for use by programmers in developing agent-based application programs [complex application programs that contain autonomous, interacting components (agents)]. Cybele provides support for event handling from multiple sources, multithreading, concurrency control, migration, and load balancing. A Cybele agent follows a programming paradigm, called activity-centric programming, that enables an abstraction over system-level thread mechanisms. Activity centric programming relieves application programmers of the complex tasks of thread management, concurrency control, and event management. In order to provide such functionality, activity-centric programming demands support of other layers of software. This concludes the background information. In the first phase of the present development, a new architecture for Cybele was defined. In this architecture, Cybele follows a modular service-based approach to coupling of the programming and service layers of software architecture. In a service-based approach, the functionalities supported by activity-centric programming are apportioned, according to their characteristics, among several groups called services. A well-defined interface among all such services serves as a path that facilitates the maintenance and enhancement of such services without adverse effect on the whole software framework. The activity-centric application-program interface (API) is part of a kernel. The kernel API calls the services by use of their published interface. This approach makes it possible for any application code written exclusively under the API to be portable for any configuration of Cybele.

  10. Co-optimal distribution of leaf nitrogen and hydraulic conductance in plant canopies.

    PubMed

    Peltoniemi, Mikko S; Duursma, Remko A; Medlyn, Belinda E

    2012-05-01

    Leaf properties vary significantly within plant canopies, due to the strong gradient in light availability through the canopy, and the need for plants to use resources efficiently. At high light, photosynthesis is maximized when leaves have a high nitrogen content and water supply, whereas at low light leaves have a lower requirement for both nitrogen and water. Studies of the distribution of leaf nitrogen (N) within canopies have shown that, if water supply is ignored, the optimal distribution is that where N is proportional to light, but that the gradient of N in real canopies is shallower than the optimal distribution. We extend this work by considering the optimal co-allocation of nitrogen and water supply within plant canopies. We developed a simple 'toy' two-leaf canopy model and optimized the distribution of N and hydraulic conductance (K) between the two leaves. We asked whether hydraulic constraints to water supply can explain shallow N gradients in canopies. We found that the optimal N distribution within plant canopies is proportional to the light distribution only if hydraulic conductance, K, is also optimally distributed. The optimal distribution of K is that where K and N are both proportional to incident light, such that optimal K is highest to the upper canopy. If the plant is constrained in its ability to construct higher K to sun-exposed leaves, the optimal N distribution does not follow the gradient in light within canopies, but instead follows a shallower gradient. We therefore hypothesize that measured deviations from the predicted optimal distribution of N could be explained by constraints on the distribution of K within canopies. Further empirical research is required on the extent to which plants can construct optimal K distributions, and whether shallow within-canopy N distributions can be explained by sub-optimal K distributions.
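
    The co-allocation argument can be made tangible with a deliberately crude two-leaf toy, which is not the authors' model: photosynthesis of each leaf is taken as co-limited by its light, its nitrogen share, and its hydraulic share, and a grid search finds the N and K splits that maximize the canopy total; both optimal shares then track the light gradient.

        # Illustrative two-leaf co-limitation toy; the min() model is invented.
        import numpy as np

        L = np.array([1.0, 0.3])            # incident light: sun vs shade leaf
        N_tot, K_tot = 1.0, 1.0             # fixed canopy budgets

        def canopy_photosynthesis(n_frac, k_frac):
            N = np.array([n_frac, 1 - n_frac]) * N_tot
            K = np.array([k_frac, 1 - k_frac]) * K_tot
            return np.minimum(L, np.minimum(N, K)).sum()

        grid = np.linspace(0.01, 0.99, 99)
        vals = [[canopy_photosynthesis(nf, kf) for kf in grid] for nf in grid]
        i, j = np.unravel_index(np.argmax(vals), (99, 99))
        print(f"N fraction to sun leaf: {grid[i]:.2f}, K fraction: {grid[j]:.2f}")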

  11. Monitoring Distributed Systems: A Relational Approach.

    DTIC Science & Technology

    1982-12-01

  12. Optimal service distribution in WSN service system subject to data security constraints.

    PubMed

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Gu, Qiong

    2014-08-04

    Services composition technology provides a flexible approach to building Wireless Sensor Network (WSN) Service Applications (WSA) in a service-oriented tasking system for WSN. Maintaining the data security of WSA is one of the most important goals in sensor network research. In this paper, we consider a WSN service-oriented tasking system in which the WSN Services Broker (WSB), as the resource management center, can map the service request from the user into a set of atom-services (AS) and send them to independent sensor nodes (SN) for parallel execution. The distribution of ASs among these SNs affects the data security as well as the reliability and performance of the WSA, because these SNs can be of different and independent specifications. By an optimal partition of the service into ASs and their distribution among SNs, the WSB can provide the maximum possible service reliability and/or expected performance subject to data security constraints. This paper proposes an algorithm for optimal service partition and distribution based on the universal generating function (UGF) and the genetic algorithm (GA) approach. An experimental analysis is presented to demonstrate the feasibility of the suggested algorithm.

  13. Optimal Service Distribution in WSN Service System Subject to Data Security Constraints

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Gu, Qiong

    2014-01-01

    Services composition technology provides a flexible approach to building Wireless Sensor Network (WSN) Service Applications (WSA) in a service-oriented tasking system for WSN. Maintaining the data security of WSA is one of the most important goals in sensor network research. In this paper, we consider a WSN service-oriented tasking system in which the WSN Services Broker (WSB), as the resource management center, can map the service request from the user into a set of atom-services (AS) and send them to independent sensor nodes (SN) for parallel execution. The distribution of ASs among these SNs affects the data security as well as the reliability and performance of the WSA, because these SNs can be of different and independent specifications. By an optimal partition of the service into ASs and their distribution among SNs, the WSB can provide the maximum possible service reliability and/or expected performance subject to data security constraints. This paper proposes an algorithm for optimal service partition and distribution based on the universal generating function (UGF) and the genetic algorithm (GA) approach. An experimental analysis is presented to demonstrate the feasibility of the suggested algorithm. PMID:25093346

  14. a Multivariate Approach to Optimize Subseafloor Observatory Designs

    NASA Astrophysics Data System (ADS)

    Lado Insua, T.; Moran, K.; Kulin, I.; Farrington, S.; Newman, J. B.; Morgan, S.

    2012-12-01

    Long-term monitoring of the subseafloor has become a more common practice in the last few decades. Systems such as the Circulation Obviation Retrofit Kit (CORK) have been used since the 1970s to provide the scientific community with time-series measurements of geophysical properties below the seafloor and, in the latest versions, with pore water sampling over time. The Simple Cabled Instrument for Measuring Parameters In-Situ (SCIMPI) is a new observatory instrument designed to study dynamic processes in the sub-seabed. SCIMPI makes time-series measurements of temperature, pressure and electrical resistivity at a series of depths in the sub-seafloor, tailored for site-specific scientific objectives. SCIMPI's modular design enables this type of site-specific configuration, based on the study goals combined with the sub-seafloor characteristics. The instrument is designed to take measurements in dynamic environments. After four years in development, SCIMPI is scheduled for first deployment on the Cascadia Margin within the NEPTUNE Canada observatory network. SCIMPI's flexible modular design simplifies the deployment and reduces the cost of measurements of physical properties. SCIMPI is expected to expand subseafloor observations into softer sediments and multiple depth intervals. In any observation system, the locations and number of sensors are a compromise between scientific objectives and cost. The subseafloor sensor positions within an observatory borehole have in the past been determined by identifying the major lithologies or major flux areas, based on individual analysis of the physical properties and logging measurements of the site. Here we present a multivariate approach for identifying the most significant depth intervals to instrument for long-term subseafloor observatories. Where borehole data are available (wireline logging, logging while drilling, physical properties and chemistry measurements), this approach will optimize the locations using an unbiased

  15. Adaptive optimal control of highly dissipative nonlinear spatially distributed processes with neuro-dynamic programming.

    PubMed

    Luo, Biao; Wu, Huai-Ning; Li, Han-Xiong

    2015-04-01

    Highly dissipative nonlinear partial differential equations (PDEs) are widely employed to describe the system dynamics of industrial spatially distributed processes (SDPs). In this paper, we consider the optimal control problem of general highly dissipative SDPs and propose an adaptive optimal control approach based on neuro-dynamic programming (NDP). Initially, Karhunen-Loève decomposition is employed to compute empirical eigenfunctions (EEFs) of the SDP based on the method of snapshots. These EEFs, together with a singular perturbation technique, are then used to obtain a finite-dimensional slow subsystem of ordinary differential equations that accurately describes the dominant dynamics of the PDE system. Subsequently, the optimal control problem is reformulated on the basis of the slow subsystem, which is further converted into the solution of a Hamilton-Jacobi-Bellman (HJB) equation. The HJB equation is a nonlinear PDE that has proven impossible to solve analytically. Thus, an adaptive optimal control method is developed via NDP that solves the HJB equation online, using a neural network (NN) to approximate the value function, and an online NN weight tuning law is proposed that does not require an initial stabilizing control policy. Moreover, by accounting for the NN estimation error, we prove that the original closed-loop PDE system with the adaptive optimal control policy is semiglobally uniformly ultimately bounded. Finally, the developed method is tested on a nonlinear diffusion-convection-reaction process and applied to a temperature cooling fin of a high-speed aerospace vehicle, and the achieved results show its effectiveness.
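
    The empirical-eigenfunction step is, computationally, a proper orthogonal decomposition of a snapshot matrix; a compact way to sketch it is via the SVD, as below, with a synthetic 1-D field standing in for the PDE solution data.

        # Method of snapshots via SVD: extract EEFs of a synthetic field.
        import numpy as np

        rng = np.random.default_rng(7)
        x = np.linspace(0, 1, 200)                  # spatial grid
        t = np.linspace(0, 1, 60)                   # snapshot times
        snaps = (np.sin(np.pi * x)[:, None] * np.exp(-t)[None, :]
                 + 0.3 * np.sin(2 * np.pi * x)[:, None] * np.exp(-4 * t)[None, :]
                 + 0.01 * rng.normal(size=(200, 60)))

        U, s, _ = np.linalg.svd(snaps, full_matrices=False)
        energy = np.cumsum(s ** 2) / np.sum(s ** 2)
        k = int(np.searchsorted(energy, 0.999)) + 1 # modes for 99.9% energy
        eefs = U[:, :k]                             # empirical eigenfunctions
        reduced = eefs.T @ snaps                    # slow-subsystem coordinates
        print(f"{k} EEFs capture {energy[k - 1]:.4%} of snapshot energy")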

  16. Accounting for the tongue-and-groove effect using a robust direct aperture optimization approach.

    PubMed

    Salari, Ehsan; Men, Chunhua; Romeijn, H Edwin

    2011-03-01

    Traditionally, the tongue-and-groove effect due to the multileaf collimator architecture in intensity-modulated radiation therapy (IMRT) has been deferred to the leaf sequencing stage. The authors propose a new direct aperture optimization method for IMRT treatment planning that explicitly incorporates dose calculation inaccuracies due to the tongue-and-groove effect into the treatment plan optimization stage. The authors avoid having to accurately estimate the dosimetric effects of the tongue-and-groove architecture by using lower and upper bounds on the dose distribution delivered to the patient. They then develop a model that yields a treatment plan that is robust with respect to the corresponding dose calculation inaccuracies. Tests on a set of ten clinical head-and-neck cancer cases demonstrate the effectiveness of the new method in developing robust treatment plans with tight dose distributions in targets and critical structures. This is contrasted with the very loose bounds on the dose distribution that are obtained by solving a traditional treatment plan optimization model that ignores tongue-and-groove effects in the treatment planning stage. A robust direct aperture optimization approach is thus proposed to account for the dosimetric inaccuracies caused by the tongue-and-groove effect. The experiments validate the ability of the proposed approach to design robust treatment plans regardless of the exact consequences of the tongue-and-groove architecture.

  17. TSP based Evolutionary optimization approach for the Vehicle Routing Problem

    NASA Astrophysics Data System (ADS)

    Kouki, Zoulel; Chaar, Besma Fayech; Ksouri, Mekki

    2009-03-01

    Vehicle Routing and Flexible Job Shop Scheduling Problems (VRP and FJSSP) are two common hard combinatorial optimization problems that show many similarities at the conceptual level [2, 4]. For both problems it has been shown that exact solution techniques fail to provide good-quality solutions in a reasonable amount of time when dealing with large-scale instances [1, 5, 14]. To overcome this weakness, we opt for metaheuristics and focus on evolutionary algorithms, which have been successfully used in scheduling problems [1, 5, 9]. In this paper we investigate the common properties of the VRP and the FJSSP in order to provide a new controlled evolutionary approach for CVRP optimization, inspired by the FJSSP evolutionary optimization algorithms introduced in [10].
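
    For reference, the kind of permutation-based evolutionary machinery such approaches build on looks like the following bare-bones GA (tournament selection, order crossover, swap mutation) on a plain TSP; vehicle capacities and multiple routes are deliberately omitted, so this is only a sketch of the underlying search, not the paper's method.

        # Bare-bones permutation GA for a toy TSP (no VRP constraints).
        import numpy as np

        rng = np.random.default_rng(8)
        cities = rng.random((15, 2))

        def tour_len(t):
            return sum(np.linalg.norm(cities[t[i]] - cities[t[(i + 1) % len(t)]])
                       for i in range(len(t)))

        def tournament(pop):
            i, j = rng.choice(len(pop), 2, replace=False)
            return pop[i] if tour_len(pop[i]) < tour_len(pop[j]) else pop[j]

        def ox(p1, p2):                     # order crossover (OX)
            n = len(p1)
            a, b = sorted(rng.choice(n, 2, replace=False))
            child = [-1] * n
            child[a:b] = p1[a:b]
            rest = [c for c in p2 if c not in child]
            for i in range(n):
                if child[i] == -1:
                    child[i] = rest.pop(0)
            return child

        pop = [list(rng.permutation(15)) for _ in range(40)]
        for _ in range(150):
            children = []
            for _ in range(40):
                child = ox(tournament(pop), tournament(pop))
                if rng.random() < 0.2:      # swap mutation
                    i, j = rng.choice(15, 2, replace=False)
                    child[i], child[j] = child[j], child[i]
                children.append(child)
            pop = sorted(pop + children, key=tour_len)[:40]  # elitist survival
        print("best tour length:", round(tour_len(pop[0]), 3))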

  18. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach

    PubMed Central

    Duarte, Belmiro P. M.; Wong, Weng Kee

    2014-01-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted. PMID:26512159

  19. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee

    2015-08-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.
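
    The quadrature step is easy to demonstrate in isolation. The sketch below, which is an illustration rather than the paper's SDP formulation, approximates the Bayesian D-criterion E_theta[log det M(theta, design)] for a two-parameter logistic model with a normal prior on the slope, using Gauss-Hermite nodes, and grid-searches a symmetric two-point design.

        # Bayesian D-criterion via Gauss-Hermite quadrature for a logistic
        # model; prior, grid, and the symmetric-design restriction are all
        # illustrative choices.
        import numpy as np

        nodes, weights = np.polynomial.hermite.hermgauss(15)
        mu, sigma = 1.0, 0.5                        # assumed slope prior: mean 1, sd 0.5
        thetas = mu + np.sqrt(2) * sigma * nodes    # change of variables
        w_q = weights / np.sqrt(np.pi)

        def log_det_M(xs, theta1, theta0=0.0):
            M = np.zeros((2, 2))
            for x in xs:                            # equal-weight design points
                eta = 1.0 / (1.0 + np.exp(-(theta0 + theta1 * x)))
                f = np.array([1.0, x])
                M += 0.5 * eta * (1.0 - eta) * np.outer(f, f)
            return np.linalg.slogdet(M)[1]

        def bayes_criterion(xs):
            return sum(w * log_det_M(xs, th) for w, th in zip(w_q, thetas))

        grid = np.linspace(0.1, 5.0, 50)
        x_star = max(grid, key=lambda x2: bayes_criterion([-x2, x2]))
        print(f"best symmetric design: +/-{x_star:.2f}")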

  20. Learning with distribution of optimized features for recognizing common CT imaging signs of lung diseases

    NASA Astrophysics Data System (ADS)

    Ma, Ling; Liu, Xiabi; Fei, Baowei

    2017-01-01

    Common CT imaging signs of lung diseases (CISLs) are defined as the imaging signs that frequently appear in lung CT images from patients. CISLs play important roles in the diagnosis of lung diseases. This paper proposes a novel learning method, namely learning with the distribution of optimized features (DOF), to effectively recognize the characteristics of CISLs. We improve the classification performance by learning the optimized features under different distributions. Specifically, we adopt the minimum spanning tree algorithm to capture the relationships among features and the discriminant ability of each feature, in order to select the most important features. To overcome the problem of various distributions within one CISL, we propose a hierarchical learning method. First, we use an unsupervised learning method to cluster samples into groups based on their distribution. Second, in each group, we use a supervised learning method to train a model based on the categories of CISLs. Finally, we obtain multiple classification decisions from the multiple trained models and use majority voting to reach the final decision. The proposed approach has been implemented on a set of 511 samples captured from human lung CT images and achieves a classification accuracy of 91.96%. The proposed DOF method is effective and can provide a useful tool for computer-aided diagnosis of lung diseases on CT images.
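
    The hierarchical idea (cluster by distribution, train one model per cluster, majority-vote the decisions) can be sketched in a few lines; the data here are synthetic and the MST-based feature selection step is omitted, so this is only an outline of the structure, not the paper's method.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                               random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    # Unsupervised step: group training samples by their distribution.
    groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xtr)

    # Supervised step: one classifier per group (skip degenerate single-class groups).
    models = [SVC().fit(Xtr[groups == g], ytr[groups == g])
              for g in range(3) if len(np.unique(ytr[groups == g])) > 1]

    # Combine the per-model decisions by majority vote (ties counted as positive).
    votes = np.stack([m.predict(Xte) for m in models])
    pred = (votes.mean(axis=0) >= 0.5).astype(int)
    print("accuracy:", round(float((pred == yte).mean()), 3))
    ```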

  1. Optimal control of suspended sediment distribution model of Talaga lake

    NASA Astrophysics Data System (ADS)

    Ratianingsih, R.; Resnawati, Azim, Mardlijah, Widodo, B.

    2017-08-01

    Talaga Lake is one of several lakes in Central Sulawesi whose characteristics make it a candidate for multi-purpose management: maintenance of the lake with respect to its sediment load, and algae farming for biodiesel production. This paper presents a suspended-sediment distribution model of Talaga Lake. The model is derived from the two-dimensional hydrodynamic shallow-water equations expressing the mass and momentum conservation laws of sediment transport. An order reduction of the model gives a hyperbolic system of six equations in the depth, the two directional velocity components, and the sediment concentration, while the bed elevation and the second-order turbulent diffusion and dispersion terms are neglected. The system is discretized and linearized so that it can be solved numerically by the Keller box method for given initial and boundary conditions. The solutions show that the downstream velocity plays a role in the transversal direction of the stream-function flow. The accumulation of sediment downstream indicates that the suspended sediment and its rate of change should be controlled by optimizing the downstream velocity and the transversal change of suspended sediment, in line with the needs of ideal algae growth.

  2. Excipient quantitation and drug distribution during formulation optimization.

    PubMed

    Forget, Robert; Spagnoli, Suzanne

    2006-06-07

    An oral granules formulation experienced high drug content and increased variability when the process was scaled up from lab scale to clinical manufacturing scale. It was suspected that mannitol, due to its smaller particle size and lower density, was preferentially lost during the top spray granulation process, thereby causing active enrichment in the remaining granules. In order to troubleshoot the problem, rapidly evaluate solutions, and further optimize the formulation, a simple and rapid analytical technique was required. Since mannitol does not have a UV chromophore, conventional HPLC/UV analysis could not be used. Three alternative analytical techniques were evaluated in terms of ease of use, reproducibility, linear dynamic range and rapidity. The HPLC/RID (refractive index detector) and HPLC/ELSD (evaporative light scattering detector) provided rapid, reproducible alternate techniques to HPLC/UV, whereas LC/MS showed poor reproducibility. Analysis of the sieve samples of the granulations by HPLC/RID and HPLC/ELSD confirmed that poor active drug distribution was due to mannitol losses in the filter bag, as well as increased low size granules low in active drug content. The resultant formulation process was modified and a reduction in the initial air flow at start-up reduced losses of mannitol in the granulator filters.

  3. Optimization based on benefit of regional energy suppliers of distributed generation in active distribution network

    NASA Astrophysics Data System (ADS)

    Huo, Xianxu; Li, Guodong; Jiang, Ling; Wang, Xudong

    2017-08-01

    With the development of the electricity market, distributed generation (DG) technology, and related policies, regional energy suppliers are encouraged to build DG. Against this background, the concept of the active distribution network (ADN) has been put forward. In this paper, a bi-level model of intermittent DG considering the benefit of regional energy suppliers is proposed. The objective of the upper level is the maximization of the benefit of regional energy suppliers. On this basis, the lower level is optimized for each scenario. The uncertainties of DG output and user load are considered, as well as four active management measures: demand-side management, curtailing the output power of DG, regulating reactive power compensation capacity, and regulating the on-load tap changer. Harmony search and particle swarm optimization are combined as a hybrid strategy to solve the model. The model and strategy are tested on the IEEE 33-node system, and the results of the case study indicate that they successfully increase both the capacity of DG and the benefit of regional energy suppliers.

  4. Simultaneous optimization of dose distributions and fractionation schemes in particle radiotherapy

    SciTech Connect

    Unkelbach, Jan; Zeng, Chuan; Engelsman, Martijn

    2013-09-15

    Purpose: The paper considers the fractionation problem in intensity modulated proton therapy (IMPT). Conventionally, IMPT fields are optimized independently of the fractionation scheme. In this work, we discuss the simultaneous optimization of fractionation scheme and pencil beam intensities. Methods: This is performed by allowing for distinct pencil beam intensities in each fraction, which are optimized using objective and constraint functions based on biologically equivalent dose (BED). The paper presents a model that mimics an IMPT treatment with a single incident beam direction for which the optimal fractionation scheme can be determined despite the nonconvexity of the BED-based treatment planning problem. Results: For this model, it is shown that a small α/β ratio in the tumor gives rise to a hypofractionated treatment, whereas a large α/β ratio gives rise to hyperfractionation. It is further demonstrated that, for intermediate α/β ratios in the tumor, a nonuniform fractionation scheme emerges, in which it is optimal to deliver different dose distributions in subsequent fractions. The intuitive explanation for this phenomenon is as follows: By varying the dose distribution in the tumor between fractions, the same total BED can be achieved with a lower physical dose. If it is possible to achieve this dose variation in the tumor without varying the dose in the normal tissue (which would have an adverse effect), the reduction in physical dose may lead to a net reduction of the normal tissue BED. For proton therapy, this is indeed possible to some degree because the entrance dose is mostly independent of the range of the proton pencil beam. Conclusions: The paper provides conceptual insight into the interdependence of optimal fractionation schemes and the spatial optimization of dose distributions. It demonstrates the emergence of nonuniform fractionation schemes that arise from the standard BED model when IMPT fields and fractionation scheme are optimized.
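
    For readers unfamiliar with the BED model the optimization is built on, the standard linear-quadratic expressions are as follows (notation assumed: d is the dose per fraction, n the number of fractions, and α/β the tissue-specific parameter referenced in the abstract):

    ```latex
    \mathrm{BED} \;=\; n d \left( 1 + \frac{d}{\alpha/\beta} \right),
    \qquad\text{and, for a nonuniform scheme with dose } d_t \text{ in fraction } t,\qquad
    \mathrm{BED} \;=\; \sum_{t=1}^{n} d_t \;+\; \frac{1}{\alpha/\beta} \sum_{t=1}^{n} d_t^{2}.
    ```

    The quadratic term is what couples the fractionation scheme to the spatial dose distribution and makes the joint problem nonconvex.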

  5. A novel linear programming approach to fluence map optimization for intensity modulated radiation therapy treatment planning.

    PubMed

    Romeijn, H Edwin; Ahuja, Ravindra K; Dempsey, James F; Kumar, Arvind; Li, Jonathan G

    2003-11-07

    We present a novel linear programming (LP) based approach for efficiently solving the intensity modulated radiation therapy (IMRT) fluence-map optimization (FMO) problem to global optimality. Our model overcomes the apparent limitations of a linear-programming approach by approximating any convex objective function by a piecewise linear convex function. This retains the flexibility offered by general convex objective functions while allowing the FMO problem to be formulated as an LP problem. In addition, a novel type of partial-volume constraint that bounds the tail averages of the differential dose-volume histograms of structures is imposed while retaining linearity, as an alternative approach to improve dose homogeneity in the target volumes and to attempt to spare as many critical structures as possible. The goal of this work is to develop a very rapid global optimization approach that finds high quality dose distributions. Implementation of this model has demonstrated excellent results. We found globally optimal solutions for eight 7-beam head-and-neck cases in less than 3 min of computational time on a single-processor personal computer without the use of partial-volume constraints. Adding such constraints increased the running times by a factor of 2-3, but improved the sparing of critical structures. All cases demonstrated excellent target coverage (> 95%), target homogeneity (< 10% overdosing and < 7% underdosing) and organ sparing using at least one of the two models.
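
    A toy version of the LP structure is sketched below under stated assumptions: a random dose-influence matrix, a single prescription level, and a two-piece (over/under-dose) instance of the piecewise-linear convex penalties, solved with scipy's linprog. It illustrates the auxiliary-variable construction, not the paper's clinical model.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    m, n = 40, 15                      # voxels, beamlets (toy sizes)
    A = rng.uniform(0, 1, (m, n))      # dose-influence matrix, dose d = A x
    P = np.full(m, 10.0)               # prescription dose per voxel
    w_over, w_under = 1.0, 3.0         # penalty slopes (assumed)

    # Decision vector z = [x (n beamlets), s (m overdose), t (m underdose)], z >= 0.
    c = np.concatenate([np.zeros(n), w_over * np.ones(m), w_under * np.ones(m)])
    I = np.eye(m)
    A_ub = np.block([[A, -I, np.zeros((m, m))],     #  A x - s <= P  (s >= overdose)
                     [-A, np.zeros((m, m)), -I]])   # -A x - t <= -P (t >= underdose)
    b_ub = np.concatenate([P, -P])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    x = res.x[:n]
    print("objective:", round(res.fun, 3), "max beamlet weight:", round(x.max(), 3))
    ```

    Adding more slope segments per voxel (more auxiliary columns) approximates any convex one-dimensional penalty while keeping the problem an LP.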

  6. A Global Optimization Approach to Multi-Polarity Sentiment Analysis

    PubMed Central

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method.

  7. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach

    PubMed Central

    Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.

    2014-01-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856

  8. Blood platelet production: a novel approach for practical optimization.

    PubMed

    van Dijk, Nico; Haijema, René; van der Wal, Jan; Sibinga, Cees Smit

    2009-03-01

    The challenge of production and inventory management for blood platelets (PLTs) is the requirement to meet highly uncertain demands. Shortages are to be minimized, if not avoided altogether. Overproduction, in turn, leads to high levels of outdating, as PLTs have a limited "shelf life." Outdating is to be minimized for ethical and cost reasons. Operations research (OR) methodology was applied to the PLT inventory management problem. The problem can be formulated in a general mathematical form. To solve this problem, a five-step procedure was used, based on a combination of two techniques: a mathematical technique called stochastic dynamic programming (SDP) and computer simulation. The approach identified an optimal production policy, leading to the computation of a simple and nearly optimal PLT production "order-up-to" rule. This rule prescribes a fixed order-up-to level for each day of the week. The approach was applied to a test study with actual data for a regional Dutch blood bank. The main finding in the test study was that outdating could be reduced from 15-20 percent to less than 0.1 percent with virtually no shortages. Blood group preferences and extending the shelf life beyond 5 days appeared to be of marginal effect. In this article the worlds of blood management and the mathematical discipline of OR are brought together for the optimization of blood PLT production. This leads to simple, nearly optimal blood PLT production policies that are suitable for practical implementation.
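
    The weekday "order-up-to" rule is easy to simulate; the following sketch evaluates such a policy on synthetic demand (the demand law, targets, and shelf life are illustrative assumptions, not the paper's fitted data).

    ```python
    import random

    random.seed(3)
    SHELF = 5                                     # days of remaining shelf life
    ORDER_UP_TO = [36, 36, 34, 34, 40, 20, 20]    # Mon..Sun targets (assumed)

    def simulate(days=7 * 2000):
        stock = [0] * SHELF          # stock[i] = units with i+1 days of life left
        produced = outdated = short = demanded = 0
        for day in range(days):
            make = max(0, ORDER_UP_TO[day % 7] - sum(stock))
            stock[-1] += make                    # fresh units enter the inventory
            produced += make
            demand = random.randint(15, 30)      # uncertain daily demand
            demanded += demand
            for i in range(SHELF):               # FIFO: issue oldest units first
                use = min(stock[i], demand)
                stock[i] -= use
                demand -= use
            short += demand                      # unmet demand today
            outdated += stock[0]                 # units expiring tonight
            stock = stock[1:] + [0]              # everything ages one day
        return outdated / produced, short / demanded

    out, sh = simulate()
    print(f"outdating {out:.2%} of production, shortage {sh:.2%} of demand")
    ```

    In the paper the order-up-to levels themselves come out of the SDP step; a simulator like this one is the second half of the combination, used to evaluate and fine-tune candidate rules.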

  9. An evolutionary based Bayesian design optimization approach under incomplete information

    NASA Astrophysics Data System (ADS)

    Srivastava, Rupesh; Deb, Kalyanmoy

    2013-02-01

    Design optimization in the absence of complete information about uncertain quantities has recently been gaining consideration, as expensive repetitive computation tasks are becoming tractable due to the advent of faster and parallel computers. This work uses Bayesian inference to quantify design reliability when only sample measurements of the uncertain quantities are available. A generalized Bayesian reliability based design optimization algorithm has been proposed and implemented for numerical as well as engineering design problems. The approach uses an evolutionary algorithm (EA) to obtain a trade-off front between design objectives and reliability. The Bayesian approach provides a well-defined link between the amount of available information and the reliability through a confidence measure, and the EA acts as an efficient optimizer for a discrete and multi-dimensional objective space. Additionally, a GPU-based parallelization study shows a computational speed-up of close to 100 times in a simulated scenario in which the constraint qualification checks are time consuming and would render a sequential implementation impractical for large sample sets. These results show promise for the use of parallel implementations of EAs in handling design optimization problems under uncertainties.

  10. The GRG approach for large-scale optimization

    SciTech Connect

    Drud, A.

    1994-12-31

    The Generalized Reduced Gradient (GRG) algorithm for general Nonlinear Programming (NLP) has been used successfully for over 25 years. The ideas of the original GRG algorithm have been modified and have absorbed developments in unconstrained optimization, linear programming, sparse matrix techniques, etc. The talk will review the essential aspects of the GRG approach and will discuss current development trends, especially related to very large models. Examples will be based on the CONOPT implementation.

  11. Optimized probabilistic quantum processors: A unified geometric approach

    NASA Astrophysics Data System (ADS)

    Bergou, Janos; Bagan, Emilio; Feldman, Edgar

    Using probabilistic and deterministic quantum cloning and quantum state separation as illustrative examples, we develop a complete geometric solution for finding their optimal success probabilities. The method is related to the approach that we introduced earlier for the unambiguous discrimination of more than two states. In some cases the method delivers analytical results; in others it leads to intuitive and straightforward numerical solutions. We also present implementations of the schemes based on linear optics employing few-photon interferometry.

  12. Learning approach to sampling optimization: Applications in astrodynamics

    NASA Astrophysics Data System (ADS)

    Henderson, Troy Allen

    A novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA), is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude more impulses than previously published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm, comparing its performance with other methods on two models, of varying complexity, of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.

  13. A fractal-based approach to lake size-distributions

    NASA Astrophysics Data System (ADS)

    Seekell, David A.; Pace, Michael L.; Tranvik, Lars J.; Verpoorter, Charles

    2013-02-01

    The abundance and size distribution of lakes is critical to assessing the role of lakes in regional and global biogeochemical processes. Lakes are fractal but do not always conform to the power law size-distribution typically associated with fractal geographical features. Here, we evaluate the fractal geometry of lakes with the goal of explaining apparently inconsistent observations of power law and non-power law lake size-distributions. The power law size-distribution is a special case for lakes near the mean elevation. Lakes in flat regions are power law distributed, while lakes in mountainous regions deviate from power law distributions. Empirical analyses of lake size data sets from the Adirondack Mountains in New York and the flat island of Gotland in Sweden support this finding. Our approach provides a unifying framework for lake size-distributions, indicates that small lakes cannot dominate total lake surface area, and underscores the importance of regional hypsometry in influencing lake size-distributions.

  14. Optimizing the Distribution of United States Army Officers

    DTIC Science & Technology

    2005-09-01

    [Table-of-contents fragments: Xpress; Global Distribution; Two-Step Distribution.] The challenge is distributing a fluctuating officer inventory among changing requirements while satisfying the competing demands of the Global War on Terror.

  15. Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach.

    PubMed

    Pinto, Rafael S; Saa, Alberto

    2015-12-01

    A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies that favor or suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic form ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically, and in a simpler way, from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
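
    The maximization condition lends itself to a very small hill-climb sketch: for an unweighted graph, ω^T L ω equals the sum of (ω_i − ω_j)² over the edges, so rewiring to increase it means connecting oscillators with dissimilar frequencies. The instance below is random, and the accept-if-improving rule is a simplification (the paper's algorithm also maintains connectivity).

    ```python
    import random
    import numpy as np

    random.seed(0)
    rng = np.random.default_rng(0)
    N, M = 20, 40                          # oscillators, edges (toy sizes)
    omega = rng.standard_normal(N)
    omega -= omega.mean()                  # zero-mean natural frequencies

    def objective(edges):
        L = np.zeros((N, N))
        for i, j in edges:
            L[i, i] += 1; L[j, j] += 1
            L[i, j] -= 1; L[j, i] -= 1
        return omega @ L @ omega           # = sum over edges of (omega_i - omega_j)^2

    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
    edges = set(random.sample(pairs, M))
    best = objective(edges)
    for _ in range(5000):
        e_out = random.choice(sorted(edges))   # candidate edge to remove
        e_in = random.choice(pairs)            # candidate edge to add
        if e_in in edges:
            continue
        trial = (edges - {e_out}) | {e_in}
        val = objective(trial)
        if val > best:                         # accept only improving rewirings
            edges, best = trial, val
    print("omega^T L omega =", round(best, 3))
    ```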

  16. Computational Approaches for Microalgal Biofuel Optimization: A Review

    PubMed Central

    Chaiboonchoe, Amphun

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization using available tools and technologies in the fields of systems and synthetic biology. Such approaches call for a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in the availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research. PMID:25309916

  17. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization using available tools and technologies in the fields of systems and synthetic biology. Such approaches call for a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in the availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research.

  18. A global optimization approach for Lennard-Jones microclusters

    NASA Astrophysics Data System (ADS)

    Maranas, Costas D.; Floudas, Christodoulos A.

    1992-11-01

    A global optimization approach is proposed for finding the global minimum energy configuration of Lennard-Jones microclusters. First, the original nonconvex total potential energy function, composed of rational polynomials, is transformed into the difference of two convex functions (DC transformation) via a novel procedure performed for each pair potential that constitutes the total potential energy function. Then, a decomposition strategy based on the global optimization (GOP) algorithm [C. A. Floudas and V. Visweswaran, Comput. Chem. Eng. 14, 1397 (1990); V. Visweswaran and C. A. Floudas, ibid. 14, 1419 (1990); Proc. Process Systems Eng. 1991, I.6.1; C. A. Floudas and V. Visweswaran, J. Opt. Theory Appl. (in press)] is designed to provide tight bounds on the global minimum through the solutions of a sequence of relaxed dual subproblems. A number of theoretical results are included which expedite the computational effort by exploiting the special mathematical structure of the problem. The proposed approach attains ε-convergence to the global minimum in a finite number of iterations. Based on this procedure, global optimum solutions are generated for small microclusters (N ≤ 7). For larger clusters (8 ≤ N ≤ 24), tight lower and upper bounds on the global solution are provided, serving as excellent initial points for local optimization approaches. Finally, improved lower bounds on the minimum interparticle distance at the global minimum are provided.
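
    For concreteness, a standard reduced form of the nonconvex objective in question (consistent with the abstract's description of rational-polynomial pair terms) is

    ```latex
    E(\mathbf{r}_1,\dots,\mathbf{r}_N) \;=\; \sum_{1 \le i < j \le N}
    \left( r_{ij}^{-12} \;-\; 2\, r_{ij}^{-6} \right),
    \qquad r_{ij} \;=\; \lVert \mathbf{r}_i - \mathbf{r}_j \rVert ,
    ```

    where each pair term attains its minimum value of -1 at r_ij = 1; these are the pair potentials that the DC transformation splits into a difference of two convex functions.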

  19. Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach

    NASA Astrophysics Data System (ADS)

    Pinto, Rafael S.; Saa, Alberto

    2015-12-01

    A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies that favor or suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic form ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically, and in a simpler way, from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.

  20. Algebraic Approach for Recovering Topology in Distributed Camera Networks

    DTIC Science & Technology

    2009-01-14

    [Abstract fragments: "Camera networks are widely used ..."; the proposed approach is evaluated on simulation as well as a real-world experimental set-up.]

  1. Multiplicative approximations, optimal hypervolume distributions, and the choice of the reference point.

    PubMed

    Friedrich, Tobias; Neumann, Frank; Thyssen, Christian

    2015-01-01

    Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems, as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing a given number of points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the influence of the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations.
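
    In the bi-objective (minimization) case the indicator itself is simple to state in code; the sketch below computes the exact dominated area for a candidate point distribution and a chosen reference point, both illustrative, assuming every point dominates the reference.

    ```python
    def hypervolume_2d(points, ref):
        """Area dominated by `points` and bounded by `ref` (both objectives minimized)."""
        pts = sorted(points)               # ascending f1; on a front, f2 then descends
        hv, prev_f2 = 0.0, ref[1]
        for f1, f2 in pts:
            if f2 < prev_f2:               # skip dominated points
                hv += (ref[0] - f1) * (prev_f2 - f2)
                prev_f2 = f2
        return hv

    front = [(0.1, 0.9), (0.3, 0.5), (0.6, 0.3), (0.9, 0.1)]
    print(hypervolume_2d(front, ref=(1.1, 1.1)))
    ```

    Shifting `ref` outward inflates the contribution of the extreme points, which is exactly the reference-point sensitivity the paper examines.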

  2. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if it can move to a new operating point without making a tradeoff. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios.

  3. Unified approach to optimal control systems with state constraints

    NASA Astrophysics Data System (ADS)

    Murillo, Martin Julio

    Many engineering systems have constraints or limitations in terms of voltage, current, speed, pressure, temperature, path, etc. In this dissertation, the optimal control of dynamical systems with state constraints is addressed. A unified approach that is simultaneously applicable to both continuous-time and discrete-time systems is developed, so that there is no need, as is presently done, to develop separate methodologies for continuous-time and discrete-time systems. The main contributions of the dissertation are: (1) development of a "slack variable" approach to solve discrete-time state constrained problems, (2) development of a unified approach to solve state unconstrained problems, (3) development of a unified approach to solve state constrained problems, and (4) development of numerical algorithms and software implementation to solve these problems. (The work in (1) was accepted for presentation with the citation: M. Murillo and D. S. Naidu, "Discrete-time optimal control systems with state constraints", AIAA Guidance, Control, and Navigation (GN&C) Conference and Exhibit, Monterey, CA, August 5-8, 2002.)

  4. Standardized approach for developing probabilistic exposure factor distributions

    SciTech Connect

    Maddalena, Randy L.; McKone, Thomas E.; Sohn, Michael D.

    2003-03-01

    The effectiveness of a probabilistic risk assessment (PRA) depends critically on the quality of input information that is available to the risk assessor, and specifically on the probabilistic exposure factor distributions that are developed and used in the exposure and risk models. Deriving probabilistic distributions for model inputs can be time consuming and subjective. The absence of a standard approach for developing these distributions can result in PRAs that are inconsistent and difficult to review by regulatory agencies. We present an approach that reduces subjectivity in the distribution development process without limiting the flexibility needed to prepare relevant PRAs. The approach requires two steps. First, we analyze data pooled at a population scale to (1) identify the most robust demographic variables within the population for a given exposure factor, (2) partition the population data into subsets based on these variables, and (3) construct archetypal distributions for each subpopulation. Second, we sample from these archetypal distributions according to site- or scenario-specific conditions to simulate exposure factor values and use these values to construct the scenario-specific input distribution. It is envisaged that the archetypal distributions from step 1 will be generally applicable, so risk assessors will not have to repeatedly collect and analyze raw data for each new assessment. We demonstrate the approach for two commonly used exposure factors, body weight (BW) and exposure duration (ED), using data for the U.S. population. For these factors we provide a first set of subpopulation-based archetypal distributions along with a methodology for using these distributions to construct relevant scenario-specific probabilistic exposure factor distributions.

  5. A Robot Trajectory Optimization Approach for Thermal Barrier Coatings Used for Free-Form Components

    NASA Astrophysics Data System (ADS)

    Cai, Zhenhua; Qi, Beichun; Tao, Chongyuan; Luo, Jie; Chen, Yuepeng; Xie, Changjun

    2017-08-01

    This paper is concerned with a robot trajectory optimization approach for thermal barrier coatings. As requirements for the high reproducibility of complex workpieces increase, an optimal thermal spraying trajectory should not only guarantee accurate control of the spray parameters defined by users (e.g., scanning speed, spray distance, scanning step) to achieve coating thickness homogeneity, but also help to homogenize the heat transfer distribution on the coating surface. A mesh-based trajectory generation approach is introduced in this work to generate path curves on a free-form component. Then, two types of meander trajectories are generated by applying different connection methods. Additionally, this paper presents a research approach for introducing heat transfer analysis into the trajectory planning process. Combining heat transfer analysis with trajectory planning overcomes the defects of traditional trajectory planning methods (e.g., local over-heating) and helps form a uniform temperature field by optimizing the time sequence of the path curves. The influence of two different robot trajectories on the process of heat transfer is estimated by coupled FEM models, which demonstrates the effectiveness of the presented optimization approach.

  6. A Robot Trajectory Optimization Approach for Thermal Barrier Coatings Used for Free-Form Components

    NASA Astrophysics Data System (ADS)

    Cai, Zhenhua; Qi, Beichun; Tao, Chongyuan; Luo, Jie; Chen, Yuepeng; Xie, Changjun

    2017-10-01

    This paper is concerned with a robot trajectory optimization approach for thermal barrier coatings. As requirements for the high reproducibility of complex workpieces increase, an optimal thermal spraying trajectory should not only guarantee accurate control of the spray parameters defined by users (e.g., scanning speed, spray distance, scanning step) to achieve coating thickness homogeneity, but also help to homogenize the heat transfer distribution on the coating surface. A mesh-based trajectory generation approach is introduced in this work to generate path curves on a free-form component. Then, two types of meander trajectories are generated by applying different connection methods. Additionally, this paper presents a research approach for introducing heat transfer analysis into the trajectory planning process. Combining heat transfer analysis with trajectory planning overcomes the defects of traditional trajectory planning methods (e.g., local over-heating) and helps form a uniform temperature field by optimizing the time sequence of the path curves. The influence of two different robot trajectories on the process of heat transfer is estimated by coupled FEM models, which demonstrates the effectiveness of the presented optimization approach.

  7. Optimal control of distributed parameter systems using adaptive critic neural networks

    NASA Astrophysics Data System (ADS)

    Padhi, Radhakant

    In this dissertation, two systematic optimal control synthesis techniques are presented for distributed parameter systems based on adaptive critic neural networks. Following the philosophy of dynamic programming, this adaptive critic optimal control synthesis approach has many desirable features, viz. a feedback form of the control, suitability for on-line implementation, no need for approximating the nonlinear system dynamics, etc. More importantly, unlike dynamic programming, it can accomplish these objectives without being overwhelmed by the computational and storage requirements. First, an approximate-dynamic-programming-based adaptive critic control synthesis formulation was carried out, assuming an approximation of the system dynamics in a discrete form. A variety of example problems were solved using this proposed general approach. Next, a different formulation is presented, which is capable of directly addressing the continuous form of the system dynamics for control design. This was obtained following the methodology of Galerkin-projection-based weighted residual approximation using a set of orthogonal basis functions. The basis functions were designed with the help of proper orthogonal decomposition, which leads to a very low-dimensional lumped parameter representation. The regulator problems of linear and nonlinear heat equations were revisited. Optimal controllers were synthesized, first assuming a continuous controller and then a set of discrete controllers in the spatial domain. Another contribution of this study is the formulation of simplified adaptive critics for a large class of problems, which can be interpreted as a significant improvement of the existing adaptive critic technique.
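
    The proper-orthogonal-decomposition step mentioned here is, in essence, an SVD of a snapshot matrix; a minimal sketch with synthetic heat-equation-like snapshots (all data and thresholds are illustrative) is:

    ```python
    import numpy as np

    # Build a snapshot matrix: each column is the field at one time instant.
    nx, nt = 100, 60
    x = np.linspace(0, 1, nx)
    snapshots = np.array([np.exp(-t / 10) * np.sin(np.pi * x)
                          + 0.1 * np.exp(-t / 3) * np.sin(3 * np.pi * x)
                          for t in range(nt)]).T          # shape (nx, nt)

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 0.999)) + 1           # modes for 99.9% energy
    basis = U[:, :r]                                       # POD basis (nx, r)
    coeffs = basis.T @ snapshots                           # reduced coordinates
    print(f"{r} modes capture {energy[r - 1]:.5f} of snapshot energy")
    ```

    Projecting the PDE onto `basis` yields the low-dimensional lumped parameter model on which the adaptive critic controllers are then synthesized.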

  8. Double-layer evolutionary algorithm for distributed optimization of particle detection on the Grid

    NASA Astrophysics Data System (ADS)

    Padée, Adam; Kurek, Krzysztof; Zaremba, Krzysztof

    2013-08-01

    Reconstruction of particle tracks from information collected by position-sensitive detectors is an important procedure in HEP experiments. It is usually controlled by a set of numerical parameters which have to be optimized manually. This paper proposes an automatic approach to this task by utilizing an evolutionary algorithm (EA) operating on both real-valued and binary representations. Because of the computational complexity of the task, a special distributed architecture of the algorithm is proposed, designed to be run in a grid environment. It is a two-level hierarchical hybrid utilizing an asynchronous master-slave EA on the level of clusters and an island-model EA on the level of the grid. The technical aspects of using production grid infrastructure are covered, including the communication protocols on both levels. The paper also deals with the problem of heterogeneity of the resources, presenting efficiency tests on a benchmark function. These tests confirm that even relatively small islands (clusters) can be beneficial to the optimization process when connected to larger ones. Finally, a real-life usage example is presented: the optimization of track reconstruction in the Large Angle Spectrometer of the NA-58 COMPASS experiment at CERN, using a sample of Monte Carlo simulated data. The overall reconstruction efficiency gain achieved by the proposed method is more than 4%, compared to the manually optimized parameters.

  9. Unsteady Adjoint Approach for Design Optimization of Flapping Airfoils

    NASA Technical Reports Server (NTRS)

    Lee, Byung Joon; Liou, Meng-Sing

    2012-01-01

    This paper describes work on optimizing the propulsive efficiency of flapping airfoils, i.e., improving the thrust while constraining the aerodynamic work during flapping flight, by changing their shape and trajectory of motion with the unsteady discrete adjoint approach. For unsteady problems, it is essential to properly resolve the time scales of the motion under consideration, and the resolution must be compatible with the objective sought. We include both the instantaneous and time-averaged (periodic) formulations in this study. For design optimization with shape or motion parameters, the time-averaged objective function is found to be more useful, while the instantaneous one is more suitable for flow control. The instantaneous objective function is operationally straightforward. The time-averaged objective function, on the other hand, requires additional steps in the adjoint approach; the unsteady discrete adjoint equations for a periodic flow must be reformulated and the corresponding system of equations solved iteratively. We compare the design results from shape and trajectory optimizations and investigate the physical relevance of the design variables to the flapping motion at on- and off-design conditions.

  10. Portfolio optimization in enhanced index tracking with goal programming approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. It aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that maximizes the mean return and minimizes the risk. The objective of this paper is to determine the portfolio composition and performance using a goal programming approach to enhanced index tracking, and to compare the portfolio with the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems that involve the two conflicting goals in enhanced index tracking: maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio obtained with the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, achieving a higher mean return and a lower risk without purchasing all the stocks in the market index.
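
    A weighted goal-programming portfolio of this kind can be sketched with two goals, a return target and a risk budget, and undesirable deviation variables to be minimized. The returns data, goal levels, weights, and use of mean absolute deviation as the risk measure below are all illustrative assumptions, not the paper's specification.

    ```python
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(2)
    T, n = 120, 8
    R = rng.normal(0.008, 0.04, (T, n))            # monthly stock returns (synthetic)
    index = R.mean(axis=1) + rng.normal(0, 0.005, T)
    G_RET, G_RISK = float(index.mean()), 0.02      # goal levels (assumed)

    w = cp.Variable(n, nonneg=True)
    d_ret = cp.Variable(nonneg=True)               # return under-achievement
    d_risk = cp.Variable(nonneg=True)              # risk over-achievement
    port = R @ w
    mu = cp.sum(port) / T
    mad = cp.sum(cp.abs(port - mu)) / T            # mean absolute deviation risk
    prob = cp.Problem(cp.Minimize(2.0 * d_ret + 1.0 * d_risk),
                      [cp.sum(w) == 1,
                       mu + d_ret >= G_RET,        # goal 1: match/exceed index return
                       mad - d_risk <= G_RISK])    # goal 2: keep risk within budget
    prob.solve()
    print("weights:", np.round(w.value, 3), " mean return:", float(mu.value))
    ```

    The relative weights on `d_ret` and `d_risk` encode the trade-off between the two goals that the abstract describes.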

  11. General approach and scope. [rotor blade design optimization

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    This paper describes a joint activity involving NASA and Army researchers at the NASA Langley Research Center to develop optimization procedures aimed at improving the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure will be closely coupled, while acoustics and airframe dynamics will be decoupled and be accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is to be integrated with the first three disciplines. Finally, in phase 3, airframe dynamics will be fully integrated with the other four disciplines. This paper deals with details of the phase 1 approach and includes details of the optimization formulation, design variables, constraints, and objective function, as well as details of discipline interactions, analysis methods, and methods for validating the procedure.

  12. Optimizing communication satellites payload configuration with exact approaches

    NASA Astrophysics Data System (ADS)

    Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi

    2015-12-01

    The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals in response to market demand or hardware failures. In order to meet modern requirements, the size and complexity of current communication payloads are increasing significantly. Consequently, optimal payload configuration, which was previously done manually by engineers with the use of computerized schematics, is becoming a difficult and time-consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions that optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on minimizing the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.

  13. A linear programming model for optimizing HDR brachytherapy dose distributions with respect to mean dose in the DVH-tail

    SciTech Connect

    Holm, Åsa; Larsson, Torbjörn; Tedgren, Åsa Carlsson

    2013-08-15

    Purpose: Recent research has shown that the optimization model hitherto used in high-dose-rate (HDR) brachytherapy corresponds weakly to the dosimetric indices used to evaluate the quality of a dose distribution. Although alternative models that explicitly include such dosimetric indices have been presented, the inclusion of the dosimetric indices explicitly yields intractable models. The purpose of this paper is to develop a model for optimizing dosimetric indices that is easier to solve than those proposed earlier. Methods: In this paper, the authors present an alternative approach for optimizing dose distributions for HDR brachytherapy where dosimetric indices are taken into account through surrogates based on the conditional value-at-risk concept. This yields a linear optimization model that is easy to solve, and has the advantage that the constraints are easy to interpret and modify to obtain satisfactory dose distributions. Results: The authors show by experimental comparisons, carried out retrospectively for a set of prostate cancer patients, that their proposed model corresponds well with constraining dosimetric indices. All modifications of the parameters in the authors' model yield the expected result. The dose distributions generated are also comparable to those generated by the standard model with respect to the dosimetric indices that are used for evaluating quality. Conclusions: The authors' new model is a viable surrogate to optimizing dosimetric indices and quickly and easily yields high quality dose distributions.

  14. A linear programming model for optimizing HDR brachytherapy dose distributions with respect to mean dose in the DVH-tail.

    PubMed

    Holm, Åsa; Larsson, Torbjörn; Tedgren, Åsa Carlsson

    2013-08-01

    Recent research has shown that the optimization model hitherto used in high-dose-rate (HDR) brachytherapy corresponds weakly to the dosimetric indices used to evaluate the quality of a dose distribution. Although alternative models that explicitly include such dosimetric indices have been presented, the inclusion of the dosimetric indices explicitly yields intractable models. The purpose of this paper is to develop a model for optimizing dosimetric indices that is easier to solve than those proposed earlier. In this paper, the authors present an alternative approach for optimizing dose distributions for HDR brachytherapy where dosimetric indices are taken into account through surrogates based on the conditional value-at-risk concept. This yields a linear optimization model that is easy to solve, and has the advantage that the constraints are easy to interpret and modify to obtain satisfactory dose distributions. The authors show by experimental comparisons, carried out retrospectively for a set of prostate cancer patients, that their proposed model corresponds well with constraining dosimetric indices. All modifications of the parameters in the authors' model yield the expected result. The dose distributions generated are also comparable to those generated by the standard model with respect to the dosimetric indices that are used for evaluating quality. The authors' new model is a viable surrogate to optimizing dosimetric indices and quickly and easily yields high quality dose distributions.
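
    The surrogate in question can be stated compactly. The usual Rockafellar-Uryasev form (notation assumed: doses d_1, ..., d_m in a structure, tail fraction α) is

    ```latex
    \mathrm{CVaR}_{\alpha}(d) \;=\; \min_{\zeta \in \mathbb{R}}
    \Bigl\{ \zeta \;+\; \frac{1}{\alpha m} \sum_{i=1}^{m} \bigl( d_i - \zeta \bigr)_{+} \Bigr\},
    \qquad (x)_{+} = \max(x, 0),
    ```

    which approximates the mean dose in the hottest α-fraction of the DVH-tail. Since the doses are affine in the dwell times, a constraint CVaR_α(d) ≤ B stays linear after introducing auxiliary variables s_i ≥ d_i − ζ, s_i ≥ 0, which is what makes the resulting optimization model an easy-to-solve linear program.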

  15. Stochastic real-time optimal control: A pseudospectral approach for bearing-only trajectory optimization

    NASA Astrophysics Data System (ADS)

    Ross, Steven M.

    A method is presented to couple and solve the optimal control and the optimal estimation problems simultaneously, allowing systems with bearing-only sensors to maneuver to obtain observability for relative navigation without unnecessarily detracting from a primary mission. A fundamentally new approach to trajectory optimization and the dual control problem is presented, constraining polynomial approximations of the Fisher Information Matrix to provide an information gradient and allow prescription of the level of future estimation certainty required for mission accomplishment. Disturbances, modeling deficiencies, and corrupted measurements are addressed recursively using Radau pseudospectral collocation methods and sequential quadratic programming for the optimal path and an Unscented Kalman Filter for the target position estimate. The underlying real-time optimal control (RTOC) algorithm is developed, specifically addressing limitations of current techniques that lose error integration. The resulting guidance method can be applied to any bearing-only system, such as submarines using passive sonar, anti-radiation missiles, or small UAVs seeking to land on power lines for energy harvesting. System integration, variable timing methods, and discontinuity management techniques are provided for actual hardware implementation. Validation is accomplished with both simulation and flight test, autonomously landing a quadrotor helicopter on a wire.

  16. Distributed Generators Allocation in Radial Distribution Systems with Load Growth using Loss Sensitivity Approach

    NASA Astrophysics Data System (ADS)

    Kumar, Ashwani; Vijay Babu, P.; Murty, V. V. S. N.

    2016-07-01

    Rapidly increasing electricity demands and the capacity shortage of transmission and distribution facilities are the main driving forces for the growth of distributed generation (DG) integration in power grids. One of the reasons for choosing a DG unit is its ability to support voltage in a distribution system. The selection of effective DG characteristics and DG parameters is a significant concern of distribution system planners seeking maximum potential benefits from the DG unit. The objective of this paper is to reduce power losses and improve the voltage profile of a radial distribution system through optimal allocation of multiple DG units. The main contributions of this paper are: (i) a combined power loss sensitivity (CPLS) based method for multiple DG locations, (ii) determination of optimal sizes for multiple DG units at unity and lagging power factor, (iii) the impact of DG installed at the optimal (i.e., combined load) power factor on system performance, (iv) the impact of load growth on optimal DG planning, (v) the impact of DG integration on the voltage stability index, and (vi) the economic and technical impact of DG integration in distribution systems. The load growth factor has been considered in the study, which is essential for planning and expansion of existing systems. The technical and economic aspects are investigated in terms of improvement in voltage profile, reduction in total power losses, cost of energy loss, cost of power obtained from DG, cost of power intake from the substation, and savings in the cost of energy loss. The results are obtained on the IEEE 69-bus radial distribution system and compared with other existing methods.

  17. Distributed Generators Allocation in Radial Distribution Systems with Load Growth using Loss Sensitivity Approach

    NASA Astrophysics Data System (ADS)

    Kumar, Ashwani; Vijay Babu, P.; Murty, V. V. S. N.

    2017-06-01

    Rapidly increasing electricity demands and the capacity shortage of transmission and distribution facilities are the main driving forces for the growth of distributed generation (DG) integration in power grids. One of the reasons for choosing a DG unit is its ability to support voltage in a distribution system. The selection of effective DG characteristics and DG parameters is a significant concern of distribution system planners seeking maximum potential benefits from the DG unit. The objective of this paper is to reduce power losses and improve the voltage profile of a radial distribution system through optimal allocation of multiple DG units. The main contributions of this paper are: (i) a combined power loss sensitivity (CPLS) based method for multiple DG locations, (ii) determination of optimal sizes for multiple DG units at unity and lagging power factor, (iii) the impact of DG installed at the optimal (i.e., combined load) power factor on system performance, (iv) the impact of load growth on optimal DG planning, (v) the impact of DG integration on the voltage stability index, and (vi) the economic and technical impact of DG integration in distribution systems. The load growth factor has been considered in the study, which is essential for planning and expansion of existing systems. The technical and economic aspects are investigated in terms of improvement in voltage profile, reduction in total power losses, cost of energy loss, cost of power obtained from DG, cost of power intake from the substation, and savings in the cost of energy loss. The results are obtained on the IEEE 69-bus radial distribution system and compared with other existing methods.

  18. Optimal trading strategies—a time series approach

    NASA Astrophysics Data System (ADS)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
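
    A minimal numerical companion to the mean-variance formulation: with an estimated covariance matrix C of the fluctuations, the minimum-variance weights under a unit budget constraint are w = C^{-1} 1 / (1' C^{-1} 1). The samples below are synthetic; nothing here reproduces the paper's auto-covariance cleaning strategies.

    ```python
    import numpy as np

    # Closed-form minimum-variance solution w = C^{-1} 1 / (1' C^{-1} 1).
    rng = np.random.default_rng(0)
    samples = rng.normal(size=(500, 4))        # hypothetical fluctuation samples
    C = np.cov(samples, rowvar=False)
    ones = np.ones(C.shape[0])
    w = np.linalg.solve(C, ones)
    w /= ones @ w                              # enforce the budget constraint
    print(w, w @ C @ w)                        # weights and achieved variance
    ```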

  19. Phase retrieval with transverse translation diversity: a nonlinear optimization approach.

    PubMed

    Guizar-Sicairos, Manuel; Fienup, James R

    2008-05-12

    We develop and test a nonlinear optimization algorithm for solving the problem of phase retrieval with transverse translation diversity, where the diverse far-field intensity measurements are taken after translating the object relative to a known illumination pattern. Analytical expressions for the gradient of a squared-error metric with respect to the object, illumination and translations allow joint optimization of the object and system parameters. This approach achieves superior reconstructions, with respect to a previously reported technique [H. M. L. Faulkner and J. M. Rodenburg, Phys. Rev. Lett. 93, 023903 (2004)], when the system parameters are inaccurately known or in the presence of noise. Applicability of this method for samples that are smaller than the illumination pattern is explored.

  20. Optimal approach to quantum communication using dynamic programming

    PubMed Central

    Jiang, Liang; Taylor, Jacob M.; Khaneja, Navin; Lukin, Mikhail D.

    2007-01-01

    Reliable preparation of entanglement between distant systems is an outstanding problem in quantum information science and quantum communication. In practice, this has to be accomplished by noisy channels (such as optical fibers) that generally result in exponential attenuation of quantum signals at large distances. A special class of quantum error correction protocols, quantum repeater protocols, can be used to overcome such losses. In this work, we introduce a method for systematically optimizing existing protocols and developing more efficient protocols. Our approach makes use of a dynamic programming-based searching algorithm, the complexity of which scales only polynomially with the communication distance, letting us efficiently determine near-optimal solutions. We find significant improvements in both the speed and the final-state fidelity for preparing long-distance entangled states. PMID:17959783

  1. A Bayesian optimization approach for wind farm power maximization

    NASA Astrophysics Data System (ADS)

    Park, Jinkyoo; Law, Kincho H.

    2015-03-01

    The objective of this study is to develop a model-free optimization algorithm to improve the total wind farm power production in a cooperative game framework. Conventionally, for a given wind condition, an individual wind turbine maximizes its own power production without taking into consideration the conditions of other wind turbines. Under this greedy control strategy, the wake formed by an upstream wind turbine, through the reduced wind speed and the increased turbulence intensity inside the wake, lowers the power production of the downstream wind turbines. To increase the overall wind farm power production, researchers have proposed cooperative wind turbine control approaches to coordinate the actions that mitigate wake interference among the wind turbines and thus increase the total wind farm power production. This study explores the use of a data-driven optimization approach to identify the optimum coordinated control actions in real time using a limited amount of data. Specifically, we propose the Bayesian Ascent (BA) method, which combines the strengths of Bayesian optimization and trust-region optimization algorithms. Using Gaussian process regression, BA requires only a small number of data points to model the complex target system. Furthermore, owing to the trust-region constraint on the sampling procedure, BA tends to increase the target value and converge toward the optimum. Simulation studies using analytical functions show that the BA method can achieve an almost monotone increase in a target value with rapid convergence. BA is also implemented and tested in a laboratory setting to maximize the total power using two scaled wind turbine models.
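
    The sketch below shows one plausible reading of a trust-region-constrained Bayesian step (a Gaussian process surrogate queried only near the current best point); it is not the authors' exact Bayesian Ascent algorithm, and the target function is a hypothetical stand-in for farm power.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def power(x):                    # hypothetical stand-in for total farm power
        return -np.sum((x - 0.6) ** 2)

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, size=(5, 2))                 # initial control settings
    y = np.array([power(x) for x in X])
    radius = 0.2                                       # trust-region size

    for _ in range(20):
        gp = GaussianProcessRegressor().fit(X, y)      # GP surrogate of target
        best = X[np.argmax(y)]
        cand = np.clip(best + rng.uniform(-radius, radius, (200, 2)), 0, 1)
        x_next = cand[np.argmax(gp.predict(cand))]     # ascend the surrogate
        X, y = np.vstack([X, x_next]), np.append(y, power(x_next))

    print(X[np.argmax(y)], y.max())                    # best setting found
    ```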

  2. Direct and Evolutionary Approaches for Optimal Receiver Function Inversion

    NASA Astrophysics Data System (ADS)

    Dugda, Mulugeta Tuji

    Receiver functions are time series obtained by deconvolving vertical component seismograms from radial component seismograms. Receiver functions represent the impulse response of the earth structure beneath a seismic station. Generally, receiver functions consist of a number of seismic phases related to discontinuities in the crust and upper mantle. The relative arrival times of these phases are correlated with the locations of discontinuities as well as the media of seismic wave propagation. The Moho (Mohorovicic discontinuity) is a major interface or discontinuity that separates the crust and the mantle. In this research, automatic techniques were developed to determine the depth of the Moho from the earth's surface (the crustal thickness H) and the ratio of crustal seismic P-wave velocity (Vp) to S-wave velocity (Vs) (kappa = Vp/Vs). In this dissertation, an optimization problem of inverting receiver functions has been formulated to determine crustal parameters and the three associated weights using evolutionary and direct optimization techniques. The first technique developed makes use of the evolutionary Genetic Algorithms (GA) optimization technique. The second technique developed combines the direct Generalized Pattern Search (GPS) and evolutionary Fitness Proportionate Niching (FPN) techniques by exploiting their respective strengths. In a previous study, a Monte Carlo technique was utilized for determining variable weights in the H-kappa stacking of receiver functions. Compared to that variable-weights approach, the current GA and GPS-FPN techniques offer substantial time savings and are suitable for automatic and simultaneous determination of crustal parameters and appropriate weights. The GA implementation provides optimal or near-optimal weights necessary in stacking receiver functions as well as optimal H and kappa values simultaneously. Generally, the objective function of the H-kappa stacking problem
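
    For context, a compact sketch of plain H-kappa stacking with fixed weights (Zhu-Kanamori-style arrival-time formulas); the dissertation's contribution, automatic selection of the weights via GA or GPS-FPN, is replaced here by constants, and the synthetic receiver function is hypothetical.

    ```python
    import numpy as np

    def hk_stack(rf, t, p, Vp, H_grid, k_grid, w=(0.6, 0.3, 0.1)):
        """Grid-search the weighted stack of Ps, PpPs and PpSs amplitudes."""
        amp = lambda tau: np.interp(tau, t, rf)
        best, best_val = None, -np.inf
        for H in H_grid:
            for k in k_grid:
                qs = np.sqrt((k / Vp) ** 2 - p ** 2)   # S vertical slowness
                qp = np.sqrt(1 / Vp ** 2 - p ** 2)     # P vertical slowness
                s = (w[0] * amp(H * (qs - qp))         # Ps conversion
                     + w[1] * amp(H * (qs + qp))       # PpPs multiple
                     - w[2] * amp(2 * H * qs))         # PpSs (negative polarity)
                if s > best_val:
                    best, best_val = (H, k), s
        return best

    t = np.arange(0, 30, 0.05)
    rf = np.exp(-((t - 4.2) / 0.3) ** 2)               # synthetic Ps pulse
    print(hk_stack(rf, t, p=0.06, Vp=6.5,
                   H_grid=np.arange(20, 60, 0.5),
                   k_grid=np.arange(1.6, 2.0, 0.01)))
    ```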

  3. Evaluating the Effects of the Optimization on the Quality of Distributed Applications

    ERIC Educational Resources Information Center

    Dumitrascu, Eugen; Popa, Marius

    2007-01-01

    In this paper, we present the characteristic features of distributed applications. We also enumerate the modalities of optimizing them and the factors that influence the quality of distributed applications, as well as the way they are affected by the optimization processes. Moreover, we enumerate the quality characteristics of distributed…

  4. Model optimization of orthotropic distributed-mode loudspeaker using attached masses.

    PubMed

    Lu, Guochao; Shen, Yong

    2009-11-01

    The orthotropic model of the plate is established and a genetic simulated annealing algorithm is developed for optimization of the mode distribution of the orthotropic plate. The experimental results indicate that the orthotropic model simulates the real plate more accurately. Optimization aimed at an equal distribution of the modes in the orthotropic model is then performed to improve the corresponding sound pressure responses.

  5. OPTIMAL SCHEDULING OF BOOSTER DISINFECTION IN WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    Booster disinfection is the addition of disinfectant at locations distributed throughout a water distribution system. Such a strategy can reduce the mass of disinfectant required to maintain a detectable residual at points of consumption in the distribution system, which may lea...

  7. Rapid optimization of tension distribution for cable-driven parallel manipulators with redundant cables

    NASA Astrophysics Data System (ADS)

    Ouyang, Bo; Shang, Weiwei

    2016-03-01

    The solution of tension distributions is infinite for cable-driven parallel manipulators (CDPMs) with redundant cables. A rapid optimization method for determining the optimal tension distribution is presented. The new optimization method is primarily based on the geometric properties of a polyhedron and convex analysis. The computational efficiency of the optimization method is improved by the designed projection algorithm, and a fast algorithm is proposed to determine which two of the lines intersect at the optimal point. Moreover, a method for avoiding an operating point on the lower tension limit is developed. Simulation experiments are implemented on a six degree-of-freedom (6-DOF) CDPM with eight cables, and the results indicate that the new method is one order of magnitude faster than the standard simplex method. The optimal tension distribution is thus rapidly established in real time by the proposed method.
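
    The underlying feasibility problem can also be stated as a small linear program, a convenient (if slower) reference point for the paper's geometric method; the structure matrix, wrench and tension limits below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    m, n = 6, 8                                  # 6-DOF platform, 8 cables
    rng = np.random.default_rng(2)
    A = rng.normal(size=(m, n))                  # hypothetical structure matrix
    w = rng.normal(size=m)                       # external wrench on platform

    # Wrench balance A t = -w with tension limits; minimize total tension.
    res = linprog(c=np.ones(n), A_eq=A, b_eq=-w,
                  bounds=[(5.0, 500.0)] * n, method="highs")
    print(res.x if res.success else "pose infeasible for these limits")
    ```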

  8. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
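
    A minimal sketch of a shrinkage-regularized Tyler fixed-point iteration, in the spirit of the hybrid estimator studied here; the shrinkage intensity rho is fixed rather than tuned online as in the paper, and the heavy-tailed data are synthetic.

    ```python
    import numpy as np

    def tyler_shrinkage(X, rho=0.3, iters=50):
        """Fixed-point iteration for a shrunk Tyler scatter estimate."""
        n, p = X.shape
        C = np.eye(p)
        for _ in range(iters):
            d = np.einsum("ij,jk,ik->i", X, np.linalg.inv(C), X)  # x_i'C^-1 x_i
            S = (p / n) * (X / d[:, None]).T @ X                  # Tyler update
            C = (1 - rho) * S + rho * np.eye(p)                   # shrink to I
            C *= p / np.trace(C)                                  # fix the scale
        return C

    X = np.random.default_rng(3).standard_t(df=3, size=(200, 5))  # heavy tails
    print(np.round(tyler_shrinkage(X), 2))
    ```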

  9. Approaching direct optimization of as-built lens performance

    NASA Astrophysics Data System (ADS)

    McGuire, James P.; Kuper, Thomas G.

    2012-10-01

    We describe a method approaching direct optimization of the rms wavefront error of a lens including tolerances. By including the effect of tolerances in the error function, the designer can choose to improve the as-built performance with a fixed set of tolerances and/or reduce the cost of production lenses with looser tolerances. The method relies on the speed of differential tolerance analysis and has recently become practical due to the combination of continuing increases in computer hardware speed and multi-core processing. We illustrate the method's use on a Cooke triplet, a double Gauss, and two plastic mobile phone camera lenses.

  10. Multidisciplinary Design Optimization Under Uncertainty: An Information Model Approach (PREPRINT)

    DTIC Science & Technology

    2011-03-01


  11. Perspective: Codesign for materials science: An optimal learning approach

    NASA Astrophysics Data System (ADS)

    Lookman, Turab; Alexander, Francis J.; Bishop, Alan R.

    2016-05-01

    A key element of materials discovery and design is to learn from available data and prior knowledge to guide the next experiments or calculations in order to focus in on materials with targeted properties. We suggest that the tight coupling and feedback between experiments, theory and informatics demands a codesign approach, very reminiscent of computational codesign involving software and hardware in computer science. This requires dealing with a constrained optimization problem in which uncertainties are used to adaptively explore and exploit the predictions of a surrogate model to search the vast high dimensional space where the desired material may be found.

  12. The optimal solution prediction for genetic and distribution building algorithms with binary representation

    NASA Astrophysics Data System (ADS)

    Sopov, E.; Semenkina, O.

    2015-01-01

    Genetic and distribution-building algorithms with binary representation are analyzed. The property of convergence to the optimal solution is discussed. A novel convergence prediction method is proposed and investigated. The method is based on analysis of the dynamics of gene value probability distributions, so it can predict the gene values of the optimal solution to which the algorithm converges. Results of investigations of the prediction algorithm's performance are presented.

  13. The dependence of optimal fractionation schemes on the spatial dose distribution

    NASA Astrophysics Data System (ADS)

    Unkelbach, Jan; Craft, David; Salari, Ehsan; Ramakrishnan, Jagdish; Bortfeld, Thomas

    2013-01-01

    We consider the fractionation problem in radiation therapy. Tumor sites in which the dose-limiting organ at risk (OAR) receives a substantially lower dose than the tumor bear potential for hypofractionation even if the α/β-ratio of the tumor is larger than the α/β-ratio of the OAR. In this work, we analyze the interdependence of the optimal fractionation scheme and the spatial dose distribution in the OAR. In particular, we derive a criterion under which a hypofractionation regimen is indicated for both a parallel and a serial OAR. The approach is based on the concept of the biologically effective dose (BED). For a hypothetical homogeneously irradiated OAR, it has been shown that hypofractionation is suggested by the BED model if the α/β-ratio of the OAR is larger than the α/β-ratio of the tumor times the sparing factor, i.e., the ratio of the dose received by the OAR to that received by the tumor. In this work, we generalize this result to inhomogeneous dose distributions in the OAR. For a parallel OAR, we determine the optimal fractionation scheme by minimizing the integral BED in the OAR for a fixed BED in the tumor. For a serial structure, we minimize the maximum BED in the OAR. This leads to analytical expressions for an effective sparing factor for the OAR, which provides a criterion for hypofractionation. The implications of the model are discussed for lung tumor treatments. It is shown that the model supports hypofractionation for small tumors treated with rotation therapy, i.e., highly conformal techniques where a large volume of lung tissue is exposed to low but nonzero dose. For larger tumors, the model suggests hyperfractionation. We further discuss several non-intuitive interdependencies between optimal fractionation and the spatial dose distribution. For instance, lowering the dose in the lung via proton therapy does not necessarily provide a biological rationale for hypofractionation.
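
    For reference, the linear-quadratic bookkeeping the argument rests on, written out explicitly (with δ denoting the sparing factor; the inhomogeneous-dose generalization replaces δ by the paper's effective sparing factor):

    ```latex
    % BED for n fractions of dose d in the linear-quadratic model:
    \[
      \mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right).
    \]
    % With sparing factor \delta (OAR dose per unit tumor dose), the quoted
    % criterion indicates hypofractionation when
    \[
      (\alpha/\beta)_{\mathrm{OAR}} > \delta\,(\alpha/\beta)_{\mathrm{tumor}}.
    \]
    ```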

  14. Forging tool shape optimization using pseudo inverse approach and adaptive incremental approach

    NASA Astrophysics Data System (ADS)

    Halouani, A.; Meng, F. J.; Li, Y. M.; Labergère, C.; Abbès, B.; Lafon, P.; Guo, Y. Q.

    2013-05-01

    This paper presents a simplified finite element method called "Pseudo Inverse Approach" (PIA) for tool shape design and optimization in multi-step cold forging processes. The approach is based on the knowledge of the final part shape. Some intermediate configurations are introduced and corrected by using a free surface method to consider the deformation paths without contact treatment. A robust direct algorithm of plasticity is implemented by using the equivalent stress notion and tensile curve. Numerical tests have shown that the PIA is very fast compared to the incremental approach. The PIA is used in an optimization procedure to automatically design the shapes of the preform tools. Our objective is to find the optimal preforms which minimize the equivalent plastic strain and punch force. The preform shapes are defined by B-Spline curves. A simulated annealing algorithm is adopted for the optimization procedure. The forging results obtained by the PIA are compared to those obtained by the incremental approach to show the efficiency and accuracy of the PIA.

  15. Optimisation of polymer foam bubble expansion in extruder by residence time distribution approach

    NASA Astrophysics Data System (ADS)

    Larochette, Mathieu; Graebling, Didier; Léonardi, Frédéric

    2007-04-01

    In this work, we used the Residence Time Distribution (RTD) to study polystyrene foaming during an extrusion process. The extruder associated with a gear pump is simply and quantitatively described by three continuously stirred tank reactors with recycling loops and one plug-flow reactor. The blowing agent used is CO2, obtained by thermal decomposition of a chemical blowing agent (CBA). This approach makes it possible to optimize the density of the foam in accordance with the decomposition kinetics of the CBA.

  16. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and a comparison between the two models was then carried out. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining a sampling configuration and mapping the spatial distribution of soil organic matter with low cost and high efficiency.
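
    A generic simulated-annealing loop of the kind used for such spatial configuration problems is sketched below; the objective is a hypothetical placeholder, not the paper's soil-landscape criterion, and the candidate sites are abstract indices.

    ```python
    import math
    import random

    def objective(config):
        # Placeholder criterion: prefer configurations of distinct sites
        # (stands in for the paper's soil-landscape representativeness measure).
        return -len(set(config))

    def anneal(candidates, k, T=1.0, cooling=0.995, steps=5000):
        current = random.sample(candidates, k)
        best = current[:]
        for _ in range(steps):
            neighbor = current[:]
            neighbor[random.randrange(k)] = random.choice(candidates)  # perturb
            delta = objective(neighbor) - objective(current)
            if delta <= 0 or random.random() < math.exp(-delta / T):   # accept
                current = neighbor
                if objective(current) < objective(best):
                    best = current[:]
            T *= cooling                                               # cool
        return best

    print(anneal(list(range(200)), k=13))      # 13 selected sample locations
    ```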

  17. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  18. Optimization of minoxidil microemulsions using fractional factorial design approach.

    PubMed

    Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned

    2016-01-01

    The objective of this study was to apply fractional factorial design and multi-response optimization using a desirability function approach for developing topical microemulsions. Minoxidil (MX) was used as a model drug. Limonene was used as an oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants, and propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables, namely Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3), and limonene concentration (X4), on MX solubility (Y1), permeation flux (Y2), lag time (Y3), and deposition (Y4) of the MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1, X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1, X2. Three regression equations were obtained and used to calculate predicted values of responses Y1, Y2 and Y4. The predicted values matched the experimental values reasonably well, with high determination coefficients. Using the optimal desirability function, an optimized microemulsion demonstrating the highest MX solubility, permeation flux and skin deposition was obtained at low levels of X1, X2 and X4 and a high level of X3.

  19. An optimization approach for fitting canonical tensor decompositions.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.

  20. Silanization of glass chips—A factorial approach for optimization

    NASA Astrophysics Data System (ADS)

    Vistas, Cláudia R.; Águas, Ana C. P.; Ferreira, Guilherme N. M.

    2013-12-01

    Silanization of glass chips with 3-mercaptopropyltrimethoxysilane (MPTS) was investigated and optimized to generate a high-quality layer with well-oriented thiol groups. A full factorial design was used to evaluate the influence of silane concentration and reaction time. The stabilization of the silane monolayer by thermal curing was also investigated, and a disulfide reduction step was included to fully regenerate the thiol-modified surface function. Fluorescence analysis and water contact angle measurements were used to quantitatively assess the chemical modifications, wettability and quality of modified chip surfaces throughout the silanization, curing and reduction steps. The factorial design enables a systematic approach for the optimization of glass chips silanization process. The optimal conditions for the silanization were incubation of the chips in a 2.5% MPTS solution for 2 h, followed by a curing process at 110 °C for 2 h and a reduction step with 10 mM dithiothreitol for 30 min at 37 °C. For these conditions the surface density of functional thiol groups was 4.9 × 10^13 molecules/cm^2, which is similar to the expected maximum coverage obtained from the theoretical estimations based on projected molecular area (∼5 × 10^13 molecules/cm^2).

  1. A hypothesis-driven approach to optimize field campaigns

    NASA Astrophysics Data System (ADS)

    Nowak, Wolfgang; Rubin, Yoram; de Barros, Felipe P. J.

    2012-06-01

    Most field campaigns aim at helping in specified scientific or practical tasks, such as modeling, prediction, optimization, or management. Often these tasks involve binary decisions or seek answers to yes/no questions under uncertainty, e.g., Is a model adequate? Will contamination exceed a critical level? In this context, the information needs of hydro(geo)logical modeling should be satisfied with efficient and rational field campaigns, e.g., because budgets are limited. We propose a new framework to optimize field campaigns that defines the quest for defensible decisions as the ultimate goal. The key steps are to formulate yes/no questions under uncertainty as Bayesian hypothesis tests, and then use the expected failure probability of hypothesis testing as objective function. Our formalism is unique in that it optimizes field campaigns for maximum confidence in decisions on model choice, binary engineering or management decisions, or questions concerning compliance with environmental performance metrics. It is goal oriented, recognizing that different models, questions, or metrics deserve different treatment. We use a formal Bayesian scheme called PreDIA, which is free of linearization, and can handle arbitrary data types, scientific tasks, and sources of uncertainty (e.g., conceptual, physical, (geo)statistical, measurement errors). This reduces the bias due to possibly subjective assumptions prior to data collection and improves the chances of successful field campaigns even under conditions of model uncertainty. We illustrate our approach on two instructive examples from stochastic hydrogeology with increasing complexity.

  2. Wireless Sensing, Monitoring and Optimization for Campus-Wide Steam Distribution

    SciTech Connect

    Olama, Mohammed M; Allgood, Glenn O; Kuruganti, Phani Teja; Sukumar, Sreenivas R; Woodworth, Ken; Lake, Joe E

    2011-11-01

    The US Congress has passed legislation dictating that all government agencies establish a plan and process for improving energy efficiencies at their sites. In response to this legislation, Oak Ridge National Laboratory (ORNL) has recently conducted a pilot study to explore the deployment of a wireless sensor system for real-time, measurement-based energy efficiency optimization. With particular focus on the 12-mile-long steam distribution network on our campus, we propose an integrated system-level approach to optimize energy delivery within the steam distribution system. Our approach leverages an integrated wireless sensor and real-time monitoring capability. We make real-time state assessments of steam trap health and steam flow in the distribution system by mounting acoustic sensors on the steam pipes/traps/valves and processing the measurements of these sensors with state estimators for system health. Our assessments are based on a spectral-based energy signature scheme that interprets acoustic vibration sensor data to estimate steam flow rates and assess steam trap status. Experimental results show that the energy signature scheme has the potential to identify different steam trap states and has sufficient sensitivity to estimate flow rate. Moreover, results indicate a nearly quadratic relationship over the test region between the overall energy signature factor and the flow rate in the pipe. We are able to present the steam flow and steam trap status, sensor readings, and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. The goal is to achieve significant energy savings in steam lines by monitoring and acting on leaking steam pipes/traps/valves. We believe our demonstration serves as an instantiation of a platform that can be extended to include newer modalities to manage water flow, sewage and energy consumption.

  3. Optimal subinterval selection approach for power system transient stability simulation

    SciTech Connect

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. Finally, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  4. A Statistical Approach to Optimizing Concrete Mixture Design

    PubMed Central

    Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3^3). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m^3), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
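
    A sketch of the regression step: enumerate the 3^3 factor combinations, fit a quadratic response-surface model, and query it for candidate mixtures. The strength values are synthetic stand-ins for the measured data, and the generic quadratic model form is an assumption, not necessarily the paper's final model.

    ```python
    import numpy as np
    from itertools import product
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    levels = {"w_cm": [0.38, 0.43, 0.48],      # water/cementitious ratio
              "cm":   [350, 375, 400],         # cementitious content (kg/m^3)
              "f_ta": [0.35, 0.40, 0.45]}      # fine/total aggregate ratio
    X = np.array(list(product(*levels.values())))        # all 27 mixtures
    rng = np.random.default_rng(4)
    y = 60 - 40 * X[:, 0] + 0.02 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 1, 27)

    poly = PolynomialFeatures(2)                         # quadratic surface
    model = LinearRegression().fit(poly.fit_transform(X), y)
    print(model.predict(poly.transform([[0.40, 390, 0.42]])))  # predicted MPa
    ```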

  5. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGES

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. Finally, the performance of the proposed method is demonstrated with the GSO 37-bus system.

  6. Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach

    NASA Astrophysics Data System (ADS)

    Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar

    2013-06-01

    We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, as pointed out in the paper. As predicted by our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
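
    A minimal simulation of the threshold idea: hold a target fraction b of wealth in the stock and trade, paying a proportional cost, only when the realized fraction leaves [b - eps, b + eps]. The price path and cost level are synthetic, not the paper's calibrated values.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    prices = np.exp(np.cumsum(rng.normal(2e-4, 0.01, 2500)))  # log-normal walk
    b, eps, c = 0.5, 0.05, 0.001                 # target, threshold, cost rate
    cash, shares = 1 - b, b / prices[0]          # start at the target mix

    for p in prices[1:]:
        frac = shares * p / (cash + shares * p)
        if abs(frac - b) > eps:                  # threshold breached: rebalance
            wealth = cash + shares * p
            wealth -= c * abs(shares * p - b * wealth)   # proportional cost
            cash, shares = (1 - b) * wealth, b * wealth / p

    print(cash + shares * prices[-1])            # final cumulative wealth
    ```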

  7. Multiagent Task Coordination Using a Distributed Optimization Approach

    DTIC Science & Technology

    2015-09-01


  8. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computing Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  9. Hybrid oil film approach to measuring skin friction distribution

    NASA Astrophysics Data System (ADS)

    Kurita, Mitsuru; Iijima, Hidetoshi

    2017-05-01

    This paper describes a technique for quantitatively measuring the time-averaged skin friction distribution on a wind tunnel test model. The technique is a hybrid oil film approach based on combining the qualitative skin friction distribution obtained from luminescent oil film with quantitative local skin friction measurements obtained from oil film interferometry in a separate run under the same flow condition. To demonstrate its validity, the proposed method was applied to the flow field around a vortex generator on a flat plate, and successfully measured the quantitative skin friction distribution.

  10. A distributed computing approach to mission operations support. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1975-01-01

    Computing mission operation support includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  11. How do Chinese cities grow? A distribution dynamics approach

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Xin; He, Ling-Yun

    2017-03-01

    This paper examines the dynamic behavior of city size using a distribution dynamics approach with Chinese city data for the period 1984-2010. Instead of convergence, divergence or paralleled growth, multimodality and persistence are the dominant characteristics in the distribution dynamics of Chinese prefectural cities. Moreover, initial city size matters, initially small and medium-sized cities exhibit strong tendency of convergence, while large cities show significant persistence and multimodality in the sample period. Examination on the regional city groups shows that locational fundamentals have important impact on the distribution dynamics of city size.

  12. Evaluation of droplet size distributions using univariate and multivariate approaches.

    PubMed

    Gaunø, Mette Høg; Larsen, Crilles Casper; Vilhelmsen, Thomas; Møller-Sonnergaard, Jørn; Wittendorff, Jørgen; Rantanen, Jukka

    2013-01-01

    Pharmaceutically relevant material characteristics are often analyzed on the basis of univariate descriptors instead of utilizing the whole information available in the full distribution. One example is the droplet size distribution, which is often described by the median droplet size and the width of the distribution. The current study aimed to compare univariate and multivariate approaches to evaluating droplet size distributions. As a model system, the atomization of a coating solution from a two-fluid nozzle was investigated. The effects of three process parameters (concentration of ethyl cellulose in ethanol, atomizing air pressure, and flow rate of coating solution) on the droplet size and droplet size distribution were studied using a full mixed factorial design. The droplet size produced by a two-fluid nozzle was measured by laser diffraction and reported as a volume-based size distribution. Investigation of loading and score plots from principal component analysis (PCA) revealed additional information on the droplet size distributions, and it was possible to identify univariate statistics (volume median droplet size) that were similar despite originating from differing droplet size distributions. The multivariate data analysis proved to be an efficient tool for evaluating the full information contained in a distribution.
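
    A small sketch of the multivariate route: treat each measured volume-based distribution as one observation (one row), run PCA, and read scores and loadings instead of only the median size. The distributions below are synthetic log-normal shapes, not the study's data.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(6)
    sizes = np.linspace(1, 100, 60)                       # size classes (um)
    dists = np.array([np.exp(-0.5 * ((np.log(sizes) - mu) / 0.4) ** 2)
                      for mu in rng.uniform(2.5, 3.5, 12)])
    dists /= dists.sum(axis=1, keepdims=True)             # normalize each row

    pca = PCA(n_components=2).fit(dists)
    print(pca.explained_variance_ratio_)                  # variance captured
    print(pca.transform(dists)[:3])                       # scores of first runs
    ```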

  13. The determination and optimization of (rutile) pigment particle size distributions

    NASA Technical Reports Server (NTRS)

    Richards, L. W.

    1972-01-01

    A light scattering particle size test which can be used with materials having a broad particle size distribution is described. This test is useful for pigments. The relation between the particle size distribution of a rutile pigment and its optical performance in a gray tint test at low pigment concentration is calculated and compared with experimental data.

  14. Optimal shield mass distribution for space radiation protection

    NASA Technical Reports Server (NTRS)

    Billings, M. P.

    1972-01-01

    Computational methods have been developed and successfully used for determining the optimum distribution of space radiation shielding on geometrically complex space vehicles. These methods have been incorporated in the computer program SWORD for dose evaluation in complex geometry and for iterative calculation of the optimum distribution of (minimum) shield mass satisfying multiple acute and protracted dose constraints associated with each of several body organs.

  16. Optimal source distribution for focal boosts using high dose rate (HDR) brachytherapy alone in prostate cancer.

    PubMed

    Dankulchai, Pittaya; Alonzi, Roberto; Lowe, Gerry J; Burnley, James; Padhani, Anwar R; Hoskin, Peter J

    2014-10-01

    To investigate the optimal distribution of sources using high dose rate brachytherapy to deliver a focal boost to a dominant lesion within the whole prostate gland, based on multi-parametric magnetic resonance imaging (mpMRI). Sixteen patients with prostate cancer underwent mpMRI, each of which demonstrated a dominant lesion. There were single lesions in 6 patients, two lesions in 4, and three lesions in 6 patients. Two dosimetric models and their parameters were compared in each case. The first model used 10 mm intervals between needles, and the second model used additional needles at 5 mm intervals between each needle in the boost area. Three of thirty-two plans did not achieve the plan objectives; all three were in the first model. A higher median urethral volume was seen in the 'unsuccessful' group (2.7 cc vs. 1.9 cc, p = 0.12). Conformity indices of the second model were also better than those of the first model (COIN index: 0.716 vs. 0.643). Focal monotherapy based on mpMRI achieves optimal dosimetry by individualizing the needle positions using 5 mm spacing rather than 10 mm spacing within the boost volume. A larger urethral volume may have an adverse effect on this distribution. Formal clinical evaluation of this approach is currently underway. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. Collaborative Distributed Scheduling Approaches for Wireless Sensor Network

    PubMed Central

    Niu, Jianjun; Deng, Zhidong

    2009-01-01

    Energy constraints restrict the lifetime of wireless sensor networks (WSNs) with battery-powered nodes, which poses great challenges for their large-scale application. In this paper, we propose a family of collaborative distributed scheduling approaches (CDSAs) based on the Markov process to reduce the energy consumption of a WSN. The family of CDSAs comprises two approaches: a one-step collaborative distributed approach and a two-step collaborative distributed approach. The approaches enable nodes to learn behavioral information about their environment collaboratively and integrate sleep scheduling with transmission scheduling to reduce energy consumption. We analyze the adaptability and practicality features of the CDSAs. The simulation results show that the two proposed approaches effectively reduce nodes' energy consumption. Some other characteristics of the CDSAs, such as buffer occupation and packet delay, are also analyzed in this paper. We evaluate the CDSAs extensively on a 15-node WSN testbed. The test results show that the CDSAs conserve energy effectively and are feasible for real WSNs. PMID:22408491

  18. Feedback optimal control of distributed parameter systems by using finite-dimensional approximation schemes.

    PubMed

    Alessandri, Angelo; Gaggero, Mauro; Zoppoli, Riccardo

    2012-06-01

    Optimal control for systems described by partial differential equations is investigated by proposing a methodology to design feedback controllers in approximate form. The approximation stems from constraining the control law to take on a fixed structure, where a finite number of free parameters can be suitably chosen. The original infinite-dimensional optimization problem is then reduced to a mathematical programming one of finite dimension that consists in optimizing the parameters. The solution of such a problem is performed by using sequential quadratic programming. Linear combinations of fixed and parameterized basis functions are used as the structure for the control law, thus giving rise to two different finite-dimensional approximation schemes. The proposed paradigm is general since it allows one to treat problems with distributed and boundary controls within the same approximation framework. It can be applied to systems described by either linear or nonlinear elliptic, parabolic, and hyperbolic equations in arbitrary multidimensional domains. Simulation results obtained in two case studies show the potentials of the proposed approach as compared with dynamic programming.
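
    The sketch below mimics the idea on a 1-D heat equation: the feedback law is constrained to a two-parameter combination of fixed basis functions, and the finite-dimensional parameters are then optimized numerically (here by Nelder-Mead rather than the paper's sequential quadratic programming; the plant, basis and cost are all hypothetical).

    ```python
    import numpy as np
    from scipy.optimize import minimize

    N, dt, steps = 20, 1e-3, 200
    x = np.linspace(0, 1, N + 2)[1:-1]             # interior grid points
    dx2 = (1.0 / (N + 1)) ** 2
    phi = np.vstack([np.sin(np.pi * x),
                     np.sin(2 * np.pi * x)])       # fixed basis functions
    target = 0.5 * np.sin(np.pi * x)               # desired final profile

    def cost(theta):
        y = np.zeros(N)
        for _ in range(steps):                     # explicit Euler in time
            lap = np.roll(y, 1) - 2 * y + np.roll(y, -1)
            lap[0], lap[-1] = y[1] - 2 * y[0], y[-2] - 2 * y[-1]  # zero BCs
            u = (theta @ phi) * (1.0 + y.mean())   # fixed-structure feedback
            y = y + dt * (lap / dx2 + u)
        return np.sum((y - target) ** 2) + 1e-4 * np.sum(theta ** 2)

    res = minimize(cost, np.zeros(2), method="Nelder-Mead")
    print(res.x, res.fun)                          # optimized parameters, cost
    ```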

  19. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-01

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained through solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result that was 3% difference in fluence map and 1% difference in dose from the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet. Correspondingly, the computation time

  20. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained through solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result that was 3% difference in fluence map and 1% difference in dose from the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet. Correspondingly, the computation
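
    A tiny sketch of the re-allocation idea common to both versions of this record: after each plan-optimization pass, the Monte Carlo particle budget is redistributed across beamlets in proportion to their current intensities, with a floor so no beamlet starves. The budget and floor values are hypothetical.

    ```python
    import numpy as np

    def allocate(intensities, budget=1e7, floor=1e3):
        """Give each beamlet particles proportional to its current intensity."""
        w = np.maximum(np.asarray(intensities, dtype=float), 0.0)
        n = budget * w / w.sum()                   # proportional allocation
        return np.maximum(n, floor).astype(int)    # enforce a minimum count

    fluence = np.array([0.0, 0.02, 0.4, 1.3, 2.1])  # hypothetical intensities
    print(allocate(fluence))        # low-weight beamlets get few particles
    ```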

  1. Optimal Resource Placement in a Distributed System. (Extended Abstract).

    DTIC Science & Technology

    1980-08-01

    edge in the designated direction (towards or away from the root). Because we are looking only at trees, a necessary and sufficient condition for a...leaves. In fact, this is not strictly true unless t is a power of 2. The structure of optimal placement for non-power-of-two numbers of resources is...is an integer, but there are cases where it is demonstrably non-optimal. Nevertheless, it is always close to optimal, a fact that is crucial to the

  2. A Heuristic Approach to the Theater Distribution Problem

    DTIC Science & Technology

    2014-03-27

    integer programming model exists to search for optimal solutions to these problems, but it is fairly time consuming and produces only one of potentially...solutions are compared to those obtained by the integer programming approach. The heuristic models implemented in this research develop feasible...

  3. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.

  4. Open approach for machine diagnostics and process optimization

    NASA Astrophysics Data System (ADS)

    McLeod, C. Stuart; Thomas, David W.; West, Andrew A.; Armstrong, Neal A.

    1997-01-01

    Machine diagnostics and process optimization require efficient techniques for the real-time collection and dissemination of information to enterprise personnel. Open data presentations are required for the diverse software packages used by enterprise personnel, from process modeling and statistical process control to financial and Management Information Systems (MIS) packages. Current systems that enable rapid data collection tend to be vendor-specific, point-to-point applications that are difficult and expensive to update, extend and modify. An open architecture is required that is capable of providing low-cost, real-time collection and dissemination of information to end-user applications. The development of an open architecture within the object-oriented paradigm to solve a process optimization problem within a packaging organization is described in this paper. The architecture encompasses both high-level data dissemination and low-level data storage and communications. A robust communications link between the sensors/intelligent nodes positioned on shop-floor machines and the archive/dissemination medium is provided by a fieldbus network. The fieldbus communications link is configurable to allow periodic sampling/monitoring of shop-floor data and high-performance collection of data regarding specific processes or events. The data transmission techniques utilized allow high-performance collection of data without disrupting the object technology infrastructure. The Common Object Request Broker Architecture is utilized to provide truly distributed systems for the myriad of applications used by enterprise personnel.

  5. Steady shear flow thermodynamics based on a canonical distribution approach.

    PubMed

    Taniguchi, Tooru; Morriss, Gary P

    2004-11-01

    A nonequilibrium steady-state thermodynamics to describe shear flow is developed using a canonical distribution approach. We construct a canonical distribution for shear flow based on the energy in the moving frame using the Lagrangian formalism of the classical mechanics. From this distribution, we derive the Evans-Hanley shear flow thermodynamics, which is characterized by the first law of thermodynamics dE = T dS − Q dγ relating infinitesimal changes in energy E, entropy S, and shear rate γ with kinetic temperature T. Our central result is that the coefficient Q is given by Helfand's moment for viscosity. This approach leads to thermodynamic stability conditions for shear flow, one of which is equivalent to the positivity of the correlation function for Q. We show the consistency of this approach with the Kawasaki distribution function for shear flow, from which a response formula for viscosity is derived in the form of a correlation function for the time-derivative of Q. We emphasize the role of the external work required to sustain the steady shear flow in this approach, and show theoretically that the ensemble average of its power W must be non-negative. A nonequilibrium entropy, increasing in time, is introduced, so that the amount of heat based on this entropy is equal to the average of W. Numerical results from nonequilibrium molecular-dynamics simulation of two-dimensional many-particle systems with soft-core interactions are presented which support our interpretation.

  6. Recent progress in the statistical approach of parton distributions

    SciTech Connect

    Soffer, Jacques

    2011-07-15

    We recall the physical features of the parton distributions in the quantum statistical approach of the nucleon. Some predictions from a next-to-leading order QCD analysis are compared to recent experimental results. We also consider their extension to include their transverse momentum dependence.

  7. Correlation estimation and performance optimization for distributed image compression

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Cao, Lei; Cheng, Hui

    2006-01-01

    Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.

  8. OSSA - An optimized approach to severe accident management: EPR application

    SciTech Connect

    Sauvage, E. C.; Prior, R.; Coffey, K.; Mazurkiewicz, S. M.

    2006-07-01

    There is a recognized need to provide nuclear power plant technical staff with structured guidance for response to a potential severe accident condition involving core damage and potential release of fission products to the environment. Over the past ten years, many plants worldwide have implemented such guidance for their emergency technical support center teams, either by following one of the generic approaches or by developing fully independent approaches. There are many lessons to be learned from the experience of the past decade in developing, implementing, and validating severe accident management guidance. Also, though numerous basic approaches exist which share common principles, there are differences in the methodology and application of the guidelines. AREVA/Framatome-ANP is developing an optimized approach to severe accident management guidance in a project called OSSA ('Operating Strategies for Severe Accidents'). There are still numerous operating power plants which have yet to implement severe accident management programs. For these, the option to use an updated approach that makes full use of lessons learned and experience is seen as a major advantage. Very few of the current approaches cover all operating plant states, including shutdown states with the primary system closed and open. Although it is not necessary to develop an entirely new approach in order to add this capability, the opportunity has been taken to develop revised full-scope guidance covering all plant states, in addition to the fuel in the fuel building. The EPR includes, at the design phase, systems and measures to minimize the risk of severe accidents and to mitigate such potential scenarios. This presents a difference in comparison with existing plants, for which severe accidents were not considered in the design. Though developed for all types of plants, OSSA will also be applied to the EPR, with adaptations designed to take into account its favourable situation in that field.

  9. Sentence comprehension: A parallel distributed processing approach. Technical report

    SciTech Connect

    McClelland, J.L.; St John, M.; Taraban, R.

    1989-07-14

    Basic aspects of conventional approaches to sentence comprehension are reviewed, and some of the difficulties faced by models that take these approaches are pointed out. An alternative approach, based on the principles of parallel distributed processing, is described, and it is shown how it offers different answers to basic questions about the nature of the language processing mechanism. An illustrative simulation model captures the key characteristics of the approach and illustrates how it can cope with the difficulties faced by conventional models. Alternative ways of conceptualizing basic aspects of language processing within the framework of this approach are considered, along with how the approach can address several arguments that might be brought to bear against it, and avenues for future development are suggested.

  10. Use of marginal distributions constrained optimization (MADCO) for accelerated 2D MRI relaxometry and diffusometry

    NASA Astrophysics Data System (ADS)

    Benjamini, Dan; Basser, Peter J.

    2016-10-01

    Measuring multidimensional (e.g., 2D) relaxation spectra in NMR and MRI clinical applications is a holy grail of the porous media and biomedical MR communities. The main bottleneck is the inversion of Fredholm integrals of the first kind, an ill-conditioned problem requiring large amounts of data to stabilize a solution. We suggest a novel experimental design and processing framework to accelerate and improve the reconstruction of such 2D spectra that uses a priori information from the 1D projections of spectra, or marginal distributions. These 1D marginal distributions provide powerful constraints when 2D spectra are reconstructed, and their estimation requires an order of magnitude less data than a conventional 2D approach. This marginal distributions constrained optimization (MADCO) methodology is demonstrated here with a polyvinylpyrrolidone-water phantom that has 3 distinct peaks in the 2D D-T1 space. The stability, sensitivity to experimental parameters, and accuracy of this new approach are compared with conventional methods by serially subsampling the full data set. While the conventional, unconstrained approach performed poorly, the new method proved to be highly accurate and robust, requiring only a fraction of the data. Additionally, synthetic T1-T2 data are presented to explore the effects of noise on the estimations, and the performance of the proposed method with a smooth and realistic 2D spectrum. The proposed framework is quite general and can also be used with a variety of 2D MRI experiments (D-T2, T1-T2, D-D, etc.), making these potentially feasible for preclinical and even clinical applications for the first time.
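
    The sketch below is a minimal Python rendering of the idea of constraining a 2D inversion with its 1D marginals; the grid sizes, exponential kernels, noise level, and constraint weight lam are illustrative assumptions of ours rather than the MADCO implementation.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      n1, n2 = 12, 12                              # toy T1 x T2 grid
      T1, T2 = np.logspace(-2, 1, n1), np.logspace(-3, 0, n2)
      ti = np.linspace(0.01, 20, 18)[:, None]      # inversion times
      te = np.linspace(0.002, 3, 18)[:, None]      # echo times
      K1 = 1 - 2 * np.exp(-ti / T1)                # T1 kernel
      K2 = np.exp(-te / T2)                        # T2 kernel
      K = np.kron(K1, K2)                          # full 2D Fredholm kernel

      F_true = np.zeros((n1, n2)); F_true[3, 4] = 1.0; F_true[8, 9] = 0.7
      f_true = F_true.ravel()
      y = K @ f_true + 0.01 * rng.standard_normal(K.shape[0])

      # 1D marginals, assumed estimated from much cheaper 1D experiments.
      M1 = np.kron(np.eye(n1), np.ones((1, n2)))   # sums over T2
      M2 = np.kron(np.ones((1, n1)), np.eye(n2))   # sums over T1
      m1, m2 = M1 @ f_true, M2 @ f_true

      # Stack data fit and weighted marginal constraints; solve with f >= 0.
      lam = 10.0
      A = np.vstack([K, lam * M1, lam * M2])
      b = np.concatenate([y, lam * m1, lam * m2])
      f_est, _ = nnls(A, b)
      print("reconstruction error:", np.linalg.norm(f_est - f_true))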

  11. A generalized computationally efficient inverse characterization approach combining direct inversion solution initialization with gradient-based optimization

    NASA Astrophysics Data System (ADS)

    Wang, Mengyu; Brigham, John C.

    2017-03-01

    A computationally efficient gradient-based optimization approach for inverse material characterization from incomplete system response measurements that can utilize a generally applicable parameterization (e.g., finite element-type parameterization) is presented and evaluated. The key to this inverse characterization algorithm is the use of a direct inversion strategy with Gappy proper orthogonal decomposition (POD) response field estimation to initialize the inverse solution estimate prior to gradient-based optimization. Gappy POD is used to estimate the complete (i.e., all components over the entire spatial domain) system response field from incomplete (e.g., partial spatial distribution) measurements obtained from some type of system testing along with some amount of a priori information regarding the potential distribution of the unknown material property. The estimated complete system response is used within a physics-based direct inversion procedure with a finite element-type parameterization to estimate the spatial distribution of the desired unknown material property with minimal computational expense. Then, this estimated spatial distribution of the unknown material property is used to initialize a gradient-based optimization approach, which uses the adjoint method for computationally efficient gradient calculations, to produce the final estimate of the material property distribution. The three-step [(1) Gappy POD, (2) direct inversion, and (3) gradient-based optimization] inverse characterization approach is evaluated through simulated test problems based on the characterization of elastic modulus distributions with localized variations (e.g., inclusions) within simple structures. Overall, this inverse characterization approach is shown to efficiently and consistently provide accurate inverse characterization estimates for material property distributions from incomplete response field measurements. Moreover, the solution procedure is shown to be capable
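
    A minimal Python sketch of the Gappy POD initialization step (synthetic 1D fields and a random point mask are our assumptions; the paper works with finite element response fields): fit POD coefficients to the observed entries by least squares, then reconstruct the complete field.

      import numpy as np

      rng = np.random.default_rng(1)
      x = np.linspace(0, 1, 200)

      # A priori snapshot library of plausible fields -> truncated POD basis.
      snapshots = np.stack([np.exp(-(x - c) ** 2 / (2 * w ** 2))
                            for c in rng.uniform(0.2, 0.8, 40)
                            for w in (0.05, 0.1)], axis=1)
      U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
      Ur = U[:, :8]

      # "True" field, observed only at 25 scattered points.
      f_true = np.exp(-(x - 0.55) ** 2 / (2 * 0.07 ** 2))
      mask = rng.choice(len(x), size=25, replace=False)

      # Gappy POD: least squares on the observed rows, then reconstruct.
      a, *_ = np.linalg.lstsq(Ur[mask, :], f_true[mask], rcond=None)
      f_est = Ur @ a
      print("relative error:",
            np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true))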

  12. Differential Evolution Optimization of the SAR Distribution for Head and Neck Hyperthermia.

    PubMed

    Cappiello, G; McGinley, B; Elahi, M A; Drizdal, T; Paulides, M M; Glavin, M; O'Halloran, M; Jones, E

    2017-08-01

    Hyperthermia is an emerging cancer treatment modality, which involves applying heat to the malignant tumor. The heating can be delivered using electromagnetic (EM) energy, mostly in the radiofrequency (RF) or microwave range. Accurate patient-specific hyperthermia treatment planning (HTP) is essential for effective and safe treatments, in particular, for deep and loco-regional hyperthermia. An important aspect of HTP is the ability to focus microwave energy into the tumor and reduce the occurrence of hot spots in healthy tissue. This paper presents a method for optimizing the specific absorption rate (SAR) distribution for head and neck cancer hyperthermia treatment. The SAR quantifies the rate at which localized RF or microwave energy is absorbed by biological tissue when exposed to an EM field. A differential evolution (DE) optimization algorithm is proposed in order to improve the SAR coverage of the target region. The efficacy of the proposed algorithm is demonstrated by testing with the Erasmus MC patient dataset. DE is compared to the particle swarm optimization (PSO) method in terms of average performance and standard deviation and across various clinical metrics, such as the hot-spot-tumor SAR quotient (HTQ), treatment quantifiers, and temperature parameters. While hot spots in the SAR distribution remain a problem with current approaches, DE enhances the focusing of microwave energy absorption into the target region during hyperthermia treatment. In particular, DE offers improved performance compared to the PSO algorithm currently deployed in the clinic, with improvements in HTQ standard deviation ranging from 40.1% to 96.8% across six patients.
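
    As a schematic of the optimization loop (not the clinical planning system), the Python sketch below uses differential evolution to tune per-antenna amplitudes and phases against an HTQ-like objective; the random complex matrix E standing in for simulated E-fields, the voxel counts, and the 99th-percentile hot-spot definition are our assumptions.

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(2)
      n_ant, n_vox = 8, 400
      E = rng.standard_normal((n_vox, n_ant)) + 1j * rng.standard_normal((n_vox, n_ant))
      target = np.zeros(n_vox, dtype=bool); target[:40] = True   # tumor voxels

      def htq(params):
          """Hot-spot-to-target SAR quotient for one antenna setting."""
          w = params[:n_ant] * np.exp(1j * params[n_ant:])
          sar = np.abs(E @ w) ** 2                 # SAR ~ |total E-field|^2
          hot = np.percentile(sar[~target], 99)    # hottest healthy tissue
          return hot / sar[target].mean()

      bounds = [(0, 1)] * n_ant + [(0, 2 * np.pi)] * n_ant
      res = differential_evolution(htq, bounds, seed=2, maxiter=200)
      print("optimized HTQ:", res.fun)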

  13. A normative inference approach for optimal sample sizes in decisions from experience.

    PubMed

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    "Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.

  14. Three-dimensional electrical impedance tomography: a topology optimization approach.

    PubMed

    Mello, Luís Augusto Motta; de Lima, Cícero Ribeiro; Amato, Marcelo Britto Passos; Lima, Raul Gonzalez; Silva, Emílio Carlos Nelli

    2008-02-01

    Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. A sequence of linear programming problems, allowing for constraints, is solved utilizing this method. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus, increasing the accuracy of the finite element results). The algorithm is tested using numerically simulated data and also experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique to be applied to monitor lung aeration, including the possibility of imaging a pneumothorax.

  15. A Direct Method for Fuel Optimal Maneuvers of Distributed Spacecraft in Multiple Flight Regimes

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Cooley, D. S.; Guzman, Jose J.

    2005-01-01

    We present a method to solve the impulsive minimum fuel maneuver problem for a distributed set of spacecraft. We develop the method assuming a non-linear dynamics model and parameterize the problem to allow the method to be applicable to multiple flight regimes including low-Earth orbits, highly-elliptic orbits (HEO), Lagrange point orbits, and interplanetary trajectories. Furthermore, the approach is not limited by the inter-spacecraft separation distances and is applicable to both small formations as well as large constellations. Semianalytical derivatives are derived for the changes in the total ΔV with respect to changes in the independent variables. We also apply a set of constraints to ensure that the fuel expenditure is equalized over the spacecraft in formation. We conclude with several examples and present optimal maneuver sequences for both a HEO and a libration point formation.

  16. Mapping the distribution of malaria: current approaches and future directions

    USGS Publications Warehouse

    Johnson, Leah R.; Lafferty, Kevin D.; McNally, Amy; Mordecai, Erin A.; Paaijmans, Krijn P.; Pawar, Samraat; Ryan, Sadie J.; Chen, Dongmei; Moulin, Bernard; Wu, Jianhong

    2015-01-01

    Mapping the distribution of malaria has received substantial attention because the disease is a major source of illness and mortality in humans, especially in developing countries. It also has a defined temporal and spatial distribution. The distribution of malaria is most influenced by its mosquito vector, which is sensitive to extrinsic environmental factors such as rainfall and temperature. Temperature also affects the development rate of the malaria parasite in the mosquito. Here, we review the range of approaches used to model the distribution of malaria, from spatially explicit to implicit, mechanistic to correlative. Although current methods have significantly improved our understanding of the factors influencing malaria transmission, significant gaps remain, particularly in incorporating nonlinear responses to temperature and temperature variability. We highlight new methods to tackle these gaps and to integrate new data with models.

  17. An analytic approach to optimize tidal turbine fields

    NASA Astrophysics Data System (ADS)

    Pelz, P.; Metzler, M.

    2013-12-01

    Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources are being developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to generate electrical energy. Since the available power for hydrokinetic turbines is proportional to the projected cross-section area, fields of turbines are installed to scale shaft power. Each hydrokinetic turbine of a field can be considered as a disk actuator. In [1], the first author derives the optimal operation point for hydropower in an open channel. The present paper concerns a zero-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation and the momentum balance, an analytical approach is taken to calculate the coefficient of performance for hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.
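
    For orientation, the classical unbounded actuator-disk result (a simpler special case than the paper's open-channel model with bypass and free surface) can be reproduced in a few lines of Python: the power coefficient Cp(a) = 4a(1 - a)^2 in the axial induction factor a peaks at a = 1/3, the Betz limit 16/27.

      import numpy as np

      a = np.linspace(0.0, 0.5, 501)          # axial induction factor
      cp = 4 * a * (1 - a) ** 2               # unbounded-flow power coefficient
      i = cp.argmax()
      print("optimal a =", round(a[i], 3), " Cp_max =", round(cp[i], 4))
      # prints ~0.333 and ~0.5926 (= 16/27); confinement in a channel with
      # bypass, as treated in the paper, modifies this optimum.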

  18. Approaches of Russian oil companies to optimal capital structure

    NASA Astrophysics Data System (ADS)

    Ishuk, T.; Ulyanova, O.; Savchitz, V.

    2015-11-01

    Oil companies play a vital role in the Russian economy. Demand for hydrocarbon products will keep increasing for the nearest decades, simultaneously with population growth and social needs. A change in the raw-material orientation of the Russian economy and the transition to an innovative way of development do not exclude the development of the oil industry in the future. Moreover, society believes that this sector must bring the Russian economy onto the road of innovative development through neo-industrialization. To achieve this, government power as well as the capital management of companies are required. To form an optimal capital structure, it is necessary to minimize the cost of capital, decrease specific risks within existing limits, and maximize profitability. The capital structure analysis of Russian and foreign oil companies shows different approaches and reasons, as well as conditions and, consequently, different relationships between equity capital and debt capital and their costs, which demands an effective capital management strategy.
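
    The textbook trade-off behind "minimize the cost of capital" can be made concrete with a small Python sketch of the weighted average cost of capital (WACC); all rates, the tax shield, and the leverage penalties below are illustrative assumptions, not data on any actual oil company.

      def wacc(d, re=0.14, rd=0.06, tax=0.20):
          """WACC at debt ratio d = D/V, with simple leverage penalties."""
          rd_eff = rd + max(0.0, d - 0.5) * 0.10   # debt gets dearer past 50%
          re_eff = re + 0.06 * d                   # equity demands more with risk
          return (1 - d) * re_eff + d * rd_eff * (1 - tax)

      ratios = [i / 10 for i in range(10)]
      for d in ratios:
          print(f"D/V = {d:.1f}  WACC = {wacc(d):.4f}")
      print("leverage minimizing WACC in this toy model:",
            min(ratios, key=wacc))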

  19. An optimization approach and its application to compare DNA sequences

    NASA Astrophysics Data System (ADS)

    Liu, Liwei; Li, Chao; Bai, Fenglan; Zhao, Qi; Wang, Ying

    2015-02-01

    Studying the evolutionary relationships between biological sequences by comparing and analyzing gene sequences has become one of the main tasks in bioinformatics research. Many valid methods have been applied to DNA sequence alignment. In this paper, we propose a novel comparison method based on the Lempel-Ziv (LZ) complexity to compare biological sequences. Moreover, we introduce a new distance measure and make use of the corresponding similarity matrix to construct phylogenetic trees without multiple sequence alignment. Further, we construct phylogenetic trees for 24 species of Eutherian mammals and 48 countries of Hepatitis E virus (HEV) by an optimization approach. The results indicate that this new method improves the efficiency of sequence comparison and successfully constructs phylogenies.
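
    A compact Python sketch of the machinery involved (our implementation of the classical LZ76 complexity and of one LZ-based distance used in the sequence-comparison literature; the paper's exact measure may differ in detail):

      def lz_complexity(s: str) -> int:
          """Phrase count of the Lempel-Ziv (1976) exhaustive parsing."""
          i, k, l, c, k_max, n = 0, 1, 1, 1, 1, len(s)
          while True:
              if s[i + k - 1] == s[l + k - 1]:
                  k += 1
                  if l + k > n:
                      c += 1
                      break
              else:
                  k_max = max(k_max, k)
                  i += 1
                  if i == l:              # current phrase is complete
                      c += 1
                      l += k_max
                      if l + 1 > n:
                          break
                      i, k, k_max = 0, 1, 1
                  else:
                      k = 1
          return c

      def lz_distance(s: str, q: str) -> float:
          cs, cq = lz_complexity(s), lz_complexity(q)
          return (lz_complexity(s + q) - min(cs, cq)) / max(cs, cq)

      a = "ATGCGTATTGCAATGCGGTA"
      b = "ATGCGTATTGCTATGCGGTA"          # one substitution away from a
      c = "TTTTAAAACCCCGGGGATAT"          # very different composition
      print("d(a,b) =", round(lz_distance(a, b), 3))   # similar pair
      print("d(a,c) =", round(lz_distance(a, c), 3))   # dissimilar pair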

  20. Frost Formation: Optimizing solutions under a finite volume approach

    NASA Astrophysics Data System (ADS)

    Bartrons, E.; Perez-Segarra, C. D.; Oliet, C.

    2016-09-01

    A three-dimensional transient formulation of the frost formation process is developed by means of a finite volume approach. Emphasis is put on the frost surface boundary condition as well as the wide range of empirical correlations related to the thermophysical and transport properties of frost. A study of the numerical solution is made, establishing the parameters that ensure grid independence. Attention is given to the algorithm, the discretised equations and the code optimization through dynamic relaxation techniques. A critical analysis of four cases is carried out by comparing solutions of several empirical models against tested experiments. As a result, a discussion on the performance of such parameters is started and a proposal of the most suitable models is presented.
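
    The dynamic relaxation mentioned above can be sketched generically in Python (a toy fixed-point solver of our own, not the frost code): the relaxation factor grows while successive updates agree in sign and is cut back when they oscillate.

      import math

      def relaxed_fixed_point(g, x0, omega=0.5, tol=1e-10, max_iter=1000):
          x, prev_dx = x0, 0.0
          for it in range(max_iter):
              dx = g(x) - x
              if abs(dx) < tol:
                  return x, it
              # grow omega on steady progress, shrink it on oscillation
              omega = min(1.0, omega * 1.1) if dx * prev_dx >= 0 else max(0.05, omega * 0.5)
              x += omega * dx
              prev_dx = dx
          return x, max_iter

      root, iters = relaxed_fixed_point(math.cos, x0=0.0)
      print(root, "reached in", iters, "iterations")   # x = cos(x) ~ 0.739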

  1. Optimizing Dendritic Cell-Based Approaches for Cancer Immunotherapy

    PubMed Central

    Datta, Jashodeep; Terhune, Julia H.; Lowenfeld, Lea; Cintolo, Jessica A.; Xu, Shuwen; Roses, Robert E.; Czerniecki, Brian J.

    2014-01-01

    Dendritic cells (DC) are professional antigen-presenting cells uniquely suited for cancer immunotherapy. They induce primary immune responses, potentiate the effector functions of previously primed T-lymphocytes, and orchestrate communication between innate and adaptive immunity. The remarkable diversity of cytokine activation regimens, DC maturation states, and antigen-loading strategies employed in current DC-based vaccine design reflect an evolving, but incomplete, understanding of optimal DC immunobiology. In the clinical realm, existing DC-based cancer immunotherapy efforts have yielded encouraging but inconsistent results. Despite recent U.S. Food and Drug Administration (FDA) approval of DC-based sipuleucel-T for metastatic castration-resistant prostate cancer, clinically effective DC immunotherapy as monotherapy for a majority of tumors remains a distant goal. Recent work has identified strategies that may allow for more potent “next-generation” DC vaccines. Additionally, multimodality approaches incorporating DC-based immunotherapy may improve clinical outcomes. PMID:25506283

  2. Model reduction for chemical kinetics: An optimization approach

    SciTech Connect

    Petzold, L.; Zhu, W.

    1999-04-01

    The kinetics of a detailed chemically reacting system can potentially be very complex. Although the chemist may be interested in only a few species, the reaction model almost always involves a much larger number of species. Some of those species are radicals, which are very reactive species and can be important intermediaries in the reaction scheme. A large number of elementary reactions can occur among the species; some of these reactions are fast and some are slow. The aim of simplified kinetics modeling is to derive the simplest reaction system which retains the essential features of the full system. An optimization-based method for reduction of the number of species and reactions in chemical kinetics models is described. Numerical results for several reaction mechanisms illustrate the potential of this approach.

  3. A simple optimization approach for improving target dose homogeneity in intensity-modulated radiotherapy for sinonasal cancer

    PubMed Central

    Lu, Jia-Yang; Zhang, Ji-Yong; Li, Mei; Cheung, Michael Lok-Man; Li, Yang-Kang; Zheng, Jing; Huang, Bao-Tian; Zhang, Wu-Zhe

    2015-01-01

    Homogeneous target dose distribution in intensity-modulated radiotherapy (IMRT) for sinonasal cancer (SNC) is challenging to achieve. To solve this problem, we established and evaluated a basal-dose-compensation (BDC) optimization approach, in which the treatment plan is further optimized based on the initial plans. Generally acceptable initial IMRT plans for thirteen patients were created and further optimized individually by (1) the BDC approach and (2) a local-dose-control (LDC) approach, in which the initial plan is further optimized by addressing hot and cold spots. We compared the plan qualities, total planning time and monitor units (MUs) among the initial, BDC, LDC IMRT plans and volumetric modulated arc therapy (VMAT) plans. The BDC approach provided significantly superior dose homogeneity/conformity by 23%–48%/6%–9% compared with both the initial and LDC IMRT plans, as well as reduced doses to the organs at risk (OARs) by up to 18%, with acceptable MU numbers. Compared with VMAT, BDC IMRT yielded superior homogeneity, inferior conformity and comparable overall OAR sparing. The planning of BDC, LDC IMRT and VMAT required 30, 59 and 58 minutes on average, respectively. Our results indicated that the BDC optimization approach can achieve significantly better dose distributions with shorter planning time in the IMRT for SNC. PMID:26497620

  4. A simple optimization approach for improving target dose homogeneity in intensity-modulated radiotherapy for sinonasal cancer.

    PubMed

    Lu, Jia-Yang; Zhang, Ji-Yong; Li, Mei; Cheung, Michael Lok-Man; Li, Yang-Kang; Zheng, Jing; Huang, Bao-Tian; Zhang, Wu-Zhe

    2015-10-26

    Homogeneous target dose distribution in intensity-modulated radiotherapy (IMRT) for sinonasal cancer (SNC) is challenging to achieve. To solve this problem, we established and evaluated a basal-dose-compensation (BDC) optimization approach, in which the treatment plan is further optimized based on the initial plans. Generally acceptable initial IMRT plans for thirteen patients were created and further optimized individually by (1) the BDC approach and (2) a local-dose-control (LDC) approach, in which the initial plan is further optimized by addressing hot and cold spots. We compared the plan qualities, total planning time and monitor units (MUs) among the initial, BDC, LDC IMRT plans and volumetric modulated arc therapy (VMAT) plans. The BDC approach provided significantly superior dose homogeneity/conformity by 23%-48%/6%-9% compared with both the initial and LDC IMRT plans, as well as reduced doses to the organs at risk (OARs) by up to 18%, with acceptable MU numbers. Compared with VMAT, BDC IMRT yielded superior homogeneity, inferior conformity and comparable overall OAR sparing. The planning of BDC, LDC IMRT and VMAT required 30, 59 and 58 minutes on average, respectively. Our results indicated that the BDC optimization approach can achieve significantly better dose distributions with shorter planning time in the IMRT for SNC.

  5. Using tailored methodical approaches to achieve optimal science outcomes

    NASA Astrophysics Data System (ADS)

    Wingate, Lory M.

    2016-08-01

    The science community is actively engaged in the research, development, and construction of instrumentation projects that they anticipate will lead to new science discoveries. There appears to be a very strong link between the quality of the activities used to complete these projects and having a fully functioning science instrument that will facilitate these investigations.[2] The combination of using internationally recognized standards within the disciplines of project management (PM) and systems engineering (SE) has been demonstrated to lead to the achievement of positive net effects and optimal project outcomes. Conversely, unstructured, poorly managed projects will lead to unpredictable, suboptimal project outcomes, ultimately affecting the quality of the science that can be done with the new instruments. The proposed application of these two specific methodical approaches, implemented as a tailorable suite of processes, is presented in this paper. Project management (PM) is accepted worldwide as an effective methodology used to control project cost, schedule, and scope. Systems engineering (SE) is an accepted method that is used to ensure that the outcomes of a project match the intent of the stakeholders, or, if they diverge, that the changes are understood, captured, and controlled. An appropriate application, or tailoring, of these disciplines can be the foundation upon which success in projects that support science can be optimized.

  6. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
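
    The signal-to-noise ratios referred to above have standard Taguchi forms that are easy to state in Python (the formulas are the textbook ones; the replicated run data are made up):

      import numpy as np

      def sn_larger_is_better(y):
          return -10 * np.log10(np.mean(1.0 / np.asarray(y) ** 2))

      def sn_smaller_is_better(y):
          return -10 * np.log10(np.mean(np.asarray(y) ** 2))

      def sn_nominal_is_best(y):
          y = np.asarray(y)
          return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

      run_a = [22.1, 21.8, 22.4, 21.9]      # replicated responses, setting A
      run_b = [23.0, 19.5, 25.1, 20.4]      # similar mean, noisier setting B
      print("S/N A:", round(sn_nominal_is_best(run_a), 2))   # higher = more robust
      print("S/N B:", round(sn_nominal_is_best(run_b), 2))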

  7. An improved ant colony optimization approach for optimization of process planning.

    PubMed

    Wang, JinFeng; Fan, XiaoLiang; Ding, Haimin

    2014-01-01

    Computer-aided process planning (CAPP) is an important interface between computer-aided design (CAD) and computer-aided manufacturing (CAM) in computer-integrated manufacturing environments (CIMs). In this paper, the process planning problem is described based on a weighted graph, and an ant colony optimization (ACO) approach is improved to deal with it effectively. The weighted graph consists of nodes, directed arcs, and undirected arcs, which denote operations, precedence constraints among operations, and the possible visited paths among operations, respectively. The ant colony goes through the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production costs (TPCs). A pheromone updating strategy proposed in this paper is incorporated in the standard ACO, which includes a Global Update Rule and a Local Update Rule, and a simple method of controlling the number of repetitions of the same process plan is designed to avoid local convergence. A case study has been carried out to study the influence of various parameters of ACO on the system performance. Extensive comparative experiments have been carried out to validate the feasibility and efficiency of the proposed approach.
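
    The two update rules named above have generic ant-colony-system forms, sketched in Python below (the parameter values and the dict-based pheromone store are our assumptions; the paper's exact rules are not reproduced):

      def local_update(tau, edge, rho=0.1, tau0=0.01):
          """Applied as each ant crosses an edge: decay toward tau0."""
          tau[edge] = (1 - rho) * tau[edge] + rho * tau0

      def global_update(tau, best_plan, best_cost, alpha=0.1):
          """Applied once per iteration: reinforce only the best plan found."""
          for edge in zip(best_plan, best_plan[1:]):
              tau[edge] = (1 - alpha) * tau[edge] + alpha / best_cost

      tau = {("op1", "op2"): 0.01, ("op2", "op3"): 0.01}
      global_update(tau, ["op1", "op2", "op3"], best_cost=42.0)
      print(tau)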

  8. Efficient network meta-analysis: a confidence distribution approach*

    PubMed Central

    Yang, Guang; Liu, Dungang; Liu, Regina Y.; Xie, Minge; Hoaglin, David C.

    2014-01-01

    Summary Network meta-analysis synthesizes several studies of multiple treatment comparisons to simultaneously provide inference for all treatments in the network. It can often strengthen inference on pairwise comparisons by borrowing evidence from other comparisons in the network. Current network meta-analysis approaches are derived from either conventional pairwise meta-analysis or hierarchical Bayesian methods. This paper introduces a new approach for network meta-analysis by combining confidence distributions (CDs). Instead of combining point estimators from individual studies in the conventional approach, the new approach combines CDs which contain richer information than point estimators and thus achieves greater efficiency in its inference. The proposed CD approach can efficiently integrate all studies in the network and provide inference for all treatments even when individual studies contain only comparisons of subsets of the treatments. Through numerical studies with real and simulated data sets, the proposed approach is shown to outperform or at least equal the traditional pairwise meta-analysis and a commonly used Bayesian hierarchical model. Although the Bayesian approach may yield comparable results with a suitably chosen prior, it is highly sensitive to the choice of priors (especially the prior of the between-trial covariance structure), which is often subjective. The CD approach is a general frequentist approach and is prior-free. Moreover, it can always provide a proper inference for all the treatment effects regardless of the between-trial covariance structure. PMID:25067933
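
    The basic CD-combination recipe underlying this line of work can be sketched in Python for normal CDs (a generic Stouffer-type combination with inverse-SE weights; the paper's network structure is not reproduced, and the study numbers below are made up):

      import numpy as np
      from scipy.stats import norm

      theta_hat = np.array([0.30, 0.12, 0.25])    # per-study effect estimates
      se = np.array([0.10, 0.15, 0.08])           # their standard errors
      w = 1.0 / se

      def combined_cd(theta):
          """H(t) = Phi( sum_i w_i Phi^{-1}(H_i(t)) / ||w|| )."""
          z = (np.atleast_1d(theta)[:, None] - theta_hat) / se
          return norm.cdf(z @ w / np.linalg.norm(w))

      grid = np.linspace(-0.5, 1.0, 20001)
      H = combined_cd(grid)                       # monotone in theta
      est = grid[np.searchsorted(H, 0.5)]         # CD median = point estimate
      lo, hi = grid[np.searchsorted(H, 0.025)], grid[np.searchsorted(H, 0.975)]
      print(f"combined estimate {est:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")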

  9. Communication Optimizations for a Wireless Distributed Prognostic Framework

    NASA Technical Reports Server (NTRS)

    Saha, Sankalita; Saha, Bhaskar; Goebel, Kai

    2009-01-01

    Distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics - particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation except for one significant step - resampling. We propose a new resampling scheme called parameterized resampling that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes, in the context of minimizing communication overhead, are also presented. A battery health management system is used as a target application. Our proposed resampling scheme performs significantly better than other schemes, reducing both the communication message length and the total number of communication messages exchanged while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system, as well as full implementation of the new schemes on Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.
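
    For context, the sketch below shows the standard systematic resampling step that schemes like the proposed one set out to make communication-friendly (a textbook implementation of ours, not the parameterized scheme itself); in a distributed filter, shipping the resulting index list between nodes is what drives the overhead.

      import numpy as np

      def systematic_resample(weights, rng):
          """Return particle indices drawn with a single random offset."""
          n = len(weights)
          positions = (rng.random() + np.arange(n)) / n
          return np.searchsorted(np.cumsum(weights), positions)

      rng = np.random.default_rng(4)
      w = rng.random(1000); w /= w.sum()           # normalized particle weights
      idx = systematic_resample(w, rng)
      print("unique particles kept:", len(np.unique(idx)))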

  10. Optimal expansion of water quality monitoring network by fuzzy optimization approach.

    PubMed

    Ning, Shu-Kuang; Chang, Ni-Bin

    2004-02-01

    River reaches are frequently classified with respect to various modes of water utilization, depending on the quantity and quality of water resources available at different locations. Monitoring of water quality in a river system must collect both temporal and spatial information for comparison with respect to the preferred condition of a water body under different scenarios. Designing a technically sound monitoring network, however, requires identifying a suite of significant planning objectives and simultaneously considering a series of inherent limitations. It relies on applying an advanced systems analysis technique via an integrated simulation-optimization approach to meet the ultimate goal. This article presents an optimal expansion strategy of water quality monitoring stations for fulfilling a long-term monitoring mission under an uncertain environment. The planning objectives considered in this analysis are to increase the protection degree in the proximity of the river system with higher population density, to enhance the detection capability for lower compliance areas, to promote the detection sensitivity by better deployment and installation of monitoring stations, to reflect the levels of utilization potential of the water body at different locations, and to monitor the essential water quality in the upstream areas of all water intakes. The constraint set contains the limitations of budget, equity implication, and the detection sensitivity in the water environment. A fuzzy multi-objective evaluation framework that reflects the uncertainty embedded in decision making is designed for postulating and analyzing the underlying principles of the optimal expansion strategy of the monitoring network. The case study, organized in South Taiwan, demonstrates a set of more robust and flexible expansion alternatives in terms of spatial priority. Such an approach uniquely indicates the preference order of each candidate site to be expanded step-wise whenever the budget

  11. Distributed Generation Dispatch Optimization under Various Electricity Tariffs

    SciTech Connect

    Firestone, Ryan; Marnay, Chris

    2007-05-01

    The on-site generation of electricity can offer building owners and occupiers financial benefits as well as social benefits such as reduced grid congestion, improved energy efficiency, and reduced greenhouse gas emissions. Combined heat and power (CHP), or cogeneration, systems make use of the waste heat from the generator for site heating needs. Real-time optimal dispatch of CHP systems is difficult to determine because of complicated electricity tariffs and uncertainty in CHP equipment availability, energy prices, and system loads. Typically, CHP systems use simple heuristic control strategies. This paper describes a method of determining optimal control in real-time and applies it to a light industrial site in San Diego, California, to examine: 1) the added benefit of optimal over heuristic controls, 2) the price elasticity of the system, and 3) the site-attributable greenhouse gas emissions, all under three different tariff structures. Results suggest that heuristic controls are adequate under the current tariff structure and relatively high electricity prices, capturing 97 percent of the value of the distributed generation system. Even more value could be captured by simply not running the CHP system during times of unusually high natural gas prices. Under hypothetical real-time pricing of electricity, heuristic controls would capture only 70 percent of the value of distributed generation.

  12. Optimal design of pump-and-treat systems under uncertain hydraulic conductivity and plume distribution.

    PubMed

    Baú, Domenico A; Mayer, Alex S

    2008-08-20

    In this work, we present a stochastic optimal control framework for assisting the management of the cleanup by pump-and-treat of polluted shallow aquifers. In the problem being investigated, the hydraulic conductivity distribution and the dissolved contaminant plume location are considered as the uncertain variables. The framework considers the subdivision of the cleanup horizon into a number of stress periods, over which the pumping policy implemented until that stage is dynamically adjusted based upon new information that has become available in the previous stages. In particular, by following a geostatistical approach, we study the idea of monitoring the cumulative contaminant mass extracted from the installed recovery wells, and using these measurements to generate conditional realizations of the hydraulic conductivity field. These realizations are thus used to obtain a more accurate evaluation of the initial plume distribution, and to modify accordingly the design of the pump-and-treat system for the remainder of the remedial process. The study indicates that measurements of contaminant mass extracted from pumping wells retain valuable information about the plume location and the spatial heterogeneity characterizing the hydraulic conductivity field. However, such information may prove quite soft, particularly in the instances where recovery wells are installed in regions where contaminant concentration is low or zero. On the other hand, integrated solute mass measurements may effectively allow for reducing parameter uncertainty and identifying the plume distribution if more recovery wells are available, in particular in the early stages of the cleanup process.

  13. Chaos optimization algorithms based on chaotic maps with different probability distribution and search speed for global optimization

    NASA Astrophysics Data System (ADS)

    Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei

    2014-04-01

    Chaos optimization algorithms (COAs) usually utilize the chaotic map like Logistic map to generate the pseudo-random numbers mapped as the design variables for global optimization. Many existing researches indicated that COA can more easily escape from the local minima than classical stochastic optimization algorithms. This paper reveals the inherent mechanism of high efficiency and superior performance of COA, from a new perspective of both the probability distribution property and search speed of chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performances of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDF and Lyapunov exponents are compared, in which BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that, the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COA. To achieve the high efficiency of COA, it is recommended to adopt the appropriate chaotic map generating the desired chaotic sequences with uniform or nearly uniform probability distribution and large Lyapunov exponent.
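
    The two properties the paper correlates with performance are easy to reproduce for the Logistic map x <- 4x(1 - x) (a short Python check of ours, using the known results that its Lyapunov exponent is ln 2 and its invariant density piles up near 0 and 1):

      import numpy as np

      x, n = 0.123, 100000
      xs = np.empty(n)
      for i in range(n):
          x = 4.0 * x * (1.0 - x)
          xs[i] = x

      # Lyapunov exponent: orbit average of log|f'(x)| with f'(x) = 4 - 8x.
      print("Lyapunov exponent ~", round(np.mean(np.log(np.abs(4.0 - 8.0 * xs))), 3))

      # The invariant density 1/(pi*sqrt(x(1-x))) is far from uniform, which
      # is why chaotic sequences are often remapped before driving a search.
      hist, _ = np.histogram(xs, bins=10, range=(0, 1), density=True)
      print("density per bin:", np.round(hist, 2))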

  14. All-in-one model for designing optimal water distribution pipe networks

    NASA Astrophysics Data System (ADS)

    Aklog, Dagnachew; Hosoi, Yoshihiko

    2017-05-01

    This paper discusses the development of an easy-to-use, all-in-one model for designing optimal water distribution networks. The model combines different optimization techniques into a single package in which a user can easily choose what optimizer to use and compare the results of different optimizers to gain confidence in the performances of the models. At present, three optimization techniques are included in the model: linear programming (LP), genetic algorithm (GA) and a heuristic one-by-one reduction method (OBORM) that was previously developed by the authors. The optimizers were tested on a number of benchmark problems and performed very well in terms of finding optimal or near-optimal solutions with a reasonable computation effort. The results indicate that the model effectively addresses the issues of complexity and limited performance trust associated with previous models and can thus be used for practical purposes.

  15. Towards a globally optimized crop distribution: Integrating water use, nutrition, and economic value

    NASA Astrophysics Data System (ADS)

    Davis, K. F.; Seveso, A.; Rulli, M. C.; D'Odorico, P.

    2016-12-01

    Human demand for crop production is expected to increase substantially in the coming decades as a result of population growth, richer diets and biofuel use. In order for food production to keep pace, unprecedented amounts of resources - water, fertilizers, energy - will be required. This has led to calls for `sustainable intensification' in which yields are increased on existing croplands while seeking to minimize impacts on water and other agricultural resources. Recent studies have quantified aspects of this, showing that there is a large potential to improve crop yields and increase harvest frequencies to better meet human demand. Though promising, both solutions would necessitate large additional inputs of water and fertilizer in order to be achieved under current technologies. However, the question of whether the current distribution of crops is, in fact, the best for realizing sustainable production has not been considered to date. To this end, we ask: Is it possible to increase crop production and economic value while minimizing water demand by simply growing crops where soil and climate conditions are best suited? Here we use maps of yields and evapotranspiration for 14 major food crops to identify differences between current crop distributions and where they can most suitably be planted. By redistributing crops across currently cultivated lands, we determine the potential improvements in calorie (+12%) and protein (+51%) production, economic output (+41%) and water demand (-5%). This approach can also incorporate the impact of future climate on cropland suitability, and as such, be used to provide optimized cropping patterns under climate change. Thus, our study provides a novel tool towards achieving sustainable intensification that can be used to recommend optimal crop distributions in the face of a changing climate while simultaneously accounting for food security, freshwater resources, and livelihoods.

  16. Optimization of floodplain monitoring sensors through an entropy approach

    NASA Astrophysics Data System (ADS)

    Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, P. D.

    2012-04-01

    To support the decision making processes of flood risk management and long term floodplain planning, a significant issue is the availability of data to build appropriate and reliable models. Often the required data for model building, calibration and validation are not sufficient or available. A unique opportunity is offered nowadays by the globally available data, which can be freely downloaded from the internet. However, there remains the question of what is the real potential of those global remote sensing data, characterized by different accuracies, for global inundation monitoring and how to integrate them with inundation models. In order to monitor a reach of the River Dee (UK), a network of cheap wireless sensors (GridStix) was deployed both in the channel and in the floodplain. These sensors measure the water depth, supplying the input data for flood mapping. Besides their accuracy and reliability, their placement represents a major issue, the aim being to provide as much information as possible while keeping redundancy as low as possible. In order to update their layout, the initial number of six sensors was increased to create a redundant network over the area. Through an entropy approach, the most informative and the least redundant sensors have been chosen among all. First, a simple raster-based inundation model (LISFLOOD-FP) is used to generate a synthetic GridStix data set of water stages. The Digital Elevation Model (DEM) used for hydraulic model building is the globally and freely available SRTM DEM. Second, the information content of each sensor has been compared by evaluating their marginal entropy. Those with a low marginal entropy are excluded from the process because of their low capability to provide information. Then the number of sensors has been optimized considering a Multi-Objective Optimization Problem (MOOP) with two objectives, namely maximization of the joint entropy (a measure of the information content) and
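
    A condensed Python sketch of the entropy screening described above (synthetic water stages stand in for the GridStix/LISFLOOD-FP data; the sensor count, binning, and greedy selection are our simplifications of the MOOP):

      import numpy as np
      from collections import Counter

      def entropy(labels):
          counts = np.array(list(Counter(labels).values()), dtype=float)
          p = counts / counts.sum()
          return -(p * np.log2(p)).sum()

      rng = np.random.default_rng(5)
      truth = rng.random((500, 1))                  # 500 synthetic flood states
      levels = np.hstack([truth + 0.05 * rng.standard_normal((500, 1))
                          for _ in range(8)])       # 8 noisy stage sensors
      levels[:, 3] = 0.2                            # one nearly dead sensor
      binned = np.digitize(levels, np.linspace(0, 1, 8))

      marginal = [entropy(binned[:, j]) for j in range(8)]
      print("least informative sensor:", int(np.argmin(marginal)))   # sensor 3

      chosen = []                                   # greedy joint-entropy growth
      for _ in range(3):
          best = max((j for j in range(8) if j not in chosen),
                     key=lambda j: entropy(map(tuple, binned[:, chosen + [j]])))
          chosen.append(best)
      print("selected sensors:", chosen)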

  17. Ant Colony Optimization Algorithm for Continuous Domains Based on Position Distribution Model of Ant Colony Foraging

    PubMed Central

    Liu, Liqiang; Dai, Yuntao

    2014-01-01

    Ant colony optimization algorithm for continuous domains is a major research direction for ant colony optimization algorithms. In this paper, we propose a distribution model of ant colony foraging, through analysis of the relationship between the position distribution and the food source in the process of ant colony foraging. We design a continuous domain optimization algorithm based on the model and give the form of the solution for the algorithm, the distribution model of the pheromone, the update rules of ant colony positions, and the processing method for constraint conditions. The algorithm's performance was tested against a set of unconstrained optimization test functions and a set of constrained optimization test functions, and the test results were compared with those of other algorithms and analyzed to verify the correctness and effectiveness of the proposed algorithm. PMID:24955402
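
    A runnable Gaussian solution-archive sampler in the general spirit of continuous-domain ACO is sketched below (an ACO_R-style construction of ours; the paper's position-distribution model differs in its details, and all parameters are illustrative):

      import numpy as np

      def acor_minimize(f, bounds, n_ants=20, k=10, q=0.2, xi=0.85,
                        iters=200, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          X = rng.uniform(lo, hi, (k, len(lo)))            # solution archive
          F = np.apply_along_axis(f, 1, X)
          for _ in range(iters):
              order = np.argsort(F); X, F = X[order], F[order]
              w = np.exp(-np.arange(k) ** 2 / (2 * (q * k) ** 2)); w /= w.sum()
              ants = np.empty((n_ants, len(lo)))
              for a in range(n_ants):
                  j = rng.choice(k, p=w)                   # pick a guiding solution
                  sigma = xi * np.mean(np.abs(X - X[j]), axis=0) + 1e-12
                  ants[a] = np.clip(rng.normal(X[j], sigma), lo, hi)
              Fa = np.apply_along_axis(f, 1, ants)
              keep = np.argsort(np.concatenate([F, Fa]))[:k]
              X = np.vstack([X, ants])[keep]; F = np.concatenate([F, Fa])[keep]
          return X[0], F[0]

      x_best, f_best = acor_minimize(lambda x: float(np.sum(x ** 2)),
                                     [(-5, 5)] * 4)
      print("best point:", np.round(x_best, 4), " value:", f_best)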

  18. Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging.

    PubMed

    Liu, Liqiang; Dai, Yuntao; Gao, Jinyu

    2014-01-01

    Ant colony optimization algorithm for continuous domains is a major research direction for ant colony optimization algorithms. In this paper, we propose a distribution model of ant colony foraging, through analysis of the relationship between the position distribution and the food source in the process of ant colony foraging. We design a continuous domain optimization algorithm based on the model and give the form of the solution for the algorithm, the distribution model of the pheromone, the update rules of ant colony positions, and the processing method for constraint conditions. The algorithm's performance was tested against a set of unconstrained optimization test functions and a set of constrained optimization test functions, and the test results were compared with those of other algorithms and analyzed to verify the correctness and effectiveness of the proposed algorithm.

  19. A Distributed Approach to System-Level Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil

    2012-01-01

    Prognostics, which deals with predicting remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.

  20. Greek Electoral System: Optimal Distribution of the Seats

    NASA Astrophysics Data System (ADS)

    Tsitouras, Ch.

    2007-09-01

    The Greek parliamentary elections of 2008 and 2012 will take place according to the electoral law that was voted through by the previous house back in 2004. The parties receive a nation-wide number of seats that have to be distributed among the prefectures. It is a transportation problem whose complete solution the legislator neglected after finding a first random feasible solution.
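
    The transportation structure is easy to exhibit with a toy linear program in Python (three parties, three prefectures, and made-up "misfit" costs; the actual seat numbers and provisions of the law are not reproduced here):

      import numpy as np
      from scipy.optimize import linprog

      supply = np.array([5, 3, 2])        # nation-wide seats won by each party
      demand = np.array([4, 4, 2])        # seats available in each prefecture
      cost = np.array([[1.0, 2.0, 3.0],   # penalty for seating party i in
                       [2.0, 1.0, 2.0],   # prefecture j, e.g. inverse local
                       [3.0, 2.0, 1.0]])  # vote share (illustrative numbers)

      m, n = cost.shape
      A_eq, b_eq = [], []
      for i in range(m):                  # each party gets exactly its seats
          row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
          A_eq.append(row); b_eq.append(supply[i])
      for j in range(n):                  # each prefecture is exactly filled
          row = np.zeros(m * n); row[j::n] = 1
          A_eq.append(row); b_eq.append(demand[j])

      res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
                    bounds=(0, None))
      print(np.round(res.x.reshape(m, n)))   # optimal party x prefecture seats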

  1. Optimizing Distributed Practice: Theoretical Analysis and Practical Implications

    ERIC Educational Resources Information Center

    Cepeda, Nicholas J.; Coburn, Noriko; Rohrer, Doug; Wixted, John T.; Mozer, Michael C.; Pashler, Harold

    2009-01-01

    More than a century of research shows that increasing the gap between study episodes using the same material can enhance retention, yet little is known about how this so-called distributed practice effect unfolds over nontrivial periods. In two three-session laboratory studies, we examined the effects of gap on retention of foreign vocabulary,…

  2. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  3. Multi-objective optimal power flow for active distribution network considering the stochastic characteristic of photovoltaic

    NASA Astrophysics Data System (ADS)

    Zhou, Bao-Rong; Liu, Si-Liang; Zhang, Yong-Jun; Yi, Ying-Qi; Lin, Xiao-Ming

    2017-05-01

    To mitigate the impact on distribution networks caused by the stochastic characteristic and high penetration of photovoltaics, a multi-objective optimal power flow model is proposed in this paper. The regulation capabilities of capacitors, photovoltaic inverters, and energy storage systems embedded in the active distribution network are considered in this model to minimize the expected value of active power loss and the probability of voltage violation. Firstly, a probabilistic power flow based on the cumulant method is introduced to calculate the values of the objectives. Secondly, the NSGA-II algorithm is adopted for optimization to obtain the Pareto optimal solutions. Finally, the best compromise solution can be achieved through a fuzzy membership degree method. Multi-objective optimization calculations on the IEEE 34-node distribution network show that the model can effectively improve the voltage security and economy of the distribution network at different levels of photovoltaic penetration.
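
    The fuzzy membership-degree step can be sketched in Python with standard linear memberships (the Pareto points below are made up; they stand in for an NSGA-II front over expected loss and violation probability):

      import numpy as np

      # Pareto front: columns are [expected active power loss, P(violation)].
      front = np.array([[0.42, 0.09], [0.47, 0.05], [0.55, 0.02], [0.66, 0.01]])

      f_min, f_max = front.min(axis=0), front.max(axis=0)
      mu = (f_max - front) / (f_max - f_min)   # 1 = best, 0 = worst, per objective
      score = mu.sum(axis=1) / mu.sum()        # normalized satisfaction per point
      print("memberships:", np.round(mu, 2))
      print("best compromise solution:", front[int(np.argmax(score))])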

  4. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  5. Distributed Optimization of Multi Beam Directional Communication Networks

    DTIC Science & Technology

    2017-06-30

    algorithm to solve for the optimal flows associated with each beam in the network. For each augmented Lagrangian iteration, we propose a scaled gradient method. P_ij denotes the power allocated by node i for transmitting data to node j, and its associated channel capacity is c_ij(P_ij) = log2(1 + f_ij P_ij) bits/s/Hz for a path-loss coefficient f_ij. The Lagrangian contains the dual term ∑_m ∑_{i∈N} p_i(m) [ ∑_{j:(j,i)∈E} x_ji(m) − ∑_{j:(i,j)∈E} x_ij(m) + s_i(m) ], where p are the Lagrange multipliers associated with the flow conservation constraints.

  6. Optimizing algal cultivation & productivity : an innovative, multidiscipline, and multiscale approach.

    SciTech Connect

    Murton, Jaclyn K.; Hanson, David T.; Turner, Tom; Powell, Amy Jo; James, Scott Carlton; Timlin, Jerilyn Ann; Scholle, Steven; August, Andrew; Dwyer, Brian P.; Ruffing, Anne; Jones, Howland D. T.; Ricken, James Bryce; Reichardt, Thomas A.

    2010-04-01

    Progress in algal biofuels has been limited by significant knowledge gaps in algal biology, particularly as they relate to scale-up. To address this we are investigating how culture composition dynamics (light as well as biotic and abiotic stressors) describe key biochemical indicators of algal health: growth rate, photosynthetic electron transport, and lipid production. Our approach combines traditional algal physiology with genomics, bioanalytical spectroscopy, chemical imaging, remote sensing, and computational modeling to provide an improved fundamental understanding of algal cell biology across multiple culture scales. This work spans investigations from the single-cell level to ensemble measurements of algal cell cultures at the laboratory benchtop to large greenhouse scale (175 gal). We will discuss the advantages of this novel, multidisciplinary strategy and emphasize the importance of developing an integrated toolkit to provide sensitive, selective methods for detecting early fluctuations in algal health, productivity, and population diversity. Progress in several areas will be summarized, including identification of spectroscopic signatures for algal culture composition, stress level, and lipid production, enabled by non-invasive spectroscopic monitoring of the photosynthetic and photoprotective pigments at the single-cell and bulk-culture scales. Early experiments compare and contrast the well-studied green alga Chlamydomonas with two potential production strains of microalgae, Nannochloropsis and Dunaliella, under optimal and stressed conditions. This integrated approach has the potential for broad impact on algal biofuels and bioenergy, and several of these opportunities will be discussed.

  7. Approach to optimal care at end of life.

    PubMed

    Nichols, K J

    2001-10-01

    At no other time in any patient's life is the team approach to care more important than at the end of life. The demands and challenges of end-of-life care (ELC) tax all physicians at some point. There is no other profession that is charged with this ultimate responsibility. No discipline in medicine is immune to the issues of end-of-life care except perhaps, ironically, pathology. This presentation addresses the issues, options, and challenges of providing optimal care at the end of life. It looks at the principles of ELC, barriers to good ELC, and what patients and families expect from ELC. Barriers to ELC include financial restrictions, inadequate caregiver and community support, legal issues, legislative issues, training needs, coordination of care, hospice care, and transitions for the patients and families. The legal aspects of physician-assisted suicide are presented, as well as the approach of the American Osteopathic Association to ensuring better education for physicians in the principles of ELC.

  8. Practical Framework for an Electron Beam Induced Current Technique Based on a Numerical Optimization Approach

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Hideshi; Soeda, Takeshi

    2015-03-01

    A practical framework for an electron beam induced current (EBIC) technique has been established for conductive materials based on a numerical optimization approach. Although the conventional EBIC technique is useful for evaluating the distributions of dopants or crystal defects in semiconductor transistors, issues with the reproducibility and quantitative capability of measurements made with this technique persist. For instance, it is difficult to acquire high-quality EBIC images throughout continuous tests because of variation in operator skill or test environment. Recently, by evaluating the EBIC equipment performance and numerically optimizing the equipment parameters, constant acquisition of high-contrast images has become possible, improving reproducibility as well as yield regardless of operator skill or test environment. The technique proposed herein is even more sensitive and quantitative than scanning probe microscopy, an imaging technique that can possibly damage the sample. The new technique is expected to benefit the electrical evaluation of fragile or soft materials along with LSI materials.

  9. Probabilistic Modeling Approach to Thermoelectric Systems Design Optimization

    SciTech Connect

    Karri, Naveen K.; Hendricks, Terry J.

    2007-06-25

    Recent studies on thermoelectric (TE) systems indicate that the existence of high figure of merit (ZT) materials alone is not sufficient for superior system performance and an integrated system level analysis is necessary to attain such performance. This is because there are numerous design parameters at various levels of the system that are randomly variable in nature and could affect the overall system performance. In this work the effect of stochasticity in design variables at various levels of a TE system has been studied and analyzed to attain optimal design solutions. Starting with stochasticity in one of the environmental variables, a progression was made towards studying the coupled effects of stochasticity in multiple variables at environmental and heat exchanger levels of a thermoelectric generator (TEG) system. Research and analysis tools were developed to incorporate stochasticities in single or multiple variables individually or simultaneously to study both the individual and coupled effects of input design variable stochasticities (probabilities) on output performance variables. Results indicate that normal or Gaussian distribution in input design parameters may not produce Gaussian output parameters. Also when the stochasticities in multiple variables are coupled, the standard deviations in performance parameters are magnified, and their means/averages deviate more from the deterministic values. Although more studies are required to quantify the parameters for design modifications, the studies presented in this paper affirm that incorporating stochastic variability not only aids in understanding the effects of system design variable randomness on expected output performance, but also serves to guide design decisions for optimal TE system design solutions that provide more robust system designs with improved reliability and performance across a range of off-nominal conditions.
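
    A minimal Monte Carlo sketch of the kind of stochastic propagation described above: Gaussian inputs pushed through a nonlinear, made-up TEG performance map yield a visibly skewed (non-Gaussian) output. The temperatures, conductances, and power law below are hypothetical stand-ins, not the paper's model.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 100_000

      # Hypothetical stochastic inputs at the environment / heat-exchanger
      # level: hot-side temperature (K) and heat-exchanger conductance (W/K).
      T_hot = rng.normal(550.0, 15.0, n)
      UA = rng.normal(25.0, 2.5, n)

      # Toy performance map: power grows nonlinearly with temperature
      # difference and conductance, so Gaussian inputs need not give
      # Gaussian outputs (illustrative only).
      T_cold = 320.0
      power = 0.002 * UA * (T_hot - T_cold) ** 1.5

      for name, v in [("T_hot", T_hot), ("UA", UA), ("power", power)]:
          skew = ((v - v.mean()) ** 3).mean() / v.std() ** 3
          print(f"{name}: mean={v.mean():.2f} std={v.std():.2f} skew={skew:.3f}")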

  10. Optimization of orthotropic distributed-mode loudspeaker using attached masses and multi-exciters.

    PubMed

    Lu, Guochao; Shen, Yong; Liu, Ziyun

    2012-02-01

    Based on the orthotropic model of the plate, a method to optimize the sound response of the distributed-mode loudspeaker (DML) using attached masses and multiple exciters has been investigated. The attached masses reshape the modal distribution of the plate, and on this basis the multi-exciter method smooths the sound response. The results indicate that the method can be used to optimize the sound response of the DML. © 2012 Acoustical Society of America

  11. Optimizing the sequence of diameter distributions and selection harvests for uneven-aged stand management

    Treesearch

    Robert G. Haight; J. Douglas Brodie; Darius M. Adams

    1985-01-01

    The determination of an optimal sequence of diameter distributions and selection harvests for uneven-aged stand management is formulated as a discrete-time optimal-control problem with bounded control variables and free-terminal point. An efficient programming technique utilizing gradients provides solutions that are stable and interpretable on the basis of economic...

  12. Optimizing Distributed Energy Resources and building retrofits with the strategic DER-CAModel

    SciTech Connect

    Stadler, M.; Groissböck, M.; Cardoso, G.; Marnay, C.

    2014-08-05

    The pressing need to reduce the import of fossil fuels as well as the need to dramatically reduce CO2 emissions in Europe motivated the European Commission (EC) to implement several regulations directed to building owners. Most of these regulations focus on increasing the number of energy efficient buildings, both new and retrofitted, since retrofits play an important role in energy efficiency. Overall, this initiative results from the realization that buildings will have a significant impact in fulfilling the 20/20/20-goals of reducing the greenhouse gas emissions by 20%, increasing energy efficiency by 20%, and increasing the share of renewables to 20%, all by 2020. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is an optimization tool used to support DER investment decisions, typically by minimizing total annual costs or CO2 emissions while providing energy services to a given building or microgrid site. This document shows enhancements made to DER-CAM to consider building retrofit measures along with DER investment options. Specifically, building shell improvement options have been added to DER-CAM as alternative or complementary options to investments in other DER such as PV, solar thermal, combined heat and power, or energy storage. The extension of the mathematical formulation required by the new features introduced in DER-CAM is presented and the resulting model is demonstrated at an Austrian Campus building by comparing DER-CAM results with and without building shell improvement options. Strategic investment results are presented and compared to the observed investment decision at the test site. Results obtained considering building shell improvement options suggest an optimal weighted average U value of about 0.53 W/(m2K) for the test site. This result is approximately 25% higher than what is currently observed in the building, suggesting that the retrofits made in 2002 were not optimal. Furthermore

  13. Optimizing Distributed Energy Resources and building retrofits with the strategic DER-CAModel

    DOE PAGES

    Stadler, M.; Groissböck, M.; Cardoso, G.; ...

    2014-08-05

    The pressing need to reduce the import of fossil fuels as well as the need to dramatically reduce CO2 emissions in Europe motivated the European Commission (EC) to implement several regulations directed to building owners. Most of these regulations focus on increasing the number of energy efficient buildings, both new and retrofitted, since retrofits play an important role in energy efficiency. Overall, this initiative results from the realization that buildings will have a significant impact in fulfilling the 20/20/20-goals of reducing the greenhouse gas emissions by 20%, increasing energy efficiency by 20%, and increasing the share of renewables to 20%, all by 2020. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is an optimization tool used to support DER investment decisions, typically by minimizing total annual costs or CO2 emissions while providing energy services to a given building or microgrid site. This document shows enhancements made to DER-CAM to consider building retrofit measures along with DER investment options. Specifically, building shell improvement options have been added to DER-CAM as alternative or complementary options to investments in other DER such as PV, solar thermal, combined heat and power, or energy storage. The extension of the mathematical formulation required by the new features introduced in DER-CAM is presented and the resulting model is demonstrated at an Austrian Campus building by comparing DER-CAM results with and without building shell improvement options. Strategic investment results are presented and compared to the observed investment decision at the test site. Results obtained considering building shell improvement options suggest an optimal weighted average U value of about 0.53 W/(m2K) for the test site. This result is approximately 25% higher than what is currently observed in the building, suggesting that the retrofits made in 2002 were not optimal. Furthermore, the results obtained with

  14. Exploring trade-offs between VMAT dose quality and delivery efficiency using a network optimization approach

    NASA Astrophysics Data System (ADS)

    Salari, Ehsan; Wala, Jeremiah; Craft, David

    2012-09-01

    To formulate and solve the fluence-map merging procedure of the recently-published VMAT treatment-plan optimization method, called vmerge, as a bi-criteria optimization problem. Using an exact merging method rather than the previously-used heuristic, we are able to better characterize the trade-off between the delivery efficiency and dose quality. vmerge begins with a solution of the fluence-map optimization problem with 180 equi-spaced beams that yields the ‘ideal’ dose distribution. Neighboring fluence maps are then successively merged, meaning that they are added together and delivered as a single map. The merging process improves the delivery efficiency at the expense of deviating from the initial high-quality dose distribution. We replace the original merging heuristic by considering the merging problem as a discrete bi-criteria optimization problem with the objectives of maximizing the treatment efficiency and minimizing the deviation from the ideal dose. We formulate this using a network-flow model that represents the merging problem. Since the problem is discrete and thus non-convex, we employ a customized box algorithm to characterize the Pareto frontier. The Pareto frontier is then used as a benchmark to evaluate the performance of the standard vmerge algorithm as well as two other similar heuristics. We test the exact and heuristic merging approaches on a pancreas and a prostate cancer case. For both cases, the shape of the Pareto frontier suggests that starting from a high-quality plan, we can obtain efficient VMAT plans through merging neighboring fluence maps without substantially deviating from the initial dose distribution. The trade-off curves obtained by the various heuristics are contrasted and shown to all be equally capable of initial plan simplifications, but to deviate in quality for more drastic efficiency improvements. This work presents a network optimization approach to the merging problem. Contrasting the trade-off curves of the

  15. Optimization of distribution transformer efficiency characteristics. Final report, March 1979

    SciTech Connect

    Not Available

    1980-06-01

    A method for distribution transformer loss evaluation was derived. The total levelized annual cost method was used and was extended to account properly for conditions of energy cost inflation, peak load growth, and transformer changeout during the evaluation period. The loss costs included were the no-load and load power losses, no-load and load reactive losses, and the energy cost of regulation. The demand and energy components of loss costs were treated separately to account correctly for the diversity of load losses and energy cost inflation. The complete distribution transformer loss evaluation equation is shown, with the nomenclature and definitions for the parameters provided. Tasks described are entitled: Establish Loss Evaluation Techniques; Compile System Cost Parameters; Compile Load Parameters and Loading Policies; Develop Transformer Cost/Performance Relationship; Define Characteristics of Multiple Efficiency Transformer Package; Minimize Life Cycle Cost Based on Single Efficiency Characteristic Transformer Design; Minimize Life Cycle Cost Based on Multiple Efficiency Characteristic Transformer Design; and Interpretation.
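
    The levelized-annual-cost idea condenses to a few lines. The sketch below, with assumed prices, loss factor, and fixed charge rate, evaluates only the no-load and load power losses; the report's full equation also carries reactive losses, regulation energy, inflation, and load growth, which are omitted here.

      # no_load_w: core loss (W), present whenever energized.
      # load_w: winding loss (W) at rated load; scales with load^2.
      # peak_pu: peak load in per-unit of rating; loss_factor converts the
      # peak load loss to an average energy loss over the year.
      # fcr: fixed charge rate levelizing the purchase cost.
      def tlac(no_load_w, load_w, price_kwh, demand_charge, peak_pu,
               loss_factor, hours=8760, fcr=0.12, purchase_cost=0.0):
          energy_kwh = (no_load_w * hours
                        + load_w * peak_pu**2 * loss_factor * hours) / 1000.0
          demand_kw = (no_load_w + load_w * peak_pu**2) / 1000.0
          return fcr * purchase_cost + energy_kwh * price_kwh \
                 + demand_kw * demand_charge

      print(f"TLAC: ${tlac(60, 300, 0.08, 50, 0.9, 0.3, purchase_cost=1500):.2f}/yr")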

  16. Optimization of tomographic reconstruction workflows on geographically distributed resources

    PubMed Central

    Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can
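
    A back-of-the-envelope version of the three-stage time model reads as follows; all bandwidths, queue waits, slice counts, and site names are made-up inputs for illustration, not measurements from the paper.

      # Estimate workflow time = transfer + queue wait + parallel compute.
      def workflow_time(data_gb, bandwidth_gbps, queue_wait_s,
                        n_slices, slice_time_s, n_cores):
          transfer = data_gb * 8 / bandwidth_gbps       # storage -> compute
          compute = n_slices * slice_time_s / n_cores   # parallel slices
          return transfer + queue_wait_s + compute

      # Compare two hypothetical sites for the same reconstruction workflow.
      for site, bw, wait, cores in [("near", 1.0, 600, 256),
                                    ("far", 10.0, 60, 4096)]:
          t = workflow_time(data_gb=500, bandwidth_gbps=bw, queue_wait_s=wait,
                            n_slices=2048, slice_time_s=90, n_cores=cores)
          print(f"{site}: {t / 60:.1f} min")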

  17. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in

  18. Curricular policy as a collective effects problem: A distributional approach.

    PubMed

    Penner, Andrew M; Domina, Thurston; Penner, Emily K; Conley, AnneMarie

    2015-07-01

    Current educational policies in the United States attempt to boost student achievement and promote equality by intensifying the curriculum and exposing students to more advanced coursework. This paper investigates the relationship between one such effort - California's push to enroll all 8th grade students in Algebra - and the distribution of student achievement. We suggest that this effort is an instance of a "collective effects" problem, where the population-level effects of a policy are different from its effects at the individual level. In such contexts, we argue that it is important to consider broader population effects as well as the difference between "treated" and "untreated" individuals. To do so, we present differences in inverse propensity score weighted distributions to investigate how this curricular policy changed the distribution of student achievement. We find that California's attempt to intensify the curriculum did not raise test scores at the bottom of the distribution, but did lower scores at the top of the distribution. These results highlight the efficacy of inverse propensity score weighting approaches for examining distributional differences, and provide a cautionary tale for curricular intensification efforts and other policies with collective effects. Copyright © 2015 Elsevier Inc. All rights reserved.
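
    The sketch below illustrates inverse propensity score weighting on synthetic data: weights reweight the treated and untreated groups toward the full population before comparing quantiles of the outcome distributions. The data-generating process is invented for illustration and, unlike in practice (and in the paper), uses the known propensity rather than an estimated one.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000

      # Hypothetical data: x is a pre-policy covariate, t is exposure to the
      # intensified curriculum (likelier for high-x students), y is the score.
      x = rng.normal(0, 1, n)
      e = 1 / (1 + np.exp(-x))                 # true propensity
      t = rng.random(n) < e
      y = 50 + 5 * x + np.where(t, -2 * (x > 1), 0) + rng.normal(0, 3, n)

      # Inverse propensity weights: 1/e for treated, 1/(1-e) for untreated.
      w = np.where(t, 1 / e, 1 / (1 - e))

      def wquantile(v, weights, q):
          # Weighted quantile via the weighted empirical CDF.
          idx = np.argsort(v)
          cw = np.cumsum(weights[idx])
          return np.interp(q * cw[-1], cw, v[idx])

      for q in (0.1, 0.5, 0.9):
          print(f"q={q}: untreated={wquantile(y[~t], w[~t], q):.1f}, "
                f"treated={wquantile(y[t], w[t], q):.1f}")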

  19. Optimizing parametrial aperture design utilizing HDR brachytherapy isodose distribution.

    PubMed

    Chapman, Katherine L; Ohri, Nitin; Showalter, Timothy N; Doyle, Laura A

    2013-03-01

    Treatment of cervical cancer includes combination of external beam radiation therapy (EBRT) and brachytherapy (BRT). Traditionally, coronal images displaying dose distribution from a ring and tandem (R&T) implant aid in construction of parametrial boost fields. This research aimed to evaluate a method of shaping parametrial fields utilizing contours created from the high-dose-rate (HDR) BRT dose distribution. Eleven patients receiving HDR-BRT via R&T were identified. The BRT and EBRT CT scans were sent to FocalSim (v4.62)® and fused based on bony anatomy. The contour of the HDR isodose line was transferred to the EBRT scan. The EBRT scan was sent to CMS-XIO (v4.62)® for planning. This process provides an automated, potentially more accurate method of matching the medial parametrial border to the HDR dose distribution. This allows for a 3D-view of dose from HDR-BRT for clinical decision-making, utilizes a paperless process and saves time over the traditional technique.

  20. Optimizing parametrial aperture design utilizing HDR brachytherapy isodose distribution

    PubMed Central

    Chapman, Katherine L.; Ohri, Nitin; Showalter, Timothy N.

    2013-01-01

    Treatment of cervical cancer includes combination of external beam radiation therapy (EBRT) and brachytherapy (BRT). Traditionally, coronal images displaying dose distribution from a ring and tandem (R&T) implant aid in construction of parametrial boost fields. This research aimed to evaluate a method of shaping parametrial fields utilizing contours created from the high-dose-rate (HDR) BRT dose distribution. Eleven patients receiving HDR-BRT via R&T were identified. The BRT and EBRT CT scans were sent to FocalSim (v4.62)® and fused based on bony anatomy. The contour of the HDR isodose line was transferred to the EBRT scan. The EBRT scan was sent to CMS-XIO (v4.62)® for planning. This process provides an automated, potentially more accurate method of matching the medial parametrial border to the HDR dose distribution. This allows for a 3D-view of dose from HDR-BRT for clinical decision-making, utilizes a paperless process and saves time over the traditional technique. PMID:23634156

  1. A systematic approach to designing reliable VV optimization methodology: Assessment of internal validity of echocardiographic, electrocardiographic and haemodynamic optimization of cardiac resynchronization therapy

    PubMed Central

    Kyriacou, Andreas; Li Kam Wa, Matthew E.; Pabari, Punam A.; Unsworth, Beth; Baruah, Resham; Willson, Keith; Peters, Nicholas S.; Kanagaratnam, Prapa; Hughes, Alun D.; Mayet, Jamil; Whinnett, Zachary I.; Francis, Darrel P.

    2013-01-01

    Background In atrial fibrillation (AF), VV optimization of biventricular pacemakers can be examined in isolation. We used this approach to evaluate internal validity of three VV optimization methods by three criteria. Methods and results Twenty patients (16 men, age 75 ± 7) in AF were optimized, at two paced heart rates, by LVOT VTI (flow), non-invasive arterial pressure, and ECG (minimizing QRS duration). Each optimization method was evaluated for: singularity (unique peak of function), reproducibility of optimum, and biological plausibility of the distribution of optima. The reproducibility (standard deviation of the difference, SDD) of the optimal VV delay was 10 ms for pressure, versus 8 ms (p = ns) for QRS and 34 ms (p < 0.01) for flow. Singularity of optimum was 85% for pressure, 63% for ECG and 45% for flow (Chi2 = 10.9, p < 0.005). The distribution of pressure optima was biologically plausible, with 80% LV pre-excited (p = 0.007). The distributions of ECG (55% LV pre-excitation) and flow (45% LV pre-excitation) optima were no different to random (p = ns). The pressure-derived optimal VV delay is unaffected by the paced rate: SDD between slow and fast heart rate is 9 ms, no different from the reproducibility SDD at both heart rates. Conclusions Using non-invasive arterial pressure, VV delay optimization by parabolic fitting is achievable with good precision, satisfying all 3 criteria of internal validity. VV optimum is unaffected by heart rate. Neither QRS minimization nor LVOT VTI satisfy all validity criteria, and therefore seem weaker candidate modalities for VV optimization. AF, unlinking interventricular from atrioventricular delay, uniquely exposes resynchronization concepts to experimental scrutiny. PMID:22459364

  2. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach.

    PubMed

    Zeng, Xiaozheng; McGough, Robert J

    2009-05-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters.
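
    A textbook angular spectrum propagator is compact enough to sketch. The circular-piston input plane below stands in for one computed with the fast nearfield method, and the grid size, wavelength, and piston radius are arbitrary choices, not the paper's parameters.

      import numpy as np

      def angular_spectrum_propagate(p0, dx, wavelength, z):
          # Propagate a 2-D complex pressure plane p0 (grid spacing dx)
          # a distance z via FFT of the angular spectrum.
          k = 2 * np.pi / wavelength
          ny, nx = p0.shape
          kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
          ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
          KX, KY = np.meshgrid(kx, ky)
          kz2 = k**2 - KX**2 - KY**2
          # Propagating waves get a phase ramp; evanescent components decay.
          kz = np.sqrt(np.maximum(kz2, 0.0)) + 1j * np.sqrt(np.maximum(-kz2, 0.0))
          return np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * z))

      # Example: 64x64 plane, uniform circular piston, one wavelength away.
      wavelength, dx = 1.5e-3, 0.5e-3        # ~1 MHz in water, 0.5 mm grid
      yy, xx = np.mgrid[-32:32, -32:32] * dx
      p0 = (xx**2 + yy**2 < (8e-3) ** 2).astype(complex)
      p = angular_spectrum_propagate(p0, dx, wavelength, z=wavelength)
      print("peak |p| at z:", np.abs(p).max())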

  3. Tomographic Approach in Three-Orthogonal-Basis Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Liang, Wen-Ye; Wen, Hao; Yin, Zhen-Qiang; Chen, Hua; Li, Hong-Wei; Chen, Wei; Han, Zheng-Fu

    2015-09-01

    At present, there is increasing awareness of three-orthogonal-basis quantum key distribution protocols, such as the reference-frame-independent (RFI) protocol and the six-state protocol. For secure key rate estimation of these protocols, there are two methods: one is the conventional approach, and the other is the tomographic approach. However, a comparison between these two methods has not yet been given. In this work, with a general model of the rotation channel, we estimate the key rate using the conventional and tomographic methods respectively. Results show that the conventional estimation approach in the RFI protocol is equivalent to the tomographic approach only in the case that one of the three orthogonal bases is always aligned. In other cases, the tomographic approach performs much better than the respective conventional approaches of the RFI protocol and the six-state protocol. Furthermore, based on the experimental data, we illustrate the deep connections between the tomographic and conventional RFI approach representations. Supported by the National Basic Research Program of China under Grant Nos. 2011CBA00200 and 2011CB921200, the National Natural Science Foundation of China under Grant Nos. 60921091, 61475148, and 61201239, and Zhejiang Natural Science Foundation under Grant No. LQ13F050005

  4. Sub-Optimal Ensemble Filters and distributed hydrologic modeling: a new challenge in flood forecasting

    NASA Astrophysics Data System (ADS)

    Baroncini, F.; Castelli, F.

    2009-09-01

    Data assimilation techniques based on ensemble filtering are widely regarded as the best approach to solving forecast and calibration problems in geophysical models. Often the implementation of statistically optimal techniques, like the Ensemble Kalman Filter, is unfeasible because of the large number of replicas required at each time step of the model to update the error covariance matrix. Therefore the sub-optimal approach seems to be a more suitable choice. Various sub-optimal techniques have been tested in atmospheric and oceanographic models, some of them based on the detection of a "null space". Distributed hydrologic models differ from other geo-fluid-dynamics models in some fundamental aspects that make it complex to understand the relative efficiency of the different sub-optimal techniques. Those aspects include threshold processes, preferential trajectories for convection and diffusion, low observability of the main state variables, and high parametric uncertainty. This research study is focused on such topics and explores them through numerical experiments on a continuous hydrologic model, MOBIDIC. This model includes both a water mass balance and a surface energy balance, so it is able to assimilate a wide variety of datasets, from traditional hydrometric "on ground" measurements to land surface temperature retrievals from satellite. The experiments presented concern a basin of 700 km² in central Italy, with an hourly dataset over an 8-month period that includes both drought and flood events; in this first set of experiments we worked on a low spatial resolution version of the hydrologic model (3.2 km). A new Kalman Filter based algorithm is presented: this filter tries to address the main challenges of hydrological modeling uncertainty. The proposed filter uses in the forecast step a COFFEE (Complementary Orthogonal Filter For Efficient Ensembles) approach, with propagation of both deterministic and stochastic ensembles to improve robustness and convergence

  5. Optimal exploitation of spatially distributed trophic resources and population stability

    USGS Publications Warehouse

    Basset, A.; Fedele, M.; DeAngelis, D.L.

    2002-01-01

    The relationships between optimal foraging of individuals and population stability are addressed by testing, with a spatially explicit model, the effect of patch departure behaviour on individual energetics and population stability. A factorial experimental design was used to analyse the relevance of the behavioural factor in relation to three factors that are known to affect individual energetics; i.e. resource growth rate (RGR), assimilation efficiency (AE), and body size of individuals. The factorial combination of these factors produced 432 cases, and 1000 replicate simulations were run for each case. Net energy intake rates of the modelled consumers increased with increasing RGR, consumer AE, and consumer body size, as expected. Moreover, through their patch departure behaviour, by selecting the resource level at which they departed from the patch, individuals managed to substantially increase their net energy intake rates. Population stability was also affected by the behavioural factors and by the other factors, but with highly non-linear responses. Whenever resources were limiting for the consumers because of low RGR, large individual body size or low AE, population density at the equilibrium was directly related to the patch departure behaviour; on the other hand, optimal patch departure behaviour, which maximised the net energy intake at the individual level, had a negative influence on population stability whenever resource availability was high for the consumers. The consumer growth rate (r) and numerical dynamics, as well as the spatial and temporal fluctuations of resource density, which were the proximate causes of population stability or instability, were affected by the behavioural factor as strongly or even more strongly than by the other factors considered here. Therefore, patch departure behaviour can act as a feedback control of individual energetics, allowing consumers to optimise a potential trade-off between short-term individual fitness

  6. A Rawlsian Approach to Distribute Responsibilities in Networks

    PubMed Central

    2009-01-01

    Due to their non-hierarchical structure, socio-technical networks are prone to the occurrence of the problem of many hands. In the present paper an approach is introduced in which people’s opinions on responsibility are empirically traced. The approach is based on the Rawlsian concept of Wide Reflective Equilibrium (WRE) in which people’s considered judgments on a case are reflectively weighed against moral principles and background theories, ideally leading to a state of equilibrium. Application of the method to a hypothetical case with an artificially constructed network showed that it is possible to uncover the relevant data to assess a consensus amongst people in terms of their individual WRE. It appeared that the moral background theories people endorse are not predictive for their actual distribution of responsibilities but that they indicate ways of reasoning and justifying outcomes. Two ways of ascribing responsibilities were discerned, corresponding to two requirements of a desirable responsibility distribution: fairness and completeness. Applying the method triggered learning effects, both with regard to conceptual clarification and moral considerations, and in the sense that it led to some convergence of opinions. It is recommended to apply the method to a real engineering case in order to see whether this approach leads to an overlapping consensus on a responsibility distribution which is justifiable to all and in which no responsibilities are left unfulfilled, therewith trying to contribute to the solution of the problem of many hands. PMID:19626463

  7. A Rawlsian approach to distribute responsibilities in networks.

    PubMed

    Doorn, Neelke

    2010-06-01

    Due to their non-hierarchical structure, socio-technical networks are prone to the occurrence of the problem of many hands. In the present paper an approach is introduced in which people's opinions on responsibility are empirically traced. The approach is based on the Rawlsian concept of Wide Reflective Equilibrium (WRE) in which people's considered judgments on a case are reflectively weighed against moral principles and background theories, ideally leading to a state of equilibrium. Application of the method to a hypothetical case with an artificially constructed network showed that it is possible to uncover the relevant data to assess a consensus amongst people in terms of their individual WRE. It appeared that the moral background theories people endorse are not predictive for their actual distribution of responsibilities but that they indicate ways of reasoning and justifying outcomes. Two ways of ascribing responsibilities were discerned, corresponding to two requirements of a desirable responsibility distribution: fairness and completeness. Applying the method triggered learning effects, both with regard to conceptual clarification and moral considerations, and in the sense that it led to some convergence of opinions. It is recommended to apply the method to a real engineering case in order to see whether this approach leads to an overlapping consensus on a responsibility distribution which is justifiable to all and in which no responsibilities are left unfulfilled, therewith trying to contribute to the solution of the problem of many hands.

  8. Distributed Energy Resources On-Site Optimization for Commercial Buildings with Electric and Thermal Storage Technologies

    SciTech Connect

    Lacommare, Kristina S H; Stadler, Michael; Aki, Hirohisa; Firestone, Ryan; Lai, Judy; Marnay, Chris; Siddiqui, Afzal

    2008-05-15

    The addition of storage technologies such as flow batteries, conventional batteries, and heat storage can improve the economic as well as environmental attractiveness of on-site generation (e.g., PV, fuel cells, reciprocating engines or microturbines operating with or without CHP) and contribute to enhanced demand response. In order to examine the impact of storage technologies on demand response and carbon emissions, a microgrid's distributed energy resources (DER) adoption problem is formulated as a mixed-integer linear program that has the minimization of annual energy costs as its objective function. By implementing this approach in the General Algebraic Modeling System (GAMS), the problem is solved for a given test year at representative customer sites, such as schools and nursing homes, to obtain not only the level of technology investment, but also the optimal hourly operating schedules. This paper focuses on analysis of storage technologies in DER optimization on a building level, with example applications for commercial buildings. Preliminary analysis indicates that storage technologies respond effectively to time-varying electricity prices, i.e., by charging batteries during periods of low electricity prices and discharging them during peak hours. The results also indicate that storage technologies significantly alter the residual load profile, which can contribute to lower carbon emissions depending on the test site, its load profile, and its adopted DER technologies.

  9. Optimization of tomographic reconstruction workflows on geographically distributed resources

    SciTech Connect

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum

  10. Optimizing Spillway Capacity With an Estimated Distribution of Floods

    NASA Astrophysics Data System (ADS)

    Resendiz-Carrillo, Daniel; Lave, Lester B.

    1987-11-01

    A model of social cost minimizing spillway capacity for dams is constructed using (1) the estimated distribution of peak flows from historical data, (2) the estimated relationship between spillway capacity and cost, and (3) a characterization of downstream flood damage from dam failure. Net social cost is the sum of construction costs and expected flood damage. This model is applied to data for the Rio Grande River at Embudo, New Mexico. Minimum social cost is attained at a spillway capacity much smaller than that needed to handle a probable maximum flood.
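
    The structure of the model can be sketched as a one-dimensional search over capacity, trading construction cost against expected failure damage. The Gumbel flood distribution and all cost figures below are invented placeholders, not the Rio Grande data.

      import numpy as np

      rng = np.random.default_rng(1)
      # Synthetic annual peak flows (m^3/s) standing in for the fitted
      # historical distribution.
      floods = rng.gumbel(loc=300.0, scale=80.0, size=200_000)

      def net_social_cost(capacity, dam_failure_damage=5e8,
                          cost_per_unit=2e5, horizon_factor=20.0):
          construction = cost_per_unit * capacity
          p_fail = np.mean(floods > capacity)        # annual exceedance prob.
          expected_damage = horizon_factor * p_fail * dam_failure_damage
          return construction + expected_damage

      caps = np.linspace(300, 1200, 91)
      costs = [net_social_cost(c) for c in caps]
      print(f"minimum-cost capacity: {caps[int(np.argmin(costs))]:.0f} m^3/s")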

  11. Method for computing the optimal signal distribution and channel capacity.

    PubMed

    Shapiro, E G; Shapiro, D A; Turitsyn, S K

    2015-06-15

    An iterative method for computing the channel capacity of both discrete and continuous input, continuous output channels is proposed. The efficiency of the new method is demonstrated in comparison with the classical Blahut-Arimoto algorithm for several known channels. Moreover, we also present a hybrid method combining the advantages of both the Blahut-Arimoto algorithm and our iterative approach. The new method is especially efficient for channels with an a priori unknown discrete input alphabet.
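
    For reference, the classical Blahut-Arimoto iteration used here as the benchmark can be written compactly as below; the binary symmetric channel is a standard test case, and this is not the authors' new method.

      import numpy as np

      def blahut_arimoto(P, tol=1e-9, max_iter=10_000):
          # P[x, y] = p(y|x) for a discrete memoryless channel.
          m = P.shape[0]
          r = np.full(m, 1.0 / m)                  # input distribution
          for _ in range(max_iter):
              q = r[:, None] * P                   # proportional to p(x, y)
              q /= q.sum(axis=0, keepdims=True)    # posterior p(x|y)
              # Update r_x proportional to exp(sum_y p(y|x) log q(x|y)).
              w = np.exp(np.sum(P * np.log(q + 1e-300), axis=1))
              r_new = w / w.sum()
              done = np.max(np.abs(r_new - r)) < tol
              r = r_new
              if done:
                  break
          q = r[:, None] * P
          q /= q.sum(axis=0, keepdims=True)
          cap = float(np.sum(r[:, None] * P * np.log2((q + 1e-300) / r[:, None])))
          return cap, r

      # Binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) ~ 0.531.
      cap, r = blahut_arimoto(np.array([[0.9, 0.1], [0.1, 0.9]]))
      print(f"capacity: {cap:.4f} bits/use, optimal input: {np.round(r, 3)}")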

  12. Query Optimization in Distributed Databases through Load Balancing.

    DTIC Science & Technology

    1986-08-06

    bound solution methods to the 0-1 integer programming problem. Although their approaches were more efficient, they still required about a second of...return to the value that it had before the large increase). We did not carry out sufficiently long measurements to feel confident enough to draw any...easily implemented in such a system, we can be more confident that we can adapt them to less sophisticated DDBMS's. Furthermore, a lot of effort has been

  13. A normative inference approach for optimal sample sizes in decisions from experience

    PubMed Central

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  14. Optimal replacement policy for single pipes in water distribution networks

    NASA Astrophysics Data System (ADS)

    Luong, Huynh T.; Nagarur, Nagen N.

    2001-12-01

    In the actual operation of a distribution network, failure can occur in any component of the network, such as pumps, valves, junctions, and pipes. When a component is experiencing failure, the question raised is whether to replace or repair it. For the case of a pipe in the distribution network, which is one of the components most frequently subject to failure, there are some aspects that still remain unresolved in relation to practical operations of a maintenance program. In this research paper, a mathematical model is developed which aims to support the decision to repair or replace a main pipe in a state of failure. The objective of the model is to maximize the long-run availability of the pipe under some budget constraints. A semi-Markov process is used to depict the behavior of the pipe, and replacement ages of the pipe in each of its deteriorating stages are taken as the decision variables. The original nonlinear problem resulting from model formulation is converted to a linear problem by some simple transformations, and then numerical experiments are conducted to illustrate the applicability of the proposed model.

  15. A random optimization approach for inherent optic properties of nearshore waters

    NASA Astrophysics Data System (ADS)

    Zhou, Aijun; Hao, Yongshuai; Xu, Kuo; Zhou, Heng

    2016-10-01

    Traditional methods of water quality sampling are time-consuming and costly, and cannot meet the needs of social development. Hyperspectral remote sensing offers good temporal resolution, wide spatial coverage, and rich spectral information, and so has good potential for water quality supervision. Via a semi-analytical method, remote sensing information can be related to water quality. The inherent optical properties are used to quantify the water quality, and an optical model of the water column is established to analyse its features. With the stochastic optimization algorithm Threshold Accepting, a global optimum of the unknown model parameters can be determined to obtain the distributions of chlorophyll, dissolved organic matter, and suspended particles in the water. Improving the search step of the optimization algorithm markedly reduces processing time and creates room for increasing the number of parameters. Redefining the optimization steps and acceptance criterion makes the whole inversion process more targeted, thus improving the accuracy of the inversion. Through application to simulated data provided by IOCCG and field data provided by NASA, the model has been continuously improved and refined. The result is a low-cost, effective model for retrieving water quality from hyperspectral remote sensing.
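
    Threshold Accepting itself is a short algorithm: accept any candidate whose deterioration stays under a shrinking threshold. The sketch below applies it to a toy three-parameter misfit standing in for the inherent-optical-property inversion; the misfit function, step size, and threshold schedule are all assumptions, not the authors' implementation.

      import numpy as np

      def threshold_accepting(f, x0, step=0.1, thresholds=None,
                              iters_per_t=200, seed=0):
          rng = np.random.default_rng(seed)
          if thresholds is None:
              thresholds = np.linspace(1.0, 0.0, 10)   # shrinking schedule
          x, fx = np.asarray(x0, float), f(x0)
          best, fbest = x.copy(), fx
          for T in thresholds:
              for _ in range(iters_per_t):
                  cand = x + rng.normal(0.0, step, size=x.shape)
                  fc = f(cand)
                  if fc - fx < T:              # accept mild deteriorations
                      x, fx = cand, fc
                      if fx < fbest:
                          best, fbest = x.copy(), fx
          return best, fbest

      # Toy misfit for three unknowns (chlorophyll, CDOM, suspended
      # particles) -- purely illustrative.
      true = np.array([0.5, 1.2, 0.8])
      f = lambda x: float(np.sum((np.asarray(x) - true) ** 2))
      x, fx = threshold_accepting(f, np.zeros(3))
      print("recovered:", np.round(x, 2), "misfit:", round(fx, 4))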

  16. Real-time, large scale optimization of water network systems using a subdomain approach.

    SciTech Connect

    van Bloemen Waanders, Bart Gustaaf; Biegler, Lorenz T.; Laird, Carl Damon

    2005-03-01

    Certain classes of dynamic network problems can be modeled by a set of hyperbolic partial differential equations describing behavior along network edges and a set of differential and algebraic equations describing behavior at network nodes. In this paper, we demonstrate real-time performance for optimization problems in drinking water networks. While optimization problems subject to partial differential, differential, and algebraic equations can be solved with a variety of techniques, efficient solutions are difficult for large network problems with many degrees of freedom and variable bounds. Sequential optimization strategies can be inefficient for this problem due to the high cost of computing derivatives with respect to many degrees of freedom. Simultaneous techniques can be more efficient, but are difficult because of the need to solve a large nonlinear program; a program that may be too large for current solvers. This study describes a dynamic optimization formulation for estimating contaminant sources in drinking water networks, given concentration measurements at various network nodes. We achieve real-time performance by combining an efficient large-scale nonlinear programming algorithm with two problem reduction techniques. D'Alembert's principle can be applied to the partial differential equations governing behavior along the network edges (distribution pipes). This allows us to approximate the time-delay relationships between network nodes, removing the need to discretize along the length of the pipes. The efficiency of this approach alone, however, is still dependent on the size of the network and does not scale indefinitely to larger network models. We further reduce the problem size with a subdomain approach and solve smaller inversion problems using a geographic window around the area of contamination. We illustrate the effectiveness of this overall approach and these reduction techniques on an actual metropolitan water network model.

  17. Curricular Policy as a Collective Effects Problem: A Distributional Approach

    PubMed Central

    Penner, Andrew M.; Domina, Thurston; Penner, Emily K.; Conley, AnneMarie

    2015-01-01

    Current educational policies in the United States attempt to boost student achievement and promote equality by intensifying the curriculum and exposing students to more advanced coursework. This paper investigates the relationship between one such effort -- California's push to enroll all 8th grade students in Algebra -- and the distribution of student achievement. We suggest that this effort is an instance of a “collective effects” problem, where the population-level effects of a policy are different from its effects at the individual level. In such contexts, we argue that it is important to consider broader population effects as well as the difference between “treated” and “untreated” individuals. To do so, we present differences in inverse propensity score weighted distributions to investigate how this curricular policy changed the distribution of student achievement more broadly. We find that California's attempt to intensify the curriculum did not raise test scores at the bottom of the distribution, but did lower scores at the top of the distribution. These results highlight the efficacy of inverse propensity score weighting approaches for estimating collective effects, and provide a cautionary tale for curricular intensification efforts and other policies with collective effects. PMID:26004485

  18. Optimization of Water Distribution and Water Quality by Genetic Algorithm and Nonlinear Programming

    NASA Astrophysics Data System (ADS)

    Tu, M.; Tsai, F. T.; Yeh, W. W.

    2001-12-01

    When managing a regional water distribution system, it is important not only to optimize water allocation but also to meet the desired water quality requirements. This paper develops a multicommodity flow model that can be used to optimize water distribution and water quality in a regional water supply system. Waters from different sources with different quality are considered as distinct commodities, which concurrently share a single water distribution system. Volumetric water blend is used to represent water quality in the proposed model. The multicommodity model is capable of handling two-way flow pipes, represented as undirected arcs, and the perfect mixing condition. Additionally, blending requirements are specified at certain control nodes within the water distribution system to ensure that downstream users receive the desired water quality. The developed multicommodity flow model is embedded in a nonlinear optimization model. To reduce nonlinearity and to improve convergence, GA is combined with a gradient-based algorithm to solve the nonlinearly constrained optimization model, whereby GA is used to search for the optimal direction of all undirected arcs in the system and is iteratively linked with a nonlinear programming solver. The proposed methodology was first tested and verified on a simplified hypothetical system and then applied to the regional water distribution system of the Metropolitan Water District of Southern California. The results obtained indicate that the optimization model can efficiently allocate waters from different sources with different quality to satisfy the blending requirements, the perfect mixing and two-way flow conditions.

  19. Distributed Particle Swarm Optimization and Simulated Annealing for Energy-efficient Coverage in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei

    2007-01-01

    The limited energy supply of wireless sensor networks poses a great challenge for the deployment of wireless sensor nodes. In this paper, we focus on energy-efficient coverage with distributed particle swarm optimization and simulated annealing. First, the energy-efficient coverage problem is formulated with sensing coverage and energy consumption models. We consider the network composed of stationary and mobile nodes. Second, coverage and energy metrics are presented to evaluate the coverage rate and energy consumption of a wireless sensor network, where a grid exclusion algorithm extracts the coverage state and Dijkstra's algorithm calculates the lowest cost path for communication. Then, a hybrid algorithm optimizes the energy consumption, in which particle swarm optimization and simulated annealing are combined to find the optimal deployment solution in a distributed manner. Simulated annealing is performed on multiple wireless sensor nodes, results of which are employed to correct the local and global best solution of particle swarm optimization. Simulations of wireless sensor node deployment verify that coverage performance can be guaranteed, energy consumption of communication is conserved after deployment optimization and the optimization performance is boosted by the distributed algorithm. Moreover, it is demonstrated that energy efficiency of wireless sensor networks is enhanced by the proposed optimization algorithm in target tracking applications.
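
    One way to realize the described hybrid is sketched below, under invented coverage geometry and tuning constants: a standard PSO loop whose global best is periodically perturbed by a simulated annealing step, with the better of the two retained. The coverage objective is a simplified stand-in for the paper's coverage and energy metrics.

      import numpy as np

      rng = np.random.default_rng(3)
      # Coverage surrogate: place 5 nodes on a unit square to minimize the
      # sum of squared distances from grid points to their nearest sensor.
      grid = np.stack(np.meshgrid(np.linspace(0, 1, 10),
                                  np.linspace(0, 1, 10)), -1).reshape(-1, 2)

      def cost(pos):
          pos = pos.reshape(-1, 2)
          d = np.linalg.norm(grid[:, None, :] - pos[None, :, :], axis=-1)
          return float((d.min(axis=1) ** 2).sum())

      n_particles, dim = 20, 10
      x = rng.random((n_particles, dim)); v = np.zeros_like(x)
      pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
      g = pbest[np.argmin(pbest_f)].copy(); g_f = pbest_f.min()

      for it in range(100):
          # Standard PSO velocity/position update.
          r1, r2 = rng.random((2, n_particles, dim))
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
          x = np.clip(x + v, 0, 1)
          fx = np.array([cost(p) for p in x])
          better = fx < pbest_f
          pbest[better], pbest_f[better] = x[better], fx[better]
          # SA correction: perturb the global best and accept worse moves
          # with a temperature-controlled probability.
          T = 0.1 * 0.95 ** it
          cand = np.clip(g + rng.normal(0, 0.05, dim), 0, 1)
          fc = cost(cand)
          if fc < g_f or rng.random() < np.exp(-(fc - g_f) / T):
              g, g_f = cand, fc
          if pbest_f.min() < g_f:              # feed PSO's best back in
              g = pbest[np.argmin(pbest_f)].copy(); g_f = pbest_f.min()

      print(f"best coverage cost: {g_f:.4f}")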

  20. Peak-Seeking Optimization of Spanwise Lift Distribution for Wings in Formation Flight

    NASA Technical Reports Server (NTRS)

    Hanson, Curtis E.; Ryan, Jack

    2012-01-01

    A method is presented for the in-flight optimization of the lift distribution across the wing for minimum drag of an aircraft in formation flight. The usual elliptical distribution that is optimal for a given wing with a given span is no longer optimal for the trailing wing in a formation due to the asymmetric nature of the encountered flow field. Control surfaces along the trailing edge of the wing can be configured to obtain a non-elliptical profile that is more optimal in terms of minimum combined induced and profile drag. Due to the difficult-to-predict nature of formation flight aerodynamics, a Newton-Raphson peak-seeking controller is used to identify in real time the best aileron and flap deployment scheme for minimum total drag. Simulation results show that the peak-seeking controller correctly identifies an optimal trim configuration that provides additional drag savings above those achieved with conventional anti-symmetric aileron trim.
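
    The controller idea reduces to estimating the local gradient and curvature of the measured drag and taking a Newton step. The sketch below uses a noisy quadratic drag bowl and a single trim variable; the optimum location, noise level, and probe size are invented for illustration, not flight data.

      import numpy as np

      rng = np.random.default_rng(42)
      u_star = 0.35                               # unknown optimal trim
      # Noisy drag measurement as a function of the trim parameter u.
      drag = lambda u: 1.0 + 4.0 * (u - u_star) ** 2 + rng.normal(0, 1e-3)

      u, h = 0.0, 0.05                            # start point and probe size
      for _ in range(20):
          # Finite-difference gradient and curvature from three measurements.
          f_m, f_0, f_p = drag(u - h), drag(u), drag(u + h)
          grad = (f_p - f_m) / (2 * h)
          curv = (f_p - 2 * f_0 + f_m) / h ** 2
          if curv > 0:
              u -= grad / curv                    # Newton step to the minimum
          else:
              u -= 0.1 * np.sign(grad)            # fall back to a fixed step

      print(f"converged trim: {u:.3f} (true optimum {u_star})")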

  1. On the optimality of individual entangling-probe attacks against BB84 quantum key distribution

    NASA Astrophysics Data System (ADS)

    Herbauts, I. M.; Bettelli, S.; Hübel, H.; Peev, M.

    2008-02-01

    Some MIT researchers [Phys. Rev. A 75, 042327 (2007)] have recently claimed that their implementation of the Slutsky-Brandt attack [Phys. Rev. A 57, 2383 (1998); Phys. Rev. A 71, 042312 (2005)] to the BB84 quantum-key-distribution (QKD) protocol puts the security of this protocol “to the test” by simulating “the most powerful individual-photon attack” [Phys. Rev. A 73, 012315 (2006)]. A related unfortunate news feature by a scientific journal [G. Brumfiel, Quantum cryptography is hacked, News @ Nature (April 2007); Nature 447, 372 (2007)] has spurred some concern in the QKD community and among the general public by misinterpreting the implications of this work. The present article proves the existence of a stronger individual attack on QKD protocols with encrypted error correction, for which tight bounds are shown, and clarifies why the claims of the news feature incorrectly suggest a contradiction with the established “old-style” theory of BB84 individual attacks. The full implementation of a quantum cryptographic protocol includes a reconciliation and a privacy-amplification stage, whose choice alters in general both the maximum extractable secret and the optimal eavesdropping attack. The authors of [Phys. Rev. A 75, 042327 (2007)] are concerned only with the error-free part of the so-called sifted string, and do not consider faulty bits, which, in the version of their protocol, are discarded. When using the provably superior reconciliation approach of encrypted error correction (instead of error discard), the Slutsky-Brandt attack is no longer optimal and does not “threaten” the security bound derived by Lütkenhaus [Phys. Rev. A 59, 3301 (1999)]. It is shown that the method of Slutsky and collaborators [Phys. Rev. A 57, 2383 (1998)] can be adapted to reconciliation with error correction, and that the optimal entangling probe can be explicitly found. Moreover, this attack fills Lütkenhaus' bound, proving that it is tight (a fact which was not

  2. Determination and optimization of spatial samples for distributed measurements.

    SciTech Connect

    Huo, Xiaoming; Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.

  3. Selecting radiotherapy dose distributions by means of constrained optimization problems.

    PubMed

    Alfonso, J C L; Buttazzo, G; García-Archilla, B; Herrero, M A; Núñez, L

    2014-05-01

    The main steps in planning radiotherapy consist in selecting for any patient diagnosed with a solid tumor (i) a prescribed radiation dose on the tumor, (ii) bounds on the radiation side effects on nearby organs at risk and (iii) a fractionation scheme specifying the number and frequency of therapeutic sessions during treatment. The goal of any radiotherapy treatment is to deliver on the tumor a radiation dose as close as possible to that selected in (i), while at the same time conforming to the constraints prescribed in (ii). To this day, considerable uncertainties remain concerning the best manner in which such issues should be addressed. In particular, the choice of a prescription radiation dose is mostly based on clinical experience accumulated on the particular type of tumor considered, without any direct reference to quantitative radiobiological assessment. Interestingly, mathematical models for the effect of radiation on biological matter have existed for quite some time, and are widely acknowledged by clinicians. However, the difficulty of obtaining accurate in vivo measurements of the radiobiological parameters involved has severely restricted their direct application in current clinical practice. In this work, we first propose a mathematical model to select radiation dose distributions as solutions (minimizers) of suitable variational problems, under the assumption that key radiobiological parameters for tumors and organs at risk involved are known. Second, by analyzing the dependence of such solutions on the parameters involved, we then discuss the manner in which the use of those minimizers can improve current decision-making processes to select clinical dosimetries when (as is generally the case) only partial information on model radiosensitivity parameters is available. A comparison of the proposed radiation dose distributions with those actually delivered in a number of clinical cases strongly suggests that solutions of our mathematical model can be
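
    A much-reduced sketch of the kind of constrained problem involved (not the paper's variational model): nonnegative beamlet weights w produce a dose A·w, the objective tracks the prescription (i) on tumor voxels, and the organ-at-risk bound (ii) enters as an inequality constraint. All matrices and limits are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# Hypothetical dose-influence matrices: rows = voxels, columns = beamlet weights.
A_tumor = rng.uniform(0.5, 1.0, (40, 8))   # dose per unit weight in tumor voxels
A_oar = rng.uniform(0.0, 0.4, (25, 8))     # dose per unit weight in an organ at risk
d_presc, d_oar_max = 60.0, 20.0            # prescription (i) and OAR bound (ii), in Gy

def objective(w):
    """Squared deviation of the tumor dose from the prescription."""
    return np.sum((A_tumor @ w - d_presc) ** 2)

cons = [{"type": "ineq", "fun": lambda w: d_oar_max - A_oar @ w}]  # OAR dose <= bound
res = minimize(objective, x0=np.full(8, 10.0), method="SLSQP",
               bounds=[(0, None)] * 8, constraints=cons)

print("beamlet weights:", np.round(res.x, 2))
print("max OAR dose   :", (A_oar @ res.x).max())
```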

  4. An ant colony optimization heuristic for an integrated production and distribution scheduling problem

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju

    2014-04-01

    Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.
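
    The pheromone mechanics of ant colony optimization can be sketched on a stripped-down sub-problem, sequencing jobs on one machine to minimize total weighted completion time (a stand-in for the article's integrated model; all data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n_jobs = 8
proc = rng.uniform(1, 10, n_jobs)     # hypothetical processing times
weight = rng.uniform(1, 5, n_jobs)    # job weights (delivery urgency)

def cost(seq):
    """Total weighted completion time of a job sequence."""
    t, c = 0.0, 0.0
    for j in seq:
        t += proc[j]
        c += weight[j] * t
    return c

tau = np.ones((n_jobs, n_jobs))       # pheromone: tau[position, job]
best_seq, best_cost = None, np.inf
for it in range(200):
    ants = []
    for _ in range(10):               # each ant builds a sequence probabilistically
        remaining, seq = list(range(n_jobs)), []
        for pos in range(n_jobs):
            p = tau[pos, remaining] ** 2.0
            j = rng.choice(remaining, p=p / p.sum())
            seq.append(j); remaining.remove(j)
        ants.append((cost(seq), seq))
    tau *= 0.9                        # evaporation
    c, seq = min(ants)
    for pos, j in enumerate(seq):     # deposit on the iteration-best sequence
        tau[pos, j] += 1.0 / c
    if c < best_cost:
        best_cost, best_seq = c, seq

print(best_seq, round(best_cost, 2))
```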

  5. OPTIMIZATION OF COMMINUTION CIRCUIT THROUGHPUT AND PRODUCT SIZE DISTRIBUTION BY SIMULATION AND CONTROL

    SciTech Connect

    S.K. Kawatra; T.C. Eisele; T. Weldum; D. Larsen; R. Mariani; J. Pletka

    2005-01-01

    The goal of this project is to improve energy efficiency of industrial crushing and grinding operations (comminution). Mathematical models of the comminution process are being used to study methods for optimizing the product size distribution, so that the amount of excessively fine material produced can be minimized. The goal is to save energy by reducing the amount of material that is ground below the target size, while simultaneously reducing the quantity of materials wasted as "slimes" that are too fine to be useful. This is being accomplished by mathematical modeling of the grinding circuits to determine how to correct this problem. The approaches taken included (1) Modeling of the circuit to determine process bottlenecks that restrict flowrates in one area while forcing other parts of the circuit to overgrind the material; (2) Modeling of hydrocyclones to determine the mechanisms responsible for retaining fine, high-density particles in the circuit until they are overground, and improving existing models to accurately account for this behavior; and (3) Evaluation of advanced technologies to improve comminution efficiency and produce sharper product size distributions with less overgrinding.

  6. OPTIMIZATION OF COMMINUTION CIRCUIT THROUGHPUT AND PRODUCT SIZE DISTRIBUTION BY SIMULATION AND CONTROL

    SciTech Connect

    T.C. Eisele; S.K. Kawatra; H.J. Walqui

    2004-10-01

    The goal of this project is to improve energy efficiency of industrial crushing and grinding operations (comminution). Mathematical models of the comminution process are being used to study methods for optimizing the product size distribution, so that the amount of excessively fine material produced can be minimized. The goal is to save energy by reducing the amount of material that is ground below the target size, while simultaneously reducing the quantity of materials wasted as "slimes" that are too fine to be useful. This is being accomplished by mathematical modeling of the grinding circuits to determine how to correct this problem. The approaches taken included (1) Modeling of the circuit to determine process bottlenecks that restrict flowrates in one area while forcing other parts of the circuit to overgrind the material; (2) Modeling of hydrocyclones to determine the mechanisms responsible for retaining fine, high-density particles in the circuit until they are overground, and improving existing models to accurately account for this behavior; and (3) Evaluation of advanced technologies to improve comminution efficiency and produce sharper product size distributions with less overgrinding.

  7. Optimization of Comminution Circuit Throughput and Product Size Distribution by Simulation and Control

    SciTech Connect

    S. K. Kawatra; T. C. Eisele; T. Weldum; D. Larsen; R. Mariani; J. Pletka

    2005-03-31

    The goal of this project is to improve energy efficiency of industrial crushing and grinding operations (comminution). Mathematical models of the comminution process are being used to study methods for optimizing the product size distribution, so that the amount of excessively fine material produced can be minimized. The goal is to save energy by reducing the amount of material that is ground below the target size, while simultaneously reducing the quantity of materials wasted as "slimes" that are too fine to be useful. This is being accomplished by mathematical modeling of the grinding circuits to determine how to correct this problem. The approaches taken included (1) Modeling of the circuit to determine process bottlenecks that restrict flow rates in one area while forcing other parts of the circuit to overgrind the material; (2) Modeling of hydrocyclones to determine the mechanisms responsible for retaining fine, high-density particles in the circuit until they are overground, and improving existing models to accurately account for this behavior; and (3) Evaluation of advanced technologies to improve comminution efficiency and produce sharper product size distributions with less overgrinding.

  8. New Approaches to HSCT Multidisciplinary Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schrage, Daniel P.; Craig, James I.; Fulton, Robert E.; Mistree, Farrokh

    1999-01-01

    New approaches to MDO have been developed and demonstrated during this project on a particularly challenging aeronautics problem: HSCT Aeroelastic Wing Design. Tackling this problem required the integration of resources and collaboration from three Georgia Tech laboratories: ASDL, SDL, and PPRL, along with close coordination and participation from industry. Its success can also be attributed to the close interaction and involvement of fellows from the NASA Multidisciplinary Analysis and Optimization (MAO) program, which was going on in parallel and provided additional resources to work the very complex, multidisciplinary problem, along with the methods being developed. The development of the Integrated Design Engineering Simulator (IDES) and its initial demonstration is a necessary first step in transitioning the methods and tools developed to larger industrial-sized problems of interest. It also provides a framework for the implementation and demonstration of the methodology. Attachment: Appendix A - List of publications. Appendix B - Year 1 report. Appendix C - Year 2 report. Appendix D - Year 3 report. Appendix E - accompanying CDROM.

  10. A multiscale optimization approach to detect exudates in the macula.

    PubMed

    Agurto, Carla; Murray, Victor; Yu, Honggang; Wigdahl, Jeffrey; Pattichis, Marios; Nemeth, Sheila; Barriga, E Simon; Soliz, Peter

    2014-07-01

    Pathologies that occur on or near the fovea, such as clinically significant macular edema (CSME), represent high risk for vision loss. The presence of exudates, lipid residues of serous leakage from damaged capillaries, has been associated with CSME, in particular if they are located one optic disc-diameter away from the fovea. In this paper, we present an automatic system to detect exudates in the macula. Our approach uses optimal thresholding of instantaneous amplitude (IA) components that are extracted from multiple frequency scales to generate candidate exudate regions. For each candidate region, we extract color, shape, and texture features that are used for classification. Classification is performed using partial least squares (PLS). We tested the performance of the system on two different databases of 652 and 400 images. The system achieved an area under the receiver operator characteristic curve (AUC) of 0.96 for the combination of both databases and an AUC of 0.97 for each of them when they were evaluated independently.

  11. [Niacin--an additive therapeutic approach for optimizing lipid profile].

    PubMed

    Wieneke, Heinrich; Schmermund, Axel; Erbel, Raimund

    2005-04-15

    Large interventional studies have shown that the reduction of total cholesterol and low-density lipoprotein cholesterol (LDL-C) is one of the cornerstones in the prevention of coronary artery disease. However, in up to 40% of patients the recommended LDL-C target is not reached with monotherapy. Furthermore, risk stratification by LDL-C alone disregards a substantial number of patients with dyslipidemia with increased triglycerides and decreased high-density lipoprotein cholesterol (HDL-C). In consequence, niacin has gained attention as a component of a combined therapeutic approach in patients with dyslipidemia. Niacin substantially increases HDL-C and decreases triglycerides, LDL-C and lipoprotein (a). By this mechanism of action, niacin exhibited, in combination with statins or bile acid-binding resins, favorable effects on the incidence of cardiovascular events in selected patients. Side effects like flushing and hepatotoxicity seem to be in part dependent on the niacin formulation used. However, niacin has been shown to be a well-tolerated and safe therapy in controlled studies. On the basis of current data, niacin should be considered a valuable component of therapy in patients with dyslipidemia for whom monotherapy fails to adequately control an increased risk of coronary artery disease.

  12. A systematic approach: optimization of healthcare operations with knowledge management.

    PubMed

    Wickramasinghe, Nilmini; Bali, Rajeev K; Gibbons, M Chris; Choi, J H James; Schaffer, Jonathan L

    2009-01-01

    Effective decision making is vital in all healthcare activities. While this decision making is typically complex and unstructured, it requires the decision maker to gather multispectral data and information in order to make an effective choice when faced with numerous options. Unstructured decision making in dynamic and complex environments is challenging, and in almost every situation the decision maker is faced with information inferiority. The need for germane knowledge, pertinent information and relevant data is critical, and hence harnessing knowledge and embracing the tools, techniques, technologies and tactics of knowledge management are essential to ensuring efficiency and efficacy in the decision making process. The systematic approach and application of knowledge management (KM) principles and tools can provide the necessary foundation for improving the decision making processes in healthcare. A combination of Boyd's OODA Loop (Observe, Orient, Decide, Act) and the Intelligence Continuum provides an integrated, systematic and dynamic model for ensuring that the healthcare decision maker is always provided with the appropriate and necessary knowledge elements that will help to ensure that the outcomes of healthcare decision making processes are optimized for maximal patient benefit. The example of orthopaedic operating room processes illustrates the application of the integrated model to support effective decision making in the clinical environment.

  13. Comparison between different direct search optimization algorithms in the calibration of a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca

    2010-05-01

    Modern distributed hydrological models allow the representation of different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm), which minimize a cost function representing some distance between the model's output and the available measures, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that perform an assimilation of the observations. In this work the first approach was followed in order to compare the performances of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box), and it comprises a large number of possible approaches. The main benefit of this class of methods is that they don't require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and taken here as a reference. The second algorithm is a GSS (Generating Set Search) algorithm, built to guarantee the conditions of global convergence and suitable for the parallel, multi-start implementation presented here. The third is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black-box cost functions that are computationally expensive to evaluate (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high. The hydrological model to be calibrated was MOBIDIC, a complete balance
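
    As an illustration of this black-box calibration setting (with a toy two-parameter reservoir model standing in for MOBIDIC), the Nelder-Mead reference algorithm can be driven through scipy:

```python
import numpy as np
from scipy.optimize import minimize

rain = np.array([0, 5, 20, 10, 0, 0, 8, 0, 0, 0], dtype=float)  # hypothetical forcing

def runoff_model(params, rain):
    """Toy black-box model: linear reservoir with storage coefficient k, runoff ratio c."""
    k, c = params
    storage, q = 0.0, []
    for r in rain:
        storage += c * r
        out = storage / max(k, 1e-6)
        storage -= out
        q.append(out)
    return np.array(q)

q_obs = runoff_model([3.0, 0.6], rain)          # synthetic "observations"

def cost(params):
    """RMSE between simulated and observed discharge (the black-box cost function)."""
    return np.sqrt(np.mean((runoff_model(params, rain) - q_obs) ** 2))

res = minimize(cost, x0=[1.0, 0.3], method="Nelder-Mead")
print(res.x)   # recovers k ~ 3.0 and c ~ 0.6 without any derivative information
```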

  14. Cooperative Management of a Lithium-Ion Battery Energy Storage Network: A Distributed MPC Approach

    SciTech Connect

    Fang, Huazhen; Wu, Di; Yang, Tao

    2016-12-12

    This paper presents a study of cooperative power supply and storage for a network of Lithium-ion battery energy storage systems (LiBESSs). We propose to develop a distributed model predictive control (MPC) approach for two reasons. First, because it can account for the practical constraints of a LiBESS, MPC enables constraint-aware operation. Second, distributed management can cope with a complex network that integrates a large number of LiBESSs over a complex communication topology. With this motivation, we build a fully distributed MPC algorithm from an optimization perspective, based on an extension of the alternating direction method of multipliers (ADMM). A simulation example is provided to demonstrate the effectiveness of the proposed algorithm.
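
    The consensus flavor of ADMM underlying such algorithms can be shown on a toy problem: each LiBESS holds a local quadratic cost and a local copy of a shared setpoint, and the ADMM iterations drive the copies to the network-wide optimum. This is a generic sketch, not the paper's MPC formulation; all costs are hypothetical:

```python
import numpy as np

# Each LiBESS i has a local cost f_i(x) = 0.5*a_i*(x - b_i)^2, e.g. a preferred
# power contribution b_i weighted by a_i; all agents must agree on a common x.
a = np.array([1.0, 2.0, 0.5, 1.5])   # hypothetical local cost curvatures
b = np.array([4.0, -1.0, 2.0, 3.0])  # hypothetical preferred setpoints
rho = 1.0                            # ADMM penalty parameter

x = np.zeros(4)      # local copies of the shared variable
z = 0.0              # consensus variable
u = np.zeros(4)      # scaled dual variables

for k in range(100):
    # Local step: each agent minimizes f_i(x_i) + (rho/2)(x_i - z + u_i)^2, closed form.
    x = (a * b + rho * (z - u)) / (a + rho)
    # Consensus step: averaging of local variables plus duals (on a connected
    # graph this average can itself be computed by distributed averaging).
    z = np.mean(x + u)
    # Dual update.
    u += x - z

print(z, (a * b).sum() / a.sum())   # ADMM consensus matches the analytic optimum
```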

  15. A Novel Paradigm for Computer-Aided Design: TRIZ-Based Hybridization of Topologically Optimized Density Distributions

    NASA Astrophysics Data System (ADS)

    Cardillo, A.; Cascini, G.; Frillici, F. S.; Rotini, F.

    In a recent project the authors have proposed the adoption of Optimization Systems [1] as a bridging element between Computer-Aided Innovation (CAI) and PLM to identify geometrical contradictions [2], a particular case of the TRIZ physical contradiction [3]. A further development of the research [4] has revealed that the solutions obtained from several topological optimizations can be considered as elementary customized modeling features for a specific design task. The topology overcoming the arising geometrical contradiction can be obtained through a manipulation of the density distributions constituting the conflicting pair. Two strategies of density combination have already been identified as capable of solving geometrical contradictions, and several others are under extended testing. The paper illustrates the most recent results of the ongoing research, mainly related to the extension of the algorithms from 2D to 3D design spaces. The whole approach is clarified by means of two detailed examples, where the proposed technique is compared with classical multi-goal optimization.

  16. Research on solving the optimal sizing and siting of distributed generation

    NASA Astrophysics Data System (ADS)

    Wang, Bo

    2017-01-01

    In this paper, a distribution network planning model is proposed with the goal of minimizing the sum of distributed power investment cost, network loss and interruption cost. In order to compare the performance of the differential evolution algorithm (DE) and the genetic algorithm (GA) in solving the optimal sizing and siting of distributed generation in distribution networks, the two algorithms were adopted to optimize the capacities and positions of DGs. Analysis of a 10-bus test system shows that the proposed model and algorithms can produce a reasonable planning scheme. In solving simple optimization problems, both GA and DE can obtain good results, but compared to DE, GA converges more slowly and its convergence process is less stable.
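
    A minimal sketch of DE applied to this kind of planning problem, with a hypothetical cost standing in for the investment, network-loss and interruption terms, and one candidate DG per bus (capacity zero meaning the bus is not selected):

```python
import numpy as np
from scipy.optimize import differential_evolution

n_buses = 10
load = np.linspace(0.5, 1.4, n_buses)          # hypothetical bus loads (MW)

def planning_cost(cap):
    """Toy stand-in for investment + network loss + interruption cost."""
    invest = 50.0 * cap.sum()                              # cost per MW installed
    mismatch = load - cap
    loss = 120.0 * np.sum(np.maximum(mismatch, 0) ** 2)    # unserved load drives losses
    interruption = 30.0 * np.sum(np.abs(mismatch))
    return invest + loss + interruption

# Capacity of a candidate DG at every bus; siting and sizing are optimized jointly.
bounds = [(0.0, 2.0)] * n_buses
res = differential_evolution(planning_cost, bounds, seed=3, maxiter=200)
print(np.round(res.x, 2), round(res.fun, 1))
```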

  17. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Constraints can be handled similarly, using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
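
    The parametric idea can be sketched as follows: a one-parameter liquidation schedule, execution-cost samples generated by Monte Carlo over common random price paths, and a static optimization of the parameter against expected cost plus a CVaR term. All market parameters are hypothetical and the schedule family is chosen only for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
T, X, sigma, eta, lam = 20, 1e5, 0.05, 2e-6, 1.0     # hypothetical market parameters
t = np.arange(T)
dW = rng.normal(0, sigma, (4000, T)).cumsum(axis=1)  # common random paths for all kappa

def cost_samples(kappa):
    """Execution-cost samples for an exponential schedule n_t ~ exp(-kappa*t)."""
    sched = np.exp(-kappa * t)
    sched *= X / sched.sum()                   # shares sold per period, summing to X
    shortfall = (sched * (-dW)).sum(axis=1)    # cost of adverse price moves
    impact = eta * np.sum(sched ** 2)          # temporary market-impact cost
    return shortfall + impact

def objective(kappa):
    """Expected cost plus lam * CVaR_5%, both estimated from the same simulations."""
    c = cost_samples(kappa)
    var = np.quantile(c, 0.95)
    return c.mean() + lam * c[c >= var].mean()

res = minimize_scalar(objective, bounds=(0.0, 1.0), method="bounded")
print(round(res.x, 3))   # optimized schedule parameter
```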

  18. Data Collection for Mobile Group Consumption: An Asynchronous Distributed Approach.

    PubMed

    Zhu, Weiping; Chen, Weiran; Hu, Zhejie; Li, Zuoyou; Liang, Yue; Chen, Jiaojiao

    2016-04-06

    Mobile group consumption refers to consumption by a group of people, such as a couple, a family, colleagues or friends, based on mobile communications. It differs from consumption involving only individuals because of the complex relations among group members. Existing data collection systems for mobile group consumption are centralized, which has the disadvantages of being a performance bottleneck, having a single point of failure, and increasing business and security risks. Moreover, these data collection systems are based on a synchronized clock, which is often unrealistic because of hardware constraints, privacy concerns or synchronization cost. In this paper, we propose the first asynchronous distributed approach to collecting data generated by mobile group consumption. We formally build a system model based on asynchronous distributed communication. We then design a simulation system for the model, for which we propose a three-layer solution framework. After that, we describe how to detect, from the collected data, the causality relation between two or three gathering events that happened in the system. Various definitions of causality relations based on asynchronous distributed communication are supported. Extensive simulation results show that the proposed approach is effective for data collection relating to mobile group consumption.
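
    One standard way to decide causality between events without a synchronized clock is a vector clock; the sketch below is a generic illustration of that mechanism (not the paper's own definitions), stamping gathering events and testing the happened-before relation:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A mobile client that stamps its gathering events with a vector clock."""
    idx: int
    n: int
    clock: list = field(default_factory=list)

    def __post_init__(self):
        self.clock = [0] * self.n

    def local_event(self):
        self.clock[self.idx] += 1
        return tuple(self.clock)              # timestamp of the gathering event

    def receive(self, msg_clock):
        self.clock = [max(a, b) for a, b in zip(self.clock, msg_clock)]
        self.clock[self.idx] += 1

def happened_before(e1, e2):
    """e1 -> e2 iff e1's clock is componentwise <= e2's and strictly < somewhere."""
    return all(a <= b for a, b in zip(e1, e2)) and any(a < b for a, b in zip(e1, e2))

# Two members of a consumption group; B hears from A asynchronously, then acts.
A, B = Node(0, 2), Node(1, 2)
e1 = A.local_event()          # A records a gathering event
B.receive(A.clock)            # A's state reaches B with arbitrary delay
e2 = B.local_event()
print(happened_before(e1, e2), happened_before(e2, e1))  # True False
```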

  19. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems.

    PubMed

    Cho, Ming-Yuan; Hoang, Thi Thom

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
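
    A condensed sketch of the parameter-optimization part (feature selection would simply extend each particle with a binary mask): a plain PSO searches (log C, log gamma) of an RBF SVM on synthetic data, scoring particles by cross-validated accuracy. The dataset and PSO constants are hypothetical:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           random_state=0)   # stand-in for the TDR/PRBS dataset

def fitness(p):
    """Cross-validated accuracy of an RBF-SVM with C = 10**p[0], gamma = 10**p[1]."""
    clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(clf, X, y, cv=3).mean()

# Plain PSO over (log10 C, log10 gamma) in [-2, 3] x [-4, 1].
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
pos = rng.uniform(lo, hi, (15, 2))
vel = np.zeros_like(pos)
pbest, pfit = pos.copy(), np.array([fitness(p) for p in pos])
g, gfit = pbest[pfit.argmax()].copy(), pfit.max()

for it in range(20):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, lo, hi)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pfit
    pbest[better], pfit[better] = pos[better], fit[better]
    if fit.max() > gfit:
        gfit, g = fit.max(), pos[fit.argmax()].copy()

print(f"best CV accuracy {gfit:.3f} at C=10^{g[0]:.2f}, gamma=10^{g[1]:.2f}")
```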

  20. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems

    PubMed Central

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method. PMID:28781591

  1. Facility optimization to improve activation rate distributions during IVNAA.

    PubMed

    Ebrahimi Khankook, Atiyeh; Rafat Motavalli, Laleh; Miri Hakimabad, Hashem

    2013-05-01

    Currently, determination of body composition is the most useful method for distinguishing between certain diseases. The prompt-gamma in vivo neutron activation analysis (IVNAA) facility for non-destructive elemental analysis of the human body is the gold standard method for this type of analysis. In order to obtain accurate measurements using the IVNAA system, the activation probability in the body must be uniform. This can be difficult to achieve, as body shape and body composition affect the rate of activation. The aim of this study was to determine the optimum pre-moderator material for attaining uniform activation probability, with a CV value of about 10%, and to change the role of the collimator so as to increase the activation rate within the body. Such uniformity was obtained with a thick paraffin pre-moderator; however, because it increased the secondary photon flux received by the detectors, it was not an appropriate choice. Our final calculations indicated that using two paraffin slabs with a thickness of 3 cm as a pre-moderator, in the presence of 2 cm of Bi on the collimator, achieves a satisfactory distribution of the activation rate in the body.

  2. Facility optimization to improve activation rate distributions during IVNAA

    PubMed Central

    Ebrahimi Khankook, Atiyeh; Rafat Motavalli, Laleh; Miri Hakimabad, Hashem

    2013-01-01

    Currently, determination of body composition is the most useful method for distinguishing between certain diseases. The prompt-gamma in vivo neutron activation analysis (IVNAA) facility for non-destructive elemental analysis of the human body is the gold standard method for this type of analysis. In order to obtain accurate measurements using the IVNAA system, the activation probability in the body must be uniform. This can be difficult to achieve, as body shape and body composition affect the rate of activation. The aim of this study was to determine the optimum pre-moderator material for attaining uniform activation probability, with a CV value of about 10%, and to change the role of the collimator so as to increase the activation rate within the body. Such uniformity was obtained with a thick paraffin pre-moderator; however, because it increased the secondary photon flux received by the detectors, it was not an appropriate choice. Our final calculations indicated that using two paraffin slabs with a thickness of 3 cm as a pre-moderator, in the presence of 2 cm of Bi on the collimator, achieves a satisfactory distribution of the activation rate in the body. PMID:23386375

  3. Job optimization in ATLAS TAG-based distributed analysis

    NASA Astrophysics Data System (ADS)

    Mambelli, M.; Cranshaw, J.; Gardner, R.; Maeno, T.; Malon, D.; Novak, M.

    2010-04-01

    The ATLAS experiment is projected to collect over one billion events/year during the first few years of operation. The efficient selection of events for various physics analyses across all appropriate samples presents a significant technical challenge. ATLAS computing infrastructure leverages the Grid to tackle the analysis across large samples by organizing data into a hierarchical structure and exploiting distributed computing to churn through the computations. This includes events at different stages of processing: RAW, ESD (Event Summary Data), AOD (Analysis Object Data), DPD (Derived Physics Data). Event Level Metadata Tags (TAGs) contain information about each event stored using multiple technologies accessible by POOL and various web services. This allows users to apply selection cuts on quantities of interest across the entire sample to compile a subset of events that are appropriate for their analysis. This paper describes new methods for organizing jobs using the TAGs criteria to analyze ATLAS data. It further compares different access patterns to the event data and explores ways to partition the workload for event selection and analysis. Here analysis is defined as a broader set of event processing tasks including event selection and reduction operations ("skimming", "slimming" and "thinning") as well as DPD making. Specifically it compares analysis with direct access to the events (AOD and ESD data) to access mediated by different TAG-based event selections. We then compare different ways of splitting the processing to maximize performance.

  4. Parallel/distributed simulation via event-reservation approach for parametric study of discrete event systems

    NASA Astrophysics Data System (ADS)

    Bhatti, Ghulam M.; Vakili, Pirooz

    1997-06-01

    There are significant opportunities for the development of parallel/distributed simulation algorithms in the context of parametric study of discrete event systems. In such studies, simulation of multiple (often a large number of) parametric variants is required in order to, for example, identify significant parameters (factor screening), determine directions for response improvement (gradient estimation), find optimal parameter settings (response optimization), or construct a model of the response (meta-modeling). The computational burden in this case is to a large extent due to the large number of alternatives that need to be simulated. An effective strategy in this context is to concurrently simulate a number of parametric variants: the structural similarity of the variants often allows for significant amount of sharing of the simulation work, and the code for concurrent simulation of the variants can often be implemented in a parallel/distributed environment. In this paper, we describe two methods of parallel/distributed/concurrent simulation called the standard clock (SC) and the general shared clock (GSC) simulation. Both approaches rely on an event-reservation approach: by contrast to most discrete-event simulation approaches that are based on an event-scheduling approach, in the SC and GSC simulation, the occurrence instances of all events are reserved on the time axis. These instances may or may not be used. This event-reservation approach frees the clock mechanism of the simulation from needing feedback from the state-update mechanism. Due to this autonomy of the clock mechanism, a single clock can be used to drive a number (possibly large) of variants concurrently and in parallel. The autonomy of the clock mechanism is also the key to the different implementation strategies we adopt. To illustrate, we describe the simulation of parametric versions of wireless communication networks on message passing and shared memory environments.
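
    The event-reservation idea can be sketched with uniformization: one Poisson event stream, reserved once at the maximal rates, drives several M/M/1 queue variants concurrently, and thinning decides per variant whether a reserved service event is real. This is a simplified, single-threaded illustration with hypothetical rates, not the paper's SC/GSC implementations:

```python
import numpy as np

rng = np.random.default_rng(6)
lam, mu_variants = 0.8, [1.0, 1.2, 1.6, 2.0]   # arrival rate, service-rate variants
mu_max = max(mu_variants)
T_END = 10_000.0

# Standard clock: reserve ALL event occurrences once, at the maximal rates.
rate_total = lam + mu_max
n_ev = rng.poisson(rate_total * T_END)
is_arrival = rng.random(n_ev) < lam / rate_total   # event type of every instance
accept = rng.random(n_ev)                          # shared thinning randomness

# The same reserved event stream drives every parametric variant concurrently.
for mu in mu_variants:
    q, q_sum = 0, 0.0
    for arr, u in zip(is_arrival, accept):
        if arr:
            q += 1
        elif q > 0 and u < mu / mu_max:   # service event is "real" for this variant
            q -= 1
        q_sum += q                        # state sampled at Poisson epochs (PASTA)
    # For comparison, M/M/1 theory gives a mean queue length of rho/(1-rho).
    print(f"mu={mu}: mean queue length ~ {q_sum / n_ev:.2f}")
```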

  5. A Systematic Approach for Quantitative Analysis of Multidisciplinary Design Optimization Framework

    NASA Astrophysics Data System (ADS)

    Kim, Sangho; Park, Jungkeun; Lee, Jeong-Oog; Lee, Jae-Woo

    An efficient Multidisciplinary Design and Optimization (MDO) framework for an aerospace engineering system should use and integrate distributed resources such as various analysis codes, optimization codes, Computer Aided Design (CAD) tools, Data Base Management Systems (DBMS), etc. in a heterogeneous environment, and needs to provide user-friendly graphical user interfaces. In this paper, we propose a systematic approach for determining a reference MDO framework and for evaluating MDO frameworks. The proposed approach incorporates two well-known methods, the Analytic Hierarchy Process (AHP) and Quality Function Deployment (QFD), in order to provide a quantitative analysis of the qualitative criteria of MDO frameworks. The identification and hierarchy of the framework requirements and the corresponding solutions for the reference MDO frameworks, a general one and an aircraft-oriented one, were carefully investigated. The reference frameworks were also quantitatively identified using AHP and QFD. An assessment of three in-house frameworks was then performed. The results produced clear and useful guidelines for improving the in-house MDO frameworks and showed the feasibility of the proposed approach for evaluating an MDO framework without human interference.
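
    The AHP step can be illustrated in a few lines: a pairwise comparison matrix over hypothetical framework criteria, priority weights from the principal eigenvector, and the consistency-ratio check (the criteria and judgments below are invented for illustration):

```python
import numpy as np

# Hypothetical pairwise comparisons of three framework criteria
# (integration, usability, extensibility) on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority weights of the criteria

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)           # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
print("weights:", np.round(w, 3), " CR =", round(ci / ri, 3))  # CR < 0.1 acceptable
```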

  6. A Distributed Flocking Approach for Information Stream Clustering Analysis

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E

    2006-01-01

    Intelligence analysts are currently overwhelmed by the amount of information streams generated every day. There is a lack of comprehensive tools that can analyze information streams in real time. Document clustering analysis plays an important role in improving the accuracy of information retrieval. However, most clustering technologies can only be applied to analyzing static document collections, because they normally require a large amount of computational resources and a long time to get accurate results. It is very difficult to cluster a dynamically changing text information stream on an individual computer. Our early research resulted in a dynamic reactive flock clustering algorithm which can continually refine the clustering result and quickly react to changes in document contents. This characteristic makes the algorithm suitable for cluster analysis of dynamically changing document information, such as text information streams. Because of the decentralized character of this algorithm, a distributed approach is a very natural way to increase its clustering speed. In this paper, we present a distributed multi-agent flocking approach for text information stream clustering and discuss the decentralized architectures and communication schemes for load balancing and status information synchronization in this approach.

  7. Eigensensitivity based optimal distribution of a viscoelastic damping layer for a flexible beam

    NASA Astrophysics Data System (ADS)

    Kim, Tae-Woo; Kim, Ji-Hwan

    2004-05-01

    In this paper, the optimal distribution of a viscoelastic damping layer is sought for suppression of the transient vibration of a flexible beam. For the damping design, eigenvalues in the range of interest are taken as design criteria rather than the responses at a specific frequency. Two eigensensitivity-based optimizing procedures are proposed, which are analogous to the pole placement technique and optimal control theory for dynamic system design. For the eigenanalysis of the structure with frequency-dependent material, the Golla-Hughes-McTavish (GHM) model is used to express the viscoelastic material property, and an approximate eigensolution is employed to avoid the heavy iterative computation in the optimization process caused by the additional degrees of freedom introduced by GHM modelling. Optimized partial coverage configurations are illustrated and compared to the full coverage configuration, demonstrating the improved vibration characteristics of the optimally layered structure.

  8. Parameter identification of a distributed runoff model by the optimization software Colleo

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Anai, Hirokazu; Iwami, Yoichi

    2015-04-01

    The introduction of Colleo (Collection of Optimization software) is presented, and case studies of parameter identification for a distributed runoff model are illustrated. In order to calculate river discharge accurately, distributed runoff models have become widely used to take into account the distributions of land use, soil type and rainfall. A feasibility study of parameter optimization is best done in two steps. The first step is to survey which optimization algorithms are suitable for the problems of interest. The second step is to investigate the performance of the specific optimization algorithm. Most previous studies seem to focus on the second step. This study focuses on the first step and complements the previous studies. Many optimization algorithms have been proposed in the computational science field, and a large number of optimization software packages have been developed and released to the public with practically applicable performance and quality. It is well known that using algorithms suited to the problem is important for obtaining good optimization results efficiently. In order to compare algorithms readily, optimization software is needed with which the performance of many algorithms can be compared and which can be connected to various simulation software. Colleo was developed to satisfy such needs. Colleo provides a unified user interface to several optimization software packages, such as pyOpt, NLopt, inspyred and R, and helps investigate the suitability of optimization algorithms. 74 different implementations of optimization algorithms, including Nelder-Mead, Particle Swarm Optimization and Genetic Algorithm, are available with Colleo. The effectiveness of Colleo was demonstrated with the cases of flood events of the Gokase River basin in Japan (1820 km2). From 2002 to 2010, there were 15 flood events in which the discharge exceeded 1000 m3/s. The discharge was calculated with the PWRI distributed hydrological model developed by ICHARM. The target

  9. Parallel genetic algorithm with population-based sampling approach to discrete optimization under uncertainty

    NASA Astrophysics Data System (ADS)

    Subramanian, Nithya

    Optimization under uncertainty accounts for design variables and external parameters or factors with probabilistic distributions instead of fixed deterministic values; it enables problem formulations that maximize or minimize an expected value while satisfying constraints expressed with probabilities. For discrete optimization under uncertainty, a Monte Carlo Sampling (MCS) approach enables high-accuracy estimation of expectations, but it also results in high computational expense. The Genetic Algorithm (GA) with a Population-Based Sampling (PBS) technique enables optimization under uncertainty with discrete variables at a lower computational expense than using Monte Carlo sampling for every fitness evaluation. Population-Based Sampling uses fewer samples in the exploratory phase of the GA and a larger number of samples when "good designs" start emerging over the generations. This sampling technique therefore reduces the computational effort spent on "poor designs" found in the initial phase of the algorithm. Parallel computation evaluates the expected value of the objective and constraints in parallel to reduce wall-clock time. A customized stopping criterion is also developed for the GA with Population-Based Sampling. The stopping criterion requires the design with the minimum expected fitness value to have at least 99% constraint satisfaction and to have accumulated at least 10,000 samples. The average change in expected fitness values over the last ten consecutive generations is also monitored. The optimization of composite laminates using ply orientation angle as a discrete variable provides an example to demonstrate further developments of the GA with Population-Based Sampling for discrete optimization under uncertainty. The focus problem aims to reduce the expected weight of the composite laminate while treating the laminate's fiber volume fraction and externally applied loads as uncertain quantities following normal distributions. Construction of

  10. A sensitivity equation approach to shape optimization in fluid flows

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1994-01-01

    A sensitivity equation method is applied to shape optimization problems. An algorithm is developed and tested on the problem of designing optimal forebody simulators for a 2D, inviscid supersonic flow. The algorithm uses a BFGS/Trust Region optimization scheme with sensitivities computed by numerically approximating the linear partial differential equations that determine the flow sensitivities. Numerical examples are presented to illustrate the method.

  11. [Spatial distribution of soil animals: a geostatistical approach].

    PubMed

    Gongal'skiĭ, K B; Zaĭtsev, A S; Savin, F A

    2009-01-01

    Spatial distribution is one of the main parameters of soil animal populations. Spatial soil ecology, which has been developing over the last decades, bases estimates of animal distribution on the geostatistical approach. A simple principle underlying the latter's methodology is that samples placed close to each other are more similar than those placed far apart; this is usually called autocorrelation. The principles of basic statistics cannot be applied to autocorrelated data. Applying variograms, the Mantel test, Moran's index, and SADIE statistics makes it possible to reveal the size of clusters of both soil parameters and soil animal aggregations. This direction of investigation, quite popular in the western literature, is only rarely employed by Russian soil ecologists. Statistically correct procedures allow developing a field sampling methodology that is vital in applied studies of soil ecology, namely in bioindication and ecotoxicology of soils and in the assessment of biological resources in terms of abundance and biomass of soil animals. This methodology has a decisive importance in the development of soil biogeography.
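
    The core computation behind variogram-based analysis is compact; a sketch of an empirical semivariogram over hypothetical soil-animal counts (coordinates, abundances and lag bins are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
xy = rng.uniform(0, 100, (80, 2))                    # hypothetical sample coordinates (m)
z = np.sin(xy[:, 0] / 20) + rng.normal(0, 0.3, 80)   # e.g. log abundance of a species

# Empirical semivariogram: gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs at distance ~h.
i, j = np.triu_indices(len(z), k=1)
d = np.linalg.norm(xy[i] - xy[j], axis=1)
sv = 0.5 * (z[i] - z[j]) ** 2

bins = np.arange(0, 60, 10)
for lo, hi in zip(bins[:-1], bins[1:]):
    m = (d >= lo) & (d < hi)
    print(f"lag {lo:2.0f}-{hi:2.0f} m: gamma = {sv[m].mean():.3f} ({m.sum()} pairs)")
```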

  12. A novel, optimized approach of voxel division for water vapor tomography

    NASA Astrophysics Data System (ADS)

    Yao, Yibin; Zhao, Qingzhi

    2017-02-01

    Water vapor information with high spatial and temporal resolution can be acquired using the Global Navigation Satellite System (GNSS) water vapor tomography technique. Usually, the targeted tomographic area is discretized into a number of voxels, and the water vapor distribution is reconstructed using a large number of GNSS signals which penetrate the entire tomographic area. Due to the influence of the geographic distribution of receivers and the geometric location of the satellite constellation, many voxels located at the bottom and the sides of the research area are not crossed by signals, which undermines the quality of the tomographic result. To alleviate this problem, a novel, optimized approach to voxel division is proposed here which increases the number of voxels crossed by signals. On the vertical axis, a water vapor profile derived from many years of radiosonde data is used to identify the maximum height of the tomographic space. On the horizontal axis, the total number of voxels crossed by signals is increased, based on the concept of non-uniform symmetrical division of horizontal voxels. In this study, tomographic experiments are implemented using GPS data from the Hong Kong Satellite Positioning Reference Station Network, and the tomographic result is compared with water vapor derived from radiosonde data and the European Centre for Medium-Range Weather Forecasts (ECMWF). The result shows that the Integrated Water Vapour (IWV), RMS, and error distribution of the proposed approach are better than those of the traditional method.

  13. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.
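
    In one dimension the optimal L2 transport map has a closed form, the target quantile function composed with the source CDF, which makes the idea easy to demonstrate; the paper's gradient-descent scheme addresses the harder multidimensional case. A sketch with hypothetical source and target densities:

```python
import numpy as np

rng = np.random.default_rng(8)
src = rng.normal(0.0, 1.0, 5000)      # samples from the source density
tgt = rng.gamma(2.0, 1.5, 5000)       # samples from the target density

# In 1D the optimal L2 map is T = F_tgt^{-1} o F_src (monotone rearrangement).
def transport_map(x):
    q = np.searchsorted(np.sort(src), x) / len(src)   # empirical F_src(x)
    return np.quantile(tgt, np.clip(q, 0, 1))         # empirical F_tgt^{-1}(q)

moved = transport_map(src)
print(np.mean(moved), np.mean(tgt))   # pushed-forward samples match target moments
```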

  14. Probability distribution functions for unit hydrographs with optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh

    2017-05-01

    A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability to predict both the peak flow and the time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
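
    The fitting step can be sketched with the gamma pdf and nonlinear least squares (the study additionally applies a genetic algorithm to the same objective); the unit hydrograph below is synthetic:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import gamma

t = np.arange(1, 25, dtype=float)                     # time ordinates (hours)
uh_obs = gamma.pdf(t, a=3.2, scale=2.1)               # synthetic "observed" UH ordinates
uh_obs += np.random.default_rng(9).normal(0, 0.002, t.size)

def residuals(p):
    """Difference between gamma-pdf ordinates and the observed unit hydrograph."""
    a, scale = p
    return gamma.pdf(t, a=a, scale=scale) - uh_obs

fit = least_squares(residuals, x0=[2.0, 1.0], bounds=([0.1, 0.1], [20, 20]))
print(np.round(fit.x, 2))   # recovers shape ~3.2 and scale ~2.1
```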

  15. The redshift distribution of cosmological samples: a forward modeling approach

    NASA Astrophysics Data System (ADS)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic-shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  16. Utility Theory for Evaluation of Optimal Process Condition of SAW: A Multi-Response Optimization Approach

    NASA Astrophysics Data System (ADS)

    Datta, Saurav; Biswas, Ajay; Bhaumik, Swapan; Majumdar, Gautam

    2011-01-01

    A multi-objective optimization problem has been solved in order to estimate an optimal process environment, consisting of an optimal parametric combination, to achieve the desired quality indicators (related to bead geometry) of a submerged arc weld of mild steel. The quality indicators selected in the study were bead height, penetration depth, bead width and percentage dilution. The Taguchi method followed by the utility concept has been adopted to evaluate the optimal process condition achieving the multiple objective requirements of the desired weld quality.

  17. Utility Theory for Evaluation of Optimal Process Condition of SAW: A Multi-Response Optimization Approach

    SciTech Connect

    Datta, Saurav; Biswas, Ajay; Bhaumik, Swapan; Majumdar, Gautam

    2011-01-17

    A multi-objective optimization problem has been solved in order to estimate an optimal process environment, consisting of an optimal parametric combination, to achieve the desired quality indicators (related to bead geometry) of a submerged arc weld of mild steel. The quality indicators selected in the study were bead height, penetration depth, bead width and percentage dilution. The Taguchi method followed by the utility concept has been adopted to evaluate the optimal process condition achieving the multiple objective requirements of the desired weld quality.

  18. Estimating the optimal threshold for a diagnostic biomarker in case of complex biomarker distributions

    PubMed Central

    2014-01-01

    Background: Estimating the optimal threshold (and especially the confidence interval) of a quantitative biomarker to be used as a diagnostic test is essential for medical decision-making. This is often done with simple methods that are not always reliable. More advanced methods work well but only for biomarkers with very simple distributions. In fact, biomarker distributions are often complex because of a natural heterogeneity in marker expression and other heterogeneities due to various disease stages, laboratory equipment, etc. Methods are required to estimate a biomarker's optimal threshold in case of heterogeneity and complex distributions. Methods: A previously described Bayesian method developed for normally distributed biomarkers is applied to two flexible distributions; namely, a Student-t and a mixture of Dirichlet processes. Here, numerical studies assess the adequacy of the previous method with both distributions. Two applications are presented: the diagnosis of treatment failure after prostate cancer treated by ultrasound and the early diagnosis of cancers of the upper aerodigestive tract. Results: Bayesian inference provided reliable credible intervals in terms of bias and coverage probability. The two distributions analysed gave meaningful clinical interpretations in both applications. Conclusions: Reliable methods can be used to estimate a biomarker's optimal threshold, even in case of complex distributions. PMID:24927622

  19. Reusable Component Model Development Approach for Parallel and Distributed Simulation

    PubMed Central

    Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng

    2014-01-01

    Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diverse interfaces, couple tightly, and bind closely with simulation platforms. As a result, they are difficult to reuse across different simulation platforms and applications. To address the problem, this paper first proposes a reusable component model framework. Based on this framework, our reusable model development approach is then elaborated, which contains two phases: (1) domain experts create simulation computational modules observing three principles to achieve their independence; (2) the model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that the model developed using our approach has good reusability and is easy to use in different simulation platforms and applications. PMID:24729751

  20. An Informatics Approach to Demand Response Optimization in Smart Grids

    SciTech Connect

    Simmhan, Yogesh; Aman, Saima; Cao, Baohua; Giakkoupis, Mike; Kumbhare, Alok; Zhou, Qunzhi; Paul, Donald; Fern, Carol; Sharma, Aditya; Prasanna, Viktor K

    2011-03-03

    Power utilities are increasingly rolling out “smart” grids with the ability to track consumer power usage in near real-time using smart meters that enable bidirectional communication. However, the true value of smart grids is unlocked only when the veritable explosion of data that will become available is ingested, processed, analyzed and translated into meaningful decisions. These include the ability to forecast electricity demand, respond to peak load events, and improve sustainable use of energy by consumers, and are made possible by energy informatics. Information and software system techniques for a smarter power grid include pattern mining and machine learning over complex events and integrated semantic information, distributed stream processing for low latency response, Cloud platforms for scalable operations and privacy policies to mitigate information leakage in an information rich environment. Such an informatics approach is being used in the DoE sponsored Los Angeles Smart Grid Demonstration Project, and the resulting software architecture will lead to an agile and adaptive Los Angeles Smart Grid.

  1. Experiments with ROPAR, an approach for probabilistic analysis of the optimal solutions' robustness

    NASA Astrophysics Data System (ADS)

    Marquez, Oscar; Solomatine, Dimitri

    2016-04-01

    Robust optimization is defined as the search for solutions and performance results which remain reasonably unchanged when exposed to uncertain conditions such as natural variability in input variables, parameter drifts during operation time, model sensitivities and others [1]. In the present study we follow the approach named ROPAR (multi-objective robust optimization allowing for explicit analysis of robustness; see online publication [2]). Its main idea consists in: a) sampling the vectors of uncertain factors; b) solving the MOO problem for each of them, obtaining multiple Pareto sets; c) analysing the statistical properties (distributions) of the subsets of these Pareto sets corresponding to different conditions (e.g. based on constraints formulated for the objective function values or other system variables); d) selecting the robust solutions. The paper presents the results of experiments with two case studies: 1) the benchmark function ZDT1 (with an uncertain factor), often used in algorithm comparisons, and 2) a problem of drainage network rehabilitation that uses the SWMM hydrodynamic model (the rainfall is assumed to be an uncertain factor). This study is partly supported by the FP7 European Project WeSenseIt Citizen Water Observatory (http://wesenseit.eu/) and by CONACYT (Mexico's National Council of Science and Technology), supporting the PhD study of the first author. References [1] H.G. Beyer and B. Sendhoff. "Robust optimization - A comprehensive survey." Comput. Methods Appl. Mech. Engrg., 2007: 3190-3218. [2] D.P. Solomatine (2012). An approach to multi-objective robust optimization allowing for explicit analysis of robustness (ROPAR). UNESCO-IHE. Online publication. Web: https://www.unesco-ihe.org/sites/default/files/solomatine-ropar.pdf

  2. An approach for aerodynamic optimization of transonic fan blades

    NASA Astrophysics Data System (ADS)

    Khelghatibana, Maryam

    Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver and numerical optimization methods that can be applied to both single- and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The proposed multi-point optimization method in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of choke margin at high speeds. Since capturing stall inception is numerically very expensive, stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in a better performance at stall condition, which could enhance the stall margin. An investigation is therefore performed on the Pareto-optimal solutions to

  3. A uniform approach for programming distributed heterogeneous computing systems

    PubMed Central

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-01-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015

  4. A uniform approach for programming distributed heterogeneous computing systems.

    PubMed

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.

  5. Traffic network and distribution of cars: Maximum-entropy approach

    SciTech Connect

    Das, N.C.; Chakrabarti, C.G.; Mazumder, S.K.

    2000-02-01

    An urban transport system plays a vital role in the modeling of the modern cosmopolis. A great emphasis is needed for the proper development of a transport system, particularly the traffic network and flow, to meet possible future demand. There are various mathematical models of traffic network and flow. The role of Shannon entropy in the modeling of traffic network and flow was stressed by Tomlin and Tomlin (1968) and Tomlin (1969). In the present note the authors study the role of maximum-entropy principle in the solution of an important problem associated with the traffic network flow. The maximum-entropy principle initiated by Jaynes is a powerful optimization technique of determining the distribution of a random system in the case of partial or incomplete information or data available about the system. This principle has now been broadened and extended and has found wide applications in different fields of science and technology. In the present note the authors show how the Jaynes' maximum-entropy principle, slightly modified, can be successfully applied in determining the flow or distribution of cars in different paths of a traffic network when incomplete information is available about the network.
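
    As a sketch of the underlying optimization, with c_i an assumed generic cost attached to path i (the note itself does not fix the constraint set), the maximum-entropy assignment of cars to n paths reads, in LaTeX notation:

        \max_{p_1,\dots,p_n} \; H = -\sum_{i=1}^{n} p_i \ln p_i
        \quad \text{subject to} \quad
        \sum_{i=1}^{n} p_i = 1, \qquad \sum_{i=1}^{n} p_i c_i = C .

        % Stationarity of the Lagrangian yields the Gibbs form
        p_i^{*} = \frac{e^{-\lambda c_i}}{\sum_{j=1}^{n} e^{-\lambda c_j}},

    where the multiplier λ is fixed by the mean-cost constraint. This is the standard Jaynes construction: among all distributions consistent with the partial data, the flow pattern of maximum entropy is selected.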

  6. Interior search algorithm (ISA): a novel approach for global optimization.

    PubMed

    Gandomi, Amir H

    2014-07-01

    This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm differs from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA can solve optimization problems efficiently and can outperform other well-known algorithms. Further, the proposed algorithm is very simple and has only one parameter to tune. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  7. An optimized web-based approach for collaborative stereoscopic medical visualization

    PubMed Central

    Kaspar, Mathias; Parsad, Nigel M; Silverstein, Jonathan C

    2013-01-01

    Objective Medical visualization tools have traditionally been constrained to tethered imaging workstations or proprietary client viewers, typically part of hospital radiology systems. To improve accessibility to real-time, remote, interactive, stereoscopic visualization and to enable collaboration among multiple viewing locations, we developed an open source approach requiring only a standard web browser with no added client-side software. Materials and Methods Our collaborative, web-based, stereoscopic, visualization system, CoWebViz, has been used successfully for the past 2 years at the University of Chicago to teach immersive virtual anatomy classes. It is a server application that streams server-side visualization applications to client front-ends, comprised solely of a standard web browser with no added software. Results We describe optimization considerations, usability, and performance results, which make CoWebViz practical for broad clinical use. We clarify technical advances including: enhanced threaded architecture, optimized visualization distribution algorithms, a wide range of supported stereoscopic presentation technologies, and the salient theoretical and empirical network parameters that affect our web-based visualization approach. Discussion The implementations demonstrate usability and performance benefits of a simple web-based approach for complex clinical visualization scenarios. Using this approach overcomes technical challenges that require third-party web browser plug-ins, resulting in the most lightweight client. Conclusions Compared to special software and hardware deployments, unmodified web browsers enhance remote user accessibility to interactive medical visualization. Whereas local hardware and software deployments may provide better interactivity than remote applications, our implementation demonstrates that a simplified, stable, client approach using standard web browsers is sufficient for high quality three-dimensional, stereoscopic, collaborative and interactive visualization.

  8. An optimized web-based approach for collaborative stereoscopic medical visualization.

    PubMed

    Kaspar, Mathias; Parsad, Nigel M; Silverstein, Jonathan C

    2013-05-01

    Medical visualization tools have traditionally been constrained to tethered imaging workstations or proprietary client viewers, typically part of hospital radiology systems. To improve accessibility to real-time, remote, interactive, stereoscopic visualization and to enable collaboration among multiple viewing locations, we developed an open source approach requiring only a standard web browser with no added client-side software. Our collaborative, web-based, stereoscopic, visualization system, CoWebViz, has been used successfully for the past 2 years at the University of Chicago to teach immersive virtual anatomy classes. It is a server application that streams server-side visualization applications to client front-ends, comprised solely of a standard web browser with no added software. We describe optimization considerations, usability, and performance results, which make CoWebViz practical for broad clinical use. We clarify technical advances including: enhanced threaded architecture, optimized visualization distribution algorithms, a wide range of supported stereoscopic presentation technologies, and the salient theoretical and empirical network parameters that affect our web-based visualization approach. The implementations demonstrate usability and performance benefits of a simple web-based approach for complex clinical visualization scenarios. Using this approach overcomes technical challenges that require third-party web browser plug-ins, resulting in the most lightweight client. Compared to special software and hardware deployments, unmodified web browsers enhance remote user accessibility to interactive medical visualization. Whereas local hardware and software deployments may provide better interactivity than remote applications, our implementation demonstrates that a simplified, stable, client approach using standard web browsers is sufficient for high quality three-dimensional, stereoscopic, collaborative and interactive visualization.

  9. Optimal investment and scheduling of distributed energy resources with uncertainty in electric vehicles driving schedules

    SciTech Connect

    Cardoso, Goncalo; Stadler, Michael; Bozchalui, Mohammed C.; Sharma, Ratnesh; Marnay, Chris; Barbosa-Povoa, Ana; Ferrao, Paulo

    2013-12-06

    The large scale penetration of electric vehicles (EVs) will introduce technical challenges to the distribution grid, but also carries the potential for vehicle-to-grid services. Namely, if available in large enough numbers, EVs can be used as a distributed energy resource (DER) and their presence can influence optimal DER investment and scheduling decisions in microgrids. In this work, a novel EV fleet aggregator model is introduced in a stochastic formulation of DER-CAM [1], an optimization tool used to address DER investment and scheduling problems. This is used to assess the impact of EV interconnections on optimal DER solutions considering uncertainty in EV driving schedules. Optimization results indicate that EVs can have a significant impact on DER investments, particularly if considering short payback periods. Furthermore, results suggest that uncertainty in driving schedules carries little significance to total energy costs, which is corroborated by results obtained using the stochastic formulation of the problem.

  10. Peak-Seeking Optimization of Spanwise Lift Distribution for Wings in Formation Flight

    NASA Technical Reports Server (NTRS)

    Hanson, Curtis E.; Ryan, Jack

    2012-01-01

    A method is presented for the optimization of the lift distribution across the wing of an aircraft in formation flight. The usual elliptical distribution is no longer optimal for the trailing wing in the formation due to the asymmetric nature of the encountered flow field. Control surfaces along the trailing edge of the wing can be configured to obtain a non-elliptical profile that is more optimal in terms of minimum drag. Due to the difficult-to-predict nature of formation flight aerodynamics, a Newton-Raphson peak-seeking controller is used to identify in real time the best aileron and flap deployment scheme for minimum total drag. Simulation results show that the peak-seeking controller correctly identifies an optimal trim configuration that provides additional drag savings above those achieved with conventional anti-symmetric aileron trim.
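
    A minimal sketch of the Newton-Raphson peak-seeking idea on a single scalar trim variable; the drag model, noise level, and probe size are assumed here for illustration, and the real controller operates on multiple surfaces with flight measurements.

        import numpy as np

        rng = np.random.default_rng(1)

        def drag_measurement(u):
            # Hypothetical drag bowl with optimum at u = 0.7, plus sensor noise.
            return 1.0 + 2.5 * (u - 0.7) ** 2 + 0.001 * rng.standard_normal()

        u, h = 0.0, 0.05                     # initial trim setting, probe step size
        for _ in range(20):
            d_minus = drag_measurement(u - h)
            d0      = drag_measurement(u)
            d_plus  = drag_measurement(u + h)
            grad = (d_plus - d_minus) / (2 * h)            # central-difference slope
            hess = (d_plus - 2 * d0 + d_minus) / h ** 2    # second difference
            if hess > 0:                                   # Newton step toward minimum
                u -= grad / hess
        print("estimated minimum-drag trim: %.3f" % u)

    The controller perturbs the trim, estimates local gradient and curvature of measured drag, and steps toward the minimum without any aerodynamic model, which is why it suits the hard-to-predict formation-flight flow field.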

  11. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

    Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have rapidly developed as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications comparing their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to multi-layer perceptron neural network training for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization, are tested against the Levenberg-Marquardt algorithm - probably the most efficient gradient-based method in terms of speed and success rate. The Annapolis River catchment was selected as the area of this study due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and the presence of river ice - conditions which make flow forecasting more troublesome. The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural network training turns out to be superior to other
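
    A compact sketch of the classic DE/rand/1/bin scheme applied to the weights of a tiny one-hidden-layer perceptron; the synthetic data, network size, and control parameters (F, CR) are assumed for illustration and are far smaller than the study's rainfall-runoff setup.

        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.uniform(-1, 1, size=(100, 3))            # three toy input features
        y = np.tanh(X @ np.array([0.8, -0.5, 0.3]))      # synthetic target series

        H = 4                                            # hidden neurons
        dim = 3 * H + H                                  # input->hidden plus hidden->out

        def mse(w):
            W1 = w[:3 * H].reshape(3, H)
            w2 = w[3 * H:]
            pred = np.tanh(X @ W1) @ w2
            return np.mean((pred - y) ** 2)

        NP, F, CR = 30, 0.8, 0.9
        pop = rng.uniform(-1, 1, size=(NP, dim))
        cost = np.array([mse(p) for p in pop])
        for gen in range(300):
            for i in range(NP):
                others = [j for j in range(NP) if j != i]
                a, b, c = pop[rng.choice(others, 3, replace=False)]
                mutant = a + F * (b - c)                 # differential mutation
                cross = rng.random(dim) < CR             # binomial crossover mask
                cross[rng.integers(dim)] = True          # at least one gene crosses
                trial = np.where(cross, mutant, pop[i])
                tc = mse(trial)
                if tc < cost[i]:                         # greedy selection
                    pop[i], cost[i] = trial, tc
        print("best training MSE: %.4f" % cost.min())

    Unlike Levenberg-Marquardt, this update needs no gradients, which is the property that motivates testing the DE family on non-smooth or multimodal training landscapes.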

  12. Optimal blood sampling time windows for parameter estimation using a population approach: design of a phase II clinical trial.

    PubMed

    Chenel, Marylore; Ogungbenro, Kayode; Duval, Vincent; Laveille, Christian; Jochemsen, Roeline; Aarons, Leon

    2005-12-01

    The objective of this paper is to determine optimal blood sampling time windows for the estimation of pharmacokinetic (PK) parameters by a population approach within the clinical constraints. A population PK model was developed to describe a reference phase II PK dataset. Using this model and the parameter estimates, D-optimal sampling times were determined by optimising the determinant of the population Fisher information matrix (PFIM) using PFIM_M 1.2 and the modified Fedorov exchange algorithm. Optimal sampling time windows were then determined by allowing the D-optimal window design to result in a specified level of efficiency when compared to the fixed-times D-optimal design. The best results were obtained when K(a) and IIV on K(a) were fixed. Windows were determined using this approach assuming a 90% level of efficiency and uniform sample distribution. Four optimal sampling time windows were determined as follows: at trough, between 22 h and the next drug administration; between 2 and 4 h after dose for all patients; and, for 1/3 of the patients only, 2 sampling time windows between 4 and 10 h after dose, equal to [4 h-5 h 05] and [9 h 10-10 h]. This work permitted the determination of an optimal design, with suitable sampling time windows, which was then evaluated by simulations. The sampling time windows will be used to define the sampling schedule in a prospective phase II study.
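
    In schematic form (notation assumed here), the fixed-times design maximizes the determinant of the population Fisher information matrix, and windows are then widened only as far as a 90% efficiency floor allows:

        \xi^{*} = \arg\max_{\xi} \; \det M_F(\xi, \hat{\theta}),
        \qquad
        \mathrm{Eff}(\xi_w) =
        \left( \frac{\det M_F(\xi_w, \hat{\theta})}
                    {\det M_F(\xi^{*}, \hat{\theta})} \right)^{1/P} \ge 0.90,

    where M_F is the population Fisher information matrix, ξ a sampling design, ξ_w the windowed design, θ̂ the parameter estimates, and P the number of population parameters.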

  13. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  14. Optimal cloning of qubits given by an arbitrary axisymmetric distribution on the Bloch sphere

    SciTech Connect

    Bartkiewicz, Karol; Miranowicz, Adam

    2010-10-15

    We find an optimal quantum cloning machine, which clones qubits of arbitrary symmetrical distribution around the Bloch vector with the highest fidelity. The process is referred to as phase-independent cloning in contrast to the standard phase-covariant cloning for which an input qubit state is a priori better known. We assume that the information about the input state is encoded in an arbitrary axisymmetric distribution (phase function) on the Bloch sphere of the cloned qubits. We find analytical expressions describing the optimal cloning transformation and fidelity of the clones. As an illustration, we analyze cloning of qubit state described by the von Mises-Fisher and Brosseau distributions. Moreover, we show that the optimal phase-independent cloning machine can be implemented by modifying the mirror phase-covariant cloning machine for which quantum circuits are known.

  15. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    PubMed

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation.
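
    A standard form of the Cobb-Douglas stochastic frontier consistent with this description, with notation assumed here, is:

        \ln y_{it} = \beta_0 + \sum_{k} \beta_k \ln x_{k,it} + v_{it} - u_{it},
        \qquad v_{it} \sim N(0, \sigma_v^2), \quad u_{it} \ge 0,
        \qquad \mathrm{TE}_{it} = \exp(-u_{it}),

    where v_it is symmetric noise and the inefficiency term u_it follows either a half-normal law, u ~ |N(0, σ_u²)|, or a truncated-normal law, u ~ N⁺(μ, σ_u²); the study's comparison is precisely between these two distributional assumptions, in both time-variant and time-invariant form.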

  16. Stochastic Frontier Model Approach for Measuring Stock Market Efficiency with Different Distributions

    PubMed Central

    Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul

    2012-01-01

    The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation. PMID:22629352

  17. An inverse method for computation of structural stiffness distributions of aeroelastically optimized wings

    NASA Astrophysics Data System (ADS)

    Schuster, David M.

    1993-04-01

    An inverse method has been developed to compute the structural stiffness properties of wings given a specified wing loading and aeroelastic twist distribution. The method directly solves for the bending and torsional stiffness distribution of the wing using a modal representation of these properties. An aeroelastic design problem involving the use of a computational aerodynamics method to optimize the aeroelastic twist distribution of a tighter wing operating at maneuver flight conditions is used to demonstrate the application of the method. This exercise verifies the ability of the inverse scheme to accurately compute the structural stiffness distribution required to generate a specific aeroelastic twist under a specified aeroelastic load.

  18. Mean Performance Optimization of an Orbiting Distributed Aperture by Warped Aperture Image Plane Comparisons

    DTIC Science & Technology

    2002-09-01

    Only table-of-contents and front-matter fragments of this report were recovered. The legible section titles concern viewing geometry effects on Io(ξ, η), optimized formations for inertial viewing, and the effects of each aligning rotation on receiver loci and on Io(ξ, η); the abstract fragment notes an extension of the solution to include the effects of non-ideal viewing geometries.

  19. A modal approach to modeling spatially distributed vibration energy dissipation.

    SciTech Connect

    Segalman, Daniel Joseph

    2010-08-01

    The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.
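
    A schematic of the postulated structure, consistent with the abstract but with notation assumed here: the response is expanded in the preserved linear modes, and all the joint nonlinearity is invested in the evolution of each modal coordinate,

        u(x, t) \approx \sum_{r} \phi_r(x)\, q_r(t),
        \qquad
        \ddot{q}_r + \omega_r^2 q_r + F_r(q_r, \dot{q}_r) = f_r(t),

    where φ_r and ω_r are the linear mode shapes and frequencies, f_r the modal forcing, and F_r a nonlinear, joint-like dissipation term (e.g., of the constitutive form used for discrete joints) acting on each modal coordinate independently, so no modal coupling is introduced at modest amplitudes.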

  20. Evaluation of multi-algorithm optimization approach in multi-objective rainfall-runoff calibration

    NASA Astrophysics Data System (ADS)

    Shafii, M.; de Smedt, F.

    2009-04-01

    Calibration of rainfall-runoff models is one of the issues that has interested hydrologists over past decades. Because of the multi-objective nature of rainfall-runoff calibration, and due to advances in computational power, population-based optimization techniques are becoming increasingly popular for multi-objective calibration schemes. In recent years, such methods have proven to be powerful search methods for this purpose, especially when there is a large number of calibration parameters. However, the application of these methods is often criticised on the grounds that no single algorithm can be efficient for all problems. Therefore, more recent efforts have focused on developing simultaneous multiple optimization algorithms to overcome this drawback. This paper applies one of the most recent population-based multi-algorithm approaches, named AMALGAM, to multi-objective rainfall-runoff calibration in a distributed hydrological model, WetSpa. This algorithm merges the strengths of different optimization algorithms and has thus proven more efficient than other methods. To evaluate this claim, the results of this paper are compared with those previously reported using a conventional multi-objective evolutionary algorithm.

  1. Using an architectural approach to integrate heterogeneous, distributed software components

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Purtilo, James M.

    1995-01-01

    Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.

  2. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  3. TH-C-BRD-10: An Evaluation of Three Robust Optimization Approaches in IMPT Treatment Planning

    SciTech Connect

    Cao, W; Randeniya, S; Mohan, R; Zaghian, M; Kardar, L; Lim, G; Liu, W

    2014-06-15

    Purpose: Various robust optimization approaches have been proposed to ensure the robustness of intensity modulated proton therapy (IMPT) in the face of uncertainty. In this study, we aim to investigate the performance of three classes of robust optimization approaches regarding plan optimality and robustness. Methods: Three robust optimization models were implemented in our in-house IMPT treatment planning system: 1) L2 optimization based on worst-case dose; 2) L2 optimization based on minmax objective; and 3) L1 optimization with constraints on all uncertain doses. The first model was solved by a L-BFGS algorithm; the second was solved by a gradient projection algorithm; and the third was solved by an interior point method. One nominal scenario and eight maximum uncertainty scenarios (proton range over and under 3.5%, and setup error of 5 mm for x, y, z directions) were considered in optimization. Dosimetric measurements of optimized plans from the three approaches were compared for four prostate cancer patients retrospectively selected at our institution. Results: For the nominal scenario, all three optimization approaches yielded the same coverage to the clinical treatment volume (CTV) and the L2 worst-case approach demonstrated better rectum and bladder sparing than others. For the uncertainty scenarios, the L1 approach resulted in the most robust CTV coverage against uncertainties, while the plans from L2 worst-case were less robust than others. In addition, we observed that the number of scanning spots with positive MUs from the L2 approaches was approximately twice as many as that from the L1 approach. This indicates that L1 optimization may lead to more efficient IMPT delivery. Conclusion: Our study indicated that the L1 approach best conserved the target coverage in the face of uncertainty but its resulting OAR sparing was slightly inferior to other two approaches.
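
    One plausible schematic reading of the three model classes, with notation assumed here (D_i^(s)(x) is the dose to voxel i under scenario s for spot weights x, with s = 0 the nominal and "wc" the voxel-wise worst case; p_i the prescribed dose; w_i weights; S the eight maximum uncertainty scenarios; L_i, U_i dose bounds):

        \text{(1) worst-case } L_2:\quad
        \min_{x \ge 0} \; \sum_i w_i \big( D_i^{\mathrm{wc}}(x) - p_i \big)^2

        \text{(2) minmax } L_2:\quad
        \min_{x \ge 0} \; \max_{s \in S \cup \{0\}} \sum_i w_i \big( D_i^{(s)}(x) - p_i \big)^2

        \text{(3) constrained } L_1:\quad
        \min_{x \ge 0} \; \sum_i \big| D_i^{(0)}(x) - p_i \big|
        \quad \text{s.t.} \quad L_i \le D_i^{(s)}(x) \le U_i \;\; \forall i,\; s \in S

    The reported behavior fits this structure: enforcing hard bounds in every scenario (3) protects target coverage under uncertainty, while penalizing only the worst-case dose (1) concentrates effort on nominal OAR sparing.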

  4. Optimal control of wave-packets: a semiclassical approach

    NASA Astrophysics Data System (ADS)

    Darío Guerrero, Rubén; Arango, Carlos A.; Reyes, Andrés

    2014-02-01

    We studied the optimal quantum control of a molecular rotor in tilted laser fields using the time-sliced Herman-Kluk propagator for the evaluation of the optimal pulse and the light-dipole interaction as the control mechanism. The proposed methodology was used to study the effects of an optimal pulse on the evolution of a wave-packet in a double-well potential and in the effective potential of a molecular rotor in a collinear tilted fields setup. The amplitude and frequency of the control pulse were obtained in such a way that the transition probability between two rotational wave-packets was maximised.

  5. Flower pollination algorithm: A novel approach for multiobjective optimization

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi

    2014-09-01

    Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance for further parametric studies and theoretical analysis is highlighted and discussed.
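
    A minimal single-objective sketch of the basic flower pollination moves (global pollination by Lévy flight toward the current best, local pollination between random pairs), shown on the sphere function; the multiobjective extension in the article layers Pareto-based or weighted selection on top of the same update rules, and all constants here are assumed for illustration.

        import numpy as np
        from math import gamma, pi, sin

        rng = np.random.default_rng(3)

        def levy(dim, lam=1.5):
            # Mantegna's algorithm for Levy-stable step lengths.
            sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
                     (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
            u = rng.standard_normal(dim) * sigma
            v = rng.standard_normal(dim)
            return u / np.abs(v) ** (1 / lam)

        def sphere(x):
            return float(np.sum(x ** 2))

        n, dim, p_switch = 25, 5, 0.8
        pop = rng.uniform(-5, 5, size=(n, dim))
        fit = np.array([sphere(x) for x in pop])
        best = pop[fit.argmin()].copy()
        for t in range(500):
            for i in range(n):
                if rng.random() < p_switch:          # global pollination (Levy flight)
                    cand = pop[i] + levy(dim) * (best - pop[i])
                else:                                # local pollination
                    j, k = rng.choice(n, 2, replace=False)
                    cand = pop[i] + rng.random() * (pop[j] - pop[k])
                fc = sphere(cand)
                if fc < fit[i]:                      # greedy replacement
                    pop[i], fit[i] = cand, fc
                    if fc < sphere(best):
                        best = cand.copy()
        print("best objective: %.3e" % sphere(best))

    The switch probability p_switch balances long-range exploration against local refinement, which is the single most important tuning choice in this family of algorithms.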

  6. A decision theoretic approach to optimization of multiple testing procedures.

    PubMed

    Lisovskaja, Vera; Burman, Carl-Fredrik

    2015-01-01

    This paper focuses on the concept of optimizing a multiple testing procedure (MTP) with respect to a predefined utility function. The class of Bonferroni-based closed testing procedures, which includes, for example, (weighted) Holm, fallback, gatekeeping, and recycling/graphical procedures, is used in this context. Numerical algorithms for calculating expected utility for some MTPs in this class are given. The obtained optimal procedures, as well as the gain resulting from performing an optimization are then examined in a few, but informative, examples. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. RAPID OPTIMAL SPH PARTICLE DISTRIBUTIONS IN SPHERICAL GEOMETRIES FOR CREATING ASTROPHYSICAL INITIAL CONDITIONS

    SciTech Connect

    Raskin, Cody; Owen, J. Michael

    2016-04-01

    Creating spherical initial conditions in smoothed particle hydrodynamics simulations that are spherically conformal is a difficult task. Here, we describe two algorithmic methods for evenly distributing points on surfaces that when paired can be used to build three-dimensional spherical objects with optimal equipartition of volume between particles, commensurate with an arbitrary radial density function. We demonstrate the efficacy of our method against stretched lattice arrangements on the metrics of hydrodynamic stability, spherical conformity, and the harmonic power distribution of gravitational settling oscillations. We further demonstrate how our method is highly optimized for simulating multi-material spheres, such as planets with core–mantle boundaries.

  8. A MILP-Based Distribution Optimal Power Flow Model for Microgrid Operation

    SciTech Connect

    Liu, Guodong; Starke, Michael R; Zhang, Xiaohu; Tomsovic, Kevin

    2016-01-01

    This paper proposes a distribution optimal power flow (D-OPF) model for the operation of microgrids. The proposed model minimizes not only the operating cost, including fuel cost, purchasing cost and demand charge, but also several performance indices, including voltage deviation, network power loss and power factor. It co-optimizes the real and reactive power from distributed generators (DGs) and batteries considering their capacity and power factor limits. The D-OPF is formulated as a mixed-integer linear programming (MILP) problem. Numerical simulation results show the effectiveness of the proposed model.
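
    A toy single-period dispatch MILP in PuLP, only to illustrate the problem class; the costs, limits, and on/off logic are assumed here and are far simpler than the paper's multi-objective D-OPF with network constraints.

        import pulp

        m = pulp.LpProblem("toy_microgrid_dispatch", pulp.LpMinimize)
        p_dg = pulp.LpVariable("p_dg", 0, 100)          # DG real power (kW)
        u_dg = pulp.LpVariable("u_dg", cat="Binary")    # DG commitment decision
        p_grid = pulp.LpVariable("p_grid", 0, 200)      # purchased power (kW)

        load = 120
        m += 0.12 * p_dg + 5 * u_dg + 0.20 * p_grid     # fuel + commitment + purchase
        m += p_dg + p_grid == load                      # power balance
        m += p_dg <= 100 * u_dg                         # output only if committed
        m += p_dg >= 20 * u_dg                          # minimum stable generation

        m.solve(pulp.PULP_CBC_CMD(msg=False))
        print(pulp.LpStatus[m.status],
              pulp.value(p_dg), pulp.value(p_grid), pulp.value(u_dg))

    The binary commitment variable is what makes the formulation mixed-integer; the paper's additional objectives (voltage deviation, losses, power factor) enter as further linear terms and constraints of the same kind.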

  9. Optimal control of first order distributed systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Johnson, T. L.

    1972-01-01

    The problem of characterizing optimal controls for a class of distributed-parameter systems is considered. The system dynamics are characterized mathematically by a finite number of coupled partial differential equations involving first-order time and space derivatives of the state variables, which are constrained at the boundary by a finite number of algebraic relations. Multiple control inputs, extending over the entire spatial region occupied by the system ("distributed controls"), are to be designed so that the response of the system is optimal. A major example involving boundary control of an unstable low-density plasma is developed from physical laws.

  10. Active and Reactive Power Optimal Dispatch Associated with Load and DG Uncertainties in Active Distribution Network

    NASA Astrophysics Data System (ADS)

    Gao, F.; Song, X. H.; Zhang, Y.; Li, J. F.; Zhao, S. S.; Ma, W. Q.; Jia, Z. Y.

    2017-05-01

    In order to reduce the adverse effects of uncertainty on optimal dispatch in active distribution networks, an optimal dispatch model based on chance-constrained programming is proposed in this paper. In this model, the active and reactive power of DG can be dispatched with the aim of reducing the operating cost. The effect of the operation strategy on cost is reflected in the objective, which contains the cost of network loss, DG curtailment, DG reactive power ancillary service, and power quality compensation, while the probabilistic constraints reflect the degree of operational risk. The optimal dispatch model is then simplified into a series of single-stage models, which avoids a large variable dimension and improves convergence speed. The single-stage model is solved using a combination of particle swarm optimization (PSO) and the point estimate method (PEM). Finally, the proposed optimal dispatch model and method are verified on the IEEE33 test system.

  11. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important engineering optimization approach for industrial dynamic processes. However, its major defect - the low optimization efficiency caused by repeatedly solving the differential equations in the generated nonlinear programming (NLP) problem - limits its wide application in engineering optimization for industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is first proposed to improve optimization efficiency for industrial dynamic processes; it employs the costate gradient formulae and a fast approximate scheme for solving the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustration. The research results show that the proposed fast approach saves at least 90% of the computation time in contrast with the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes.
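
    A bare-bones sketch of plain CVP, assuming a toy first-order plant and a terminal tracking objective (not the paper's benchmarks): the control is piecewise-constant over K intervals, and every objective evaluation triggers a fresh ODE solve, which is exactly the cost that fast-CVP's approximation scheme targets.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize

        K, T = 8, 1.0
        edges = np.linspace(0, T, K + 1)

        def simulate(u):
            def rhs(t, x):
                k = min(np.searchsorted(edges, t, side="right") - 1, K - 1)
                return [-x[0] + u[k]]               # simple first-order plant
            sol = solve_ivp(rhs, (0, T), [0.0], max_step=0.01)
            return sol.y[0, -1]                     # terminal state x(T)

        def objective(u):
            # Track x(T) = 0.5 with a small control-energy regularizer.
            return (simulate(u) - 0.5) ** 2 + 1e-3 * np.sum(u ** 2)

        res = minimize(objective, x0=np.zeros(K), method="Nelder-Mead",
                       options={"maxiter": 2000, "xatol": 1e-6})
        print("objective %.2e, controls:" % res.fun, np.round(res.x, 3))

    Gradient-based NLP solvers with costate (adjoint) gradients replace this derivative-free search in serious implementations; the repeated simulate() calls remain the bottleneck either way.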

  12. Optimal probabilistic cloning of two linearly independent states with arbitrary probability distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Wen; Rui, Pinshu; Zhang, Ziyun; Liao, Yanlin

    2016-02-01

    We investigate the probabilistic quantum cloning (PQC) of two states with arbitrary probability distribution. The optimal success probabilities are worked out for 1→ 2 PQC of the two states. The results show that the upper bound on the success probabilities of PQC in Qiu (J Phys A 35:6931-6937, 2002) cannot be reached in general. With the optimal success probabilities, we design simple forms of 1→ 2 PQC and work out the unitary transformation needed in the PQC processes. The optimal success probabilities for 1→ 2 PQC are also generalized to the M→ N PQC case.

  13. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
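
    A sketch of the convex subproblem only: once the data parameterization and the knot vector are fixed (here assumed, rather than produced by the firefly search and De Boor refinement of the paper), B-spline fitting collapses to linear least squares, solvable by SVD.

        import numpy as np
        from scipy.interpolate import BSpline

        rng = np.random.default_rng(4)
        t_data = np.sort(rng.uniform(0, 1, 60))               # assumed parameterization
        y = np.sin(2 * np.pi * t_data) + 0.05 * rng.standard_normal(60)

        k = 3                                                 # cubic B-splines
        inner = np.linspace(0, 1, 8)[1:-1]                    # assumed interior knots
        knots = np.r_[np.zeros(k + 1), inner, np.ones(k + 1)] # clamped knot vector
        ncoef = len(knots) - k - 1

        # Collocation matrix: B[i, j] = B_j(t_i), one basis function per column.
        B = np.column_stack([
            BSpline(knots, np.eye(ncoef)[j], k)(t_data) for j in range(ncoef)
        ])
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)          # SVD-based solve
        print("residual RMS: %.4f" %
              np.sqrt(np.mean((B @ coef - y) ** 2)))

    All of the genuine nonlinearity lives in choosing t_data and the knots; that is the part the metaheuristic handles, after which each candidate is scored by a cheap linear solve like this one.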

  14. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.

  15. Using R for Global Optimization of a Fully-distributed Hydrologic Model at Continental Scale

    NASA Astrophysics Data System (ADS)

    Zambrano-Bigiarini, M.; Zajac, Z.; Salamon, P.

    2013-12-01

    Nowadays hydrologic model simulations are widely used to better understand hydrologic processes and to predict extreme events such as floods and droughts. In particular, the spatially distributed LISFLOOD model is currently used for flood forecasting at Pan-European scale, within the European Flood Awareness System (EFAS). Several model parameters can not be directly measured, and they need to be estimated through calibration, in order to constrain simulated discharges to their observed counterparts. In this work we describe how the free software 'R' has been used as a single environment to pre-process hydro-meteorological data, to carry out global optimization, and to post-process calibration results in Europe. Historical daily discharge records were pre-processed for 4062 stream gauges, with different amount and distribution of data in each one of them. The hydroTSM, raster and sp R packages were used to select ca. 700 stations with an adequate spatio-temporal coverage. Selected stations span a wide range of hydro-climatic characteristics, from arid and ET-dominated watersheds in the Iberian Peninsula to snow-dominated watersheds in Scandinavia. Nine parameters were selected to be calibrated based on previous expert knowledge. Customized R scripts were used to extract observed time series for each catchment and to prepare the input files required to fully set up the calibration thereof. The hydroPSO package was then used to carry out a single-objective global optimization on each selected catchment, by using the Standard Particle Swarm 2011 (SPSO-2011) algorithm. Among the many goodness-of-fit measures available in the hydroGOF package, the Nash-Sutcliffe efficiency was used to drive the optimization. User-defined functions were developed for reading model outputs and passing them to the calibration engine. The long computational time required to finish the calibration at continental scale was partially alleviated by using 4 multi-core machines (with both GNU

  16. The adaptive approach for storage assignment by mining data of warehouse management system for distribution centres

    NASA Astrophysics Data System (ADS)

    Ming-Huang Chiang, David; Lin, Chia-Ping; Chen, Mu-Chen

    2011-05-01

    Among distribution centre operations, order picking has been reported to be the most labour-intensive activity. Sophisticated storage assignment policies adopted to reduce the travel distance of order picking have been explored in the literature. Unfortunately, previous research has been devoted to locating entire products from scratch. Instead, this study intends to propose an adaptive approach, a Data Mining-based Storage Assignment approach (DMSA), to find the optimal storage assignment for newly delivered products that need to be put away when there is vacant shelf space in a distribution centre. In the DMSA, a new association index (AIX) is developed to evaluate the fitness between the put-away products and the unassigned storage locations by applying association rule mining. With the AIX, the storage location assignment problem (SLAP) can be formulated and solved as a binary integer programme. To evaluate the performance of the DMSA, a real-world order database of a distribution centre is obtained and used to compare the results from the DMSA with a random assignment approach. It turns out that the DMSA outperforms random assignment as the number of put-away products and the proportion of put-away products with high turnover rates increase.

  17. A practical approach for outdoors distributed target localization in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Béjar, Benjamín; Zazo, Santiago

    2012-12-01

    Wireless sensor networks are posed as the new communication paradigm where the use of small, low-complexity, and low-power devices is preferred over costly centralized systems. The spectrum of potential applications of sensor networks is very wide, including monitoring, surveillance, and localization, among others. Localization is a key application in sensor networks and the use of simple, efficient, and distributed algorithms is of paramount practical importance. Combining convex optimization tools with consensus algorithms, we propose a distributed localization algorithm for scenarios where received signal strength indicator readings are used. We approach the localization problem by formulating an alternative problem that uses distance estimates locally computed at each node. The formulated problem is solved in a relaxed version using the semidefinite relaxation technique. Conditions under which the relaxed problem yields the same solution as the original problem are given, and a distributed consensus-based implementation of the algorithm is proposed based on an augmented Lagrangian approach and primal-dual decomposition methods. Although suboptimal, the proposed approach is very suitable for implementation in real sensor networks, i.e., it is scalable, robust against node failures and requires only local communication among neighboring nodes. Simulation results show that running an additional local search around the found solution can yield performance close to the maximum likelihood estimate.
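
    A simplified sketch of the same flavor of pipeline, substituting plain linearized trilateration and averaging consensus for the paper's semidefinite relaxation and augmented-Lagrangian machinery (so this is a toy stand-in, not the authors' algorithm); the path-loss constants and geometry are assumed.

        import numpy as np

        rng = np.random.default_rng(5)
        anchors = np.array([[0, 0], [10, 0], [0, 10], [10, 10.0]])
        target = np.array([6.0, 4.0])

        # RSSI model: P_rx = P0 - 10*n*log10(d), inverted for distance estimates.
        P0, n_pl = -40.0, 2.7
        d_true = np.linalg.norm(anchors - target, axis=1)
        rssi = P0 - 10 * n_pl * np.log10(d_true) + rng.normal(0, 1.0, 4)
        d_est = 10 ** ((P0 - rssi) / (10 * n_pl))

        def trilaterate(idx):
            """Linearized least-squares fix from a subset of anchors (node-local)."""
            A = 2 * (anchors[idx[1:]] - anchors[idx[0]])
            b = (d_est[idx[0]] ** 2 - d_est[idx[1:]] ** 2
                 + np.sum(anchors[idx[1:]] ** 2, axis=1)
                 - np.sum(anchors[idx[0]] ** 2))
            return np.linalg.lstsq(A, b, rcond=None)[0]

        # Each node computes a local estimate; consensus averaging fuses them.
        local = np.array([trilaterate([i] + [j for j in range(4) if j != i])
                          for i in range(4)])
        x = local.copy()
        for _ in range(50):                  # averaging consensus on a ring topology
            x = 0.5 * x + 0.25 * np.roll(x, 1, axis=0) + 0.25 * np.roll(x, -1, axis=0)
        print("consensus estimate:", np.round(x[0], 2), "true:", target)

    The key property this preserves is locality: every node works only with its own range estimates and its ring neighbours' iterates, so the fused estimate emerges without any central collection point.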

  18. An approach to a real-time distribution system