Science.gov

Sample records for distributed optimization approach

  1. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.
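
    As a rough illustration of the annealed-Lagrangian idea (not the authors' implementation), the sketch below gives each agent an independent Bernoulli marginal over its variable and greedily lowers the expected number of violated clauses minus T times the entropy while T is annealed. The clause set, step size, and cooling schedule are invented for this toy.

```python
import itertools, math

# Invented toy 3-SAT instance: a clause is a list of signed 1-based
# variable indices; +1 means x1, -2 means NOT x2.
clauses = [[1, -2, 3], [-1, 2, -3], [2, 3, -4], [-2, -3, 4]]
n = 4

def violations(bits, clauses):
    return sum(
        not any(bits[abs(l) - 1] == (1 if l > 0 else 0) for l in c)
        for c in clauses)

p = [0.5] * n          # agent i's marginal: Prob(x_i = 1)

def lagrangian(p, T):
    # Expected violations under the product distribution (exact for tiny n)
    # minus T times the entropy, which keeps the search stochastic early on.
    EG = sum(
        math.prod(p[i] if b else 1 - p[i] for i, b in enumerate(bits))
        * violations(bits, clauses)
        for bits in itertools.product([0, 1], repeat=n))
    S = -sum(q * math.log(q) + (1 - q) * math.log(1 - q) for q in p)
    return EG - T * S

T = 1.0
for step in range(150):
    for i in range(n):  # each agent nudges only its own marginal
        for trial in (min(p[i] + 0.05, 0.99), max(p[i] - 0.05, 0.01)):
            old, base = p[i], lagrangian(p, T)
            p[i] = trial
            if lagrangian(p, T) > base:
                p[i] = old  # revert moves that raise the Lagrangian
    T *= 0.97               # annealing focuses mass on a pure strategy

best = tuple(int(q > 0.5) for q in p)
print("marginals:", [round(q, 2) for q in p], "violations:", violations(best, clauses))
```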

  2. A combined NLP-differential evolution algorithm approach for the optimization of looped water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2011-08-01

    This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
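
    Step one of the decomposition is plain Dijkstra; a minimal sketch on an invented four-node looped network is shown below, identifying the shortest-distance tree and, by elimination, the chords that step two holds at minimum diameter. The multisource extension mentioned above can be obtained by attaching a zero-length dummy super-source to all sources, a standard device rather than necessarily the authors' exact construction.

```python
import heapq

def shortest_distance_tree(adj, source):
    """Dijkstra from `source`; returns parent links defining the tree.
    adj: {node: [(neighbor, edge_length), ...]} for an undirected network."""
    dist = {source: 0.0}
    parent = {source: None}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return parent, dist

# Toy looped network (pipe lengths in metres, node ids invented).
adj = {
    "R": [("A", 100), ("B", 120)],
    "A": [("R", 100), ("B", 60), ("C", 150)],
    "B": [("R", 120), ("A", 60), ("C", 80)],
    "C": [("A", 150), ("B", 80)],
}
parent, dist = shortest_distance_tree(adj, "R")
tree_edges = {(parent[v], v) for v in parent if parent[v] is not None}
all_edges = {tuple(sorted((u, v))) for u in adj for v, _ in adj[u]}
chords = all_edges - {tuple(sorted(e)) for e in tree_edges}
print("tree edges:", tree_edges)   # sized by the NLP step
print("chords:", chords)           # held at minimum diameter in step two
```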

  3. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation

    PubMed Central

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with high satisfaction of the assigned targets is obtained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783
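
    The max-min step of such a fuzzy LP can be made concrete with a toy crisp problem (all numbers invented; memberships are assumed already normalized to [0, 1]). In the paper's SLP framework, an LP of this shape would be re-solved around successive linearizations of the nonlinear power flow equations.

```python
from scipy.optimize import linprog

# Max-min fuzzy LP sketch: two normalized memberships mu1 = x1, mu2 = x2
# under the hard constraint x1 + x2 <= 1. Maximizing the common
# satisfaction level lam (lam <= mu_k for all k) yields the crisp LP below.
c = [0.0, 0.0, -1.0]                 # maximize lam == minimize -lam
A_ub = [[-1.0, 0.0, 1.0],            # lam - mu1 <= 0
        [0.0, -1.0, 1.0],            # lam - mu2 <= 0
        [1.0, 1.0, 0.0]]             # hard constraint x1 + x2 <= 1
b_ub = [0.0, 0.0, 1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)])
x1, x2, lam = res.x
print(f"x1={x1:.2f} x2={x2:.2f} satisfaction={lam:.2f}")  # 0.50 0.50 0.50
```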

  4. Product Distributions for Distributed Optimization. Chapter 1

    NASA Technical Reports Server (NTRS)

    Bieniawski, Stefan R.; Wolpert, David H.

    2004-01-01

    With connections to bounded rational game theory, information theory and statistical mechanics, Product Distribution (PD) theory provides a new framework for performing distributed optimization. Furthermore, PD theory extends and formalizes Collective Intelligence, thus connecting distributed optimization to distributed Reinforcement Learning (RL). This paper provides an overview of PD theory and details an algorithm for performing optimization derived from it. The approach is demonstrated on two unconstrained optimization problems, one with discrete variables and one with continuous variables. To highlight the connections between PD theory and distributed RL, the results are compared with those obtained using distributed reinforcement learning inspired optimization approaches. The inter-relationship of the techniques is discussed.

  5. A Scalable and Robust Multi-Agent Approach to Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan

    2005-01-01

    Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach in the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation) the system remains coordinated and still outperforms a failure-free and centralized optimization algorithm.
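
    For intuition about the underlying combinatorial task (following the abstract's description of the Challet and Johnson problem, assuming signed distortions so that errors can cancel), brute-force enumeration works at toy scale; the multi-agent method targets the large systems such enumeration cannot reach. All numbers are invented.

```python
import itertools, random

random.seed(1)
n = 12
# Signed distortions: combining imperfect devices lets errors cancel,
# which is what makes subset selection nontrivial (values invented).
eps = [random.uniform(-1, 1) for _ in range(n)]

best_sub, best_val = None, float("inf")
for r in range(1, n + 1):
    for sub in itertools.combinations(range(n), r):
        val = abs(sum(eps[i] for i in sub) / len(sub))
        if val < best_val:
            best_sub, best_val = sub, val

print("best subset:", best_sub, "mean distortion: %.3g" % best_val)
```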

  6. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    NASA Astrophysics Data System (ADS)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
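
    A compact harmony search loop is sketched below, with a made-up smooth surrogate standing in for the leakage-plus-penalty objective that the real model evaluates through EPANET; the memory size, HMCR, PAR and bandwidth values are illustrative assumptions.

```python
import random

random.seed(0)

def objective(settings):
    # Stand-in for leakage volume plus penalties from a hydraulic run
    # (EPANET would be called here in the real model); invented surrogate.
    return sum((s - 0.6) ** 2 for s in settings)

n_valves, hms, hmcr, par, bw, iters = 3, 10, 0.9, 0.3, 0.05, 2000
lo, hi = 0.0, 1.0                       # normalized valve settings
memory = [[random.uniform(lo, hi) for _ in range(n_valves)] for _ in range(hms)]

for _ in range(iters):
    new = []
    for j in range(n_valves):
        if random.random() < hmcr:            # draw from harmony memory
            v = random.choice(memory)[j]
            if random.random() < par:         # pitch adjustment (local step)
                v = min(hi, max(lo, v + random.uniform(-bw, bw)))
        else:                                 # fresh random improvisation
            v = random.uniform(lo, hi)
        new.append(v)
    worst = max(range(hms), key=lambda k: objective(memory[k]))
    if objective(new) < objective(memory[worst]):
        memory[worst] = new                   # replace the worst harmony

best = min(memory, key=objective)
print("best settings:", [round(v, 3) for v in best], "obj: %.2e" % objective(best))
```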

  7. An efficient hybrid approach for multiobjective optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2014-05-01

    An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.
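
    The self-adaptive ingredient can be sketched in single-objective form: each individual carries its own F and CR, which survive only when the trial they produce wins. This is a jDE-style scheme standing in for SAMODE's mechanism, with a sphere function replacing the network-cost objective; all constants are invented.

```python
import random

random.seed(4)

def cost(x):
    # Stand-in for network cost; a real run would price pipe diameters
    # and penalize hydraulic constraint violations (sphere function here).
    return sum(v * v for v in x)

dim, NP, iters = 5, 20, 300
lo, hi = -5.0, 5.0
pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(NP)]
F = [0.5] * NP      # each individual carries its own control parameters,
CR = [0.9] * NP     # which evolve alongside it instead of being hand-tuned

for _ in range(iters):
    for i in range(NP):
        # Self-adaptation (jDE-style): occasionally resample F and CR.
        Fi = random.uniform(0.1, 1.0) if random.random() < 0.1 else F[i]
        CRi = random.random() if random.random() < 0.1 else CR[i]
        a, b, c = random.sample([j for j in range(NP) if j != i], 3)
        jr = random.randrange(dim)
        trial = [pop[a][k] + Fi * (pop[b][k] - pop[c][k])
                 if (random.random() < CRi or k == jr) else pop[i][k]
                 for k in range(dim)]
        trial = [min(hi, max(lo, v)) for v in trial]
        if cost(trial) <= cost(pop[i]):          # greedy selection keeps the
            pop[i], F[i], CR[i] = trial, Fi, CRi # parameters that worked

best = min(pop, key=cost)
print("best cost: %.3e" % cost(best))
```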

  8. A new systems approach to optimizing investments in gas production and distribution

    SciTech Connect

    Dougherty, E.L.

    1983-03-01

    This paper presents a new analytical approach for determining the optimal sequence of investments to make in each year of an extended planning horizon in each of a group of reservoirs producing gas and gas liquids through an interconnected trunkline network and a gas processing plant. The optimality criterion is to maximize net present value while satisfying fixed offtake requirements for dry gas, but with no limits on gas liquids production. The planning problem is broken into n + 2 separate but interrelated subproblems: gas reservoir development and production, gas flow in a trunkline gathering system, and plant separation activities to remove undesirable gas (CO2) or to recover valuable liquid components. The optimal solution for each subproblem depends upon the optimal solutions for all of the other subproblems, so that the overall optimal solution is obtained iteratively. The iteration technique used is based upon a combination of heuristics and the decomposition algorithm of mathematical programming. Each subproblem is solved once during each overall iteration. In addition to presenting some mathematical details of the solution approach, this paper describes a computer system which has been developed to obtain solutions.

  9. Optimizing booster chlorination in water distribution networks: a water quality index approach.

    PubMed

    Islam, Nilufar; Sadiq, Rehan; Rodriguez, Manuel J

    2013-10-01

    The optimization of chlorine dosage and the number of booster locations is an important aspect of water quality management in distribution networks. Booster chlorination helps to maintain uniformity and adequacy of free residual chlorine concentration, essential for safeguarding against microbiological contamination. Higher chlorine dosages increase free residual chlorine concentration but generate harmful by-products, in addition to taste and odor complaints. It is possible to address these microbial, chemical, and aesthetic water quality issues through free residual chlorine concentration. Estimating a water quality index (WQI) based on regulatory chlorine thresholds for microbial, chemical, and aesthetics criteria can help engineers make intelligent decisions. An innovative scheme for maintaining adequate residual chlorine with optimal chlorine dosages and numbers of booster locations was established based on a proposed WQI. The City of Kelowna (BC, Canada) water distribution network served to demonstrate the application of the proposed scheme. Temporal free residual chlorine concentration predicted with EPANET software was used to estimate the WQI, later coupled with an optimization scheme. Preliminary temporal and spatial analyses identified critical zones (relatively poor water quality) in the distribution network. The model may also prove useful for small or rural communities where free residual chlorine is considered as the only water quality criterion.

  10. Anthropogenic carbon estimates in the Weddell Sea using an optimized CFC based transit time distribution approach

    NASA Astrophysics Data System (ADS)

    Huhn, Oliver; Hauck, Judith; Hoppema, Mario; Rhein, Monika; Roether, Wolfgang

    2010-05-01

    We use a 20-year time series of chlorofluorocarbon (CFC) observations along the Prime Meridian to determine the temporal evolution of anthropogenic carbon (Cant) in the two deep boundary currents which enter the Weddell Basin in the south and leave it in the north. The Cant is inferred from transit time distributions (TTDs), with parameters (mean transit time and dispersion) adjusted to the observed mean CFC histories in these recently ventilated deep boundary currents. We optimize that "classic" TTD approach by accounting for water exchange of the boundary currents with an old, but not CFC- and Cant-free, interior reservoir. This reservoir, in turn, is replenished by the boundary currents, which we parameterize as first-order mixing. Furthermore, we account for the time-dependence of the CFC and Cant source water saturation. A conceptual model of an ideal saturated mixed layer and exchange with adjacent water is adjusted to observed CFC saturations in the source regions. The time-dependence for the CFC saturation appears to be much weaker than for Cant. We find a mean transit time of 14 years and an advection/dispersion ratio of 5 for the deep southern boundary current. For the northern boundary current we find a mean transit time of 8 years and a much higher advection/dispersion ratio of 140. The fractions directly supplied by the boundary currents are in both cases on the order of 10%, while 90% are admixed from the interior reservoirs, which are replenished with a renewal time of about 14 years. We determine Cant ~ 11 µmol/kg (reference year 2006) in the deep water entering the Weddell Sea in the south (~2.1 Sv), and 12 µmol/kg for the deep water leaving the Weddell Sea in the north (~2.7 Sv). These Cant estimates are, however, upper limits, considering that the Cant source water saturation is likely to be lower than that for the CFCs. Comparison with Cant intrusion estimates based on extended multiple linear regression (using potential temperature, salinity, oxygen, and
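
    The core of the "classic" TTD computation is a convolution of a surface concentration history with an inverse-Gaussian transit time distribution. The sketch below uses the standard one-dimensional form with an invented surface history; the mean age matches the 14 years quoted above, but the width delta is an assumption, and the paper's reservoir-exchange and saturation refinements are not reproduced.

```python
import numpy as np

def ttd(t, gamma, delta):
    """Inverse-Gaussian transit time distribution with mean age gamma
    and width delta (the standard 1-D advection/diffusion form)."""
    t = np.asarray(t, dtype=float)
    g = np.zeros_like(t)
    pos = t > 0
    g[pos] = np.sqrt(gamma**3 / (4 * np.pi * delta**2 * t[pos]**3)) * \
             np.exp(-gamma * (t[pos] - gamma)**2 / (4 * delta**2 * t[pos]))
    return g

# Invented CFC-like surface history: zero before 1950, then a linear ramp.
years = np.arange(1930, 2007)
surface = np.clip((years - 1950) * 0.05, 0.0, None)

# Interior concentration in 2006 is the surface history convolved with the
# TTD: c(2006) = sum over transit times t' of c_surface(2006 - t') * G(t').
ages = np.arange(0.5, 60.0, 1.0)              # transit times [yr]
g = ttd(ages, gamma=14.0, delta=6.0)          # delta is an assumed width
g /= g.sum()                                   # normalize the discrete TTD
c_interior = np.sum(np.interp(2006 - ages, years, surface) * g)
print("boundary-current concentration 2006: %.3f (surface: %.3f)"
      % (c_interior, surface[-1]))
```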

  11. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
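
    A minimal version of such a hybrid: a crude GA supplies starting points and scipy's truncated-Newton solver (method='TNC') polishes the best one. The two-parameter "forward model" and all GA settings are invented stand-ins for the groundwater model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def misfit(k):
    # Stand-in performance function: squared error between "observed" heads
    # and a toy forward model h(k); a real study would run the groundwater
    # model here. Two transmissivity-like parameters, true values (2.0, 0.5).
    h = np.array([k[0] + 2 * k[1], 3 * k[0] - k[1], k[0] * k[1]])
    h_obs = np.array([3.0, 5.5, 1.0])
    return float(np.sum((h - h_obs) ** 2))

bounds = [(0.1, 5.0), (0.1, 5.0)]

# Crude GA stage: evolve a small population to get good starting points.
pop = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds], size=(30, 2))
for gen in range(40):
    fit = np.array([misfit(p) for p in pop])
    parents = pop[np.argsort(fit)[:10]]                  # truncation selection
    i, j = rng.integers(0, 10, 20), rng.integers(0, 10, 20)
    children = 0.5 * (parents[i] + parents[j])           # arithmetic crossover
    children += rng.normal(0, 0.1, children.shape)       # mutation
    pop = np.clip(np.vstack([parents, children]), 0.1, 5.0)

# Local stage: polish the GA's best individual with truncated Newton (TNC).
x0 = min(pop, key=misfit)
res = minimize(misfit, x0, method="TNC", bounds=bounds)
print("GA start:", np.round(x0, 3), "-> TNC estimate:", np.round(res.x, 3))
```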

  12. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  13. Distributed Energy Resource Optimization Using a Software as Service (SaaS) Approach at the University of California, Davis Campus

    SciTech Connect

    Stadler, Michael; Marnay, Chris; Donadee, Jon; Lai, Judy; Megel, Olivier; Bhattacharya, Prajesh; Siddiqui, Afzal

    2011-02-06

    Together with OSIsoft LLC as its private sector partner and matching sponsor, the Lawrence Berkeley National Laboratory (Berkeley Lab) won an FY09 Technology Commercialization Fund (TCF) grant from the U.S. Department of Energy. The goal of the project is to commercialize Berkeley Lab's optimizing program, the Distributed Energy Resources Customer Adoption Model (DER-CAM) using a software as a service (SaaS) model with OSIsoft as its first non-scientific user. OSIsoft could in turn provide optimization capability to its software clients. In this way, energy efficiency and/or carbon minimizing strategies could be made readily available to commercial and industrial facilities. Specialized versions of DER-CAM dedicated to solving OSIsoft's customer problems have been set up on a server at Berkeley Lab. The objective of DER-CAM is to minimize the cost of technology adoption and operation or carbon emissions, or combinations thereof. DER-CAM determines which technologies should be installed and operated based on specific site load, price information, and performance data for available equipment options. An established user of OSIsoft's PI software suite, the University of California, Davis (UCD), was selected as a demonstration site for this project. UCD's participation in the project is driven by its motivation to reduce its carbon emissions. The campus currently buys electricity economically through the Western Area Power Administration (WAPA). The campus does not therefore face compelling cost incentives to improve the efficiency of its operations, but is nonetheless motivated to lower the carbon footprint of its buildings. Berkeley Lab attempted to demonstrate a scenario wherein UCD is forced to purchase electricity on a standard time-of-use tariff from Pacific Gas and Electric (PG&E), which is a concern to Facilities staff. Additionally, DER-CAM has been set up to consider the variability of carbon emissions throughout the day and seasons. Two distinct analyses of

  14. Retrieval of ice crystals' mass from ice water content and particle distribution measurements: a numerical optimization approach

    NASA Astrophysics Data System (ADS)

    Coutris, Pierre; Leroy, Delphine; Fontaine, Emmanuel; Schwarzenboeck, Alfons; Strapp, J. Walter

    2016-04-01

    A new method to retrieve cloud water content from in-situ measured 2D particle images from optical array probes (OAP) is presented. With the overall objective to build a statistical model of crystals' mass as a function of their size, environmental temperature and crystal microphysical history, this study presents the methodology to retrieve the mass of crystals sorted by size from 2D images using a numerical optimization approach. The methodology is validated using two datasets of in-situ measurements gathered during two airborne field campaigns held in Darwin, Australia (2014), and Cayenne, France (2015), in the frame of the High Altitude Ice Crystals (HAIC) / High Ice Water Content (HIWC) projects. During these campaigns, a Falcon F-20 research aircraft equipped with state-of-the-art microphysical instrumentation sampled numerous mesoscale convective systems (MCS) in order to study dynamical and microphysical properties and processes of high ice water content areas. Experimentally, an isokinetic evaporator probe, referred to as IKP-2, provides a reference measurement of the total water content (TWC), which equals ice water content (IWC) when (supercooled) liquid water is absent. Two optical array probes, namely 2D-S and PIP, produce 2D images of individual crystals ranging from 50 μm to 12840 μm from which particle size distributions (PSD) are derived. Mathematically, the problem is formulated as an inverse problem in which the crystals' mass is assumed constant over a size class and is computed for each size class from IWC and PSD data: PSD · m = IWC. This problem is solved using a numerical optimization technique in which an objective function is minimized. The objective function is defined as J(m) = ‖PSD · m − IWC‖² + λ · R(m), where the regularization parameter λ and the regularization function R(m) are tuned based on data characteristics. The method is implemented in two steps. First, the method is developed on synthetic crystal populations in
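
    Under this formulation, retrieving the per-class masses is a nonnegative, regularized least-squares problem. A sketch with synthetic data follows; the sizes, counts, power-law masses, and the λ value are all invented (the paper tunes λ and R(m) from data characteristics).

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Synthetic setup: 8 size classes, 200 one-second samples. Each row of A
# holds the measured particle counts per class (the PSD); b holds the
# matching bulk IWC (as the IKP-2 would supply). All values invented.
n_classes, n_samples = 8, 200
D = np.linspace(100e-6, 5e-3, n_classes)           # class mid-sizes [m]
m_true = 50.0 * D ** 2.1                            # mass per particle [kg]
A = rng.poisson(20.0, size=(n_samples, n_classes)).astype(float)
b = A @ m_true * (1 + 0.02 * rng.normal(size=n_samples))  # 2% noise

# Second-difference regularization R(m) = ||L m||^2 enforces a smooth
# mass-size relation; lam balances data fit against smoothness.
lam = 10.0
L = np.zeros((n_classes - 2, n_classes))
for i in range(n_classes - 2):
    L[i, i:i+3] = [1.0, -2.0, 1.0]

A_aug = np.vstack([A, np.sqrt(lam) * L])
b_aug = np.concatenate([b, np.zeros(n_classes - 2)])
m_est, _ = nnls(A_aug, b_aug)                      # nonnegative least squares

print("relative error per class:", np.round((m_est - m_true) / m_true, 3))
```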

  15. Multicriteria optimization of the spatial dose distribution

    SciTech Connect

    Schlaefer, Alexander; Viulet, Tiberiu; Muacevic, Alexander; Fürweger, Christoph

    2013-12-15

    Purpose: Treatment planning for radiation therapy involves trade-offs with respect to different clinical goals. Typically, the dose distribution is evaluated based on a few statistics and dose–volume histograms. Particularly for stereotactic treatments, the spatial dose distribution represents further criteria, e.g., when considering the gradient between subregions of volumes of interest. The authors have studied how to consider the spatial dose distribution using a multicriteria optimization approach. Methods: The authors have extended a stepwise multicriteria optimization approach to include criteria with respect to the local dose distribution. Based on a three-dimensional visualization of the dose, the authors use a software tool allowing interaction with the dose distribution to map objectives with respect to its shape to a constrained optimization problem. Similarly, conflicting criteria are highlighted and the planner decides if and where to relax the shape of the dose distribution. Results: To demonstrate the potential of spatial multicriteria optimization, the tool was applied to a prostate and a meningioma case. For the prostate case, local sparing of the rectal wall and shaping of a boost volume are achieved through local relaxations while maintaining the remaining dose distribution. For the meningioma, target coverage is improved by compromising low dose conformality toward noncritical structures. A comparison of dose–volume histograms illustrates the importance of spatial information for achieving the trade-offs. Conclusions: The results show that it is possible to consider the location of conflicting criteria during treatment planning. Particularly, it is possible to conserve already achieved goals with respect to the dose distribution, to visualize potential trade-offs, and to relax constraints locally. Hence, the proposed approach facilitates a systematic exploration of the optimal shape of the dose distribution.

  16. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  17. Distributed optimization and flight control using collectives

    NASA Astrophysics Data System (ADS)

    Bieniawski, Stefan Richard

    The increasing complexity of aerospace systems demands new approaches for their design and control. Approaches are required to address the trend towards aerospace systems comprised of a large number of inherently distributed and highly nonlinear components with complex and sometimes competing interactions. This work introduces collectives to address these challenges. Although collectives have been used for distributed optimization problems in computer science, recent developments based upon Probability Collectives (PC) theory enhance their applicability to discrete, continuous, mixed, and constrained optimization problems. Further, they are naturally applied to distributed systems and those involving uncertainty, such as control in the presence of noise and disturbances. This work describes collectives theory and its implementation, including its connections to multi-agent systems, machine learning, statistics, and gradient-based optimization. To demonstrate the approach, two experiments were developed. These experiments built upon recent advances in actuator technology that resulted in small, simple flow control devices. Miniature-Trailing Edge Effectors (MiTE), consisting of a small, 1-5% chord, moveable surface mounted at the wing trailing edge, are used for the experiments. The high bandwidth, distributed placement, and good control authority make these ideal candidates for rigid and flexible mode control of flight vehicles. This is demonstrated in two experiments: flutter suppression of a flexible wing, and flight control of a remotely piloted aircraft. The first experiment successfully increased the flutter speed by over 25%. The second experiment included a novel distributed flight control system based upon the MiTEs that includes distributed sensing, logic, and actuation. Flight tests validated the control capability of the MiTEs and the associated flight control architecture. The collectives approach was used to design controllers for the distributed

  18. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance based and the median-variance based portfolios, each consisting of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each level of return compared to the mean-variance approach.
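
    A toy comparison of the two criteria is sketched below: invented, deliberately skewed return series stand in for the Bursa Malaysia stocks, and a random search over long-only weights replaces the paper's programming formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy returns for 3 assets, deliberately non-normal (skewed) so that the
# median and mean disagree; all parameters invented.
T = 1000
r = np.column_stack([
    rng.normal(0.008, 0.05, T),
    rng.lognormal(-3.6, 0.9, T) - 0.03,      # right-skewed
    -(rng.lognormal(-3.8, 0.8, T)) + 0.045,  # left-skewed
])

best = {}
for crit in ("mean", "median"):
    best[crit] = None
    for _ in range(5000):                     # random portfolio search
        w = rng.dirichlet(np.ones(3))         # long-only weights, sum to 1
        port = r @ w
        loc = np.mean(port) if crit == "mean" else np.median(port)
        risk = np.var(port)
        score = loc - 2.0 * risk              # location/variance trade-off
        if best[crit] is None or score > best[crit][0]:
            best[crit] = (score, w, loc, risk)

for crit, (_, w, loc, risk) in best.items():
    print(f"{crit}-variance: w={np.round(w,2)} location={loc:.4f} risk={risk:.5f}")
```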

  19. Optimal source codes for geometrically distributed integer alphabets

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.; Van Voorhis, D. C.

    1975-01-01

    An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
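
    The codes this result identifies as optimal for geometric sources are the Golomb codes, with the parameter m fixed by the Gallager-Van Voorhis condition p^m + p^(m+1) <= 1 < p^(m-1) + p^m. A small encoder sketch (the example p is invented):

```python
import math

def golomb_encode(n, m):
    """Golomb code for nonnegative integer n with parameter m:
    unary quotient, then truncated-binary remainder."""
    q, r = divmod(n, m)
    code = "1" * q + "0"                      # unary part
    b = math.ceil(math.log2(m))
    if m & (m - 1) == 0:                      # power of two: plain binary (Rice code)
        return code + format(r, f"0{b}b") if m > 1 else code
    cutoff = (1 << b) - m                     # truncated binary remainder
    if r < cutoff:
        return code + format(r, f"0{b-1}b")
    return code + format(r + cutoff, f"0{b}b")

# For a geometric source P(n) = (1-p) p^n, the optimal parameter is the
# smallest integer m with p^m + p^(m+1) <= 1.
p = 0.8
m = 1
while p ** m + p ** (m + 1) > 1:
    m += 1
print("optimal m for p=0.8:", m)              # -> 3
for n in range(6):
    print(n, golomb_encode(n, m))
```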

  20. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.

  21. Quantum optimal control of photoelectron spectra and angular distributions

    NASA Astrophysics Data System (ADS)

    Goetz, R. Esteban; Karamatskou, Antonia; Santra, Robin; Koch, Christiane P.

    2016-01-01

    Photoelectron spectra and photoelectron angular distributions obtained in photoionization reveal important information on, e.g., charge transfer or hole coherence in the parent ion. Here we show that optimal control of the underlying quantum dynamics can be used to enhance desired features in the photoelectron spectra and angular distributions. To this end, we combine Krotov's method for optimal control theory with the time-dependent configuration interaction singles formalism and a splitting approach to calculate photoelectron spectra and angular distributions. The optimization target can account for specific desired properties in the photoelectron angular distribution alone, in the photoelectron spectrum, or in both. We demonstrate the method for hydrogen and then apply it to argon under strong XUV radiation, maximizing the difference of emission into the upper and lower hemispheres, in order to realize directed electron emission in the XUV regime.

  22. A distributed approach to the OPF problem

    NASA Astrophysics Data System (ADS)

    Erseghe, Tomaso

    2015-12-01

    This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resource and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to always ensure convergence. A certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderate sized networks (up to 300 nodes, and even in the presence of a severe partition of the network), the approach guarantees a performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. In the comparison with the literature, mostly focused on convex SDP approximations, the chosen approach guarantees adherence to the reference problem, and it also requires a smaller local computational complexity effort.
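
    The flavor of the method, reduced to a convex toy: consensus ADMM over scalar agents with a steadily increased penalty parameter, echoing the rising-penalty variant described above. The agent costs and the growth factor are invented; the actual OPF subproblems are nonconvex and solved by local solvers.

```python
import numpy as np

# Consensus ADMM sketch: N agents each hold a private quadratic cost
# f_i(x) = (x - a_i)^2 and must agree on a common scalar x.
a = np.array([1.0, 4.0, 7.0, 10.0])      # private data (invented)
N = len(a)
u = np.zeros(N)                           # scaled dual variables
z = 0.0                                   # consensus variable
rho = 1.0

for k in range(50):
    # Local step: each agent minimizes f_i(x) + (rho/2)(x - z + u_i)^2
    # in closed form (only its own data appears).
    x_loc = (2 * a + rho * (z - u)) / (2 + rho)
    z = np.mean(x_loc + u)                # coordination step (an average)
    u += x_loc - z                        # dual update
    rho *= 1.05                           # steadily increase the penalty

print("consensus x:", round(z, 4), "true optimum:", np.mean(a))
```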

  23. Optimizing the configuration patterns for heterogeneous distributed sensor fields

    NASA Astrophysics Data System (ADS)

    Wettergren, Thomas A.; Costa, Russell

    2012-06-01

    When unmanned distributed sensor fields are developed for rapid deployment in hostile areas, the deployment may consist of multiple sensor types. This occurs because of the variations in expected threats and uncertainties about the details of the local environmental conditions. As more detailed information is available at deployment, the quantity and types of sensors are given and fixed, yet the specific pattern for the configuration of their deployment is still variable. We develop a new optimization approach for planning these configurations for this resource constrained sensor application. Our approach takes into account the variety of sensors available and their respective expected performance in the environment, as well as the target uncertainty. Due to the large dimensionality of the design space for this unmanned sensor planning problem, heuristic-based optimizations will provide very sub-optimal solutions and gradient-based methods lack a good quality initialization. Instead, we utilize a robust optimization procedure that combines genetic algorithms with nonlinear programming techniques to create numerical solutions for determining the optimal spatial distribution of sensing effort for each type of sensor. We illustrate the effectiveness of the approach on numerical examples, and also illustrate the qualitative difference in the optimal patterns as a function of the relative numbers of available sensors of each type. We conclude by using the optimization results to discuss the benefits of interspersing the different sensor types, as opposed to creating area sub-segmentations for each type.

  24. Multiobjective optimization approach: thermal food processing.

    PubMed

    Abakarov, A; Sushkov, Y; Almonacid, S; Simpson, R

    2009-01-01

    The objective of this study was to utilize a multiobjective optimization technique for the thermal sterilization of packaged foods. The multiobjective optimization approach used in this study is based on the optimization of well-known aggregating functions by an adaptive random search algorithm. The applicability of the proposed approach was illustrated by solving widely used multiobjective test problems taken from the literature. The numerical results obtained for the multiobjective test problems and for the thermal processing problem show that the proposed approach can be effectively used for solving multiobjective optimization problems arising in the food engineering field.

  25. Energy optimization of water distribution systems

    SciTech Connect

    1994-09-01

    Energy costs associated with pumping treated water into the distribution system and boosting water pressures where necessary are among the largest expenditures in the operating budget of a municipality. Due to the size and complexity of Detroit's water transmission system, an energy optimization project has been developed to better manage the flow of water in the distribution system in an attempt to reduce these costs.

  26. Optimal Device Independent Quantum Key Distribution

    PubMed Central

    Kamaruddin, S.; Shaari, J. S.

    2016-01-01

    We consider an optimal quantum key distribution setup based on minimal number of measurement bases with binary yields used by parties against an eavesdropper limited only by the no-signaling principle. We note that in general, the maximal key rate can be achieved by determining the optimal tradeoff between measurements that attain the maximal Bell violation and those that maximise the bit correlation between the parties. We show that higher correlation between shared raw keys at the expense of maximal Bell violation provide for better key rates for low channel disturbance. PMID:27485160

  27. Optimal Operation of Energy Storage in Power Transmission and Distribution

    NASA Astrophysics Data System (ADS)

    Akhavan Hejazi, Seyed Hossein

    In this thesis, we investigate optimal operation of energy storage units in power transmission and distribution grids. At transmission level, we investigate the problem where an investor-owned independently-operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at distribution level, where the distribution of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restricting assumption on the distribution of random parameters, hence, it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly-shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids. In the proposed stochastic optimization, we consider
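
    The non-parametric chance-constraint idea can be illustrated with an empirical scenario set: no distributional form is assumed, and the constraint is checked directly against scenario frequencies. The voltage proxy, the limit, and the scenario generator below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Empirical scenarios for the stochastic net load (invented distribution;
# in practice these would come from data, not a parametric model).
scenarios = rng.gamma(2.0, 1.5, 5000)
limit = 12.0

def violation_prob(d):
    # Toy voltage-proxy model: discharging d relieves the constraint.
    v = 8.0 + 0.8 * scenarios - 0.5 * d
    return np.mean(v > limit)

# Smallest discharge satisfying P(violation) <= 5%, found by a sweep.
d = 0.0
while violation_prob(d) > 0.05:
    d += 0.1
print("smallest discharge meeting the 5%% chance constraint: %.1f" % d)
```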

  28. Optimized approach to retrieve information on the tropospheric and stratospheric carbonyl sulfide (OCS) vertical distributions above Jungfraujoch from high-resolution FTIR solar spectra.

    NASA Astrophysics Data System (ADS)

    Lejeune, Bernard; Mahieu, Emmanuel; Servais, Christian; Duchatelet, Pierre; Demoulin, Philippe

    2010-05-01

    Carbonyl sulfide (OCS), which is produced in the troposphere from both biogenic and anthropogenic sources, is the most abundant gaseous sulfur species in the unpolluted atmosphere. Due to its low chemical reactivity and water solubility, a significant fraction of OCS is able to reach the stratosphere where it is converted to SO2 and ultimately to H2SO4 aerosols (Junge layer). These aerosols have the potential to amplify stratospheric ozone destruction on a global scale and may influence Earth's radiation budget and climate through increasing solar scattering. The transport of OCS from troposphere to stratosphere is thought to be the primary mechanism by which the Junge layer is sustained during nonvolcanic periods. Because of this, long-term trends in atmospheric OCS concentration, not only in the troposphere but also in the stratosphere, are of great interest. A new approach has been developed and optimized to retrieve the atmospheric abundance of OCS from high-resolution ground-based infrared solar spectra by using the SFIT-2 (v3.91) algorithm, including a new model for solar lines simulation (solar lines often produce significant interferences in the OCS microwindows). The strongest lines of the ν3 fundamental band of OCS at 2062 cm-1 have been systematically evaluated with objective criteria to select a new set of microwindows, assuming the HITRAN 2004 spectroscopic parameters with an increase in the OCS line intensities of the ν3 band main isotopologue 16O12C32S by 15.79% as compared to HITRAN 2000 (Rothman et al., 2008, and references therein). Two regularization schemes have further been compared (deduced from ATMOS and ACE-FTS measurements or based on a Tikhonov approach), in order to select the one which optimizes the information content while minimizing the error budget. The selected approach has allowed us to determine an updated OCS long-term trend from 1988 to 2009 in both the troposphere and the stratosphere, using spectra recorded on a regular basis with

  29. Distributed fault tolerance in optimal interpolative nets.

    PubMed

    Simon, D

    2001-01-01

    The recursive training algorithm for the optimal interpolative (OI) classification network is extended to include distributed fault tolerance. The conventional OI Net learning algorithm leads to network weights that are nonoptimally distributed (in the sense of fault tolerance). Fault tolerance is becoming an increasingly important factor in hardware implementations of neural networks. But fault tolerance is often taken for granted in neural networks rather than being explicitly accounted for in the architecture or learning algorithm. In addition, when fault tolerance is considered, it is often accounted for using an unrealistic fault model (e.g., neurons that are stuck on or off rather than small weight perturbations). Realistic fault tolerance can be achieved through a smooth distribution of weights, resulting in low weight salience and distributed computation. Results of trained OI Nets on the Iris classification problem show that fault tolerance can be increased with the algorithm presented in this paper.

  30. Optimal but unequitable prophylactic distribution of vaccine.

    PubMed

    Keeling, Matt J; Shattock, Andrew

    2012-06-01

    The final epidemic size (R(∞)) remains one of the fundamental outcomes of an epidemic, and measures the total number of individuals infected during a "free-fall" epidemic when no additional control action is taken. As such, it provides an idealised measure for optimising control policies before an epidemic arises. Although the generality of formulae for calculating the final epidemic size has been discussed previously, we offer an alternative probabilistic argument and then use this formula to consider the optimal deployment of vaccine in spatially segregated populations that minimises the total number of cases. We show that for a limited stockpile of vaccine, the optimal policy is often to immunise one population to the exclusion of others. However, as greater realism is included, this extreme and arguably unethical policy is replaced by an optimal strategy where vaccine supply is more evenly spatially distributed.
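
    A numerical illustration using the classic homogeneous SIR final-size relation z = s0(1 - exp(-R0 z)) (a standard formula, though not necessarily the paper's exact model): with two equal, isolated populations and a stockpile too small to bring both near threshold, an all-or-nothing allocation minimizes total cases, matching the behaviour described above. R0 and the stockpile size are invented.

```python
import math

def final_size(R0, s0):
    """Attack rate z solving z = s0 * (1 - exp(-R0 * z)) by fixed-point iteration."""
    z = 0.5
    for _ in range(500):
        z = s0 * (1 - math.exp(-R0 * z))
    return z

# Stockpile covers 30% of the two-population total; fraction f goes to pop 1.
R0, stock = 3.0, 0.3
totals = [(final_size(R0, 1 - 2 * stock * f) + final_size(R0, 1 - 2 * stock * (1 - f)), f)
          for f in (i / 50 for i in range(51))]
total, f = min(totals)
print("best split f=%.2f -> total cases %.3f" % (f, total))   # an extreme split
print("even split       -> total cases %.3f" % totals[25][0])  # f = 0.5 does worse
```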

  31. Structural optimization based on internal energy distribution

    NASA Astrophysics Data System (ADS)

    Öman, Michael; Nilsson, Larsgunnar

    2013-04-01

    Structural optimization is a valuable tool to improve the performance of products, but it is in general expensive to perform due to the large number of function evaluations required. Therefore, an approximate method based on the internal energy distribution, which only requires a small number of function evaluations, is presented here. By this method, structural optimization can be enabled already in the initial steps of the design of new products, when fast, but not necessarily precise, results are often desired. However, the accuracy of the approximate solution depends on the structural behaviour. The internal energy based optimization method is here validated for three structures, but it is believed to be applicable to any structure subjected to a single load where the functions considered are related to the displacement of the loaded area and/or the material thicknesses of the structural parts.

  32. Numerical approach for unstructured quantum key distribution.

    PubMed

    Coles, Patrick J; Metodiev, Eric M; Lütkenhaus, Norbert

    2016-05-20

    Quantum key distribution (QKD) allows for communication with security guaranteed by quantum theory. The main theoretical problem in QKD is to calculate the secret key rate for a given protocol. Analytical formulas are known for protocols with symmetries, since symmetry simplifies the analysis. However, experimental imperfections break symmetries, hence the effect of imperfections on key rates is difficult to estimate. Furthermore, it is an interesting question whether (intentionally) asymmetric protocols could outperform symmetric ones. Here we develop a robust numerical approach for calculating the key rate for arbitrary discrete-variable QKD protocols. Ultimately this will allow researchers to study 'unstructured' protocols, that is, those that lack symmetry. Our approach relies on transforming the key rate calculation to the dual optimization problem, which markedly reduces the number of parameters and hence the calculation time. We illustrate our method by investigating some unstructured protocols for which the key rate was previously unknown.

  33. Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-10-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems that have multiple, often conflicting, objectives arising in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC with its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve, and its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC by taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes to all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded in the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more

  1. Optimal design of spatial distribution networks

    NASA Astrophysics Data System (ADS)

    Gastner, Michael T.; Newman, M. E. J.

    2006-07-01

    We consider the problem of constructing facilities such as hospitals, airports, or malls in a country with a nonuniform population density, such that the average distance from a person’s home to the nearest facility is minimized. We review some previous approximate treatments of this problem that indicate that the optimal distribution of facilities should have a density that increases with population density, but does so slower than linearly, as the two-thirds power. We confirm this result numerically for the particular case of the United States with recent population data using two independent methods, one a straightforward regression analysis, the other based on density-dependent map projections. We also consider strategies for linking the facilities to form a spatial network, such as a network of flights between airports, so that the combined cost of maintenance of and travel on the network is minimized. We show specific examples of such optimal networks for the case of the United States.
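
    The two-thirds-power rule lends itself to a direct numerical check. Below is a minimal sketch (names and data are illustrative) that allocates a fixed budget of facilities over grid cells so that facility density scales as population density to the 2/3 power:

        import numpy as np

        def allocate_facilities(pop_density, n_facilities):
            # Facility share per cell proportional to p^(2/3),
            # the optimal scaling reviewed in the paper.
            w = pop_density ** (2.0 / 3.0)
            share = w / w.sum()
            counts = np.floor(share * n_facilities).astype(int)
            # Hand out the remainder to the largest fractional parts.
            rem = n_facilities - counts.sum()
            if rem > 0:
                frac = share * n_facilities - counts
                counts[np.argsort(frac)[-rem:]] += 1
            return counts

        density = np.random.rand(50) * 1000   # hypothetical cell densities
        print(allocate_facilities(density, 100).sum())   # -> 100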

  2. Multicriterial approach to beam dynamics optimization problem

    NASA Astrophysics Data System (ADS)

    Vladimirova, L. V.

    2016-09-01

    The problem of optimizing particle beam dynamics in an accelerating system is considered for the case when the quality of the control process is estimated by several functionals. A multicriteria approach is used. When there are two criteria, a compromise curve may be obtained. If the number of criteria is three or more, one can select some criteria as the main ones and impose constraints on the remaining ones. The optimization result is a set of efficient controls; the user has an opportunity to select the most appropriate control among them. The paper presents the results of multicriteria optimization of beam dynamics in the linear accelerator LEA-15-M.

  3. Optimizing distribution of pandemic influenza antiviral drugs.

    PubMed

    Singh, Bismark; Huang, Hsin-Chan; Morton, David P; Johnson, Gregory P; Gutfraind, Alexander; Galvani, Alison P; Clements, Bruce; Meyers, Lauren A

    2015-02-01

    We provide a data-driven method for optimizing pharmacy-based distribution of antiviral drugs during an influenza pandemic in terms of overall access for a target population and apply it to the state of Texas, USA. We found that during the 2009 influenza pandemic, the Texas Department of State Health Services achieved an estimated statewide access of 88% (proportion of population willing to travel to the nearest dispensing point). However, access reached only 34.5% of US postal code (ZIP code) areas containing <1,000 underinsured persons. Optimized distribution networks increased expected access to 91% overall and 60% in hard-to-reach regions, and 2 or 3 major pharmacy chains achieved near maximal coverage in well-populated areas. Independent pharmacies were essential for reaching ZIP code areas containing <1,000 underinsured persons. This model was developed during a collaboration between academic researchers and public health officials and is available as a decision support tool for Texas Department of State Health Services at a Web-based interface. PMID:25625858
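
    The paper's optimization is a data-driven network design model; as a hedged illustration of the underlying access-maximization idea only (not the authors' algorithm), the classic greedy heuristic for maximum coverage picks dispensing pharmacies one at a time:

        def greedy_dispensing_sites(zips, pharmacies, covers, k):
            # zips: zip code -> population; covers: pharmacy -> set of zip
            # codes within the population's travel willingness (assumed
            # precomputed). Greedy selection gives the classic (1 - 1/e)
            # approximation guarantee for maximum-coverage problems.
            chosen, covered = [], set()
            for _ in range(k):
                best = max(pharmacies,
                           key=lambda p: sum(zips[z] for z in covers[p] - covered))
                chosen.append(best)
                covered |= covers[best]
                pharmacies = [p for p in pharmacies if p != best]
            return chosen, sum(zips[z] for z in covered)

        zips = {"75001": 5000, "75002": 800, "75003": 1200}
        covers = {"A": {"75001"}, "B": {"75002", "75003"}, "C": {"75001", "75003"}}
        print(greedy_dispensing_sites(zips, ["A", "B", "C"], covers, k=2))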

  4. Material Distribution Optimization for the Shell Aircraft Composite Structure

    NASA Astrophysics Data System (ADS)

    Shevtsov, S.; Zhilyaev, I.; Oganesyan, P.; Axenov, V.

    2016-09-01

    One of the main goals in aircraft structure design is decreasing weight and increasing stiffness. Composite structures have recently become popular in aircraft because of their mechanical properties and wide range of optimization possibilities. Weight distribution and lay-up are keys to creating lightweight, stiff structures. In this paper we discuss the optimization of a specific structure that undergoes non-uniform air pressure at different flight conditions, with the aim of reducing the noise caused by airflow-induced vibrations at a constrained weight of the part. The initial model was created with the CAD tool Siemens NX; finite element analysis and post-processing were performed with COMSOL Multiphysics and MATLAB. Numerical solutions of the Reynolds-averaged Navier-Stokes (RANS) equations supplemented by a k-ω turbulence model provide the spatial distributions of air pressure applied to the shell surface. In the formulation of the optimization problem, the global strain energy calculated within the optimized shell was taken as the objective. Wall thickness was varied using a parametric approach, through an auxiliary sphere with varying radius and center coordinates, which served as the design variables. To avoid local stress concentrations, the wall thickness increment was defined as a smooth function on the shell surface, dependent on the position and size of the auxiliary sphere. Our study consists of multiple steps: CAD/CAE transformation of the model, determining wind pressure for different flow angles, optimizing the wall thickness distribution for specific flow angles, and designing a lay-up for the optimal material distribution. The studied structure was improved in terms of maximum and average strain energy at a constrained expense of weight growth. The developed methods and tools can be applied to a wide range of shell-like structures made of multilayered quasi-isotropic laminates.
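
    The abstract specifies only that the wall-thickness increment is a smooth function of the auxiliary sphere's position and size; one plausible choice (a Gaussian falloff, assumed here for illustration) can be sketched as:

        import numpy as np

        def thickness_increment(points, center, radius, dt_max):
            # points: (N, 3) shell surface coordinates; center and radius
            # are the auxiliary-sphere design variables; dt_max is the
            # peak thickness increase (an assumed parameter). The Gaussian
            # form keeps the increment smooth over the surface, which is
            # what avoids local stress concentrations.
            d2 = np.sum((points - center) ** 2, axis=1)
            return dt_max * np.exp(-d2 / radius ** 2)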

  5. Electricity distribution networks: Changing regulatory approaches

    NASA Astrophysics Data System (ADS)

    Cambini, Carlo

    2016-09-01

    Increasing the penetration of distributed generation and smart grid technologies requires substantial investments. A study proposes an innovative approach that combines four regulatory tools to provide economic incentives for distribution system operators to facilitate these innovative practices.

  6. Quantum Resonance Approach to Combinatorial Optimization

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1997-01-01

    It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is the independence of the computing time from the dimensionality of the problem. As an example, the solution to a constraint satisfaction problem of exponential complexity is demonstrated.

  7. Optimizing the Distribution of Leg Muscles for Vertical Jumping.

    PubMed

    Wong, Jeremy D; Bobbert, Maarten F; van Soest, Arthur J; Gribble, Paul L; Kistemaker, Dinant A

    2016-01-01

    A goal of biomechanics and motor control is to understand the design of the human musculoskeletal system. Here we investigated human functional morphology by making predictions about the muscle volume distribution that is optimal for a specific motor task. We examined a well-studied and relatively simple human movement, vertical jumping. We investigated how high a human could jump if muscle volume were optimized for jumping, and determined how the optimal parameters improve performance. We used a four-link inverted pendulum model of human vertical jumping actuated by Hill-type muscles, that well-approximates skilled human performance. We optimized muscle volume by allowing the cross-sectional area and muscle fiber optimum length to be changed for each muscle, while maintaining constant total muscle volume. We observed, perhaps surprisingly, that the reference model, based on human anthropometric data, is relatively good for vertical jumping; it achieves 90% of the jump height predicted by a model with muscles designed specifically for jumping. Alteration of cross-sectional areas, which determine the maximum force deliverable by the muscles, constitutes the majority of improvement to jump height. The optimal distribution results in large vastus, gastrocnemius and hamstrings muscles that deliver more work, while producing a kinematic pattern essentially identical to the reference model. Work output is increased by removing muscle from rectus femoris, which cannot do work on the skeleton given its moment arm at the hip and the joint excursions during push-off. The gluteus composes a disproportionate amount of muscle volume and jump height is improved by moving it to other muscles. This approach represents a way to test hypotheses about optimal human functional morphology. Future studies may extend this approach to address other morphological questions in ethological tasks such as locomotion, and feature other sets of parameters such as properties of the skeletal

  8. Optimizing the Distribution of Leg Muscles for Vertical Jumping

    PubMed Central

    Wong, Jeremy D.; Bobbert, Maarten F.; van Soest, Arthur J.; Gribble, Paul L.; Kistemaker, Dinant A.

    2016-01-01

    A goal of biomechanics and motor control is to understand the design of the human musculoskeletal system. Here we investigated human functional morphology by making predictions about the muscle volume distribution that is optimal for a specific motor task. We examined a well-studied and relatively simple human movement, vertical jumping. We investigated how high a human could jump if muscle volume were optimized for jumping, and determined how the optimal parameters improve performance. We used a four-link inverted pendulum model of human vertical jumping actuated by Hill-type muscles, that well-approximates skilled human performance. We optimized muscle volume by allowing the cross-sectional area and muscle fiber optimum length to be changed for each muscle, while maintaining constant total muscle volume. We observed, perhaps surprisingly, that the reference model, based on human anthropometric data, is relatively good for vertical jumping; it achieves 90% of the jump height predicted by a model with muscles designed specifically for jumping. Alteration of cross-sectional areas—which determine the maximum force deliverable by the muscles—constitutes the majority of improvement to jump height. The optimal distribution results in large vastus, gastrocnemius and hamstrings muscles that deliver more work, while producing a kinematic pattern essentially identical to the reference model. Work output is increased by removing muscle from rectus femoris, which cannot do work on the skeleton given its moment arm at the hip and the joint excursions during push-off. The gluteus composes a disproportionate amount of muscle volume and jump height is improved by moving it to other muscles. This approach represents a way to test hypotheses about optimal human functional morphology. Future studies may extend this approach to address other morphological questions in ethological tasks such as locomotion, and feature other sets of parameters such as properties of the skeletal

  10. Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems

    SciTech Connect

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.; Giannakis, Georgios B.

    2015-10-05

    Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
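
    The decomposition pattern described above follows the standard consensus form of the alternating direction method of multipliers (ADMM). A generic, much-simplified sketch of that pattern is given below; placeholder gradients stand in for the utility's and customers' local OPF subproblems, and all parameters are illustrative.

        import numpy as np

        def consensus_admm(grads, x0, rho=1.0, iters=200, step=0.05):
            # Consensus ADMM sketch: agent i minimizes f_i(x_i) subject to
            # x_i = z. The exact local minimizations are replaced here by a
            # few gradient steps on the augmented Lagrangian (a deliberate
            # simplification). grads: list of callables, gradient of each f_i.
            n = len(grads)
            x = [x0.copy() for _ in range(n)]
            u = [np.zeros_like(x0) for _ in range(n)]
            z = x0.copy()
            for _ in range(iters):
                for i in range(n):                      # local (parallel) updates
                    for _ in range(10):
                        g = grads[i](x[i]) + rho * (x[i] - z + u[i])
                        x[i] -= step * g
                z = np.mean([x[i] + u[i] for i in range(n)], axis=0)  # aggregation
                for i in range(n):                      # dual updates
                    u[i] += x[i] - z
            return z

        # Two quadratic agents: f_i(x) = ||x - a_i||^2; consensus at the mean.
        a = [np.array([1.0, 2.0]), np.array([3.0, 0.0])]
        grads = [lambda x, ai=ai: 2 * (x - ai) for ai in a]
        print(consensus_admm(grads, np.zeros(2)))   # approx [2.0, 1.0]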

  11. Steam distribution and energy delivery optimization using wireless sensors

    NASA Astrophysics Data System (ADS)

    Olama, Mohammed M.; Allgood, Glenn O.; Kuruganti, Teja P.; Sukumar, Sreenivas R.; Djouadi, Seddik M.; Lake, Joe E.

    2011-05-01

    The Extreme Measurement Communications Center at Oak Ridge National Laboratory (ORNL) explores the deployment of a wireless sensor system with a real-time measurement-based energy efficiency optimization framework in the ORNL campus. With particular focus on the 12-mile-long steam distribution network in our campus, we propose an integrated system-level approach to optimize the energy delivery within the steam distribution system. We address the goal of achieving significant energy-saving in steam lines by monitoring and acting on leaking steam valves/traps. Our approach leverages integrated wireless sensing and real-time monitoring capabilities. We make assessments of the real-time status of the distribution system by mounting acoustic sensors on the steam pipes/traps/valves and observing the state measurements of these sensors. Our assessments are based on analysis of the wireless sensor measurements. We describe Fourier-spectrum based algorithms that interpret acoustic vibration sensor data to characterize flows and classify the steam system status. We are able to present the sensor readings, steam flow, steam trap status and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. We believe our demonstration serves as an instantiation of a platform that extends implementation to include newer modalities to manage water flow, sewage and energy consumption.
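
    A minimal sketch of a Fourier-spectrum feature of the kind described; the band edges, sampling rate, and threshold below are illustrative placeholders, not ORNL's values.

        import numpy as np

        def band_energy(signal, fs, f_lo, f_hi):
            # Fraction of acoustic energy in one frequency band.
            spec = np.abs(np.fft.rfft(signal)) ** 2
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            band = (freqs >= f_lo) & (freqs < f_hi)
            return spec[band].sum() / spec.sum()

        def trap_status(signal, fs, threshold=0.3):
            # Toy classifier: a failed (blow-through) steam trap leaks
            # continuously, raising broadband high-frequency energy.
            return "leaking" if band_energy(signal, fs, 5e3, 20e3) > threshold else "ok"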

  12. Steam distribution and energy delivery optimization using wireless sensors

    SciTech Connect

    Olama, Mohammed M; Allgood, Glenn O; Kuruganti, Phani Teja; Sukumar, Sreenivas R; Djouadi, Seddik M; Lake, Joe E

    2011-01-01

    The Extreme Measurement Communications Center at Oak Ridge National Laboratory (ORNL) explores the deployment of a wireless sensor system with a real-time measurement-based energy efficiency optimization framework in the ORNL campus. With particular focus on the 12-mile-long steam distribution network in our campus, we propose an integrated system-level approach to optimize the energy delivery within the steam distribution system. We address the goal of achieving significant energy-saving in steam lines by monitoring and acting on leaking steam valves/traps. Our approach leverages integrated wireless sensing and real-time monitoring capabilities. We make assessments of the real-time status of the distribution system by mounting acoustic sensors on the steam pipes/traps/valves and observing the state measurements of these sensors. Our assessments are based on analysis of the wireless sensor measurements. We describe Fourier-spectrum based algorithms that interpret acoustic vibration sensor data to characterize flows and classify the steam system status. We are able to present the sensor readings, steam flow, steam trap status and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. We believe our demonstration serves as an instantiation of a platform that extends implementation to include newer modalities to manage water flow, sewage and energy consumption.

  13. Multidisciplinary Approach to Linear Aerospike Nozzle Optimization

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Salas, A. O.; Dunn, H. J.; Alexandrov, N. M.; Follett, W. W.; Orient, G. E.; Hadid, A. H.

    1997-01-01

    A model of a linear aerospike rocket nozzle that consists of coupled aerodynamic and structural analyses has been developed. A nonlinear computational fluid dynamics code is used to calculate the aerodynamic thrust, and a three-dimensional finite-element model is used to determine the structural response and weight. The model will be used to demonstrate multidisciplinary design optimization (MDO) capabilities for relevant engine concepts, assess performance of various MDO approaches, and provide a guide for future application development. In this study, the MDO problem is formulated using the multidisciplinary feasible (MDF) strategy. The results for the MDF formulation are presented with comparisons against sequentially optimized aerodynamic and structural designs. Significant improvements are demonstrated by using a multidisciplinary approach in comparison with the single-discipline design strategy.

  14. Cancer Behavior: An Optimal Control Approach

    PubMed Central

    Gutiérrez, Pedro J.; Russo, Irma H.; Russo, J.

    2009-01-01

    With special attention to cancer, this essay explains how Optimal Control Theory, mainly used in Economics, can be applied to the analysis of biological behaviors, and illustrates the ability of this mathematical branch to describe biological phenomena and biological interrelationships. Two examples are provided to show the capability and versatility of this powerful mathematical approach in the study of biological questions. The first describes a process of organogenesis, and the second the development of tumors. PMID:22247736

  15. A Bayesian approach to optimizing cryopreservation protocols.

    PubMed

    Sambu, Sammy

    2015-01-01

    Cryopreservation is beset with the challenge of protocol alignment across a wide range of cell types and process variables. By taking a cross-sectional assessment of previously published cryopreservation data (sample means and standard errors) as preliminary meta-data, a decision tree learning analysis (DTLA) was performed to develop an understanding of target survival using optimized pruning methods based on different approaches. Briefly, a clear direction on the decision process for selection of methods was developed, with the key choices being the cooling rate and plunge temperature on the one hand, and biomaterial choice, use of composites (sugars and proteins as additional constituents), loading procedure and cell location in 3D scaffolding on the other. Secondly, using machine learning and generalized approaches via the Naïve Bayes Classification (NBC) method, these metadata were used to develop posterior probabilities for combinatorial approaches that were implicitly recorded in the metadata. These latter results showed that newer protocol choices developed using probability elicitation techniques can unearth improved protocols consistent with multiple unidimensionally-optimized physical protocols. In conclusion, this article proposes the use of DTLA models and subsequently NBC for the improvement of modern cryopreservation techniques through an integrative approach.
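
    A toy sketch of the Naïve Bayes step: ordinal-encoded protocol choices and a binary survival outcome yield posterior probabilities for untried combinations. All values below are fabricated placeholders for illustration, not the paper's meta-data.

        import numpy as np
        from sklearn.naive_bayes import CategoricalNB

        # Each row encodes one published protocol as ordinal categories
        # (cooling-rate bin, plunge-temperature bin, cryoprotectant choice,
        # use of sugar/protein composites); y = 1 if survival met a target.
        X = np.array([[0, 1, 2, 1],
                      [1, 0, 1, 0],
                      [2, 1, 0, 1],
                      [0, 2, 2, 0],
                      [1, 1, 1, 1]])
        y = np.array([1, 0, 1, 0, 1])

        nbc = CategoricalNB()
        nbc.fit(X, y)
        # Posterior probability of reaching target survival for a new
        # combinatorial protocol choice not explicitly tried before:
        print(nbc.predict_proba(np.array([[2, 0, 2, 1]])))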

  16. Optimization approaches for planning external beam radiotherapy

    NASA Astrophysics Data System (ADS)

    Gozbasi, Halil Ozan

    Cancer begins when cells grow out of control as a result of damage to their DNA. These abnormal cells can invade healthy tissue and form tumors in various parts of the body. Chemotherapy, immunotherapy, surgery and radiotherapy are the most common treatment methods for cancer. According to the American Cancer Society, about half of cancer patients receive a form of radiation therapy at some stage. External beam radiotherapy is delivered from outside the body and aimed at cancer cells to damage their DNA, making them unable to divide and reproduce. The beams travel through the body and may damage nearby healthy tissue unless carefully planned. Therefore, the goal of treatment plan optimization is to find the best system parameters to deliver sufficient dose to target structures while avoiding damage to healthy tissue. This thesis investigates optimization approaches for two external beam radiation therapy techniques: Intensity-Modulated Radiation Therapy (IMRT) and Volumetric-Modulated Arc Therapy (VMAT). We develop automated treatment planning technology for IMRT that produces several high-quality treatment plans satisfying provided clinical requirements in a single invocation and without human guidance. A novel bi-criteria scoring based beam selection algorithm is part of the planning system and produces better plans than those produced using a well-known scoring-based algorithm. Our algorithm is very efficient and finds the beam configuration at least ten times faster than an exact integer programming approach. Solution times range from 2 minutes to 15 minutes, which is clinically acceptable. With certain cancers, especially lung cancer, a patient's anatomy changes during treatment. These anatomical changes need to be considered in treatment planning. Fortunately, recent advances in imaging technology can provide multiple images of the treatment region taken at different points of the breathing cycle, and deformable image registration algorithms can

  17. LP based approach to optimal stable matchings

    SciTech Connect

    Teo, Chung-Piaw; Sethuraman, J.

    1997-06-01

    We study the classical stable marriage and stable roommates problems using a polyhedral approach. We propose a new LP formulation for the stable roommates problem, whose feasible region is non-empty if and only if the underlying roommates problem has a stable matching. Furthermore, for certain special weight functions on the edges, we construct a 2-approximation algorithm for the optimal stable roommates problem. Our technique exploits a crucial geometric property of the fractional solutions of this formulation. For the stable marriage problem, we show that a related geometry allows us to express any fractional solution in the stable marriage polytope as a convex combination of stable marriage solutions. This leads to a genuinely simple proof of the integrality of the stable marriage polytope. Based on these ideas, we devise a heuristic to solve the optimal stable roommates problem. The heuristic combines the power of rounding and cutting-plane methods. We present some computational results based on preliminary implementations of this heuristic.
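
    For the stable marriage side, the integrality of the stable marriage polytope means an LP over it already returns a stable matching. A sketch using scipy and the standard Rothblum-style stability constraints follows; variable and function names are ours, not the paper's.

        import numpy as np
        from scipy.optimize import linprog

        def stable_marriage_lp(men_pref, women_pref, weight):
            # LP over the stable marriage polytope; by its integrality, an
            # LP optimum is a stable matching. men_pref[i] / women_pref[j]
            # list partners from most to least preferred; weight[i][j] is
            # the cost of pairing man i with woman j.
            n = len(men_pref)
            idx = lambda i, j: i * n + j
            rank_m = [{j: r for r, j in enumerate(p)} for p in men_pref]
            rank_w = [{i: r for r, i in enumerate(p)} for p in women_pref]
            A_eq = np.zeros((2 * n, n * n)); b_eq = np.ones(2 * n)
            for i in range(n):
                for j in range(n):
                    A_eq[i, idx(i, j)] = 1       # each man fully matched
                    A_eq[n + j, idx(i, j)] = 1   # each woman fully matched
            A_ub, b_ub = [], []
            for i in range(n):
                for j in range(n):               # stability of pair (i, j)
                    row = np.zeros(n * n)
                    for jp in range(n):
                        if rank_m[i][jp] <= rank_m[i][j]:
                            row[idx(i, jp)] = 1  # i does at least as well as j
                    for ip in range(n):
                        if rank_w[j][ip] < rank_w[j][i]:
                            row[idx(ip, j)] = 1  # j does strictly better than i
                    A_ub.append(-row); b_ub.append(-1.0)
            c = np.asarray(weight, float).ravel()
            res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                          A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
            return res.x.reshape(n, n).round(3)

        men = [[0, 1], [1, 0]]        # man i's ranking of women, best first
        women = [[0, 1], [1, 0]]      # woman j's ranking of men, best first
        w = [[0.0, 1.0], [1.0, 0.0]]  # pairing costs to minimize
        print(stable_marriage_lp(men, women, w))   # identity matching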

  18. Hybrid swarm intelligence optimization approach for optimal data storage position identification in wireless sensor networks.

    PubMed

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    Data storage and its growth, the subject of a current high-profile debate, have become a strategic task in the world of networking. Storage mainly depends on the sensor nodes called producers, on base stations, and on the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works that have been carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized, with the clustering problem solved using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches.
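
    A plain (non-hybrid) PSO sketch of the core placement idea: minimize a rate-weighted transmission-distance cost over candidate storage positions. Hyperparameters are textbook values and the cost model is an assumption, not the paper's tuned hybrid variant.

        import numpy as np

        def pso_storage_position(cost, bounds, n_particles=30, iters=100):
            # Basic particle swarm optimization over continuous positions.
            lo, hi = bounds
            dim = len(lo)
            x = np.random.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
            g = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = np.random.rand(2, n_particles, 1)
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([cost(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

        # Example cost: data-rate-weighted distances to producers/consumers.
        nodes = np.random.rand(20, 2) * 100    # hypothetical node positions
        rates = np.random.rand(20) + 0.1       # hypothetical data rates
        cost = lambda p: np.sum(rates * np.linalg.norm(nodes - p, axis=1))
        best_pos, best_cost = pso_storage_position(cost, (np.zeros(2), 100 * np.ones(2)))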

  19. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    PubMed Central

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    Data storage and its growth, the subject of a current high-profile debate, have become a strategic task in the world of networking. Storage mainly depends on the sensor nodes called producers, on base stations, and on the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works that have been carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized, with the clustering problem solved using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches. PMID:25734182

  20. Root approach for estimation of statistical distributions

    NASA Astrophysics Data System (ADS)

    Bogdanov, Yu. I.; Bogdanova, N. A.

    2014-12-01

    Application of the root density estimator to problems of statistical data analysis is demonstrated. Four sets of basis functions based on Chebyshev-Hermite, Laguerre, Kravchuk and Charlier polynomials are considered. The sets may be used for numerical analysis in problems of reconstructing statistical distributions from experimental data. Based on the root approach to the reconstruction of statistical distributions and quantum states, we study a family of statistical distributions in which the probability density is the product of a Gaussian distribution and an even-degree polynomial. Examples of numerical modeling are given.

  1. Optimization approaches to nonlinear model predictive control

    SciTech Connect

    Biegler, L.T. (Dept. of Chemical Engineering); Rawlings, J.B. (Dept. of Chemical Engineering)

    1991-01-01

    With the development of sophisticated methods for nonlinear programming and powerful computer hardware, it now becomes useful and efficient to formulate and solve nonlinear process control problems through on-line optimization methods. This paper explores and reviews control techniques based on repeated solution of nonlinear programming (NLP) problems. Here several advantages present themselves. These include minimization of readily quantifiable objectives, coordinated and accurate handling of process nonlinearities and interactions, and systematic ways of dealing with process constraints. We motivate this NLP-based approach with small nonlinear examples and present a basic algorithm for optimization-based process control. As can be seen, this approach is a straightforward extension of popular model-predictive controllers (MPCs) that are used for linear systems. The statement of the basic algorithm raises a number of questions regarding stability and robustness of the method, efficiency of the control calculations, incorporation of feedback into the controller and reliable ways of handling process constraints. Each of these will be treated through analysis and/or modification of the basic algorithm. To highlight and support this discussion, several examples are presented and key results are examined and further developed. 74 refs., 11 figs.
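
    The basic receding-horizon loop described above can be sketched with an off-the-shelf NLP solver; the toy plant, costs, and horizon below are our own illustrations, not the paper's examples.

        import numpy as np
        from scipy.optimize import minimize

        def nmpc_step(x0, f, stage_cost, horizon, u_dim, u_bounds):
            # One NMPC iteration: solve an NLP over the prediction horizon
            # and return only the first control move (receding horizon).
            # f(x, u) is the nonlinear discrete-time process model.
            def total_cost(u_flat):
                u_seq = u_flat.reshape(horizon, u_dim)
                x, J = x0, 0.0
                for u in u_seq:                  # simulate the prediction
                    J += stage_cost(x, u)
                    x = f(x, u)
                return J
            res = minimize(total_cost, np.zeros(horizon * u_dim),
                           bounds=[u_bounds] * (horizon * u_dim), method="SLSQP")
            return res.x[:u_dim]                 # apply the first move only

        # Toy nonlinear plant x+ = x + 0.1*(-x^3 + u); drive the state to 1.
        f = lambda x, u: x + 0.1 * (-x ** 3 + u[0])
        cost = lambda x, u: (x - 1.0) ** 2 + 0.01 * u[0] ** 2
        x = 0.0
        for _ in range(20):                      # closed loop with feedback
            u = nmpc_step(x, f, cost, horizon=10, u_dim=1, u_bounds=(-2.0, 2.0))
            x = f(x, u)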

  2. The discrete adjoint approach to aerodynamic shape optimization

    NASA Astrophysics Data System (ADS)

    Nadarajah, Siva Kumaran

    A viscous discrete adjoint approach to automatic aerodynamic shape optimization is developed, and the merits of the viscous discrete and continuous adjoint approaches are discussed. The viscous discrete and continuous adjoint gradients for inverse design and drag minimization cost functions are compared with finite-difference and complex-step gradients. The optimization of airfoils in two-dimensional flow for inverse design and drag minimization is illustrated. Both the discrete and continuous adjoint methods are used to formulate two new design problems. First, the time-dependent optimal design problem is established, and both the time accurate discrete and continuous adjoint equations are derived. An application to the reduction of the time-averaged drag coefficient while maintaining time-averaged lift and thickness distribution of a pitching airfoil in transonic flow is demonstrated. Second, the remote inverse design problem is formulated. The optimization of a three-dimensional biconvex wing in supersonic flow verifies the feasibility to reduce the near field pressure peak. Coupled drag minimization and remote inverse design cases produce wings with a lower drag and a reduced near field peak pressure signature.

  3. Optimization of an interactive distributive computer network

    NASA Technical Reports Server (NTRS)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  4. Optimality of collective choices: a stochastic approach.

    PubMed

    Nicolis, S C; Detrain, C; Demolin, D; Deneubourg, J L

    2003-09-01

    Amplifying communication is a characteristic of group-living animals. This study is concerned with food recruitment by chemical means, known to be associated with foraging in most ant colonies but also with defence or nest moving. A stochastic approach to collective choices made by ants faced with different sources is developed to account for the fluctuations inherent to the recruitment process. It has been established that ants are able to optimize their foraging by selecting the most rewarding source. Our results not only confirm that selection is the result of a trail modulation according to food quality but also show the existence of an optimal quantity of laid pheromone for which the selection of a source is at the maximum, whatever the difference between the two sources might be. In terms of colony size, large colonies more easily focus their activity on one source. Moreover, the selection of the rich source is more efficient if many individuals lay small quantities of pheromone, instead of a small group of individuals laying a higher trail amount. These properties due to the stochasticity of the recruitment process can be extended to other social phenomena in which competition between different sources of information occurs. PMID:12909251

  5. Optimal distributions for multiplex logistic networks

    NASA Astrophysics Data System (ADS)

    Solá Conde, Luis E.; Used, Javier; Romance, Miguel

    2016-06-01

    This paper presents some mathematical models for distribution of goods in logistic networks based on spectral analysis of complex networks. Given a steady distribution of a finished product, some numerical algorithms are presented for computing the weights in a multiplex logistic network that reach the equilibrium dynamics with high convergence rate. As an application, the logistic networks of Germany and Spain are analyzed in terms of their convergence rates.

  6. Optimal distributions for multiplex logistic networks.

    PubMed

    Solá Conde, Luis E; Used, Javier; Romance, Miguel

    2016-06-01

    This paper presents some mathematical models for distribution of goods in logistic networks based on spectral analysis of complex networks. Given a steady distribution of a finished product, some numerical algorithms are presented for computing the weights in a multiplex logistic network that reach the equilibrium dynamics with high convergence rate. As an application, the logistic networks of Germany and Spain are analyzed in terms of their convergence rates. PMID:27368801

  8. Optimality of nitrogen distribution among leaves in plant canopies.

    PubMed

    Hikosaka, Kouki

    2016-05-01

    The vertical gradient of the leaf nitrogen content in a plant canopy is one of the determinants of vegetation productivity. The ecological significance of the nitrogen distribution in plant canopies has been discussed in relation to its optimality; nitrogen distribution in actual plant canopies is close to but always less steep than the optimal distribution that maximizes canopy photosynthesis. In this paper, I review the optimality of nitrogen distribution within canopies focusing on recent advancements. Although the optimal nitrogen distribution has been believed to be proportional to the light gradient in the canopy, this rule holds only when diffuse light is considered; the optimal distribution is steeper when the direct light is considered. A recent meta-analysis has shown that the nitrogen gradient is similar between herbaceous and tree canopies when it is expressed as the function of the light gradient. Various hypotheses have been proposed to explain why nitrogen distribution is suboptimal. However, hypotheses explain patterns observed in some specific stands but not in others; there seems to be no general hypothesis that can explain the nitrogen distributions under different conditions. Therefore, how the nitrogen distribution in canopies is determined remains open for future studies; its understanding should contribute to the correct prediction and improvement of plant productivity under changing environments. PMID:27059755

  9. Inversion of generalized relaxation time distributions with optimized damping parameter

    NASA Astrophysics Data System (ADS)

    Florsch, Nicolas; Revil, André; Camerlynck, Christian

    2014-10-01

    Retrieving the Relaxation Time Distribution (RTD), the Grain Size Distribution (GSD) or the Pore Size Distribution (PSD) from low-frequency impedance spectra is a major goal in geophysics. The “Generalized RTD” generalizes parametric models like Cole-Cole and many others, but remains tricky to invert since this inverse problem is ill-posed. We propose to use generalized relaxation basis functions (for instance, by decomposing the spectra on a basis of generalized Cole-Cole relaxation elements instead of the classical Debye basis) and to use the L-curve approach to optimize the damping parameter required to get smooth and realistic inverse solutions. We apply our algorithm to three examples, one synthetic and two real data sets, and the program includes the possibility of converting the RTD into a GSD or PSD by choosing the value of the constant connecting the relaxation time to the characteristic polarization size of interest. At high frequencies (typically above 1 kHz), a dielectric term is taken into account in the model. The code is provided as open Matlab source as a supplementary file associated with this paper.
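
    The damping-parameter selection can be illustrated with a generic Tikhonov inversion and an L-curve corner search. This is a minimal sketch: the paper's generalized relaxation basis and any non-negativity constraints on the RTD are omitted.

        import numpy as np

        def tikhonov(K, d, lam):
            # Damped least squares: min ||K m - d||^2 + lam^2 ||m||^2.
            n = K.shape[1]
            return np.linalg.solve(K.T @ K + lam ** 2 * np.eye(n), K.T @ d)

        def l_curve_corner(K, d, lambdas):
            # Scan damping values and pick the L-curve corner, located here
            # as the point of maximum curvature of (log residual, log norm)
            # using a simple finite-difference curvature estimate.
            pts = []
            for lam in lambdas:
                m = tikhonov(K, d, lam)
                pts.append((np.log(np.linalg.norm(K @ m - d)),
                            np.log(np.linalg.norm(m))))
            pts = np.array(pts)
            dx, dy = np.gradient(pts[:, 0]), np.gradient(pts[:, 1])
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            curv = np.abs(dx * ddy - dy * ddx) / ((dx ** 2 + dy ** 2) ** 1.5 + 1e-12)
            return lambdas[np.argmax(curv)]

        # Typical scan: best_lam = l_curve_corner(K, d, np.logspace(-6, 2, 60))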

  10. Inspection-Repair based Availability Optimization of Distribution Systems using Teaching Learning based Optimization

    NASA Astrophysics Data System (ADS)

    Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.

    2016-09-01

    This paper describes a technique for optimizing the inspection- and repair-based availability of distribution systems. The optimum duration between two inspections has been obtained for each feeder section with respect to a cost function, subject to satisfaction of the availability requirement at each load point. Teaching learning based optimization has been used for the availability optimization. The developed algorithm has been implemented on radial and meshed distribution systems. The results obtained have been compared with those obtained with differential evolution.

  11. Parallel Harmony Search Based Distributed Energy Resource Optimization

    SciTech Connect

    Ceylan, Oguzhan; Liu, Guodong; Tomsovic, Kevin

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize the active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile during a day as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout the day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution system operation.

  12. Optimal distributed solution to the dining philosophers problem

    SciTech Connect

    Rana, S.P.; Banerji, D.K.

    1986-08-01

    An optimal distributed solution to the dining philosophers problem is presented. The solution is optimal in the sense that it incurs the least communication and computational overhead, and allows the maximum achievable concurrency. The worst case upper bound for concurrency is shown to be n div 3, n being the number of philosophers. There is no previous algorithm known to achieve this bound.

  13. Distributed optimization of resource allocation for search and track assignment with multifunction radars

    NASA Astrophysics Data System (ADS)

    Severson, Tracie Andrusiak

    The long-term goal of this research is to contribute to the design of a conceptual architecture and framework for the distributed coordination of multifunction radar systems. The specific research objective of this dissertation is to apply results from graph theory, probabilistic optimization, and consensus control to the problem of distributed optimization of resource allocation for multifunction radars coordinating on their search and track assignments. For multiple radars communicating on a radar network, cooperation and agreement on a network resource management strategy increases the group's collective search and track capability as compared to non-cooperative radars. Existing resource management approaches for a single multifunction radar optimize the radar's configuration by modifying the radar waveform and beam-pattern. Also, multi-radar approaches implement a top-down, centralized sensor management framework that relies on fused sensor data, which may be impractical due to bandwidth constraints. This dissertation presents a distributed radar resource optimization approach for a network of multifunction radars. Linear and nonlinear models estimate the resource allocation for multifunction radar search and track functions. Interactions between radars occur over time-invariant balanced graphs that may be directed or undirected. The collective search area and target-assignment solution for coordinated radars is optimized by balancing resource usage across the radar network and minimizing total resource usage. Agreement on the global optimal target-assignment solution is ensured using a distributed binary consensus algorithm. Monte Carlo simulations validate the coordinated approach over uncoordinated alternatives.

  14. Energy optimization of water distribution system

    SciTech Connect

    Not Available

    1993-02-01

    In order to analyze pump operating scenarios for the system with the computer model, information on existing pumping equipment and the distribution system was collected. The information includes the following: component description and design criteria for line booster stations, booster stations with reservoirs, and high lift pumps at the water treatment plants; daily operations data for 1988; annual reports from fiscal year 1987/1988 to fiscal year 1991/1992; and a 1985 calibrated KYPIPE computer model of DWSD's water distribution system which included input data for the maximum hour and average day demands on the system for that year. This information has been used to produce the inventory database of the system and will be used to develop the computer program to analyze the system.

  15. A two-stage sequential linear programming approach to IMRT dose optimization

    PubMed Central

    Zhang, Hao H; Meyer, Robert R; Wu, Jianzhou; Naqvi, Shahid A; Shi, Leyuan; D’Souza, Warren D

    2010-01-01

    The conventional IMRT planning process involves two stages in which the first stage consists of fast but approximate idealized pencil beam dose calculations and dose optimization and the second stage consists of discretization of the intensity maps followed by intensity map segmentation and a more accurate final dose calculation corresponding to physical beam apertures. Consequently, there can be differences between the presumed dose distribution corresponding to pencil beam calculations and optimization and a more accurately computed dose distribution corresponding to beam segments that takes into account collimator-specific effects. IMRT optimization is computationally expensive and has therefore led to the use of heuristic (e.g., simulated annealing and genetic algorithms) approaches that do not encompass a global view of the solution space. We modify the traditional two-stage IMRT optimization process by augmenting the second stage with accurate Monte Carlo-based kernel-superposition dose calculations corresponding to beam apertures, combined with an exact mathematical programming based sequential optimization approach that uses linear programming (SLP). Our approach was tested on three challenging clinical test cases with multileaf collimator constraints corresponding to two vendors. We compared our approach to the conventional IMRT planning approach, a direct-aperture approach and a segment weight optimization approach. Our results in all three cases indicate that the SLP approach outperformed the other approaches, achieving superior critical structure sparing. Convergence of our approach is also demonstrated. Finally, our approach has also been integrated with a commercial treatment planning system and may be utilized clinically. PMID:20071764

  16. The Relationship between Distributed Leadership and Teachers' Academic Optimism

    ERIC Educational Resources Information Center

    Mascall, Blair; Leithwood, Kenneth; Straus, Tiiu; Sacks, Robin

    2008-01-01

    Purpose: The goal of this study was to examine the relationship between four patterns of distributed leadership and a modified version of a variable Hoy et al. have labeled "teachers' academic optimism." The distributed leadership patterns reflect the extent to which the performance of leadership functions is consciously aligned across the sources…

  17. Optimal Reward Functions in Distributed Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Tumer, Kagan

    2000-01-01

    We consider the design of multi-agent systems so as to optimize an overall world utility function when (1) those systems lack centralized communication and control, and (2) each agent runs a distinct Reinforcement Learning (RL) algorithm. A crucial issue in such design problems is to initialize/update each agent's private utility function, so as to induce best possible world utility. Traditional 'team game' solutions to this problem sidestep this issue and simply assign to each agent the world utility as its private utility function. In previous work we used the 'Collective Intelligence' framework to derive a better choice of private utility functions, one that results in world utility performance up to orders of magnitude superior to that ensuing from use of the team game utility. In this paper we extend these results. We derive the general class of private utility functions that both are easy for the individual agents to learn and that, if learned well, result in high world utility. We demonstrate experimentally that using these new utility functions can result in significantly improved performance over that of our previously proposed utility, over and above that previous utility's superiority to the conventional team game utility.
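
    One well-known member of this family of private utilities is the difference (or 'Wonderful Life') utility, which subtracts from the world utility its value with the agent's action clamped to a default. A toy sketch follows; the world utility and clamp value are our own illustrations, not the paper's derived class.

        def difference_reward(G, actions, i, clamp):
            # Difference utility: the world utility G minus G with agent i's
            # action replaced by a fixed 'clamped' value, so the reward
            # isolates agent i's own contribution and is easier to learn
            # from than the full world utility.
            counterfactual = list(actions)
            counterfactual[i] = clamp
            return G(actions) - G(counterfactual)

        # Toy world utility: agents pick loads; G penalizes missing a target.
        G = lambda a: -abs(sum(a) - 10)          # target total load of 10
        actions = [3, 4, 2, 5]
        for i in range(4):
            print(i, difference_reward(G, actions, i, clamp=0))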

  18. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  19. Optimization and integration of LED array for uniform illumination distribution

    NASA Astrophysics Data System (ADS)

    Wu, Ding-hui; Wang, Jia-wen; Su, Zhou-ping

    2014-09-01

    A design method for light-emitting diode (LED) arrays is proposed to achieve a good uniform illumination distribution on the target plane. Using a random walk algorithm, the basic LED array modules are first optimized. The optimized basic arrays can generate a uniform illumination distribution on their target plane, and they can be integrated into a large LED array module with more than tens of LEDs. In the large array, we can select a sub-array with K LEDs (K > 7) which produces a good uniform illumination distribution. In this way, we design two LED arrays consisting of 21 and 25 LEDs, respectively. The 21-LED array and 25-LED array generate uniform illumination distributions with uniformities of 95% and 90%, respectively.

  20. A new distributed systems scheduling algorithm: a swarm intelligence approach

    NASA Astrophysics Data System (ADS)

    Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi

    2011-12-01

    The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to gain better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the distributed systems scheduling problem. With regard to balancing load efficiently, Artificial Bee Colony (ABC) has been applied as the local search in the proposed memetic algorithm. The proposed method has been compared to an existing memetic-based approach in which the Learning Automata method has been used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.

  1. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
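
    A minimal sketch of the core idea: standard DE variation plus a Pareto-dominance replacement test. This is a simplification; the paper's algorithm also maintains and prunes a non-dominated set, and the selection details below are our own.

        import numpy as np

        def dominates(f1, f2):
            # Pareto dominance: no worse in all objectives, better in one.
            return np.all(f1 <= f2) and np.any(f1 < f2)

        def pareto_de(F, bounds, pop=40, iters=200, Fw=0.5, CR=0.9):
            # DE/rand/1/bin variation; the greedy replacement of classic DE
            # is replaced by a dominance test between trial and parent.
            lo, hi = bounds
            X = np.random.uniform(lo, hi, (pop, len(lo)))
            fX = np.array([F(x) for x in X])
            for _ in range(iters):
                for i in range(pop):
                    a, b, c = X[np.random.choice(pop, 3, replace=False)]
                    trial = np.where(np.random.rand(len(lo)) < CR,
                                     a + Fw * (b - c), X[i])
                    trial = np.clip(trial, lo, hi)
                    ft = F(trial)
                    if dominates(ft, fX[i]):     # replace only if dominated
                        X[i], fX[i] = trial, ft
            nd = [i for i in range(pop)
                  if not any(dominates(fX[j], fX[i]) for j in range(pop) if j != i)]
            return X[nd], fX[nd]

        # Two objectives: f1 = x^2, f2 = (x - 2)^2 (classic Schaffer problem).
        F = lambda x: np.array([x[0] ** 2, (x[0] - 2.0) ** 2])
        X_nd, F_nd = pareto_de(F, (np.array([-5.0]), np.array([5.0])))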

  2. A multiple objective optimization approach to aircraft control systems design

    NASA Technical Reports Server (NTRS)

    Tabak, D.; Schy, A. A.; Johnson, K. G.; Giesy, D. P.

    1979-01-01

    The design of an aircraft lateral control system, subject to several performance criteria and constraints, is considered. While in the previous studies of the same model a single criterion optimization, with other performance requirements expressed as constraints, has been pursued, the current approach involves a multiple criteria optimization. In particular, a Pareto optimal solution is sought.

  3. Modeling diffuse pollution with a distributed approach.

    PubMed

    León, L F; Soulis, E D; Kouwen, N; Farquhar, G J

    2002-01-01

    The transferability of parameters for non-point source pollution models to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse pollution modeling. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. The distributed approach for the water quality model for diffuse pollution in agricultural watersheds is described in this paper. Integrating the model with data extracted using GIS technology (Geographical Information Systems) for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve non-point source modeling at the watershed scale level.

  4. An Optimization Framework for Dynamic, Distributed Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara

    2003-01-01

    This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments, utility, and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model for producing feasible, optimal resource allocations.

  5. Group Counseling Optimization: A Novel Approach

    NASA Astrophysics Data System (ADS)

    Eita, M. A.; Fahmy, M. M.

    A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.

  6. Conjunctive Multibasin Management: An Optimal Control Approach

    NASA Astrophysics Data System (ADS)

    Noel, Jay E.; Howitt, Richard E.

    1982-08-01

    The economic effects of conjunctive management of ground and surface water supplies for irrigation are formulated as an optimal control model. An empirical hydroeconomic model is estimated for the Yolo County district in California. Two alternative solution methodologies (analytic Riccati and mathematical programming) are applied and compared. Results show the economic potential for interbasin transfers and the impact of increased electricity prices on optimal groundwater management.

  7. Multi-objective optimal dispatch of distributed energy resources

    NASA Astrophysics Data System (ADS)

    Longe, Ayomide

    This thesis is composed of two papers which investigate the optimal dispatch for distributed energy resources. In the first paper, an economic dispatch problem for a community microgrid is studied. In this microgrid, each agent pursues an economic dispatch for its personal resources. In addition, each agent is capable of trading electricity with other agents through a local energy market. In this paper, a simple market structure is introduced as a framework for energy trades in a small community microgrid such as the Solar Village. It was found that both sellers and buyers benefited by participating in this market. In the second paper, Semidefinite Programming (SDP) for convex relaxation of power flow equations is used for optimal active and reactive dispatch for Distributed Energy Resources (DER). Various objective functions including voltage regulation, reduced transmission line power losses, and minimized reactive power charges for a microgrid are introduced. Combinations of these goals are attained by solving a multiobjective optimization for the proposed ORPD problem. Also, both centralized and distributed versions of this optimal dispatch are investigated. It was found that SDP made the optimal dispatch faster and distributed solution allowed for scalability.

  8. Distributed Generation Planning using Peer Enhanced Multi-objective Teaching-Learning based Optimization in Distribution Networks

    NASA Astrophysics Data System (ADS)

    Selvam, Kayalvizhi; Vinod Kumar, D. M.; Siripuram, Ramakanth

    2016-06-01

    In this paper, an optimization technique called the peer enhanced teaching learning based optimization (PeTLBO) algorithm is used in a multi-objective problem domain. The PeTLBO algorithm is parameter-free, which reduces the computational burden. The proposed peer enhanced multi-objective TLBO (PeMOTLBO) algorithm has been utilized to find a set of non-dominated optimal solutions [distributed generation (DG) location and sizing in a distribution network]. The objectives considered are real power loss and voltage deviation, subject to voltage limits and a maximum penetration level of DG in the distribution network. Since the DG considered is capable of injecting real and reactive power into the distribution network, the power factor is taken as 0.85 leading. The proposed peer enhanced multi-objective optimization technique provides different trade-off solutions; in order to find the best compromise solution, a fuzzy set theory approach has been used. The effectiveness of the proposed PeMOTLBO is tested on the IEEE 33-bus and Indian 85-bus distribution systems. The performance is validated with Pareto fronts and two performance metrics (C-metric and S-metric) by comparison with the robust multi-objective technique called non-dominated sorting genetic algorithm-II and also with the basic TLBO.

  9. Optimization of composite structures by estimation of distribution algorithms

    NASA Astrophysics Data System (ADS)

    Grosset, Laurent

    The design of high performance composite laminates, such as those used in aerospace structures, leads to complex combinatorial optimization problems that cannot be addressed by conventional methods. These problems are typically solved by stochastic algorithms, such as evolutionary algorithms. This dissertation proposes a new evolutionary algorithm for composite laminate optimization, named the Double-Distribution Optimization Algorithm (DDOA). DDOA belongs to the family of estimation of distribution algorithms (EDAs), which build a statistical model of promising regions of the design space based on sets of good points and use it to guide the search. A generic framework for introducing statistical variable dependencies by making use of the physics of the problem is proposed. The algorithm uses two distributions simultaneously: the marginal distributions of the design variables, complemented by the distribution of auxiliary variables. The combination of the two generates complex distributions at a low computational cost. The dissertation demonstrates the efficiency of DDOA for several laminate optimization problems where the design variables are the fiber angles and the auxiliary variables are the lamination parameters. The results show that its reliability in finding the optima is greater than that of a simple EDA and of a standard genetic algorithm, and that its advantage increases with the problem dimension. A continuous version of the algorithm is presented and applied to a constrained quadratic problem. Finally, a modification of the algorithm incorporating probabilistic and directional search mechanisms is proposed. The algorithm exhibits faster convergence to the optimum and opens the way to a unified framework for stochastic and directional optimization.
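
    A univariate EDA (in the spirit of UMDA, a simpler relative of DDOA without the auxiliary-variable distribution) can be sketched as follows; the ply-angle alphabet and the toy fitness function are hypothetical stand-ins for a real laminate criterion.

        # A minimal sketch (not DDOA itself) of a univariate estimation of
        # distribution algorithm over discrete fiber angles.
        import numpy as np

        rng = np.random.default_rng(0)
        ANGLES = np.array([0, 45, -45, 90])   # candidate ply angles (degrees)
        N_PLIES, POP, ELITE, GENS = 8, 60, 15, 40

        def fitness(laminate):
            # Hypothetical objective: reward +/-45 plies (a shear-stiffness proxy).
            return np.sum(np.abs(laminate) == 45)

        # Start from uniform marginal probabilities per ply position.
        probs = np.full((N_PLIES, len(ANGLES)), 1.0 / len(ANGLES))
        for _ in range(GENS):
            # Sample a population from the current marginals.
            idx = np.array([[rng.choice(len(ANGLES), p=probs[i]) for i in range(N_PLIES)]
                            for _ in range(POP)])
            pop = ANGLES[idx]
            elite = idx[np.argsort([-fitness(x) for x in pop])[:ELITE]]
            # Re-estimate marginals from the elite set (with light smoothing).
            for i in range(N_PLIES):
                counts = np.bincount(elite[:, i], minlength=len(ANGLES)) + 0.1
                probs[i] = counts / counts.sum()

        best = ANGLES[probs.argmax(axis=1)]
        print("most probable laminate:", best)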

  10. New approaches to the design optimization of hydrofoils

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya; Meneghello, Gianluca; Bewley, Thomas

    2015-11-01

    Two simulation-based approaches are developed to optimize the design of hydrofoils for foiling catamarans, with the objective of maximizing efficiency (lift/drag). In the first, a simple hydrofoil model based on the vortex-lattice method is coupled with a hybrid global and local optimization algorithm that combines our Delaunay-based optimization algorithm with a generalized pattern search. This optimization procedure is compared with a classical Newton-based optimization method. The accuracy of the vortex-lattice simulation of the optimized design is checked against a more accurate and computationally expensive LES-based simulation. In the second approach, the (expensive) LES model of the flow is used directly during the optimization. A modified Delaunay-based optimization algorithm is used to maximize the efficiency, which is measured as a finite-time-averaged approximation of the infinite-time-averaged value of an ergodic and stationary process. Since the optimization algorithm takes into account the uncertainty of this finite-time-averaged approximation, the total computational time of the optimization is significantly reduced. Results from the two approaches are compared.

  11. Russian Loanword Adaptation in Persian; Optimal Approach

    ERIC Educational Resources Information Center

    Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat

    2011-01-01

    In this paper we analyzed some of the phonological rules of Russian loanword adaptation in Persian from the viewpoint of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of phonological processes in Russian loanword adaptation in Persian. Having gathered about 50 current Russian loanwords, we selected some of them to analyze. We…

  12. System approach to distributed sensor management

    NASA Astrophysics Data System (ADS)

    Mayott, Gregory; Miller, Gordon; Harrell, John; Hepp, Jared; Self, Mid

    2010-04-01

    Since 2003, the US Army's RDECOM CERDEC Night Vision Electronic Sensor Directorate (NVESD) has been developing a distributed Sensor Management System (SMS) that utilizes a framework demonstrating application-layer, net-centric sensor management. The core principles of the design support distributed and dynamic discovery of sensing devices and processes through a multi-layered implementation. This results in a sensor management layer that acts as a system with defined interfaces whose characteristics, parameters, and behaviors can be described. Within the framework, the definition of a protocol is required to establish the rules for how distributed sensors should operate. The protocol defines the behaviors, capabilities, and message structures needed to operate within the functional design boundaries. The protocol definition addresses the requirements for a device (sensor or process) to dynamically join or leave a sensor network, dynamically describe device control and data capabilities, and allow dynamic addressing of publish and subscribe functionality. The message structure is a multi-tiered definition that identifies standard, extended, and payload representations, specifically designed to accommodate the need for standard representations of common functions while supporting feature-based functions that are typically vendor-specific. The dynamic qualities of the protocol give a user GUI application the flexibility to map widget-level controls to each device based on capabilities reported in real time. The SMS approach is designed to accommodate scalability and flexibility within a defined architecture. The distributed sensor management framework and its application to a tactical sensor network are described in this paper.

  13. Optimal cloning for finite distributions of coherent states

    SciTech Connect

    Cochrane, P.T.; Ralph, T.C.; Dolinska, A.

    2004-04-01

    We derive optimal cloning limits for finite Gaussian distributions of coherent states and describe techniques for achieving them. We discuss the relation of these limits to state estimation and the no-cloning limit in teleportation. A qualitatively different cloning limit is derived for a single-quadrature Gaussian quantum cloner.

  14. Simulation based flow distribution network optimization for vacuum assisted resin transfer moulding process

    NASA Astrophysics Data System (ADS)

    Hsiao, Kuang-Ting; Devillard, Mathieu; Advani, Suresh G.

    2004-05-01

    In the vacuum assisted resin transfer moulding (VARTM) process, using a flow distribution network such as flow channels and high-permeability fabrics can accelerate the resin infiltration of the fibre reinforcement during the manufacture of composite parts. The flow distribution network significantly influences the fill time and fill pattern and is essential for the process design. The current practice has been to cover the top surface of the fibre preform with the distribution media with the hope that the resin will flood the top surface immediately and penetrate through the thickness. However, this approach has some drawbacks. One is when the resin finds its way to the vent before it has penetrated the preform entirely, which results in a defective part or resin wastage. Also, if the composite structure contains ribs or inserts, this approach invariably results in dry spots. Instead of this intuitive approach, we propose a science-based approach to design the layout of the distribution network. Our approach uses flow simulation of the resin into the network and the preform and a genetic algorithm to optimize the flow distribution network. An experimental case study of a co-cured rib structure is conducted to demonstrate the design procedure and validate the optimized flow distribution network design. Good agreement between the flow simulations and the experimental results was observed. It was found that the proposed design algorithm effectively optimized the flow distribution network of the part considered in our case study and hence should prove to be a useful tool for extending the VARTM process to the manufacture of complex structures with effective use of the distribution network layup.

  15. Optimization and capacity expansion of a water distribution system

    NASA Astrophysics Data System (ADS)

    Hsu, Nien-Sheng; Cheng, Wei-Chen; Cheng, Wen-Ming; Wei, Chih-Chiang; Yeh, William W.-G.

    2008-05-01

    This paper develops an iterative procedure for capacity expansion studies for water distribution systems. We propose a methodology to analyze an existing water distribution system and identify the potential bottlenecks in the system. Based on the results, capacity expansion alternatives are proposed and evaluated for improving the efficiency of water supply. The methodology includes a network flow based optimization model, four evaluation indices, and a series of evaluation steps. We first use a directed graph to configure the water distribution system into a network. The network flow based model optimizes the water distribution in the system so that different expansion alternatives can be evaluated on a comparable basis. This model lends itself to linear programming (LP) and can be easily solved by a standard LP code. The results from the evaluation tool help to identify the bottlenecks in the water distribution system and provide capacity expansion alternatives. A series of evaluation steps, combining the bottleneck findings, the capacity expansion alternatives, and the evaluation results, serves as a complementary tool for decision making. We apply the proposed methodology to the Tou-Qian River Basin, located in the northern region of Taiwan, to demonstrate its applicability in optimization and capacity expansion studies.
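
    The kind of network-flow LP described above can be sketched with a standard LP code; the four-node network, capacities, and unit costs below are hypothetical.

        # A minimal sketch of a min-cost network-flow LP solved with scipy's linprog.
        import numpy as np
        from scipy.optimize import linprog

        # Arcs: (tail, head, capacity, unit cost) for a tiny supply network.
        arcs = [(0, 1, 8.0, 1.0), (0, 2, 6.0, 2.0),
                (1, 3, 7.0, 1.0), (2, 3, 9.0, 1.0)]
        n_nodes, n_arcs = 4, len(arcs)
        supply = np.array([10.0, 0.0, 0.0, -10.0])  # node 0 source, node 3 demand

        # Node-arc incidence matrix for flow conservation A_eq x = supply.
        A_eq = np.zeros((n_nodes, n_arcs))
        for j, (u, v, cap, cost) in enumerate(arcs):
            A_eq[u, j] = 1.0   # flow leaves tail
            A_eq[v, j] = -1.0  # flow enters head

        c = [cost for (_, _, _, cost) in arcs]
        bounds = [(0.0, cap) for (_, _, cap, _) in arcs]
        res = linprog(c, A_eq=A_eq, b_eq=supply, bounds=bounds, method="highs")
        print("min-cost flows per arc:", res.x)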

  16. A Novel Particle Swarm Optimization Approach for Grid Job Scheduling

    NASA Astrophysics Data System (ADS)

    Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith

    This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The proposed scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed approach is more efficient than a PSO approach reported in the literature.
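
    The canonical PSO loop can be sketched as follows on a continuous test function; a scheduler such as the one above adapts this kind of loop to discrete job-to-machine assignments with makespan and flowtime objectives, and all constants below are generic choices.

        # A minimal sketch of canonical PSO on a continuous stand-in objective.
        import numpy as np

        rng = np.random.default_rng(1)

        def objective(x):
            return np.sum(x**2, axis=-1)  # stand-in for a makespan/flowtime cost

        DIM, SWARM, ITERS = 5, 30, 200
        W, C1, C2 = 0.7, 1.5, 1.5  # inertia, cognitive, social coefficients

        pos = rng.uniform(-5, 5, (SWARM, DIM))
        vel = np.zeros((SWARM, DIM))
        pbest, pbest_val = pos.copy(), objective(pos)
        gbest = pbest[np.argmin(pbest_val)]

        for _ in range(ITERS):
            r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
            vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
            pos = pos + vel
            val = objective(pos)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)]

        print("best cost found:", pbest_val.min())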

  17. Drug discovery: selecting the optimal approach.

    PubMed

    Sams-Dodd, Frank

    2006-05-01

    The target-based drug discovery approach has for the past 10-15 years been the dominating drug discovery paradigm. However, within the past few years, the commercial value of novel targets in licensing deals has fallen dramatically, reflecting that the probability of reaching a clinical drug candidate for a novel target is very low. This has naturally led to questions regarding the success of target-based drug discovery and, more importantly, a search for alternatives. This paper evaluates the strengths and limitations of the main drug discovery approaches, and proposes a novel approach that could offer advantages for the identification of disease-modifying treatments.

  18. Distributing structural optimization software between a mainframe and a minicomputer

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.; Dovi, A. R.; Riley, K. M.

    1981-01-01

    This paper describes a distributed software system for solving large-scale structural optimization problems. Distributing the software between a mainframe computer and a minicomputer takes advantage of some of the best features available on each computer. The described software system consists of a finite element structural analysis computer program, a general purpose optimizer program, and several small user-supplied problem-dependent programs. Comparison with a similar system executing entirely on the mainframe computer reveals that the distributed system costs less, uses computer resources more efficiently, and improves productivity through faster turnaround and improved user control. The system interfaces with interactive graphics software for generating models and displaying the intermediate and final results.

  19. Molecular Approaches for Optimizing Vitamin D Supplementation.

    PubMed

    Carlberg, Carsten

    2016-01-01

    Vitamin D can be synthesized endogenously within UV-B exposed human skin. However, avoidance of sufficient sun exposure via predominant indoor activities, textile coverage, dark skin at higher latitude, and seasonal variations makes the intake of vitamin D fortified food or direct vitamin D supplementation necessary. Via its biologically most active metabolite 1α,25-dihydroxyvitamin D and the transcription factor vitamin D receptor, vitamin D has a direct effect on the epigenome and transcriptome of many human tissues and cell types. Differing interpretations of results from observational studies with vitamin D have led to some dispute in the field over the desired optimal vitamin D level and the recommended daily supplementation. This chapter will provide background on the epigenome- and transcriptome-wide functions of vitamin D and will outline how this insight may be used for determining the optimal vitamin D status of human individuals. These reflections lead to the concept of a personal vitamin D index, which may be a better guideline for optimized vitamin D supplementation than population-based recommendations.

  1. Selection of Reserves for Woodland Caribou Using an Optimization Approach

    PubMed Central

    Schneider, Richard R.; Hauer, Grant; Dawe, Kimberly; Adamowicz, Wiktor; Boutin, Stan

    2012-01-01

    Habitat protection has been identified as an important strategy for the conservation of woodland caribou (Rangifer tarandus). However, because of the economic opportunity costs associated with protection it is unlikely that all caribou ranges can be protected in their entirety. We used an optimization approach to identify reserve designs for caribou in Alberta, Canada, across a range of potential protection targets. Our designs minimized costs as well as three demographic risk factors: current industrial footprint, presence of white-tailed deer (Odocoileus virginianus), and climate change. We found that, using optimization, 60% of current caribou range can be protected (including 17% in existing parks) while maintaining access to over 98% of the value of resources on public lands. The trade-off between minimizing cost and minimizing demographic risk factors was minimal because the spatial distributions of cost and risk were similar. The prospects for protection are much reduced if protection is directed towards the herds that are most at risk of near-term extirpation. PMID:22363702

  2. Distributed memory approaches for robotic neural controllers

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1990-01-01

    The suitability of two varieties of distributed-memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller, which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second type involves variations of adaptive vector quantizers or self-organizing maps. In these networks, random N-dimensional points are given local connectivities; they are then exposed to training patterns and readjust their locations based on a nearest-neighbor rule. Both approaches are tested on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest-neighbor pattern recognition techniques.

  3. Scalar and Multivariate Approaches for Optimal Network Design in Antarctica

    NASA Astrophysics Data System (ADS)

    Hryniw, Natalia

    Observations are crucial for weather and climate, not only for daily forecasts and logistical purposes, but for maintaining representative records and for tuning atmospheric models. Here, scalar theory for optimal network design is expanded into a multivariate framework to allow optimal station siting for full-field optimization. Ensemble sensitivity theory is expanded to produce the covariance trace approach, which optimizes for the trace of the covariance matrix. Relative entropy is also used for multivariate optimization as an information-theoretic approach to finding optimal locations. Antarctic surface temperature data are used as a testbed for these methods. The two methods produce different results, which are tied to the fundamental physical parameters of the Antarctic temperature field.
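
    One concrete way to realize a covariance-trace criterion is greedy station siting that maximizes the reduction in the trace of the posterior covariance; the sketch below uses a synthetic spatial covariance and a hypothetical observation-error variance, and illustrates the general idea rather than the paper's exact algorithm.

        # A minimal sketch of greedy station placement by covariance-trace reduction.
        import numpy as np

        rng = np.random.default_rng(2)
        N, K, NOISE = 40, 5, 0.1  # grid points, stations to site, obs-error variance

        # Synthetic prior covariance with spatial correlation.
        pts = rng.uniform(0, 10, (N, 2))
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        cov = np.exp(-d / 3.0)

        chosen = []
        for _ in range(K):
            best_gain, best_j = -np.inf, None
            for j in range(N):
                if j in chosen:
                    continue
                S = chosen + [j]
                Css = cov[np.ix_(S, S)] + NOISE * np.eye(len(S))
                # Posterior covariance after observing stations S.
                post = cov - cov[:, S] @ np.linalg.solve(Css, cov[S, :])
                gain = np.trace(cov) - np.trace(post)
                if gain > best_gain:
                    best_gain, best_j = gain, j
            chosen.append(best_j)

        print("greedily selected station indices:", chosen)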

  4. Distribution function approach to redshift space distortions

    SciTech Connect

    Seljak, Uroš; McDonald, Patrick E-mail: pvmcdonald@lbl.gov

    2011-11-01

    We develop a phase space distribution function approach to redshift space distortions (RSD), in which the redshift space density can be written as a sum over velocity moments of the distribution function. These moments are density weighted and have a well defined physical interpretation: their lowest orders are density, momentum density, and stress energy density. The series expansion is convergent if kμu/aH < 1, where k is the wavevector, H the Hubble parameter, u the typical gravitational velocity and μ = cos θ, with θ being the angle between the Fourier mode and the line of sight. We perform an expansion of these velocity moments into helicity modes, which are eigenmodes under rotation around the axis of the Fourier mode direction, generalizing the scalar, vector, tensor decomposition of perturbations to an arbitrary order. We show that only equal helicity moments correlate and derive the angular dependence of the individual contributions to the redshift space power spectrum. We show that the dominant term of μ² dependence on large scales is the cross-correlation between the density and the scalar part of momentum density, which can be related to the time derivative of the matter power spectrum. Additional terms contributing to μ² and dominating on small scales are the vector part of momentum density-momentum density correlations, the energy density-density correlations, and the scalar part of anisotropic stress density-density correlations. The second term is what is usually associated with the small scale Fingers-of-God damping and always suppresses power, but the first term comes with the opposite sign and always adds power. Similarly, we identify 7 terms contributing to the μ⁴ dependence. Some of the advantages of the distribution function approach are that the series expansion converges on large scales and remains valid in multi-stream situations. We finish with a brief discussion of implications for RSD in galaxies relative to dark matter.

  5. A system approach to aircraft optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.

  6. Applications of the theory of optimal control of distributed-parameter systems to structural optimization

    NASA Technical Reports Server (NTRS)

    Armand, J. P.

    1972-01-01

    An extension of classical methods of optimal control theory for systems described by ordinary differential equations to distributed-parameter systems described by partial differential equations is presented. An application is given involving the minimum-mass design of a simply supported shear plate with a fixed fundamental frequency of vibration; an optimal plate thickness distribution in analytical form is found. The case of the minimum-mass design of an elastic sandwich plate whose fundamental frequency of free vibration is fixed is also treated. Under the most general conditions, the optimization problem reduces to the solution of two simultaneous partial differential equations involving the optimal thickness distribution and the modal displacement. One equation is the uniform energy distribution expression found by Ashley and McIntosh for the optimal design of one-dimensional structures with frequency constraints, and by Prager and Taylor for various design criteria in one and two dimensions. The second equation requires dynamic equilibrium at the preassigned vibration frequency.

  7. Optimization of an Aeroservoelastic Wing with Distributed Multiple Control Surfaces

    NASA Technical Reports Server (NTRS)

    Stanford, Bret K.

    2015-01-01

    This paper considers the aeroelastic optimization of a subsonic transport wingbox under a variety of static and dynamic aeroelastic constraints. Three types of design variables are utilized: structural variables (skin thickness, stiffener details), the quasi-steady deflection scheduling of a series of control surfaces distributed along the trailing edge for maneuver load alleviation and trim attainment, and the design details of an LQR controller, which commands oscillatory hinge moments into those same control surfaces. Optimization problems are solved where a closed-loop flutter constraint is forced to satisfy the required flight margin, and mass reduction benefits are realized by relaxing the open-loop flutter requirements.
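
    The LQR design step can be sketched with SciPy's continuous-time algebraic Riccati solver; the two-state model and weights below are hypothetical, not the wingbox model of the paper.

        # A minimal sketch of LQR design for a hypothetical lightly damped mode.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        # State = [deflection, rate]; input = control-surface hinge moment.
        A = np.array([[0.0, 1.0],
                      [-4.0, -0.1]])
        B = np.array([[0.0],
                      [1.0]])
        Q = np.diag([10.0, 1.0])       # penalize deflection and rate
        R = np.array([[1.0]])          # penalize actuator effort

        # Solve the continuous-time algebraic Riccati equation and form the gain.
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)
        print("LQR gain:", K)
        print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))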

  8. Optimization approaches to volumetric modulated arc therapy planning

    SciTech Connect

    Unkelbach, Jan Bortfeld, Thomas; Craft, David; Alber, Markus; Bangert, Mark; Bokrantz, Rasmus; Chen, Danny; Li, Ruijiang; Xing, Lei; Men, Chunhua; Nill, Simeon; Papp, Dávid; Romeijn, Edwin; Salari, Ehsan

    2015-03-15

    Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.

  9. Universal scaling of optimal current distribution in transportation networks.

    PubMed

    Simini, Filippo; Rinaldo, Andrea; Maritan, Amos

    2009-04-01

    Transportation networks are inevitably selected with reference to their global cost which depends on the strengths and the distribution of the embedded currents. We prove that optimal current distributions for a uniformly injected d-dimensional network exhibit robust scale-invariance properties, independently of the particular cost function considered, as long as it is convex. We find that, in the limit of large currents, the distribution decays as a power law with an exponent equal to (2d-1)/(d-1). The current distribution can be exactly calculated in d=2 for all values of the current. Numerical simulations further suggest that the scaling properties remain unchanged for both random injections and by randomizing the convex cost functions. PMID:19518304

  11. Architectures, stability and optimization for clock distribution networks

    NASA Astrophysics Data System (ADS)

    Carareto, Rodrigo; Orsatti, Fernando M.; Piqueira, José Roberto C.

    2012-12-01

    Synchronous telecommunication networks, distributed control systems, and integrated circuits depend for their accuracy of operation on a reliable time-basis signal extracted from the line data stream and available to each node. In this sense, the existence of a sub-network (inside the main network) dedicated to the distribution of clock signals is crucially important. There are different solutions for the architecture of the time-distribution sub-network, and choosing one of them depends on cost, precision, reliability, and operational security. In this work we present: (i) the possible time-distribution networks and their usual topologies and arrangements; (ii) how parameters of the network nodes can affect the reachability and stability of the synchronous state of a network; (iii) optimization methods for synchronous networks which can provide low-cost architectures with operational precision, reliability, and security.

  12. Approaches for Informing Optimal Dose of Behavioral Interventions

    PubMed Central

    King, Heather A.; Maciejewski, Matthew L.; Allen, Kelli D.; Yancy, William S.; Shaffer, Jonathan A.

    2015-01-01

    Background There is little guidance about how to select dose parameter values when designing behavioral interventions. Purpose The purpose of this study is to present approaches to inform intervention duration, frequency, and amount when (1) the investigator has no a priori expectation and is seeking a descriptive approach for identifying and narrowing the universe of dose values or (2) the investigator has an a priori expectation and is seeking validation of this expectation using an inferential approach. Methods Strengths and weaknesses of various approaches are described and illustrated with examples. Results Descriptive approaches include retrospective analysis of data from randomized trials, assessment of perceived optimal dose via prospective surveys or interviews of key stakeholders, and assessment of target patient behavior via prospective, longitudinal, observational studies. Inferential approaches include nonrandomized, early-phase trials and randomized designs. Conclusions By utilizing these approaches, researchers may more efficiently apply resources to identify the optimal values of dose parameters for behavioral interventions. PMID:24722964

  13. A hybrid simulation-optimization approach for solving the areal groundwater pollution source identification problems

    NASA Astrophysics Data System (ADS)

    Ayvaz, M. Tamer

    2016-07-01

    In this study, a new simulation-optimization approach is proposed for solving areal groundwater pollution source identification problems, an ill-posed class of inverse problems. In the simulation part of the proposed approach, groundwater flow and pollution transport processes are simulated by modeling the given aquifer system with the MODFLOW and MT3DMS models. The developed simulation model is then integrated into a newly proposed hybrid optimization model in which a binary genetic algorithm and a generalized reduced gradient method are used together. This is a novel approach, employed for the first time for areal pollution source identification problems. The objective of the proposed hybrid optimization approach is to simultaneously identify the spatial distributions and input concentrations of the unknown areal groundwater pollution sources using a limited number of pollution concentration time series at the monitoring well locations. The applicability of the proposed simulation-optimization approach is evaluated on a hypothetical aquifer model for different pollution source distributions. Furthermore, model performance is evaluated under measurement error conditions, different genetic algorithm parameter combinations, different numbers and locations of the monitoring wells, and different heterogeneous hydraulic conductivity fields. The results indicate that the proposed simulation-optimization approach may be an effective way to solve areal groundwater pollution source identification problems.

  14. Optimality approaches to describe characteristic fluvial patterns on landscapes

    PubMed Central

    Paik, Kyungrock; Kumar, Praveen

    2010-01-01

    Mother Nature has left amazingly regular geomorphic patterns on the Earth's surface. These patterns are often explained as having arisen as a result of some optimal behaviour of natural processes. However, there is little agreement on what is being optimized. As a result, a number of alternatives have been proposed, often with little a priori justification with the argument that successful predictions will lend a posteriori support to the hypothesized optimality principle. Given that maximum entropy production is an optimality principle attempting to predict the microscopic behaviour from a macroscopic characterization, this paper provides a review of similar approaches with the goal of providing a comparison and contrast between them to enable synthesis. While assumptions of optimal behaviour approach a system from a macroscopic viewpoint, process-based formulations attempt to resolve the mechanistic details whose interactions lead to the system level functions. Using observed optimality trends may help simplify problem formulation at appropriate levels of scale of interest. However, for such an approach to be successful, we suggest that optimality approaches should be formulated at a broader level of environmental systems' viewpoint, i.e. incorporating the dynamic nature of environmental variables and complex feedback mechanisms between fluvial and non-fluvial processes. PMID:20368257

  15. A new optimization based approach to experimental combination chemotherapy.

    PubMed

    Pereira, F L; Pedreira, C E; de Sousa, J B

    1995-01-01

    A new approach towards the design of optimal multiple-drug experimental cancer chemotherapy is presented. Once an adequate model is specified, an optimization procedure is used in order to achieve an optimal compromise between post-treatment tumor size and toxic effects on healthy tissues. In our approach we consider a model including cancer cell population growth and pharmacokinetic dynamics. These elements of the model are essential to allow less empirical relationships between multiple-drug delivery policies and their effects on cancer and normal cells. The desired multiple-drug dosage schedule is computed by minimizing a customizable cost function subject to the dynamic constraints expressed by the model. However, this additional dynamical richness increases the complexity of the problem, which, in general, cannot be solved in closed form. Therefore, we propose an iterative optimization algorithm of the projected gradient type, in which Pontryagin's Maximum Principle is used to select the optimal control policy.

  16. Optimization of coupled systems: A critical overview of approaches

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Sobieszczanski-Sobieski, J.

    1994-01-01

    A unified overview is given of problem formulation approaches for the optimization of multidisciplinary coupled systems. The overview includes six fundamental approaches upon which a large number of variations may be made. Consistent approach names and a compact approach notation are given. The approaches are formulated to apply to general nonhierarchic systems. The approaches are compared both from a computational viewpoint and a managerial viewpoint. Opportunities for parallelism of both computation and manpower resources are discussed. Recommendations regarding the need for future research are advanced.

  17. Annular flow optimization: A new integrated approach

    SciTech Connect

    Maglione, R.; Robotti, G.; Romagnoli, R.

    1997-07-01

    During the drilling stage of an oil and gas well, the hydraulic circuit of the mud assumes great importance with respect to most of its numerous and various constituent parts (mostly in the annular sections). Each of them imposes conditions to be satisfied in order to guarantee both the safety of the operations and the performance optimization of each single element of the circuit. The most important tasks for the annular part of the drilling hydraulic circuit are the following: (1) maximize the pressure available at the last casing shoe; (2) avoid borehole wall erosion; and (3) guarantee hole cleaning. A new integrated system has been realized that considers all the elements of the annular part of the drilling hydraulic circuit and the constraints imposed by each of them. In this way, the family of flow parameters (mud rheology and pump rate) that simultaneously satisfies all the variables of the annular section has been found. Finally, two examples regarding a standard and a narrow annular section (slim hole) are reported, showing briefly all the steps of the calculations until the optimum flow-parameter family is reached (for that operational drilling condition), satisfying simultaneously all the flow-parameter limitations imposed by the elements of the annular section circuit.

  18. Comparative Properties of Collaborative Optimization and other Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    1999-01-01

    We discuss criteria by which one can classify, analyze, and evaluate approaches to solving multidisciplinary design optimization (MDO) problems. Central to our discussion is the often overlooked distinction between questions of formulating MDO problems and solving the resulting computational problem. We illustrate our general remarks by comparing several approaches to MDO that have been proposed.

  20. A collective neurodynamic optimization approach to bound-constrained nonconvex optimization.

    PubMed

    Yan, Zheng; Wang, Jun; Li, Guocheng

    2014-07-01

    This paper presents a novel collective neurodynamic optimization method for solving nonconvex optimization problems with bound constraints. First, it is proved that a one-layer projection neural network has the property that its equilibria are in one-to-one correspondence with the Karush-Kuhn-Tucker points of the constrained optimization problem. Next, a collective neurodynamic optimization approach is developed by utilizing a group of recurrent neural networks in the framework of particle swarm optimization, emulating the paradigm of brainstorming. Each recurrent neural network carries out precise constrained local search according to its own neurodynamic equations. By iteratively improving the solution quality of each recurrent neural network using the locally best known and globally best known solutions, the group can obtain the global optimal solution to a nonconvex optimization problem. The advantages of the proposed collective neurodynamic optimization approach over evolutionary approaches lie in its constraint-handling ability and real-time computational efficiency. The effectiveness and characteristics of the proposed approach are illustrated using many multimodal benchmark functions. PMID:24705545
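
    The one-layer projection neural network at the core of such methods can be sketched as an ODE whose equilibria satisfy the box-constrained KKT conditions; the objective below is a hypothetical nonconvex function, and the collective version would run many such networks coupled by a PSO-style exchange of best solutions (not shown).

        # A minimal sketch of projection neurodynamics for bound constraints:
        # integrate dx/dt = -x + P(x - grad f(x)), where P clips onto the box.
        import numpy as np

        lo, hi = -1.0, 2.0            # bound constraints

        def grad_f(x):
            # Gradient of a simple nonconvex objective f(x) = sum(x^4 - x^2).
            return 4 * x**3 - 2 * x

        def project(x):
            return np.clip(x, lo, hi)

        x = np.array([1.5, -0.8, 0.3])
        dt = 0.01
        for _ in range(5000):          # forward-Euler integration of the dynamics
            x = x + dt * (-x + project(x - grad_f(x)))
        print("equilibrium (a KKT point):", x)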

  1. Imaging approaches to optimize molecular therapies.

    PubMed

    Weissleder, Ralph; Schwaiger, Markus C; Gambhir, Sanjiv Sam; Hricak, Hedvig

    2016-09-01

    Imaging, including its use for innovative tissue sampling, is slowly being recognized as playing a pivotal role in drug development, clinical trial design, and more effective delivery and monitoring of molecular therapies. The challenge is that, while a considerable number of new imaging technologies and new targeted tracers have been developed for cancer imaging in recent years, the technologies are neither evenly distributed nor evenly implemented. Furthermore, many imaging innovations are not validated and are not ready for widespread use in drug development or in clinical trial designs. Inconsistent and often erroneous use of terminology related to quantitative imaging biomarkers has also played a role in slowing their development and implementation. We examine opportunities for, and challenges of, the use of imaging biomarkers to facilitate development of molecular therapies and to accelerate progress in clinical trial design. In the future, in vivo molecular imaging, image-guided tissue sampling for mutational analyses ("high-content biopsies"), and noninvasive in vitro tests ("liquid biopsies") will likely be used in various combinations to provide the best possible monitoring and individualized treatment plans for cancer patients. PMID:27605550

  2. Optimal Voltage Regulation for Unbalanced Distribution Networks Considering Distributed Energy Resources

    SciTech Connect

    Xu, Yan; Tomsovic, Kevin

    2015-01-01

    With increasing penetration of distributed generation in distribution networks (DN), the secure and optimal operation of the DN has become an important concern. In this paper, an iterative quadratically constrained quadratic programming model to minimize voltage deviations and maximize distributed energy resource (DER) active power output in a three-phase unbalanced distribution system is developed. The optimization model is based on linearized sensitivity coefficients between controlled variables (e.g., node voltages) and control variables (e.g., real and reactive power injections of DERs). To avoid oscillation of the solution when it is close to the optimum, a golden section search method is introduced to control the step size (see the sketch below). Numerical simulations on modified IEEE 13-node test feeders show the efficiency of the proposed model. Compared to the results obtained by heuristic search (harmony algorithm), the proposed model converges quickly to the global optimum.
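
    Golden section search for a one-dimensional step-size subproblem can be sketched as follows; the bracketing interval and the surrogate objective are hypothetical.

        # A minimal sketch of golden-section search for step-size control.
        import math

        def golden_section(f, a, b, tol=1e-6):
            """Minimize a unimodal f on [a, b] by golden-section search."""
            invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
            c, d = b - invphi * (b - a), a + invphi * (b - a)
            while abs(b - a) > tol:
                if f(c) < f(d):
                    b, d = d, c
                    c = b - invphi * (b - a)
                else:
                    a, c = c, d
                    d = a + invphi * (b - a)
            return (a + b) / 2

        # Example: pick the step size minimizing a 1-D surrogate of the
        # voltage-deviation objective along the current search direction.
        step = golden_section(lambda s: (s - 0.37)**2 + 0.1, 0.0, 1.0)
        print("optimal step size:", round(step, 4))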

  3. A Communication-Optimal Framework for Contracting Distributed Tensors

    SciTech Connect

    Rajbhandari, Samyam; NIkam, Akshay; Lai, Pai-Wei; Stock, Kevin; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-11-16

    Tensor contractions are extremely compute intensive generalized matrix multiplication operations encountered in many computational science fields, such as quantum chemistry and nuclear physics. Unlike distributed matrix multiplication, which has been extensively studied, limited work has been done in understanding distributed tensor contractions. In this paper, we characterize distributed tensor contraction algorithms on torus networks. We develop a framework with three fundamental communication operators to generate communication-efficient contraction algorithms for arbitrary tensor contractions. We show that for a given amount of memory per processor, our framework is communication optimal for all tensor contractions. We demonstrate performance and scalability of our framework on up to 262,144 cores of BG/Q supercomputer using five tensor contraction examples.

  4. An Optimality-Based Fully-Distributed Watershed Ecohydrological Model

    NASA Astrophysics Data System (ADS)

    Chen, L., Jr.

    2015-12-01

    Watershed ecohydrological models are essential tools for assessing the impact of climate change and human activities on hydrological and ecological processes for watershed management. Existing models can be classified as empirically based, quasi-mechanistic, or mechanistic. The empirically based and quasi-mechanistic models usually adopt empirical or quasi-empirical equations, which may be incapable of capturing the non-stationary dynamics of the target processes. Mechanistic models that are designed to represent process feedbacks may capture vegetation dynamics, but often have more demanding spatial and temporal parameterization requirements to represent vegetation physiological variables. In recent years, optimality-based ecohydrological models have been proposed, which have the advantage of reducing the need for model calibration by assuming critical aspects of system behavior. However, this work to date has been limited to the plot scale, considering only one-dimensional exchange of soil moisture, carbon, and nutrients in vegetation parameterization, without lateral hydrological transport. Conceptual isolation of individual ecosystem patches from upslope and downslope flow paths compromises the ability to represent and test the relationships between hydrology and vegetation in mountainous and hilly terrain. This work presents an optimality-based watershed ecohydrological model that incorporates the influence of lateral hydrological processes on the hydrological flow-path patterns that emerge from the optimality assumption. The model has been tested in the Walnut Gulch watershed and shows good agreement with observed temporal and spatial patterns of evapotranspiration (ET) and gross primary productivity (GPP). The spatial variability of ET and GPP produced by the model matches the spatial distributions of TWI, SCA, and slope well over the area. Compared with the one-dimensional vegetation optimality model (VOM), we find that the distributed VOM (DisVOM) produces more reasonable spatial

  5. Multiobjective sensitivity analysis and optimization of a distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-03-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems having multiple, often conflicting objectives that arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the distributed hydrologic model MOBIDIC, combining two sensitivity analysis techniques (the Morris method and the State Dependent Parameter method) with the multiobjective optimization (MOO) approach ϵ-NSGAII. This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error of logarithmic transformed discharge, a water balance index, and the mean absolute error of the logarithmic transformed flow duration curve; its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show: (1) the two sensitivity analysis techniques are effective and efficient at determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization; (2) both MOO and SOO lead to acceptable simulations, e.g., for MOO, the average Nash-Sutcliffe efficiency is 0.75 in the calibration period and 0.70 in the validation period; (3) evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored; (4) compared to SOO, which depends on the initial starting location, MOO provides more insight into parameter sensitivity and the conflicting character of these objective functions. Multiobjective sensitivity analysis and optimization

  6. Principled negotiation and distributed optimization for advanced air traffic management

    NASA Astrophysics Data System (ADS)

    Wangermann, John Paul

    Today's aircraft/airspace system faces complex challenges. Congestion and delays are widespread as air traffic continues to grow. Airlines want to better optimize their operations, and general aviation wants easier access to the system. Additionally, the accident rate must decline just to keep the number of accidents each year constant. New technology provides an opportunity to rethink the air traffic management process. Faster computers, new sensors, and high-bandwidth communications can be used to create new operating models. The choice is no longer between "inflexible" strategic separation assurance and "flexible" tactical conflict resolution. With suitable operating procedures, it is possible to have strategic, four-dimensional separation assurance that is flexible and allows system users maximum freedom to optimize operations. This thesis describes an operating model based on principled negotiation between agents. Many multi-agent systems have agents that have different, competing interests but have a shared interest in coordinating their actions. Principled negotiation is a method of finding agreement between agents with different interests. By focusing on fundamental interests and searching for options for mutual gain, agents with different interests reach agreements that provide benefits for both sides. Using principled negotiation, distributed optimization by each agent can be coordinated leading to iterative optimization of the system. Principled negotiation is well-suited to aircraft/airspace systems. It allows aircraft and operators to propose changes to air traffic control. Air traffic managers check the proposal maintains required aircraft separation. If it does, the proposal is either accepted or passed to agents whose trajectories change as part of the proposal for approval. Aircraft and operators can use all the data at hand to develop proposals that optimize their operations, while traffic managers can focus on their primary duty of ensuring

  7. Smooth finite-dimensional approximations of distributed optimization problems via control discretization

    NASA Astrophysics Data System (ADS)

    Chernov, A. V.

    2013-12-01

    Finite-dimensional approximating mathematical programming problems are studied that arise from piecewise-constant discretization of controls in the optimization of distributed systems of a fairly broad class. The smoothness of the approximating problems is established. Gradient formulas are derived that make use of the analytical solution of the original control system and its adjoint, thus providing an opportunity for algorithmic separation of numerical optimization from the task of solving a controlled initial-boundary value problem. The approximating problems are proved to converge to the original optimization problem with respect to the functional as the discretization is refined. The application of the approach is illustrated by optimizing the semilinear wave equation under an integral criterion. The results of numerical experiments are analyzed.
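
    Piecewise-constant control discretization can be sketched on a scalar ODE (a hypothetical stand-in for the semilinear wave equation): the control value on each of M subintervals becomes a decision variable of a smooth finite-dimensional program passed to a standard optimizer.

        # A minimal sketch of piecewise-constant control discretization.
        import numpy as np
        from scipy.optimize import minimize

        M, T, x0 = 10, 1.0, 1.0        # control pieces, horizon, initial state
        dt = T / M

        def cost(u):
            # Simulate x' = -x + u with u piecewise constant; forward Euler.
            x, J, h, steps = x0, 0.0, dt / 20, 20
            for k in range(M):
                for _ in range(steps):
                    J += h * (x**2 + 0.1 * u[k]**2)   # running cost (tracking + effort)
                    x = x + h * (-x + u[k])
            return J + 5.0 * (x - 0.5)**2              # terminal penalty

        res = minimize(cost, np.zeros(M), method="BFGS")
        print("optimal piecewise-constant control:", np.round(res.x, 3))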

  9. Departures from optimality when pursuing multiple approach or avoidance goals.

    PubMed

    Ballard, Timothy; Yeo, Gillian; Neal, Andrew; Farrell, Simon

    2016-07-01

    This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. (PsycINFO Database Record) PMID:26963081

  10. Distribution Matching with the Bhattacharyya Similarity: A Bound Optimization Framework.

    PubMed

    Ben Ayed, Ismail; Punithakumar, Kumaradevan; Shuo Li

    2015-09-01

    We present efficient graph cut algorithms for three problems: (1) finding a region in an image, so that the histogram (or distribution) of an image feature within the region most closely matches a given model; (2) co-segmentation of image pairs and (3) interactive image segmentation with a user-provided bounding box. Each algorithm seeks the optimum of a global cost function based on the Bhattacharyya measure, a convenient alternative to other matching measures such as the Kullback-Leibler divergence. Our functionals are not directly amenable to graph cut optimization as they contain non-linear functions of fractional terms, which make the ensuing optimization problems challenging. We first derive a family of parametric bounds of the Bhattacharyya measure by introducing an auxiliary labeling. Then, we show that these bounds are auxiliary functions of the Bhattacharyya measure, a result which allows us to solve each problem efficiently via graph cuts. We show that the proposed optimization procedures converge within very few graph cut iterations. Comprehensive and various experiments, including quantitative and comparative evaluations over two databases, demonstrate the advantages of the proposed algorithms over related works in regard to optimality, computational load, accuracy and flexibility.
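
    The Bhattacharyya measure itself is straightforward to compute; the sketch below evaluates it between a model histogram and a region histogram drawn from synthetic data (the graph-cut bound optimization is beyond this snippet).

        # A minimal sketch of the Bhattacharyya similarity between histograms.
        import numpy as np

        def bhattacharyya(p, q, eps=1e-12):
            """Bhattacharyya coefficient between two normalized histograms."""
            p = p / (p.sum() + eps)
            q = q / (q.sum() + eps)
            return float(np.sum(np.sqrt(p * q)))  # 1.0 means identical distributions

        rng = np.random.default_rng(3)
        model = np.histogram(rng.normal(0.3, 0.1, 5000), bins=32, range=(0, 1))[0]
        region = np.histogram(rng.normal(0.35, 0.12, 2000), bins=32, range=(0, 1))[0]
        print("similarity:", round(bhattacharyya(model, region), 3))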

  11. Distributed computer system enhances productivity for SRB joint optimization

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1987-01-01

    Initial calculations for a redesign of the solid rocket booster joint that failed during the shuttle tragedy showed that the design carried a weight penalty. Optimization techniques were applied to determine whether there was any way to reduce the weight while keeping the joint opening closed and limiting the stresses. To allow engineers to examine as many alternatives as possible, a system was developed from existing software that coupled structural analysis with optimization and executed on a network of computer workstations. To shorten turnaround, this system took advantage of the parallelism offered by the finite-difference technique of computing gradients, allowing several workstations to contribute to the solution of the problem simultaneously (see the sketch below). The resulting system reduced the time to complete one optimization cycle from two hours to half an hour, with the potential of reducing it to 15 minutes. The current distributed system, which contains numerous extensions, requires one hour of turnaround per optimization cycle; this would take four hours with the sequential system.
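
    The parallelism comes from the fact that each finite-difference gradient component is an independent function evaluation; below is a minimal sketch with a hypothetical objective, using process-based workers in place of separate workstations.

        # A minimal sketch of a parallel finite-difference gradient.
        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def objective(x):
            return float(np.sum(x**2) + x[0] * x[1])  # stand-in structural cost

        def fd_component(args):
            x, i, h = args
            xp = x.copy(); xp[i] += h
            return (objective(xp) - objective(x)) / h  # forward difference

        def parallel_gradient(x, h=1e-6, workers=4):
            tasks = [(x, i, h) for i in range(len(x))]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return np.array(list(pool.map(fd_component, tasks)))

        if __name__ == "__main__":  # required for process-based parallelism
            x = np.array([1.0, 2.0, 3.0])
            print("gradient estimate:", parallel_gradient(x))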

  12. A Riccati approach for constrained linear quadratic optimal control

    NASA Astrophysics Data System (ADS)

    Sideris, Athanasios; Rodriguez, Luis A.

    2011-02-01

    An active-set method is proposed for solving linear quadratic optimal control problems subject to general linear inequality path constraints including mixed state-control and state-only constraints. A Riccati-based approach is developed for efficiently solving the equality constrained optimal control subproblems generated during the procedure. The solution of each subproblem requires computations that scale linearly with the horizon length. The algorithm is illustrated with numerical examples.

  13. Optimized distributed computing environment for mask data preparation

    NASA Astrophysics Data System (ADS)

    Ahn, Byoung-Sup; Bang, Ju-Mi; Ji, Min-Kyu; Kang, Sun; Jang, Sung-Hoon; Choi, Yo-Han; Ki, Won-Tai; Choi, Seong-Woon; Han, Woo-Sung

    2005-11-01

    As the critical dimension (CD) becomes smaller, various resolution enhancement techniques (RET) are widely adopted. In developing sub-100nm devices, the complexity of optical proximity correction (OPC) increases severely, and OPC is extended to non-critical layers. The transformation of designed pattern data by the OPC operation introduces complexity, which causes runtime overheads in subsequent steps such as mask data preparation (MDP) and collapses the existing design hierarchy. Therefore, many mask shops exploit distributed computing to reduce the runtime of mask data preparation rather than exploit the design hierarchy. Distributed computing uses a cluster of computers connected to a local network system. However, two things limit the benefit of distributed computing in MDP. First, a sequential MDP job that uses the maximum number of available CPUs is not efficient compared to parallel MDP job execution, due to the input data characteristics. Second, the runtime improvement over input cost is not sufficient, since the scalability of fracturing tools is limited. In this paper, we discuss an optimal load-balancing environment that is useful in increasing the uptime of a distributed computing system by assigning an appropriate number of CPUs to each input design data set. We also describe distributed processing (DP) parameter optimization to obtain maximum throughput in MDP job processing.

  14. Optimal purchasing of raw materials: A data-driven approach

    SciTech Connect

    Muteki, K.; MacGregor, J.F.

    2008-06-15

    An approach to the optimal purchasing of raw materials that will achieve a desired product quality at minimum cost is presented. A PLS (Partial Least Squares) approach to formulation modeling is used to combine databases on raw material properties and on past process operations and to relate these to final product quality. These PLS latent variable models are then used in a sequential quadratic programming (SQP) or mixed-integer nonlinear programming (MINLP) optimization to select the raw materials to purchase from among all those available on the market, the ratios in which to combine them, and the process conditions under which they should be processed. The approach is illustrated for the optimal purchasing of metallurgical coals for coke making in the steel industry.
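
    A minimal sketch of the two-stage idea, with a PLS quality model embedded in a constrained blend optimization (the data, costs and quality target are synthetic assumptions; the paper's SQP/MINLP formulation is richer):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 5))                  # past material/process data
        y = X @ np.array([1.0, -2.0, 0.5, 0.0, 1.0])   # final product quality
        pls = PLSRegression(n_components=3).fit(X, y)

        props = rng.normal(size=(4, 5))     # candidate raw-material properties
        costs = np.array([3.0, 5.0, 2.0, 4.0])
        target = 1.0                        # required predicted quality

        def quality_gap(w):
            blend = w @ props               # properties of the blended feed
            return pls.predict(blend[None, :]).ravel()[0] - target

        res = minimize(lambda w: costs @ w, x0=np.full(4, 0.25),
                       method="SLSQP", bounds=[(0, 1)] * 4,
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1},
                                    {"type": "eq", "fun": quality_gap}])
        print(res.x)                        # cheapest blend meeting the target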

  15. Mathematical optimization of matter distribution for a planetary system configuration

    NASA Astrophysics Data System (ADS)

    Morozov, Yegor; Bukhtoyarov, Mikhail

    2016-07-01

    Planetary formation is mostly a random process. When humanity reaches the point at which it can transform planetary systems for the purpose of interstellar life expansion, the optimal distribution of matter in a planetary system will determine its population and expansive potential. Maximizing the carrying capacity of a planetary system and its potential for interstellar life expansion depends on planetary sizes, orbits, rotation, chemical composition and other vital parameters. The distribution of planetesimals needed to achieve the maximal carrying capacity of the planets during their life cycle, and the maximal potential to inhabit other planetary systems, must be calculated comprehensively. Moving large amounts of material from one planetary system to another is uneconomic because of the high amounts of energy and time required. Terraforming particular planets before the whole planetary system is configured might drastically decrease the potential habitability of the whole system. Thus a planetary system is the basic unit for calculations aimed at sustaining maximal overall population and expanding further. The mathematical model for optimizing the matter distribution of a planetary system configuration takes as input the observed parameters: the map of material orbiting in the planetary system, with the orbit, mass, size, and chemical composition of each body. The optimized output parameters are the sizes, masses and number of planets, their chemical composition, and the masses of the satellites required to produce tidal forces. Magnetic fields and planetary rotations are also crucial, but they will be considered in later versions of this model. The optimization criterion is the maximal carrying capacity plus the maximal expansive potential of the planetary system. Maximal carrying capacity means the availability of essential life ingredients on the planetary surface, and maximal expansive potential means the availability of uranium and metals to build

  16. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, it allows design constraints to be enforced during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function; the minimized objective function corresponds to optimal performance of the system. A spacecraft-mounted science instrument line-of-sight pointing control problem is used to demonstrate results.
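
    A stripped-down sketch of such a tuning loop: membership-function parameters and output levels form the design vector, and a standard optimizer minimizes a closed-loop cost (the plant, rule base and cost below are toy assumptions, not the paper's pointing problem):

        import numpy as np
        from scipy.optimize import minimize

        def tri(x, c, w):                       # triangular membership function
            return max(0.0, 1.0 - abs(x - c) / w)

        def fuzzy_u(e, p):
            wN, wZ, wP, uN, uZ, uP = p          # the design vector
            mu = np.array([tri(e, -1, wN), tri(e, 0, wZ), tri(e, 1, wP)])
            return 0.0 if mu.sum() == 0 else float(mu @ [uN, uZ, uP] / mu.sum())

        def cost(p, dt=0.05, T=10.0, ref=1.0):
            x, J = 0.0, 0.0                     # integrated absolute error
            for _ in range(int(T / dt)):
                e = ref - x
                x += dt * (-x + fuzzy_u(e, p))  # toy first-order plant
                J += abs(e) * dt
            return J

        res = minimize(cost, x0=[1, 1, 1, -2, 0, 2], method="Nelder-Mead")
        print(res.x, res.fun)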

  17. A global optimization approach to multi-polarity sentiment analysis.

    PubMed

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement and the two-polarity sentiment analysis the smallest. We conclude that PSOGO-Senti achieves higher improvement for more complicated sentiment analysis tasks. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From
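
    A compact particle swarm loop over the joint space of feature count and SVM hyperparameters, in the spirit of the record (the dataset, swarm settings, and the use of mutual information as a stand-in for the IG score are illustrative assumptions):

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.svm import SVC
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        X, y = load_digits(return_X_y=True)      # stand-in for a text corpus

        def fitness(v):                          # v = (k, log10 C, log10 gamma)
            k, C, gamma = int(round(v[0])), 10 ** v[1], 10 ** v[2]
            model = make_pipeline(SelectKBest(mutual_info_classif, k=k),
                                  SVC(C=C, gamma=gamma))
            return cross_val_score(model, X, y, cv=3).mean()

        lo, hi = np.array([5, -2, -5]), np.array([60, 3, 0])
        rng = np.random.default_rng(1)
        pos = rng.uniform(lo, hi, size=(12, 3)); vel = np.zeros_like(pos)
        pbest, pval = pos.copy(), np.array([fitness(p) for p in pos])
        for _ in range(15):
            g = pbest[pval.argmax()]             # global best particle
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
            pos = np.clip(pos + vel, lo, hi)
            val = np.array([fitness(p) for p in pos])
            pbest[val > pval], pval[val > pval] = pos[val > pval], val[val > pval]
        print(pbest[pval.argmax()], pval.max())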

  18. A hybrid approach to near-optimal launch vehicle guidance

    NASA Technical Reports Server (NTRS)

    Leung, Martin S. K.; Calise, Anthony J.

    1992-01-01

    This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small-dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability is carried out through closed-loop simulation of a vertically launched two-stage heavy-lift vehicle ascending to low Earth orbit. The solutions are compared with optimal solutions generated from a multiple shooting code. In the example, the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.

  19. Optimal Mass Distribution Prediction for Human Proximal Femur with Bi-modulus Property.

    PubMed

    Shi, Jiao; Cai, Kun; Qin, Qing H

    2014-12-01

    Simulation of the mass distribution in a human proximal femur is important to provide a reasonable therapy scheme for a patient with osteoporosis. An algorithm is developed for prediction of optimal mass distribution in a human proximal femur under a given loading environment. In this algorithm, the bone material is assumed to be bi-modulus, i.e., the tension modulus is not identical to the compression modulus in the same direction. With this bi-modulus bone material, a topology optimization method, i.e., modified SIMP approach, is employed to determine the optimal mass distribution in a proximal femur. The effects of the difference between two moduli on the final material distribution are numerically investigated. Numerical results obtained show that the mass distribution in bi-modular bone materials is different from that in traditional isotropic material. As the tension modulus is less than the compression modulus for bone tissues, the amount of mass required to support tension loads is greater than that required by isotropic material for the same daily activities including one-leg stance, abduction and adduction. PMID:26336694

  20. Pressure distribution based optimization of phase-coded acoustical vortices

    SciTech Connect

    Zheng, Haixiang; Gao, Lu; Dai, Yafei; Ma, Qingyu; Zhang, Dong

    2014-02-28

    Based on the acoustic radiation of a point source, the physical mechanism of phase-coded acoustical vortices is investigated, with formula derivations for acoustic pressure and vibration velocity. Various factors that affect the optimization of acoustical vortices are analyzed. Numerical simulations of the axial, radial, and circular pressure distributions are performed with different source numbers, frequencies, and axial distances. The results show that the acoustic pressure of acoustical vortices is linearly proportional to the source number, and that lower fluctuations of the circular pressure distribution can be produced with more sources. As the source frequency increases, the acoustic pressure of acoustical vortices increases accordingly, with a decreased vortex radius. Meanwhile, an increased vortex radius with reduced acoustic pressure is obtained at longer axial distances. With the 6-source experimental system, circular and radial pressure distributions at various frequencies and axial distances have been measured and agree well with the numerical simulations. These results for acoustic pressure distributions provide a theoretical basis for further studies of acoustical vortices.

  1. Dynamic optimization of distributed biological systems using robust and efficient numerical techniques

    PubMed Central

    2012-01-01

    Background Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples would include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usually nonlinear and large-scale nature of the mathematical models for this class of systems, and the presence of constraints in the optimization problems, impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Results Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced-order model methodology is proposed. The capabilities of this strategy are illustrated by solving two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. Conclusions In the chemotaxis problem the objective was to efficiently compute the time-varying optimal concentration of chemoattractant on one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the
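
    The control vector parameterization (CVP) idea named above reduces the infinite-dimensional problem to a finite one by holding the control constant over time segments. A toy sketch with a scalar ODE in place of the PDE models (the dynamics, target profile and segment count are assumptions):

        import numpy as np
        from scipy.optimize import differential_evolution

        n_seg, T, dt = 8, 8.0, 0.01
        t = np.arange(0, T, dt)
        target = 0.5 * (1 + np.sin(2 * np.pi * t / T))    # desired response

        def simulate(levels):
            u = np.repeat(levels, len(t) // n_seg)        # piecewise-constant input
            x, traj = 0.0, []
            for uk in u:
                x += dt * (-x + uk)                       # toy signalling dynamics
                traj.append(x)
            return np.array(traj)

        def objective(levels):
            return float(np.mean((simulate(levels) - target) ** 2))

        res = differential_evolution(objective, bounds=[(0, 2)] * n_seg,
                                     seed=0, maxiter=60)  # global search stage
        print(res.fun)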

  2. An optimal control approach to probabilistic Boolean networks

    NASA Astrophysics Data System (ADS)

    Liu, Qiuli

    2012-12-01

    External control of some genes in a genetic regulatory network is useful for avoiding undesirable states associated with some diseases. For this purpose, a number of stochastic optimal control approaches have been proposed. Probabilistic Boolean networks (PBNs), as powerful tools for modeling gene regulatory systems, have attracted considerable attention in systems biology. In this paper, we deal with the problem of optimal intervention in a PBN with the help of the theory of discrete-time Markov decision processes. Specifically, we first formulate a control model for a PBN as a first-passage model for discrete-time Markov decision processes and then find, using a value iteration algorithm, optimal effective treatments with the minimal expected first-passage time over the space of all possible treatments. An example is presented to demonstrate the feasibility of our approach.
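
    A minimal value-iteration sketch for the first-passage formulation: the expected time to reach a set of desirable states is minimized over treatments (the random transition matrices below stand in for a real PBN):

        import numpy as np

        def min_first_passage(P, target, tol=1e-9, n_iter=10000):
            """P[a] is the PBN transition matrix under treatment a; returns
            the minimal expected first-passage time V and an optimal policy."""
            V = np.zeros(P[0].shape[0])
            for _ in range(n_iter):
                Q = np.stack([1.0 + Pa @ V for Pa in P])  # one step + remainder
                V_new = Q.min(axis=0)
                V_new[target] = 0.0                       # goal states absorb
                if np.max(np.abs(V_new - V)) < tol:
                    break
                V = V_new
            policy = np.stack([1.0 + Pa @ V for Pa in P]).argmin(axis=0)
            return V, policy

        rng = np.random.default_rng(0)
        P = [rng.dirichlet(np.ones(8), size=8) for _ in range(2)]  # two treatments
        V, pi = min_first_passage(P, target=np.array([0]))
        print(V, pi)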

  3. A deterministic global approach for mixed-discrete structural optimization

    NASA Astrophysics Data System (ADS)

    Lin, Ming-Hua; Tsai, Jung-Fa

    2014-07-01

    This study proposes a novel approach for finding the exact global optimum of a mixed-discrete structural optimization problem. Although many approaches have been developed to solve such problems, they either cannot guarantee finding a global solution or introduce too many extra binary variables and constraints in reformulating the problem. The proposed deterministic method uses convexification strategies and linearization techniques to convert a structural optimization problem into a convex mixed-integer nonlinear programming problem that can be solved to global optimality. To enhance computational efficiency on complicated problems, a range reduction technique is also applied to tighten the variable bounds. Several numerical experiments drawn from practical structural design problems demonstrate the effectiveness of the proposed method.

  4. The optimality of potential rescaling approaches in land data assimilation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    It is well-known that systematic differences exist between modeled and observed realizations of hydrological variables like soil moisture. Prior to data assimilation, these differences must be removed in order to obtain an optimal analysis. A number of rescaling approaches have been proposed for rem...

  5. Successive linear optimization approach to the dynamic traffic assignment problem

    SciTech Connect

    Ho, J.K.

    1980-11-01

    A dynamic model for the optimal control of traffic flow over a network is considered. The model, which treats congestion explicitly in the flow equations, gives rise to nonlinear, nonconvex mathematical programming problems. It has been shown for a piecewise linear version of this model that a global optimum is contained in the set of optimal solutions of a certain linear program. A sufficient condition for optimality is presented which implies that a global optimum can be obtained by successively optimizing at most N + 1 objective functions for the linear program, where N is the number of time periods in the planning horizon. Computational results are reported to indicate the efficiency of this approach.

  6. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    NASA Astrophysics Data System (ADS)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    The permanently increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market", leads to ever greater use of simulation and optimization software for product design. Finding a "good" design for a product implies solving computationally expensive optimization problems based on simulation results. Because of the computational load these problems impose, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use across a network, to cope with the requirements of geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating experts' knowledge into the optimization process is indispensable for reducing the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, knowledge-based systems must be supported by data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation, the CAD system CATIA is used, coupled with the FEM simulation system INDEED for the simulation of sheet-metal forming processes and with the problem solving environment OpTiX for distributed optimization.

  7. Optimal Estimation Retrieval of Cloud Ice Particle Size Distributions

    NASA Astrophysics Data System (ADS)

    Griffith, B. D.; Kummerow, C.

    2006-12-01

    An optimal estimation retrieval technique has been applied to a multi-frequency airborne radar and radiometer data set from the Wakasa Bay AMSR-E validation experiment. First, airborne radar observations at 13.4, 35.6 and 94.9 GHz were integrated to retrieve all three parameters of a normalized gamma ice particle size distribution (PSD). The retrieved PSD was validated against near-simultaneous in situ cloud probe observations. The differences between the retrieved and in situ measured PSDs were explored through sensitivity analysis, and the sources of uncertainty were found to be the bulk density of the cloud ice and the aspect ratio of aspherical particles modeled as oblate spheroids. The optimal estimation technique was then applied to select an optimal density and aspect ratio for the cloud under study through integration of the in situ and radar observations. The optimal ice size-density relationship was found to be ρ(D) = 0.07 D^{-1.58} g cm^{-3}, where the diameter D is in mm, and the oblate spheroid aspect ratio was found to be 0.53. The use of these optimal values, as improved assumptions in the PSD retrieval, reduced the uncertainty in the optimized forward model from ±6 dB to ±2 dB. Next, the retrieval technique was expanded to include passive microwave observations and retrieve a full column vertical hydrometeor profile. Eleven airborne passive microwave frequencies from 10.7 to 340 GHz were integrated with the airborne radar observations to retrieve all three parameters of a normalized gamma PSD at each vertical level in the column. The retrieved vertical profile was validated against a clear sky scene before being applied to the horizontal extent of an ice cloud. The retrieved PSD showed vertical structure consistent with cloud microphysical processes. PSDs were retrieved using both the general and improved assumption case-specific density and shape models. A comparison revealed an order of magnitude difference in ice water path between the two

  8. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution; it efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.

  9. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning

    SciTech Connect

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-15

    Purpose: In radiation therapy treatment planning, the clinical objectives of uniform high dose to the planning target volume (PTV) and low dose to the organs-at-risk (OARs) are invariably in conflict, often requiring compromises to be made between them when selecting the best treatment plan for a particular patient. In this work, the authors introduce Pareto-Aware Radiotherapy Evolutionary Treatment Optimization (pareto), a multiobjective optimization tool to solve for beam angles and fluence patterns in intensity-modulated radiation therapy (IMRT) treatment planning. Methods: pareto is built around a powerful multiobjective genetic algorithm (GA), which allows us to treat the problem of IMRT treatment plan optimization as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. We have employed a simple parameterized beam fluence representation with a realistic dose calculation approach, incorporating patient scatter effects, to demonstrate feasibility of the proposed approach on two phantoms. The first phantom is a simple cylindrical phantom containing a target surrounded by three OARs, while the second phantom is more complex and represents a paraspinal patient. Results: pareto results in a large database of Pareto nondominated solutions that represent the necessary trade-offs between objectives. The solution quality was examined for several PTV and OAR fitness functions. The combination of a conformity-based PTV fitness function and a dose-volume histogram (DVH) or equivalent uniform dose (EUD)-based fitness function for the OAR produced relatively uniform and conformal PTV doses, with well-spaced beams. A penalty function added to the fitness functions eliminates hotspots. Comparison of resulting DVHs to those from treatment plans developed with a single-objective fluence optimizer (from a commercial treatment planning system) showed good correlation. Results also indicated that pareto shows

  10. Optimization of water distribution systems in high-rise buildings

    SciTech Connect

    Loh, Han Tong; Chew, T.C.

    1994-12-31

    The scarcity of land in Singapore has led to a rapid escalation of land prices in recent years. This has resulted in developers building taller and taller buildings in order to maximize their return on building projects. Due to the height involved, the water distribution system in such buildings is a multi-stage one, and the problem arises of deciding the number of stages and the location of each stage. In this paper, we describe the design decisions to be taken in the preliminary design of a multi-stage water distribution system in a high-rise building and pose it as an optimization problem that minimizes the overall cost of implementation. The variable cost components are the cost of pumps, the floor space cost and the operational cost of the water distribution system. We describe a study of a 66-story building and highlight the major findings. Interesting results are observed when the cost components are considered one at a time. The strategy for finding the optimum or a near optimum for other high-rise buildings is also discussed.

  11. An optimal torque distribution control strategy for four-independent wheel drive electric vehicles

    NASA Astrophysics Data System (ADS)

    Li, Bin; Goodarzi, Avesta; Khajepour, Amir; Chen, Shih-ken; Litkouhi, Baktiar

    2015-08-01

    In this paper, an optimal torque distribution approach is proposed for electric vehicles equipped with four independent wheel motors to improve vehicle handling and stability. A novel objective function is formulated which works in a multifunctional way by considering the interplay among different performance indices: force and moment errors at the vehicle's centre of gravity, actuator control effort and tyre workload usage. To adapt to different driving conditions, a weighting-factor tuning scheme is designed to adjust the relative weight of each performance index in the objective function. The effectiveness of the proposed optimal torque distribution is evaluated by simulations with CarSim and Matlab/Simulink. The simulation results under different driving scenarios indicate that the proposed control strategy can effectively improve vehicle handling and stability, even on slippery roads.

  12. A robust optimization model for distribution and evacuation in the disaster response phase

    NASA Astrophysics Data System (ADS)

    Fereiduni, Meysam; Shahanaghi, Kamran

    2016-10-01

    Natural disasters, such as earthquakes, affect thousands of people and can cause enormous financial loss. An efficient response immediately following a natural disaster is therefore vital to minimize these negative effects. This paper presents a network design model for humanitarian logistics that assists in location and allocation decisions for multiple disaster periods. First, a single-objective optimization model is presented that addresses the response phase of disaster management and helps decision makers to make the most effective choices regarding location, allocation, and evacuation simultaneously. The proposed model also considers emergency tents as temporary medical centers. To cope with the uncertainty and dynamic nature of disasters and their consequences, our multi-period robust model considers the values of critical input data over a set of scenarios. Second, because of probable disruption of the distribution infrastructure (such as bridges), Monte Carlo simulation is used to generate the related random numbers and scenarios, and the p-robust approach is utilized to formulate the new network. The p-robust approach can predict possible damage along pathways and among relief bases. We present a case study of our robust optimization approach for a plausible earthquake in region 1 of Tehran. Sensitivity analysis experiments explore the effects of various problem parameters; these experiments give managerial insights and can guide decision makers under a variety of conditions. The performances of the "robust optimization" approach and the "p-robust optimization" approach are then evaluated, and intriguing results and practical insights are demonstrated by our analysis of this comparison.

  13. Analytic characterization of linear accelerator radiosurgery dose distributions for fast optimization.

    PubMed

    Meeks, S L; Bova, F J; Buatti, J M; Friedman, W A; Eyster, B; Kendrick, L A

    1999-11-01

    Linear accelerator (linac) radiosurgery utilizes non-coplanar arc therapy delivered through circular collimators. Generally, spherically symmetric arc sets are used, resulting in nominally spherical dose distributions. Various treatment planning parameters may be manipulated to provide dose conformation to irregular lesions. Iterative manipulation of these variables can be a difficult and time-consuming task, because (a) understanding the effect of these parameters is complicated and (b) three-dimensional (3D) dose calculations are computationally expensive. This manipulation can be simplified, however, because the prescription isodose surface for all single isocentre distributions can be approximated by conic sections. In this study, the effects of treatment planning parameter manipulation on the dimensions of the treatment isodose surface were determined empirically. These dimensions were then fitted to analytic functions, assuming that the dose distributions were characterized as conic sections. These analytic functions allowed real-time approximation of the 3D isodose surface. Iterative plan optimization, either manual or automated, is achieved more efficiently using this real time approximation of the dose matrix. Subsequent to iterative plan optimization, the analytic function is related back to the appropriate plan parameters, and the dose distribution is determined using conventional dosimetry calculations. This provides a pseudo-inverse approach to radiosurgery optimization, based solely on geometric considerations.

  14. Optimal sensor placement for leak location in water distribution networks using genetic algorithms.

    PubMed

    Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert

    2013-11-04

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.
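
    A stripped-down evolutionary search for the placement problem (the paper uses a full GA; here a synthetic sensitivity matrix and a simple mutate-and-select loop illustrate the objective of minimizing non-isolable leak pairs):

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(0)
        n_nodes, n_sensors = 30, 4
        S = rng.normal(size=(n_nodes, n_nodes))  # synthetic leak-sensitivity matrix

        def non_isolable(sensors):
            """Count leak pairs whose coarse signatures on the chosen
            sensors coincide, i.e. leaks that cannot be told apart."""
            sig = np.sign(np.round(S[np.array(sensors)], 1))
            cols = Counter(tuple(sig[:, j]) for j in range(n_nodes))
            return sum(c * (c - 1) // 2 for c in cols.values())

        def mutate(ind):
            ind = list(ind)
            ind[rng.integers(n_sensors)] = int(
                rng.choice([v for v in range(n_nodes) if v not in ind]))
            return tuple(sorted(ind))

        pop = [tuple(sorted(rng.choice(n_nodes, n_sensors, replace=False)))
               for _ in range(40)]
        for _ in range(60):
            pop.sort(key=non_isolable)
            pop = pop[:20] + [mutate(p) for p in pop[:20]]  # elitism + mutation
        print(min(pop, key=non_isolable))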

  16. An approach to the perceptual optimization of complex visualizations.

    PubMed

    House, Donald H; Bair, Alethea S; Ware, Colin

    2006-01-01

    This paper proposes a new experimental framework within which evidence regarding the perceptual characteristics of a visualization method can be collected, and describes how this evidence can be explored to discover principles and insights to guide the design of perceptually near-optimal visualizations. We make the case that each of the current approaches for evaluating visualizations is limited in what it can tell us about optimal tuning and visual design. We go on to argue that our new approach is better suited to optimizing the kinds of complex visual displays that are commonly created in visualization. Our method uses human-in-the-loop experiments to selectively search through the parameter space of a visualization method, generating large databases of rated visualization solutions. Data mining is then used to extract results from the database, ranging from highly specific exemplar visualizations for a particular data set, to more broadly applicable guidelines for visualization design. We illustrate our approach using a recent study of optimal texturing for layered surfaces viewed in stereo and in motion. We show that a genetic algorithm is a valuable way of guiding the human-in-the-loop search through visualization parameter space. We also demonstrate several useful data mining methods including clustering, principal component analysis, neural networks, and statistical comparisons of functions of parameters.

  17. Effects of optimism on creativity under approach and avoidance motivation

    PubMed Central

    Icekson, Tamar; Roskes, Marieke; Moran, Simone

    2014-01-01

    Focusing on avoiding failure or negative outcomes (avoidance motivation) can undermine creativity, due to cognitive (e.g., threat appraisals), affective (e.g., anxiety), and volitional processes (e.g., low intrinsic motivation). This can be problematic for people who are avoidance motivated by nature and in situations in which threats or potential losses are salient. Here, we review the relation between avoidance motivation and creativity, and the processes underlying this relation. We highlight the role of optimism as a potential remedy for the creativity undermining effects of avoidance motivation, due to its impact on the underlying processes. Optimism, expecting to succeed in achieving success or avoiding failure, may reduce negative effects of avoidance motivation, as it eases threat appraisals, anxiety, and disengagement—barriers playing a key role in undermining creativity. People experience these barriers more under avoidance than under approach motivation, and beneficial effects of optimism should therefore be more pronounced under avoidance than approach motivation. Moreover, due to their eagerness, approach motivated people may even be more prone to unrealistic over-optimism and its negative consequences. PMID:24616690

  18. Correction of linear-array lidar intensity data using an optimal beam shaping approach

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Wang, Yuanqing; Yang, Xingyu; Zhang, Bingqing; Li, Fenfang

    2016-08-01

    The linear-array lidar has recently been developed and applied for its advantages of vertically non-scanning operation, large field of view, high sensitivity and high precision. The beam shaper is the key component for linear-array detection. However, traditional beam shaping approaches can hardly satisfy the requirement of obtaining unbiased and complete backscattered intensity data: the required beam distribution should be roughly oblate and U-shaped rather than Gaussian or uniform. Thus, an optimal beam shaping approach is proposed in this paper. By employing a pair of conical lenses and a cylindrical lens behind the beam expander, the expanded Gaussian laser is shaped into a line-shaped beam whose intensity distribution is more consistent with the required distribution. To provide a better fit to the requirement, an off-axis method is adopted. The design of the optimal beam shaping module is explained mathematically, and experimental verification of the module performance is also presented. The experimental results indicate that the optimal beam shaping approach can effectively correct the intensity image and provide a ~30% gain in detection area over the traditional approach, thus improving the imaging quality of linear-array lidar.

  19. Finite-dimensional approximation for optimal fixed-order compensation of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Rosen, I. G.

    1988-01-01

    In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.

  20. Shape Optimization and Supremal Minimization Approaches in Landslides Modeling

    SciTech Connect

    Hassani, Riad Ionescu, Ioan R. Lachand-Robert, Thomas

    2005-10-15

    The steady-state unidirectional (anti-plane) flow of a Bingham fluid is considered. We take into account the inhomogeneous yield limit of the fluid, which is well adjusted to the description of landslides. The blocking property is analyzed and we introduce the safety factor, which is connected to two optimization problems stated in terms of velocities and stresses. Concerning the velocity analysis, the minimum problem in BV(Ω) is equivalent to a shape-optimization problem. The optimal set is the part of the land which slides whenever the loading parameter becomes greater than the safety factor. This is proved in the one-dimensional case and conjectured for the two-dimensional flow. For the stress-optimization problem we give a stream function formulation in order to deduce a minimum problem in W^{1,∞}(Ω), and we prove the existence of a minimizer. The L^p(Ω) approximation technique is used to get a sequence of minimum problems for smooth functionals. We propose two numerical approaches following the two analyses presented above. First, we describe a numerical method to compute the safety factor through the equivalence with the shape-optimization problem. Then a finite-element approach and a Newton method are used to obtain a numerical scheme for the stress formulation. Some numerical results are given in order to compare the two methods. The shape-optimization method is sharp in detecting the sliding zones, but its convergence is very sensitive to the choice of parameters. The stress-optimization method is more robust and gives precise safety factors, but its results cannot easily be compiled to obtain the sliding zone.

  1. Adaptive Wing Camber Optimization: A Periodic Perturbation Approach

    NASA Technical Reports Server (NTRS)

    Espana, Martin; Gilyard, Glenn

    1994-01-01

    Available redundancy among aircraft control surfaces allows for effective wing camber modifications. As shown in the past, this fact can be used to improve aircraft performance. To date, however, algorithm developments for in-flight camber optimization have been limited. This paper presents a perturbational approach for cruise optimization through in-flight camber adaptation. The method uses, as a performance index, an indirect measurement of the instantaneous net thrust. As such, the actual performance improvement comes from the integrated effects of airframe and engine. The algorithm, whose design and robustness properties are discussed, is demonstrated on the NASA Dryden B-720 flight simulator.
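
    The periodic-perturbation principle can be shown in a few lines of discrete-time extremum seeking: a small sinusoid is added to the camber command, the measured performance index is demodulated to estimate the local gradient, and the nominal command climbs that gradient (the performance map and gains below are toy assumptions, not the flight algorithm):

        import numpy as np

        J = lambda u: -(u - 1.3) ** 2        # unknown performance map, peak at 1.3
        u_hat, a, w, k, dt = 0.0, 0.2, 5.0, 0.1, 0.01
        for n in range(20000):
            t = n * dt
            u = u_hat + a * np.sin(w * t)    # inject the periodic perturbation
            grad = J(u) * np.sin(w * t) / a  # demodulate: averages to ~J'(u_hat)/2
            u_hat += dt * k * grad           # climb the performance gradient
        print(u_hat)                         # settles near the optimum 1.3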

  2. Optimal control of underactuated mechanical systems: A geometric approach

    NASA Astrophysics Data System (ADS)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  3. Multiobjective genetic approach for optimal control of photoinduced processes

    SciTech Connect

    Bonacina, Luigi; Extermann, Jerome; Rondi, Ariana; Wolf, Jean-Pierre; Boutou, Veronique

    2007-08-15

    We have applied a multiobjective genetic algorithm to the optimization of multiphoton-excited fluorescence. Our study shows the advantages that this approach can offer to experiments based on adaptive shaping of femtosecond pulses. The algorithm outperforms single-objective optimizations, being independent of the bias of user-defined parameters and giving simultaneous access to a large set of feasible solutions. The global inspection of this ensemble is a powerful aid in unraveling the connections between spectral field features of the pulse and the excitation dynamics of the sample.

  4. Multiobjective evolutionary optimization of water distribution systems: Exploiting diversity with infeasible solutions.

    PubMed

    Tanyimboh, Tiku T; Seyoum, Alemtsehay G

    2016-12-01

    This article investigates the computational efficiency of constraint handling in multi-objective evolutionary optimization algorithms for water distribution systems. The methodology investigated here encourages the co-existence and simultaneous development, including crossbreeding, of subpopulations of cost-effective feasible and infeasible solutions based on Pareto dominance. This yields a boundary search approach that also promotes diversity in the gene pool throughout the progress of the optimization by exploiting the full spectrum of non-dominated infeasible solutions. The relative effectiveness of small and moderate population sizes with respect to the number of decision variables is also investigated. The results reveal the optimization algorithm to be efficient, stable and robust; it found optimal and near-optimal solutions reliably and efficiently. The optimization problem, based on a real-world system, involved multiple variable-head supply nodes, 29 fire-fighting flows, extended-period simulation and multiple demand categories including water loss. The least-cost solutions found satisfied the flow and pressure requirements consistently. The best solutions achieved indicative savings of 48.1% and 48.2%, based on the cost of the pipes in the existing network, for populations of 200 and 1000, respectively. The population of 1000 achieved slightly better results overall. PMID:27589918
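
    The core mechanism, ranking feasible and infeasible designs together by Pareto dominance so that promising boundary solutions survive and crossbreed, can be sketched in a few lines (the objective values below are illustrative):

        import numpy as np

        def dominates(a, b):
            """Pareto dominance for minimization, e.g. (cost, head deficit)."""
            a, b = np.asarray(a), np.asarray(b)
            return bool(np.all(a <= b) and np.any(a < b))

        def nondominated(designs):
            return [p for p in designs
                    if not any(dominates(q, p) for q in designs if q is not p)]

        # constraint violation is carried as an objective, so cheap but
        # slightly infeasible designs stay in the gene pool
        designs = [(1.2e6, 0.0), (0.9e6, 0.4), (1.0e6, 0.1), (1.5e6, 0.0)]
        print(nondominated(designs))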

  5. Optimization of pelvic heating rate distributions with electromagnetic phased arrays.

    PubMed

    Paulsen, K D; Geimer, S; Tang, J; Boyse, W E

    1999-01-01

    Deep heating of pelvic tumours with electromagnetic phased arrays has recently been reported to improve local tumour control when combined with radiotherapy in a randomized clinical trial, despite the fact that rather modest elevations in tumour temperature were achieved. It is reasonable to surmise that improvements in temperature elevation could lead to even better tumour response rates, motivating studies that explore the parameter space associated with heating-rate delivery in the pelvis. Computational models based on detailed three-dimensional patient anatomy are readily available and lend themselves to this type of investigation. In this paper, the volume-average SAR is optimized in a predefined target volume subject to a maximum allowable volume-average SAR outside this zone. The variables under study include the position of the target zone, the number and distribution of radiators, and the applicator operating frequency. The results show a clear preference for increasing the frequency beyond the 100 MHz typically applied clinically, especially as the number of antennae increases. Increasing both the number of antennae per circumferential distance around the patient and the number of independently functioning antenna bands along the patient length is important in this regard, although improvements were found to be more significant with increasing circumferential antenna density. However, there is considerable site-specific variation, and cases occur where lower numbers of antennae spread out over multiple longitudinal bands are more advantageous. The results presented here have been normalized relative to an optimized set of antenna array amplitudes and phases operating at 100 MHz, which is a common clinical configuration. The intent is to provide some indication of avenues for improving the heating-rate distributions achievable with current technology.

  6. Aftershock Energy Distribution by Statistical Mechanics Approach

    NASA Astrophysics Data System (ADS)

    Daminelli, R.; Marcellini, A.

    2015-12-01

    The aim of our work is to find the most probable distribution of aftershock energies. We start from one of the fundamental principles of statistical mechanics, which, for aftershock sequences, can be expressed as: the greater the number of different ways in which the energy of aftershocks can be arranged among the energy cells in phase space, the more probable the distribution. We assume that each cell in phase space has the same possibility of being occupied, and that more than one cell in phase space can have the same energy. Since seismic energy is proportional to products of different parameters, a number of different combinations of parameters can produce the same energy (e.g., different combinations of stress drop and fault area can release the same seismic energy). Let us assume that there are g_i cells in the aftershock phase space characterised by the same released energy ε_i. We can therefore assume that Maxwell-Boltzmann statistics apply to aftershock sequences, with the proviso that the judgment on the validity of this hypothesis is the agreement with the data. The aftershock energy distribution can then be written as follows: n(ε) = A g(ε) exp(−βε), where n(ε) is the number of aftershocks with energy ε, and A and β are constants. Under the above hypothesis, we can assume g(ε) is proportional to ε. We selected and analysed different aftershock sequences (data extracted from the earthquake catalogues of SCEC, INGV-CNT and other institutions) with a minimum retained magnitude ML = 2 (in some cases ML = 2.6) and a time window of 35 days. The results of our model are in agreement with the data, except in the very-low-energy band, where our model moderately overestimates the counts.
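
    With g(ε) taken proportional to ε, the model can be fitted directly to binned aftershock counts, for example (the data below are synthetic, units arbitrary):

        import numpy as np
        from scipy.optimize import curve_fit

        def mb_density(eps, A, beta):
            """Maxwell-Boltzmann-type energy distribution with g(ε) ∝ ε."""
            return A * eps * np.exp(-beta * eps)

        eps = np.linspace(0.1, 5, 20)                     # energy bins
        rng = np.random.default_rng(2)
        counts = mb_density(eps, 120.0, 1.3) * np.exp(rng.normal(0, 0.05, 20))
        (A, beta), _ = curve_fit(mb_density, eps, counts, p0=(100.0, 1.0))
        print(A, beta)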

  7. Optimal Control of Distributed Energy Resources using Model Predictive Control

    SciTech Connect

    Mayhorn, Ebony T.; Kalsi, Karanjit; Elizondo, Marcelo A.; Zhang, Wei; Lu, Shuai; Samaan, Nader A.; Butler-Purry, Karen

    2012-07-22

    In an isolated power system (rural microgrid), distributed energy resources (DERs) such as renewable energy resources (wind, solar), energy storage and demand response can be used to complement fossil-fueled generators. The uncertainty and variability due to high penetration of wind make reliable system operation and control challenging. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation. The problem is formulated as a multi-objective optimization problem with the goals of minimizing fuel costs and changes in the power output of diesel generators, minimizing costs associated with low battery life of energy storage, and maintaining system frequency at the nominal operating value. Two control modes are considered for controlling the energy storage to compensate for either net load variability or wind variability. Model predictive control (MPC) is used to solve the aforementioned problem, and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies using high and low wind profiles, as well as different MPC prediction horizons, demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties in wind and demand.
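
    A single receding-horizon step of such a controller can be written as a small convex program, sketched here with cvxpy (the horizon, limits and cost weights are illustrative assumptions, not the paper's model):

        import numpy as np
        import cvxpy as cp

        H, dt = 12, 1.0
        net_load = 5.0 + np.sin(np.arange(H))   # forecast load minus wind (MW)
        soc0, cap = 4.0, 8.0                    # battery state of charge / capacity (MWh)

        diesel = cp.Variable(H)
        batt = cp.Variable(H)                   # +discharge / -charge (MW)
        soc = cp.Variable(H + 1)

        cost = (cp.sum_squares(cp.diff(diesel))     # penalize diesel ramping
                + 0.1 * cp.sum_squares(batt)        # penalize battery wear
                + 0.5 * cp.sum(diesel))             # fuel cost
        cons = [soc[0] == soc0,
                soc[1:] == soc[:-1] - batt * dt,    # storage dynamics
                soc >= 0, soc <= cap,
                diesel >= 0, diesel <= 6,
                cp.abs(batt) <= 2,
                diesel + batt == net_load]          # power balance each hour
        cp.Problem(cp.Minimize(cost), cons).solve()
        print(diesel.value[0], batt.value[0])       # apply only the first move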

  8. A split-optimization approach for obtaining multiple solutions in single-objective process parameter optimization.

    PubMed

    Rajora, Manik; Zou, Pan; Yang, Yao Guang; Fan, Zhi Wen; Chen, Hung Yi; Wu, Wen Chieh; Li, Beizhi; Liang, Steven Y

    2016-01-01

    It can be observed from the experimental data of different processes that different process parameter combinations can lead to the same performance indicators, but during the optimization of process parameters with current techniques, only one of these combinations can be found when a given objective function is specified. The combination of process parameters obtained after optimization may not always be applicable in actual production or may lead to undesired experimental conditions. In this paper, a split-optimization approach is proposed for obtaining multiple solutions in a single-objective process parameter optimization problem. This is accomplished by splitting the original search space into smaller sub-search spaces and using a genetic algorithm (GA) in each sub-search space to optimize the process parameters. Two different methods, i.e., cluster centers and a hill-and-valley splitting strategy, were used to split the original search space, and their efficiency was measured against a method in which the original search space is split into equal smaller sub-search spaces. The proposed approach was used to obtain multiple optimal process parameter combinations for electrochemical micro-machining. The results of the case study showed that the cluster centers and hill-and-valley splitting strategies were more efficient in splitting the original search space than the method in which the original search space is divided into smaller equal sub-search spaces.
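
    The splitting idea itself is simple to sketch: partition the search space, run an independent global optimizer in each part, and keep one candidate solution per part (the multimodal test function below is an assumption):

        import numpy as np
        from scipy.optimize import differential_evolution

        def performance(x):
            # toy response where several settings give similar indicators
            return np.sin(3 * x[0]) ** 2 + 0.1 * (x[0] - 2) ** 2

        lo, hi, n_splits = 0.0, 6.0, 4
        edges = np.linspace(lo, hi, n_splits + 1)
        solutions = []
        for a, b in zip(edges[:-1], edges[1:]):
            res = differential_evolution(performance, [(a, b)], seed=0)
            solutions.append((float(res.x[0]), float(res.fun)))
        print(solutions)      # one candidate optimum per sub-search space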

  10. A common distributed language approach to software integration

    NASA Technical Reports Server (NTRS)

    Antonelli, Charles J.; Volz, Richard A.; Mudge, Trevor N.

    1989-01-01

    An important objective in software integration is the development of techniques to allow programs written in different languages to function together. Several approaches are discussed toward achieving this objective and the Common Distributed Language Approach is presented as the approach of choice.

  11. Optimal dynamic control of invasions: applying a systematic conservation approach.

    PubMed

    Adams, Vanessa M; Setterfield, Samantha A

    2015-06-01

    The social, economic, and environmental impacts of invasive plants are well recognized. However, these variable impacts are rarely accounted for in the spatial prioritization of funding for weed management. We examine how current spatially explicit prioritization methods can be extended to identify optimal budget allocations to both eradication and control measures of invasive species to minimize the costs and likelihood of invasion. Our framework extends recent approaches to systematic prioritization of weed management to account for multiple values that are threatened by weed invasions with a multi-year dynamic prioritization approach. We apply our method to the northern portion of the Daly catchment in the Northern Territory, which has significant conservation values that are threatened by gamba grass (Andropogon gayanus), a highly invasive species recognized by the Australian government as a Weed of National Significance (WONS). We interface Marxan, a widely applied conservation planning tool, with a dynamic biophysical model of gamba grass to optimally allocate funds to eradication and control programs under two budget scenarios comparing maximizing gain (MaxGain) and minimizing loss (MinLoss) optimization approaches. The prioritizations support previous findings that a MinLoss approach is a better strategy when threats are more spatially variable than conservation values. Over a 10-year simulation period, we find that a MinLoss approach reduces future infestations by ~8% compared to MaxGain in the constrained budget scenarios and ~12% in the unlimited budget scenarios. We find that due to the extensive current invasion and rapid rate of spread, allocating the annual budget to control efforts is more efficient than funding eradication efforts when there is a constrained budget. Under a constrained budget, applying the most efficient optimization scenario (control, minloss) reduces spread by ~27% compared to no control. Conversely, if the budget is unlimited it

  12. Pumping strategies for management of a shallow water table: The value of the simulation-optimization approach

    USGS Publications Warehouse

    Barlow, P.M.; Wagner, B.J.; Belitz, K.

    1996-01-01

    The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.

  13. Activity-Centric Approach to Distributed Programming

    NASA Technical Reports Server (NTRS)

    Levy, Renato; Satapathy, Goutam; Lang, Jun

    2004-01-01

    The first phase of an effort to develop a NASA version of the Cybele software system has been completed. To give meaning to even a highly abbreviated summary of the modifications to be embodied in the NASA version, it is necessary to present the following background information on Cybele: Cybele is a proprietary software infrastructure for use by programmers in developing agent-based application programs [complex application programs that contain autonomous, interacting components (agents)]. Cybele provides support for event handling from multiple sources, multithreading, concurrency control, migration, and load balancing. A Cybele agent follows a programming paradigm, called activity-centric programming, that enables an abstraction over system-level thread mechanisms. Activity-centric programming relieves application programmers of the complex tasks of thread management, concurrency control, and event management. In order to provide such functionality, activity-centric programming demands support of other layers of software. This concludes the background information. In the first phase of the present development, a new architecture for Cybele was defined. In this architecture, Cybele follows a modular service-based approach to coupling of the programming and service layers of software architecture. In a service-based approach, the functionalities supported by activity-centric programming are apportioned, according to their characteristics, among several groups called services. A well-defined interface among all such services serves as a path that facilitates the maintenance and enhancement of such services without adverse effect on the whole software framework. The activity-centric application-program interface (API) is part of a kernel. The kernel API calls the services by use of their published interface. This approach makes it possible for any application code written exclusively under the API to be portable to any configuration of Cybele.

  14. Optimal vibration control of curved beams using distributed parameter models

    NASA Astrophysics Data System (ADS)

    Liu, Fushou; Jin, Dongping; Wen, Hao

    2016-12-01

    The design of a linear quadratic optimal controller using the spectral factorization method is studied for vibration suppression of curved beam structures modeled as distributed parameter models. The equations of motion for active control of the in-plane vibration of a curved beam are developed first, considering its shear deformation and rotary inertia; the state space model of the curved beam is then established directly using the partial differential equations of motion. The functional gains for the distributed parameter model of the curved beam are calculated by extending the spectral factorization method. Moreover, the response of the closed-loop control system is derived explicitly in the frequency domain. Finally, the suppression of the vibration at the free end of a cantilevered curved beam by a point control moment is studied through numerical case studies, in which the benefit of the presented method is shown by comparison with a constant-gain velocity feedback control law, and the performance of the presented method on avoidance of control spillover is demonstrated.

  15. A Global Optimization Approach to Multi-Polarity Sentiment Analysis

    PubMed Central

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From
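
    A minimal sketch of this kind of global search follows, assuming synthetic data, mutual information as a stand-in for the IG feature ranking, and an RBF-kernel SVM; the particle encoding (feature count plus log-scaled C and gamma) is an illustrative choice, not the paper's exact design.

      # PSO over (number of top-ranked features, log10 C, log10 gamma);
      # fitness is 3-fold cross-validated SVM accuracy on synthetic data.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import mutual_info_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=300, n_features=100,
                                 n_informative=10, random_state=0)
      rank = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

      def fitness(p):
          k = int(np.clip(p[0], 5, X.shape[1]))       # feature-subset size
          C, gamma = 10.0 ** p[1], 10.0 ** p[2]       # SVM hyperparameters
          return cross_val_score(SVC(C=C, gamma=gamma), X[:, rank[:k]], y, cv=3).mean()

      rng = np.random.default_rng(0)
      lo, hi = np.array([5.0, -2.0, -4.0]), np.array([100.0, 3.0, 1.0])
      pos = rng.uniform(lo, hi, size=(12, 3))
      vel = np.zeros_like(pos)
      pbest, pval = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[pval.argmax()].copy()

      for _ in range(20):                              # PSO main loop
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lo, hi)
          val = np.array([fitness(p) for p in pos])
          improved = val > pval
          pbest[improved], pval[improved] = pos[improved], val[improved]
          gbest = pbest[pval.argmax()].copy()

      print("best (k, log10 C, log10 gamma):", gbest, "CV accuracy:", pval.max())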

  16. Accounting for the tongue-and-groove effect using a robust direct aperture optimization approach

    SciTech Connect

    Salari, Ehsan; Men Chunhua; Romeijn, H. Edwin

    2011-03-15

    Purpose: The tongue-and-groove effect due to the multileaf collimator architecture in intensity-modulated radiation therapy (IMRT) has traditionally been deferred to the leaf sequencing stage. The authors propose a new direct aperture optimization method for IMRT treatment planning that explicitly incorporates dose calculation inaccuracies due to the tongue-and-groove effect into the treatment plan optimization stage. Methods: The authors avoid having to accurately estimate the dosimetric effects of the tongue-and-groove architecture by using lower and upper bounds on the dose distribution delivered to the patient. They then develop a model that yields a treatment plan that is robust with respect to the corresponding dose calculation inaccuracies. Results: Tests on a set of ten clinical head-and-neck cancer cases demonstrate the effectiveness of the new method in developing robust treatment plans with tight dose distributions in targets and critical structures. This is contrasted with the very loose bounds on the dose distribution that are obtained by solving a traditional treatment plan optimization model that ignores tongue-and-groove effects in the treatment planning stage. Conclusions: A robust direct aperture optimization approach is proposed to account for the dosimetric inaccuracies caused by the tongue-and-groove effect. The experiments validate the ability of the proposed approach to design robust treatment plans regardless of the exact consequences of the tongue-and-groove architecture.
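
    The core modeling idea can be written schematically as a robust program over dose bounds; the notation below is assumed for illustration and is not the authors' exact formulation.

      % Schematic robust counterpart: aperture intensities x yield voxel-dose
      % bounds D^L(x) <= d <= D^U(x) whatever the tongue-and-groove dose is.
      \min_{x \ge 0} \; F\big( D^{L}(x), D^{U}(x) \big)
      \quad \text{s.t.} \quad
      D^{U}_{v}(x) \le d^{\max}_{v} \;\; \forall v \in \text{critical structures},
      \qquad
      D^{L}_{v}(x) \ge d^{\min}_{v} \;\; \forall v \in \text{targets}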

  17. Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.

    PubMed

    Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A

    2013-02-01

    Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856

  18. An evolutionary based Bayesian design optimization approach under incomplete information

    NASA Astrophysics Data System (ADS)

    Srivastava, Rupesh; Deb, Kalyanmoy

    2013-02-01

    Design optimization in the absence of complete information about uncertain quantities has recently been gaining consideration, as expensive repetitive computation tasks are becoming tractable with the advent of faster, parallel computers. This work uses Bayesian inference to quantify design reliability when only sample measurements of the uncertain quantities are available. A generalized Bayesian reliability based design optimization algorithm has been proposed and implemented for numerical as well as engineering design problems. The approach uses an evolutionary algorithm (EA) to obtain a trade-off front between design objectives and reliability. The Bayesian approach provides a well-defined link between the amount of available information and the reliability through a confidence measure, and the EA acts as an efficient optimizer for a discrete and multi-dimensional objective space. Additionally, a GPU-based parallelization study shows computational speed-up of close to 100 times in a simulated scenario wherein the constraint qualification checks may be time consuming and could render a sequential implementation impractical for large sample sets. These results show promise for the use of a parallel implementation of EAs in handling design optimization problems under uncertainties.

  19. A mathematical programming approach to stochastic and dynamic optimization problems

    SciTech Connect

    Bertsimas, D.

    1994-12-31

    We propose three ideas for constructing optimal or near-optimal policies: (1) for systems for which we have an exact characterization of the performance space we outline an adaptive greedy algorithm that gives rise to indexing policies (we illustrate this technique in the context of indexable systems); (2) we use integer programming to construct policies from the underlying descriptions of the performance space (we illustrate this technique in the context of polling systems); (3) we use linear control over polyhedral regions to solve deterministic versions for this class of problems. This approach gives interesting insights for the structure of the optimal policy (we illustrate this idea in the context of multiclass queueing networks). The unifying theme in the paper is the thesis that better formulations lead to deeper understanding and better solution methods. Overall the proposed approach for stochastic and dynamic optimization parallels efforts of the mathematical programming community in the last fifteen years to develop sharper formulations (polyhedral combinatorics and more recently nonlinear relaxations) and leads to new insights ranging from a complete characterization and new algorithms for indexable systems to tight lower bounds and new algorithms with provable a posteriori guarantees for their suboptimality for polling systems, multiclass queueing and loss networks.

  1. Optimized probabilistic quantum processors: A unified geometric approach

    NASA Astrophysics Data System (ADS)

    Bergou, Janos; Bagan, Emilio; Feldman, Edgar

    Using probabilistic and deterministic quantum cloning and quantum state separation as illustrative examples, we develop a complete geometric solution for finding their optimal success probabilities. The method is related to the approach that we introduced earlier for the unambiguous discrimination of more than two states. In some cases the method delivers analytical results, in others it leads to intuitive and straightforward numerical solutions. We also present implementations of the schemes based on linear optics employing few-photon interferometry.

  2. The GRG approach for large-scale optimization

    SciTech Connect

    Drud, A.

    1994-12-31

    The Generalized Reduced Gradient (GRG) algorithm for general Nonlinear Programming (NLP) has been used successfully for over 25 years. The ideas of the original GRG algorithm have been modified and have absorbed developments in unconstrained optimization, linear programming, sparse matrix techniques, etc. The talk will review the essential aspects of the GRG approach and will discuss current development trends, especially related to very large models. Examples will be based on the CONOPT implementation.

  3. Particle Swarm and Ant Colony Approaches in Multiobjective Optimization

    NASA Astrophysics Data System (ADS)

    Rao, S. S.

    2010-10-01

    The social behavior of groups of birds, ants, insects and fish has been used to develop evolutionary algorithms known as swarm intelligence techniques for solving optimization problems. This work presents the development of strategies for the application of two of the popular swarm intelligence techniques, namely the particle swarm and ant colony methods, for the solution of multiobjective optimization problems. In a multiobjective optimization problem, the objectives exhibit a conflicting nature and hence no design vector can minimize all the objectives simultaneously. The concept of Pareto-optimal solution is used in finding a compromise solution. A modified cooperative game theory approach, in which each objective is associated with a different player, is used in this work. The applicability and computational efficiencies of the proposed techniques are demonstrated through several illustrative examples involving unconstrained and constrained problems with single and multiple objectives and continuous and mixed design variables. The present methodologies are expected to be useful for the solution of a variety of practical continuous and mixed optimization problems involving single or multiple objectives with or without constraints.

  4. Learning approach to sampling optimization: Applications in astrodynamics

    NASA Astrophysics Data System (ADS)

    Henderson, Troy Allen

    A novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA), is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm and comparing performance with other methods based on two models, of varying complexity, of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.

  5. Computational approaches for microalgal biofuel optimization: a review.

    PubMed

    Koussa, Joseph; Chaiboonchoe, Amphun; Salehi-Ashtiani, Kourosh

    2014-01-01

    The increased demand and consumption of fossil fuels have raised interest in finding renewable energy sources throughout the globe. Much focus has been placed on optimizing microorganisms, primarily microalgae, to efficiently produce compounds that can substitute for fossil fuels. However, the path to achieving economic feasibility is likely to require strain optimization using available tools and technologies in the fields of systems and synthetic biology. Such approaches invoke a deep understanding of the metabolic networks of the organisms and their genomic and proteomic profiles. The advent of next generation sequencing and other high throughput methods has led to a major increase in availability of biological data. Integration of such disparate data can help define the emergent metabolic system properties, which is of crucial importance in addressing biofuel production optimization. Herein, we review major computational tools and approaches developed and used in order to potentially identify target genes, pathways, and reactions of particular interest to biofuel production in algae. As the use of these tools and approaches has not been fully implemented in algal biofuel research, the aim of this review is to highlight the potential utility of these resources toward their future implementation in algal research. PMID:25309916

  6. Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach

    NASA Astrophysics Data System (ADS)

    Pinto, Rafael S.; Saa, Alberto

    2015-12-01

    A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic form ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically, and in a simpler way, from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of the Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
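
    The maximization condition suggests a simple numerical recipe. The following hill-climb rewiring sketch, with a random graph and random natural frequencies as illustrative assumptions, accepts an edge swap whenever it increases ω^T L ω while keeping the network connected.

      # Hill-climb rewiring that maximizes omega^T L omega at fixed edge count.
      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(1)
      N, M = 20, 40
      G = nx.gnm_random_graph(N, M, seed=1)       # assumed starting topology
      omega = rng.standard_normal(N)              # natural frequencies

      def objective(G):
          L = nx.laplacian_matrix(G).toarray()
          return omega @ L @ omega

      best = objective(G)
      for _ in range(2000):
          old = list(G.edges())[rng.integers(G.number_of_edges())]
          candidates = list(nx.non_edges(G))
          new = candidates[rng.integers(len(candidates))]
          G.remove_edge(*old)
          G.add_edge(*new)                         # propose a rewiring move
          if nx.is_connected(G) and objective(G) > best:
              best = objective(G)                  # accept improving move
          else:
              G.remove_edge(*new)
              G.add_edge(*old)                     # otherwise revert

      print("final omega^T L omega:", best)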

  7. Distributed Bees Algorithm Parameters Optimization for a Cost Efficient Target Allocation in Swarms of Robots

    PubMed Central

    Jevtić, Aleksandar; Gutiérrez, Álvaro

    2011-01-01

    Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the Distributed Bees Algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA’s control parameters by means of a Genetic Algorithm. Experimental results show that with the optimized set of parameters, the deployment cost measured as the average distance traveled by the robots is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robots’ distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce. PMID:22346677

  8. Optimal nitrogen distribution within a leaf canopy under direct and diffuse light.

    PubMed

    Hikosaka, Kouki

    2014-09-01

    Nitrogen distribution within a leaf canopy is an important determinant of canopy carbon gain. Previous theoretical studies have predicted that canopy photosynthesis is maximized when the amount of photosynthetic nitrogen is proportionally allocated to the absorbed light. However, most such studies used simple Beer's-law light extinction to calculate the optimal distribution, and it is not known whether this result holds when direct and diffuse light are considered together. Here, using an analytical solution and model simulations, optimal nitrogen distribution is shown to be very different between models using Beer's law and direct-diffuse light. The presented results demonstrate that optimal nitrogen distribution under direct-diffuse light is steeper than that under diffuse light only. The whole-canopy carbon gain is considerably increased by optimizing nitrogen distribution compared with that in actual canopies in which nitrogen distribution is not optimized. This suggests that optimization of nitrogen distribution can be an effective target trait for improving plant productivity.
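
    For reference, the classical Beer's-law result can be stated compactly (schematic notation; the direct-diffuse generalization studied here has no such closed form):

      % I_0: incident light; K: canopy extinction coefficient; F: cumulative
      % leaf area index from the canopy top; N_b: base leaf nitrogen content.
      I(F) = I_{0} \, e^{-K F},
      \qquad
      N(F) - N_{b} \propto I(F) \quad \text{(optimum under Beer's law only)}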

  9. Optimal Service Distribution in WSN Service System Subject to Data Security Constraints

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Gu, Qiong

    2014-01-01

    Services composition technology provides a flexible approach to building Wireless Sensor Network (WSN) Service Applications (WSA) in a service oriented tasking system for WSN. Maintaining the data security of WSA is one of the most important goals in sensor network research. In this paper, we consider a WSN service oriented tasking system in which the WSN Services Broker (WSB), as the resource management center, can map the service request from the user into a set of atom-services (AS) and send them to some independent sensor nodes (SN) for parallel execution. The distribution of ASs among these SNs affects the data security as well as the reliability and performance of WSA because these SNs can be of different and independent specifications. Through optimal partition of the service into ASs and their distribution among SNs, the WSB can provide the maximum possible service reliability and/or expected performance subject to data security constraints. This paper proposes an algorithm of optimal service partition and distribution based on the universal generating function (UGF) and the genetic algorithm (GA) approach. The experimental analysis is presented to demonstrate the feasibility of the suggested algorithm. PMID:25093346

  10. Adaptive optimal control of highly dissipative nonlinear spatially distributed processes with neuro-dynamic programming.

    PubMed

    Luo, Biao; Wu, Huai-Ning; Li, Han-Xiong

    2015-04-01

    Highly dissipative nonlinear partial differential equations (PDEs) are widely employed to describe the system dynamics of industrial spatially distributed processes (SDPs). In this paper, we consider the optimal control problem of the general highly dissipative SDPs, and propose an adaptive optimal control approach based on neuro-dynamic programming (NDP). Initially, Karhunen-Loève decomposition is employed to compute empirical eigenfunctions (EEFs) of the SDP based on the method of snapshots. These EEFs together with singular perturbation technique are then used to obtain a finite-dimensional slow subsystem of ordinary differential equations that accurately describes the dominant dynamics of the PDE system. Subsequently, the optimal control problem is reformulated on the basis of the slow subsystem, which is further converted to solve a Hamilton-Jacobi-Bellman (HJB) equation. The HJB equation is a nonlinear PDE that is in general impossible to solve analytically. Thus, an adaptive optimal control method is developed via NDP that solves the HJB equation online using a neural network (NN) to approximate the value function; and an online NN weight tuning law is proposed without requiring an initial stabilizing control policy. Moreover, by involving the NN estimation error, we prove that the original closed-loop PDE system with the adaptive optimal control policy is semiglobally uniformly ultimately bounded. Finally, the developed method is tested on a nonlinear diffusion-convection-reaction process and applied to a temperature cooling fin of a high-speed aerospace vehicle, and the achieved results show its effectiveness. PMID:25794375
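
    For orientation, the HJB equation that such an NDP scheme solves online has the generic form below (schematic notation for a control-affine slow subsystem; not the paper's exact equations):

      % x: slow-subsystem state; u: control; V: value function.
      0 = \min_{u} \Big[ Q(x) + u^{T} R u
            + \nabla V(x)^{T} \big( f(x) + g(x) u \big) \Big],
      \qquad
      u^{*}(x) = -\tfrac{1}{2} R^{-1} g(x)^{T} \nabla V(x)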

  11. Bifurcation-based approach reveals synergism and optimal combinatorial perturbation.

    PubMed

    Liu, Yanwei; Li, Shanshan; Liu, Zengrong; Wang, Ruiqi

    2016-06-01

    Cells accomplish the process of fate decisions and form terminal lineages through a series of binary choices in which cells switch stable states from one branch to another as the interacting strengths of regulatory factors continuously vary. Various combinatorial effects may occur because almost all regulatory processes are managed in a combinatorial fashion. Combinatorial regulation is crucial for cell fate decisions because it may effectively integrate many different signaling pathways to meet the higher regulation demand during cell development. However, whether combinatorial regulation contributes more to the state transition than a single perturbation does and, if so, what the optimal combination strategy is, are significant issues from the point of view of both biology and mathematics. Using the approaches of combinatorial perturbations and bifurcation analysis, we provide a general framework for the quantitative analysis of synergism in molecular networks. Different from the known methods, the bifurcation-based approach depends only on stable state responses to stimuli because the state transition induced by combinatorial perturbations occurs between stable states. More importantly, an optimal combinatorial perturbation strategy can be determined by investigating the relationship between the bifurcation curve of a synergistic perturbation pair and the level set of a specific objective function. The approach is applied to two models, i.e., a theoretical multistable decision model and a biologically realistic CREB model, to show its validity, although the approach holds for a general class of biological systems.

  12. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criteria in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if no tradeoff has to be made to move to a new operating point. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios

  13. Conductance Distributions for Empirical Orthogonal Function Analysis and Optimal Interpolation

    NASA Astrophysics Data System (ADS)

    Knipp, Delores; McGranaghan, Ryan; Matsuo, Tomoko

    2016-04-01

    We show the first characterizations of the primary modes of ionospheric Hall and Pedersen conductance variability as empirical orthogonal functions (EOFs). These are derived from six satellite years of Defense Meteorological Satellite Program (DMSP) particle data acquired during the rise of solar cycles 22 and 24. The 60 million DMSP spectra were each processed through the Global Airglow Model. This is the first large-scale analysis of ionospheric conductances completely free of assumptions about the incident electron energy spectra. We show that the mean patterns and first four EOFs capture ~50.1% and ~52.9% of the total Pedersen and Hall conductance variabilities, respectively. The mean patterns and first EOFs are consistent with typical diffuse auroral oval structures and quiet time strengthening/weakening of the mean pattern. The second and third EOFs show major disturbance features of magnetosphere-ionosphere (MI) interactions: geomagnetically induced auroral zone expansion in EOF2 and the auroral substorm current wedge in EOF3. The fourth EOFs suggest diminished conductance associated with ionospheric substorm recovery mode. These EOFs are then used in a new optimal interpolation (OI) technique to estimate complete high-latitude ionospheric conductance distributions. The technique combines particle precipitation-based calculations of ionospheric conductances and their errors with a background model and its error covariance (estimated by EOF analysis) to infer complete distributions of the high-latitude ionospheric conductances for a week in late 2011. The OI technique captures smaller-scale ionospheric conductance features associated with discrete precipitation and brings ground- and space-based data into closer agreement. We show quantitatively and qualitatively that this new technique provides better ionospheric conductance specification than past statistical models, especially during heightened geomagnetic activity.
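
    Computationally, an EOF analysis of this kind reduces to a singular value decomposition of the mean-removed data matrix. A minimal sketch follows, with synthetic data standing in for the DMSP-derived conductance maps:

      # EOF decomposition via SVD: rows are time samples, columns grid cells.
      import numpy as np

      rng = np.random.default_rng(0)
      t = np.linspace(0, 2 * np.pi, 500)[:, None]
      grid = np.linspace(0, 1, 1000)[None, :]
      field = (np.sin(t) * grid + 0.5 * np.cos(2 * t) * grid**2
               + 0.1 * rng.standard_normal((500, 1000)))   # synthetic stand-in

      U, s, Vt = np.linalg.svd(field - field.mean(axis=0), full_matrices=False)
      eofs = Vt                                    # rows are the spatial EOFs
      variance_explained = s**2 / np.sum(s**2)
      print("first four EOFs explain", variance_explained[:4].sum())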

  14. Simultaneous optimization of dose distributions and fractionation schemes in particle radiotherapy

    SciTech Connect

    Unkelbach, Jan; Zeng, Chuan; Engelsman, Martijn

    2013-09-15

    Purpose: The paper considers the fractionation problem in intensity modulated proton therapy (IMPT). Conventionally, IMPT fields are optimized independently of the fractionation scheme. In this work, we discuss the simultaneous optimization of fractionation scheme and pencil beam intensities. Methods: This is performed by allowing for distinct pencil beam intensities in each fraction, which are optimized using objective and constraint functions based on biologically equivalent dose (BED). The paper presents a model that mimics an IMPT treatment with a single incident beam direction for which the optimal fractionation scheme can be determined despite the nonconvexity of the BED-based treatment planning problem. Results: For this model, it is shown that a small α/β ratio in the tumor gives rise to a hypofractionated treatment, whereas a large α/β ratio gives rise to hyperfractionation. It is further demonstrated that, for intermediate α/β ratios in the tumor, a nonuniform fractionation scheme emerges, in which it is optimal to deliver different dose distributions in subsequent fractions. The intuitive explanation for this phenomenon is as follows: By varying the dose distribution in the tumor between fractions, the same total BED can be achieved with a lower physical dose. If it is possible to achieve this dose variation in the tumor without varying the dose in the normal tissue (which would have an adverse effect), the reduction in physical dose may lead to a net reduction of the normal tissue BED. For proton therapy, this is indeed possible to some degree because the entrance dose is mostly independent of the range of the proton pencil beam. Conclusions: The paper provides conceptual insight into the interdependence of optimal fractionation schemes and the spatial optimization of dose distributions. It demonstrates the emergence of nonuniform fractionation schemes that arise from the standard BED model when IMPT fields and fractionation scheme are optimized.
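
    The BED referred to throughout is commonly written in the standard linear-quadratic form (n fractions of dose d; α/β is the tissue-specific sensitivity ratio):

      \mathrm{BED} = n \, d \left( 1 + \frac{d}{\alpha/\beta} \right)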

  15. Unsteady Adjoint Approach for Design Optimization of Flapping Airfoils

    NASA Technical Reports Server (NTRS)

    Lee, Byung Joon; Liou, Meng-Sing

    2012-01-01

    This paper describes work on optimizing the propulsive efficiency of flapping airfoils, i.e., improving thrust while constraining aerodynamic work during flapping flight, by changing the airfoil shape and trajectory of motion with the unsteady discrete adjoint approach. For unsteady problems, it is essential to properly resolve the time scales of the motion under consideration, and the time resolution must be compatible with the objective sought. We include both the instantaneous and time-averaged (periodic) formulations in this study. For the design optimization with shape parameters or motion parameters, the time-averaged objective function is found to be more useful, while the instantaneous one is more suitable for flow control. The instantaneous objective function is operationally straightforward. On the other hand, the time-averaged objective function requires additional steps in the adjoint approach; the unsteady discrete adjoint equations for a periodic flow must be reformulated and the corresponding system of equations solved iteratively. We compare the design results from shape and trajectory optimizations and investigate the physical relevance of design variables to the flapping motion at on- and off-design conditions.
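
    The discrete adjoint machinery invoked here has the generic form below (schematic notation, not the paper's exact equations; R is the discretized flow residual, U the flow state, and α the shape or trajectory design variables):

      \Big( \frac{\partial R}{\partial U} \Big)^{T} \lambda
        = - \Big( \frac{\partial J}{\partial U} \Big)^{T},
      \qquad
      \frac{d J}{d \alpha}
        = \frac{\partial J}{\partial \alpha}
        + \lambda^{T} \frac{\partial R}{\partial \alpha}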

  16. Portfolio optimization in enhanced index tracking with goal programming approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. Enhanced index tracking aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio to maximize the mean return and minimize the risk. The objective of this paper is to determine the portfolio composition and performance using the goal programming approach in enhanced index tracking and to compare it to the market index. Goal programming is a branch of multi-objective optimization that can handle decision problems involving the two competing goals in enhanced index tracking: maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio obtained with the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, achieving a higher mean return and lower risk without purchasing all the stocks in the index.
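
    A schematic goal-programming formulation of this trade-off, with assumed notation (portfolio weights w, under/over-achievement deviations d-, d+, return goal g_1 and risk goal g_2), is:

      \min \; d_{1}^{-} + d_{2}^{+}
      \quad \text{s.t.} \quad
      \mu^{T} w + d_{1}^{-} - d_{1}^{+} = g_{1}, \;\;
      \sigma(w) + d_{2}^{-} - d_{2}^{+} = g_{2}, \;\;
      \mathbf{1}^{T} w = 1, \; w \ge 0, \; d^{\pm} \ge 0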

  17. Optimal eavesdropping on quantum key distribution without quantum memory

    NASA Astrophysics Data System (ADS)

    Bocquet, Aurélien; Alléaume, Romain; Leverrier, Anthony

    2012-01-01

    We consider the security of the BB84 (Bennett and Brassard 1984 Proc. IEEE Int. Conf. on Computers, Systems, and Signal Processing pp 175-9), six-state (Bruß 1998 Phys. Rev. Lett. http://dx.doi.org/10.1103/PhysRevLett.81.3018) and SARG04 (Scarani et al 2004 Phys. Rev. Lett. http://dx.doi.org/10.1103/PhysRevLett.92.057901) quantum key distribution protocols when the eavesdropper does not have access to a quantum memory. In this case, Eve’s most general strategy is to measure her ancilla with an appropriate positive operator-valued measure designed to take advantage of the post-measurement information that will be released during the sifting phase of the protocol. After an optimization on all the parameters accessible to Eve, our method provides us with new bounds for the security of six-state and SARG04 against a memoryless adversary. In particular, for the six-state protocol we show that the maximum quantum bit error ratio for which a secure key can be extracted is increased from 12.6% (for collective attacks) to 20.4% with the memoryless assumption.

  1. Optimizing communication satellites payload configuration with exact approaches

    NASA Astrophysics Data System (ADS)

    Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi

    2015-12-01

    The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.

  2. Standardized approach for developing probabilistic exposure factor distributions

    SciTech Connect

    Maddalena, Randy L.; McKone, Thomas E.; Sohn, Michael D.

    2003-03-01

    The effectiveness of a probabilistic risk assessment (PRA) depends critically on the quality of input information that is available to the risk assessor and specifically on the probabilistic exposure factor distributions that are developed and used in the exposure and risk models. Deriving probabilistic distributions for model inputs can be time consuming and subjective. The absence of a standard approach for developing these distributions can result in PRAs that are inconsistent and difficult to review by regulatory agencies. We present an approach that reduces subjectivity in the distribution development process without limiting the flexibility needed to prepare relevant PRAs. The approach requires two steps. First, we analyze data pooled at a population scale to (1) identify the most robust demographic variables within the population for a given exposure factor, (2) partition the population data into subsets based on these variables, and (3) construct archetypal distributions for each subpopulation. Second, we sample from these archetypal distributions according to site- or scenario-specific conditions to simulate exposure factor values and use these values to construct the scenario-specific input distribution. It is envisaged that the archetypal distributions from step 1 will be generally applicable, so risk assessors will not have to repeatedly collect and analyze raw data for each new assessment. We demonstrate the approach for two commonly used exposure factors, body weight (BW) and exposure duration (ED), using data for the U.S. population. For these factors we provide a first set of subpopulation-based archetypal distributions along with a methodology for using these distributions to construct relevant scenario-specific probabilistic exposure factor distributions.
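
    The two-step workflow can be sketched in a few lines; the subgrouping variable, the lognormal family, and all numbers below are illustrative assumptions, not the paper's fitted archetypes.

      # Step 1: fit archetypal distributions per demographic subgroup;
      # Step 2: sample them under scenario-specific conditions.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      bw = {"male": rng.lognormal(4.4, 0.18, 2000),      # synthetic body weights
            "female": rng.lognormal(4.2, 0.20, 2000)}
      archetypes = {g: stats.lognorm.fit(x, floc=0) for g, x in bw.items()}

      # Scenario: a site population assumed to be 60% female.
      mix = rng.choice(["male", "female"], p=[0.4, 0.6], size=10000)
      sample = np.concatenate([
          stats.lognorm.rvs(*archetypes[g], size=int(np.sum(mix == g)),
                            random_state=rng)
          for g in archetypes
      ])
      print(sample.mean(), np.percentile(sample, 95))    # scenario-specific stats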

  3. Robust optimization approach to regional wastewater system planning.

    PubMed

    Zeferino, João A; Cunha, Maria C; Antunes, António P

    2012-10-30

    Wastewater systems are subject to several sources of uncertainty. Different scenarios can occur in the future, depending on the behavior of a variety of demographic, economic, environmental, and technological variables. Robust optimization approaches are aimed at finding solutions that will perform well under any likely scenario. The decisions to be made in regional wastewater system planning involve two main issues: the setup and operation costs of sewer networks, treatment plants, and possible pump stations; and the water quality parameters to be met in the water body where the (treated) wastewater is discharged. The source of uncertainty considered in this article is the flow of the river that receives the wastewater generated in a given region. Three robust optimization models for regional wastewater system planning are proposed. The models are solved using a simulated annealing algorithm enhanced with a local improvement procedure. Their application is illustrated through a case study representing a real-world situation, with the results being compared and commented upon.
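
    A bare-bones version of the named search strategy, simulated annealing followed by a local-improvement polish, is sketched below on a toy objective; the cost function and move set are placeholders, not the sewer-network model.

      # Simulated annealing with geometric cooling plus a greedy polish.
      import math
      import random

      random.seed(0)

      def cost(x):                           # toy stand-in for setup+operation cost
          return sum((xi - 0.7) ** 2 for xi in x)

      def neighbor(x):                       # perturb one component within [0, 1]
          y = list(x)
          i = random.randrange(len(y))
          y[i] = min(1.0, max(0.0, y[i] + random.uniform(-0.1, 0.1)))
          return y

      x = [random.random() for _ in range(10)]
      best, T = list(x), 1.0
      for step in range(5000):
          y = neighbor(x)
          d = cost(y) - cost(x)
          if d < 0 or random.random() < math.exp(-d / T):
              x = y                          # accept downhill, or uphill w.p. e^{-d/T}
          if cost(x) < cost(best):
              best = list(x)
          T *= 0.999                         # geometric cooling schedule

      for _ in range(200):                   # local improvement around the incumbent
          y = neighbor(best)
          if cost(y) < cost(best):
              best = y
      print(cost(best))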

  4. Optimal trading strategies—a time series approach

    NASA Astrophysics Data System (ADS)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.

  5. Stochastic real-time optimal control: A pseudospectral approach for bearing-only trajectory optimization

    NASA Astrophysics Data System (ADS)

    Ross, Steven M.

    A method is presented to couple and solve the optimal control and the optimal estimation problems simultaneously, allowing systems with bearing-only sensors to maneuver to obtain observability for relative navigation without unnecessarily detracting from a primary mission. A fundamentally new approach to trajectory optimization and the dual control problem is presented, constraining polynomial approximations of the Fisher Information Matrix to provide an information gradient and allow prescription of the level of future estimation certainty required for mission accomplishment. Disturbances, modeling deficiencies, and corrupted measurements are addressed recursively using Radau pseudospectral collocation methods and sequential quadratic programming for the optimal path and an Unscented Kalman Filter for the target position estimate. The underlying real-time optimal control (RTOC) algorithm is developed, specifically addressing limitations of current techniques that lose error integration. The resulting guidance method can be applied to any bearing-only system, such as submarines using passive sonar, anti-radiation missiles, or small UAVs seeking to land on power lines for energy harvesting. System integration, variable timing methods, and discontinuity management techniques are provided for actual hardware implementation. Validation is accomplished with both simulation and flight test, autonomously landing a quadrotor helicopter on a wire.

  6. A Bayesian optimization approach for wind farm power maximization

    NASA Astrophysics Data System (ADS)

    Park, Jinkyoo; Law, Kincho H.

    2015-03-01

    The objective of this study is to develop a model-free optimization algorithm to improve the total wind farm power production in a cooperative game framework. Conventionally, for a given wind condition, an individual wind turbine maximizes its own power production without taking into consideration the conditions of other wind turbines. Under this greedy control strategy, the wake formed by the upstream wind turbine, due to the reduced wind speed and the increased turbulence intensity inside the wake, would affect and lower the power productions of the downstream wind turbines. To increase the overall wind farm power production, researchers have proposed cooperative wind turbine control approaches to coordinate the actions that mitigate the wake interference among the wind turbines and thus increase the total wind farm power production. This study explores the use of a data-driven optimization approach to identify the optimum coordinated control actions in real time using a limited amount of data. Specifically, we propose the Bayesian Ascent (BA) method that combines the strengths of Bayesian optimization and trust region optimization algorithms. Using Gaussian Process regression, BA requires only a small number of data points to model the complex target system. Furthermore, due to the use of a trust region constraint on the sampling procedure, BA tends to increase the target value and converge toward near the optimum. Simulation studies using analytical functions show that the BA method can achieve an almost monotone increase in a target value with rapid convergence. BA is also implemented and tested in a laboratory setting to maximize the total power using two scaled wind turbine models.
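
    A sketch of a Bayesian-ascent-style iteration on a toy one-dimensional response follows: fit a Gaussian process to the observed (setting, power) pairs, then move to the surrogate maximizer within a trust region around the incumbent. The objective and all constants are illustrative assumptions, not the authors' wind-farm model.

      # GP surrogate + trust-region step on a hypothetical power response.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def power(a):                            # hypothetical total-power curve
          return -(a - 0.6) ** 2 + 1.0

      rng = np.random.default_rng(0)
      X = rng.uniform(0, 1, 5).reshape(-1, 1)  # initial control settings
      y = power(X).ravel()

      for _ in range(10):
          gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)
          x0 = X[np.argmax(y), 0]              # incumbent best setting
          cand = np.clip(x0 + np.linspace(-0.1, 0.1, 201), 0, 1).reshape(-1, 1)
          x_next = cand[np.argmax(gp.predict(cand))]   # surrogate maximizer
          X = np.vstack([X, [x_next]])
          y = np.append(y, power(x_next))      # evaluate the new setting

      print("best setting:", X[np.argmax(y), 0], "power:", y.max())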

  7. Direct and Evolutionary Approaches for Optimal Receiver Function Inversion

    NASA Astrophysics Data System (ADS)

    Dugda, Mulugeta Tuji

    Receiver functions are time series obtained by deconvolving vertical component seismograms from radial component seismograms. Receiver functions represent the impulse response of the earth structure beneath a seismic station. Generally, receiver functions consist of a number of seismic phases related to discontinuities in the crust and upper mantle. The relative arrival times of these phases are correlated with the locations of discontinuities as well as the media of seismic wave propagation. The Moho (Mohorovicic discontinuity) is a major interface or discontinuity that separates the crust and the mantle. In this research, automatic techniques to determine the depth of the Moho from the earth's surface (the crustal thickness H) and the ratio of crustal seismic P-wave velocity (Vp) to S-wave velocity (Vs) (kappa = Vp/Vs) were developed. In this dissertation, an optimization problem of inverting receiver functions has been developed to determine crustal parameters and the three associated weights using evolutionary and direct optimization techniques. The first technique developed makes use of the evolutionary Genetic Algorithms (GA) optimization technique. The second technique developed combines the direct Generalized Pattern Search (GPS) and evolutionary Fitness Proportionate Niching (FPN) techniques by employing their strengths. In a previous study, a Monte Carlo technique was utilized for determining variable weights in the H-kappa stacking of receiver functions. Compared to that previously introduced variable-weights approach, the current GA and GPS-FPN techniques offer the advantage of substantial time savings, and these new techniques are suitable for automatic and simultaneous determination of crustal parameters and appropriate weights. The GA implementation provides optimal or near-optimal weights necessary in stacking receiver functions as well as optimal H and kappa values simultaneously. Generally, the objective function of the H-kappa stacking problem
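
    For context, the H-kappa stacking objective that such weight-selection schemes optimize is commonly written as follows (schematic; the amplitudes r(t) are read from the receiver function at the predicted arrival times of the Moho Ps conversion and its crustal multiples, with weights summing to one):

      % w_1 + w_2 + w_3 = 1 are the weights being determined.
      s(H, \kappa) = w_{1} \, r(t_{Ps}) + w_{2} \, r(t_{PpPs}) - w_{3} \, r(t_{PpSs+PsPs})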

  8. Perspective: Codesign for materials science: An optimal learning approach

    NASA Astrophysics Data System (ADS)

    Lookman, Turab; Alexander, Francis J.; Bishop, Alan R.

    2016-05-01

    A key element of materials discovery and design is to learn from available data and prior knowledge to guide the next experiments or calculations in order to focus in on materials with targeted properties. We suggest that the tight coupling and feedback between experiments, theory and informatics demands a codesign approach, very reminiscent of computational codesign involving software and hardware in computer science. This requires dealing with a constrained optimization problem in which uncertainties are used to adaptively explore and exploit the predictions of a surrogate model to search the vast high dimensional space where the desired material may be found.

  9. Codon Optimizing for Increased Membrane Protein Production: A Minimalist Approach.

    PubMed

    Mirzadeh, Kiavash; Toddo, Stephen; Nørholm, Morten H H; Daley, Daniel O

    2016-01-01

    Reengineering a gene with synonymous codons is a popular approach for increasing production levels of recombinant proteins. Here we present a minimalist alternative to this method, which samples synonymous codons only at the second and third positions rather than the entire coding sequence. As demonstrated with two membrane-embedded transporters in Escherichia coli, the method was more effective than optimizing the entire coding sequence. The method we present is PCR based and requires three simple steps: (1) the design of two PCR primers, one of which is degenerate; (2) the amplification of a mini-library by PCR; and (3) screening for high-expressing clones. PMID:27485329

  10. An integrated source/mask/DSA optimization approach

    NASA Astrophysics Data System (ADS)

    Fühner, Tim; Michalak, Przemysław; Welling, Ulrich; Orozco-Rey, Juan Carlos; Müller, Marcus; Erdmann, Andreas

    2016-03-01

    The introduction of DSA for lithography is still obstructed by a number of technical issues including the lack of a comprehensive computational platform. This work presents a direct source/mask/DSA optimization (SMDSAO) method, which incorporates standard lithographic metrics and figures of merit such as the maximization of process windows. The procedure is demonstrated for a contact doubling example, assuming grapho-epitaxy-DSA. To retain a feasible runtime, a geometry-based Interface Hamiltonian DSA model is employed. The feasibility of this approach is demonstrated through several results and their comparison with more rigorous DSA models.

  11. Multiplicative approximations, optimal hypervolume distributions, and the choice of the reference point.

    PubMed

    Friedrich, Tobias; Neumann, Frank; Thyssen, Christian

    2015-01-01

    Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing a fixed number of points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the influence of the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations. PMID:24654679
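
    For the bi-objective (minimization) case studied here, the hypervolume of a non-dominated point set with respect to a reference point reduces to a sum of rectangle areas; a small sketch with a made-up front and reference point:

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume of a bi-objective front (minimization) w.r.t. a reference
    point; assumes the points are mutually non-dominated and dominate ref."""
    pts = front[np.argsort(front[:, 0])]        # ascending in objective 1
    x_edges = np.append(pts[1:, 0], ref[0])     # right edge of each slab
    return float(np.sum((x_edges - pts[:, 0]) * (ref[1] - pts[:, 1])))

front = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 1.0]])
print(hypervolume_2d(front, ref=(5.0, 5.0)))    # 1 + 3 + 8 = 12
```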

  12. Optimal reconstruction of reaction rates from particle distributions

    NASA Astrophysics Data System (ADS)

    Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier

    2010-05-01

    Random walk particle tracking methodologies to simulate solute transport of conservative species constitute an attractive alternative for their computational efficiency and absence of numerical dispersion. Yet, problems stemming from the reconstruction of concentrations from particle distributions have typically prevented their use in reactive transport problems. The numerical problem mainly arises from the need to first reconstruct the concentrations of species/components from a discrete number of particles, which is an error-prone process, and then to compute a spatial functional of the concentrations and/or their derivatives (either spatial or temporal). Errors are then propagated, so that common strategies to reconstruct this functional require an unfeasible number of particles when dealing with nonlinear reactive transport problems. In this context, this article presents a methodology to directly reconstruct this functional based on kernel density estimators. The methodology mitigates the error propagation in the evaluation of the functional by avoiding the prior estimation of the actual concentrations of species. The multivariate kernel associated with the corresponding functional depends on the size of the support volume, which defines the area over which a given particle can influence the functional. The shape of the kernel functions and the size of the support volume determine the degree of smoothing, which is optimized to obtain the best unbiased predictor of the functional using an iterative plug-in support volume selector. We applied the methodology to directly reconstruct the reaction rates of a precipitation/dissolution problem involving the mixing of two different waters carrying two aqueous species in chemical equilibrium and moving through a randomly heterogeneous porous medium.
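
    A stripped-down, one-dimensional sketch of the kernel idea (Gaussian kernel; the particle cloud and bandwidth are illustrative, and the paper's iterative plug-in selector for the support volume is not reproduced):

```python
import numpy as np

def kde_field(particles, masses, x_grid, h):
    """Gaussian-kernel reconstruction of a field carried by particles; the
    bandwidth h plays the role of the support volume that the paper
    optimizes to obtain the best unbiased predictor of the functional."""
    u = (x_grid[:, None] - particles[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return (masses[None, :] * K).sum(axis=1) / h

rng = np.random.default_rng(0)
xp = rng.normal(0.0, 1.0, 500)                    # particle positions
c = kde_field(xp, np.full(500, 1.0 / 500), np.linspace(-3, 3, 61), h=0.3)
```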

  13. Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bekele, E. G.; Nicklow, J. W.

    2005-12-01

    Hydrologic simulation models need to be calibrated and validated before being used for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm that is inspired by the social behavior of bird flocking and fish schooling. The newly-developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically-based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
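
    A generic PSO loop of the kind this record describes, with a toy error surface standing in for the SWAT streamflow/sediment calibration objective:

```python
import numpy as np

def pso(objective, lb, ub, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Bare-bones particle swarm optimizer: objective maps a parameter
    vector to a calibration error; lb, ub are the parameter bounds."""
    rng = np.random.default_rng(42)
    dim = len(lb)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# toy usage: calibrate two parameters against a quadratic "error surface"
best, err = pso(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2,
                lb=np.array([0.0, 0.0]), ub=np.array([1.0, 1.0]))
```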

  14. Aerodynamic Shape Optimization Using A Combined Distributed/Shared Memory Paradigm

    NASA Technical Reports Server (NTRS)

    Cheung, Samson; Holst, Terry

    1999-01-01

    Current parallel computational approaches involve distributed and shared memory paradigms. In the distributed memory paradigm, each processor has its own independent memory. Message passing typically uses a function library such as MPI or PVM. In the shared memory paradigm, such as that used on the SGI Origin 2000 machine, compiler directives are used to instruct the compiler to schedule multiple threads to perform calculations. In this paradigm, it must be assured that processors (threads) do not simultaneously access regions of memory in such a way that errors would occur. This paper utilizes the latest version of the SGI MPI function library to combine the two parallelization paradigms to perform aerodynamic shape optimization of a generic wing/body.

  15. Distributed Generators Allocation in Radial Distribution Systems with Load Growth using Loss Sensitivity Approach

    NASA Astrophysics Data System (ADS)

    Kumar, Ashwani; Vijay Babu, P.; Murty, V. V. S. N.

    2016-07-01

    Rapidly increasing electricity demands and capacity shortages of transmission and distribution facilities are the main driving forces for the growth of distributed generation (DG) integration in power grids. One of the reasons for choosing a DG is its ability to support voltage in a distribution system. Selection of effective DG characteristics and DG parameters is a significant concern of distribution system planners seeking to obtain the maximum potential benefits from the DG unit. The objective of the paper is to reduce the power losses and improve the voltage profile of the radial distribution system through optimal allocation of multiple DG units in the system. The main contributions of this paper are (i) a combined power loss sensitivity (CPLS) based method for multiple DG locations, (ii) determination of optimal sizes for multiple DG units at unity and lagging power factor, (iii) the impact of DG installed at the optimal (i.e., combined load) power factor on system performance, (iv) the impact of load growth on optimal DG planning, (v) the impact of DG integration on the voltage stability index, and (vi) the economic and technical impact of DG integration in distribution systems. The load growth factor has been considered in the study, which is essential for planning and expansion of existing systems. The technical and economic aspects are investigated in terms of improvement in voltage profile, reduction in total power losses, cost of energy loss, cost of power obtained from DG, cost of power intake from the substation, and savings in cost of energy loss. The results are obtained on the IEEE 69-bus radial distribution system and compared with other existing methods.

  16. Optimization of minoxidil microemulsions using fractional factorial design approach.

    PubMed

    Jaipakdee, Napaphak; Limpongsa, Ekapol; Pongjanyakul, Thaned

    2016-01-01

    The objective of this study was to apply fractional factorial and multi-response optimization designs using the desirability function approach for developing topical microemulsions. Minoxidil (MX) was used as a model drug. Limonene was used as the oil phase. Based on solubility, Tween 20 and caprylocaproyl polyoxyl-8 glycerides were selected as surfactants, and propylene glycol and ethanol were selected as co-solvents in the aqueous phase. Experiments were performed according to a two-level fractional factorial design to evaluate the effects of the independent variables Tween 20 concentration in the surfactant system (X1), surfactant concentration (X2), ethanol concentration in the co-solvent system (X3), and limonene concentration (X4) on the MX solubility (Y1), permeation flux (Y2), lag time (Y3), and deposition (Y4) of MX microemulsions. It was found that Y1 increased with increasing X3 and decreasing X2 and X4, whereas Y2 increased with decreasing X1, X2 and increasing X3. While Y3 was not affected by these variables, Y4 increased with decreasing X1 and X2. Three regression equations were obtained and used to calculate predicted values of the responses Y1, Y2 and Y4. The predicted values matched the experimental values reasonably well, with high coefficients of determination. Using the optimal desirability function, an optimized microemulsion demonstrating the highest MX solubility, permeation flux and skin deposition was confirmed at low levels of X1, X2 and X4 and a high level of X3. PMID:25318551

  17. An optimization approach for fitting canonical tensor decompositions.

    SciTech Connect

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
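
    A sketch of one gradient step for the CP objective f = 0.5 * ||X - [[A, B, C]]||_F^2, using the standard matricized-tensor gradients; plain gradient descent with a fixed step is used here purely for illustration, whereas the paper wraps such gradients in more sophisticated first-order optimizers.

```python
import numpy as np
from scipy.linalg import khatri_rao

def cp_gradient_step(X, A, B, C, lr=1e-3):
    """One descent step on f = 0.5 * ||X - [[A, B, C]]||_F^2, with e.g.
    dF/dA = -X_(1) (C kr B) + A [(C'C) * (B'B)]  (kr = Khatri-Rao, * = Hadamard);
    each gradient costs about the same as one ALS iteration."""
    I, J, K = X.shape
    X1 = X.reshape(I, J * K, order="F")                      # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K, order="F")   # mode-2
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J, order="F")   # mode-3
    gA = -X1 @ khatri_rao(C, B) + A @ ((C.T @ C) * (B.T @ B))
    gB = -X2 @ khatri_rao(C, A) + B @ ((C.T @ C) * (A.T @ A))
    gC = -X3 @ khatri_rao(B, A) + C @ ((B.T @ B) * (A.T @ A))
    return A - lr * gA, B - lr * gB, C - lr * gC

# toy usage: a few hundred steps of a rank-2 fit to a random 4x5x6 tensor
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))
A, B, C = (0.1 * rng.standard_normal((s, 2)) for s in X.shape)
for _ in range(500):
    A, B, C = cp_gradient_step(X, A, B, C)
```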

  18. A linear programming model for optimizing HDR brachytherapy dose distributions with respect to mean dose in the DVH-tail

    SciTech Connect

    Holm, Åsa; Larsson, Torbjörn; Tedgren, Åsa Carlsson

    2013-08-15

    Purpose: Recent research has shown that the optimization model hitherto used in high-dose-rate (HDR) brachytherapy corresponds weakly to the dosimetric indices used to evaluate the quality of a dose distribution. Although alternative models that explicitly include such dosimetric indices have been presented, the inclusion of the dosimetric indices explicitly yields intractable models. The purpose of this paper is to develop a model for optimizing dosimetric indices that is easier to solve than those proposed earlier. Methods: In this paper, the authors present an alternative approach for optimizing dose distributions for HDR brachytherapy where dosimetric indices are taken into account through surrogates based on the conditional value-at-risk concept. This yields a linear optimization model that is easy to solve, and has the advantage that the constraints are easy to interpret and modify to obtain satisfactory dose distributions. Results: The authors show by experimental comparisons, carried out retrospectively for a set of prostate cancer patients, that their proposed model corresponds well with constraining dosimetric indices. All modifications of the parameters in the authors' model yield the expected result. The dose distributions generated are also comparable to those generated by the standard model with respect to the dosimetric indices that are used for evaluating quality. Conclusions: The authors' new model is a viable surrogate to optimizing dosimetric indices and quickly and easily yields high quality dose distributions.
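
    A simplified stand-in for the authors' model: the Rockafellar-Uryasev linearization of conditional value-at-risk turns "mean dose in the DVH tail" into a linear program over dwell times. The dose-rate matrices, prescription level, and the single tumor-coverage constraint below are illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

def cvar_plan(D_tumor, D_oar, rx, alpha=0.9):
    """Minimize the mean OAR dose in the (1 - alpha) DVH tail, with dose
    linear in the dwell times t: d = D @ t. Rockafellar-Uryasev:
    CVaR_alpha(d) = min_z  z + mean(max(0, d - z)) / (1 - alpha)."""
    m, n = D_oar.shape
    # decision vector x = [t (n dwell times), z (1), s (m tail excesses)]
    c = np.concatenate([np.zeros(n), [1.0],
                        np.full(m, 1.0 / ((1.0 - alpha) * m))])
    # s_i >= D_oar[i] @ t - z   <=>   D_oar[i] @ t - z - s_i <= 0
    A_ub = np.hstack([D_oar, -np.ones((m, 1)), -np.eye(m)])
    b_ub = np.zeros(m)
    # tumor coverage: mean tumor dose >= rx
    A_rx = np.concatenate([-D_tumor.mean(axis=0), [0.0], np.zeros(m)])
    A_ub, b_ub = np.vstack([A_ub, A_rx]), np.append(b_ub, -rx)
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res   # dwell times plus full solver output
```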

  19. Optimal subinterval selection approach for power system transient stability simulation

    SciTech Connect

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. Finally, the performance of the proposed method is demonstrated on the GSO 37-bus system.
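
    One way to turn the modal-analysis idea into a concrete rule (an illustrative heuristic, not the paper's exact criterion): take the fastest eigenvalue of the linearized (e.g. SMIB-reduced) state matrix and make the subinterval a fraction of the corresponding period.

```python
import numpy as np

def subinterval_from_modes(A_state, base_dt, steps_per_period=10):
    """Pick an integration subinterval small enough to resolve the fastest
    mode of the linearized state matrix A_state (eigenvalue magnitudes in
    rad/s); base_dt is the nominal transient-stability time step."""
    fastest = np.max(np.abs(np.linalg.eigvals(A_state)))
    dt_fast = 2.0 * np.pi / (steps_per_period * fastest)
    n_sub = max(1, int(np.ceil(base_dt / dt_fast)))
    return base_dt / n_sub, n_sub
```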

  20. Optimal subinterval selection approach for power system transient stability simulation

    DOE PAGES

    Kim, Soobae; Overbye, Thomas J.

    2015-10-21

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. Finally, the performance of the proposed method is demonstrated on the GSO 37-bus system.

  1. A Statistical Approach to Optimizing Concrete Mixture Design

    PubMed Central

    Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
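
    The regression step generalizes readily; a sketch of fitting a full quadratic response-surface model in the three design factors by ordinary least squares (toy interface; the study additionally ran ANOVA on the factorial data):

```python
import numpy as np

def fit_strength_model(X, y):
    """Full quadratic model in the three design factors: w = w/cm ratio,
    c = cementitious content, r = fine/total aggregate ratio. Returns the
    10 coefficients (intercept, linear, interaction, quadratic terms)."""
    w, c, r = X.T
    A = np.column_stack([np.ones_like(w), w, c, r,
                         w * c, w * r, c * r, w ** 2, c ** 2, r ** 2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta
```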

  2. Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach

    NASA Astrophysics Data System (ADS)

    Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar

    2013-06-01

    We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have a log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, and these extensions are pointed out in the paper. As predicted by our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
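
    A minimal simulation of the threshold rebalancing rule for two assets; the target fraction, threshold width, and cost rate are placeholders, and the paper derives the wealth-maximizing threshold analytically under its Black-Scholes assumptions.

```python
import numpy as np

def threshold_rebalance(prices, b=0.5, eps=0.1, cost=0.01):
    """Hold a target fraction b of wealth in asset 1 and trade (paying a
    proportional cost) only when the realized fraction drifts outside
    [b - eps, b + eps]; prices has shape (T, 2)."""
    w = np.array([b, 1.0 - b])                 # current wealth split
    wealth = 1.0
    for r in prices[1:] / prices[:-1]:         # per-period price relatives
        w = w * r                              # the market moves the split
        wealth = w.sum()
        frac = w[0] / wealth
        if not (b - eps <= frac <= b + eps):   # threshold breached: rebalance
            traded = abs(wealth * b - w[0])
            wealth -= cost * traded            # proportional transaction cost
            w = wealth * np.array([b, 1.0 - b])
    return wealth

# toy usage: two geometric random walks over 250 periods
rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(rng.normal(0.0005, 0.01, (250, 2)), axis=0))
print(threshold_rebalance(prices))
```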

  3. Optimisation of polymer foam bubble expansion in extruder by residence time distribution approach

    NASA Astrophysics Data System (ADS)

    Larochette, Mathieu; Graebling, Didier; Léonardi, Frédéric

    2007-04-01

    In this work, we used the residence time distribution (RTD) to study polystyrene foaming during an extrusion process. The extruder associated with a gear pump is simply and quantitatively described by three continuously stirred tank reactors with recycling loops and one plug-flow reactor. The blowing agent used is CO2, obtained by thermal decomposition of a chemical blowing agent (CBA). This approach allows the density of the foam to be optimized in accordance with the decomposition kinetics of the CBA.

  4. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  5. Model optimization of orthotropic distributed-mode loudspeaker using attached masses.

    PubMed

    Lu, Guochao; Shen, Yong

    2009-11-01

    The orthotropic model of the plate is established, and a genetic simulated annealing algorithm is developed for optimization of the mode distribution of the orthotropic plate. Experimental results indicate that the orthotropic model simulates the real plate more accurately, and optimization aimed at an equal distribution of the modes in the orthotropic model is carried out to improve the corresponding sound pressure responses.

  6. A systems biology approach to radiation therapy optimization.

    PubMed

    Brahme, Anders; Lind, Bengt K

    2010-05-01

    During the last 20 years, the field of cellular and not least molecular radiation biology has developed substantially and can today describe the response of heterogeneous tumors and organized normal tissues to radiation therapy quite well. An increased understanding of the sub-cellular and molecular response is leading to a more general systems biological approach to radiation therapy and treatment optimization. It is interesting that most of the characteristics of the tissue infrastructure, such as the vascular system and the degree of hypoxia, have to be considered to get an accurate description of tumor and normal tissue responses to ionizing radiation. In the limited space available, only a brief description of some of the most important concepts and processes is possible, starting from the key functional genomics pathways of the cell that are not only responsible for tumor development but also for the response of the cells to radiation therapy. The key mechanisms for cellular damage and damage repair are described. It is furthermore discussed how these processes can be brought to inactivate the tumor without severely damaging surrounding normal tissues using suitable radiation modalities like intensity-modulated radiation therapy (IMRT) or light ions. The use of such methods may lead to a truly scientific approach to radiation therapy optimization, particularly when in vivo predictive assays of radiation responsiveness become clinically available on a larger scale. Brief examples of the efficiency of IMRT are also given, showing how sensitive normal tissues can be spared at the same time as highly curative doses are delivered to a tumor that is often radiation resistant and located near organs at risk. This new approach maximizes the probability of eradicating the tumor while, at the same time, adverse reactions in sensitive normal tissues are minimized as far as possible using IMRT with photons and light ions. PMID:20191284

  7. Optimized Switch Allocation to Improve the Restoration Energy in Distribution Systems

    NASA Astrophysics Data System (ADS)

    Dezaki, Hamed H.; Abyaneh, Hossein A.; Agheli, Ali; Mazlumi, Kazem

    2012-01-01

    In distribution networks, switching devices play a critical role in energy restoration and in improving reliability indices. This paper presents a novel objective function to optimally allocate switches in electric power distribution systems. Identifying the optimal locations of the switches is a nonlinear programming (NLP) problem. In the proposed objective function, a new auxiliary function is used to simplify the calculation of the objective function. The output of the auxiliary function is binary. The genetic algorithm (GA) optimization method is used to solve this optimization problem. The proposed method is applied to a real distribution network, and the results reveal that the method is successful.

  8. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-01

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10⁶ particles from each beamlet to achieve an optimization result that differed from the ground truth by 3% in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10⁵ particles per beamlet. Correspondingly, the computation

  9. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-01

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10⁶ particles from each beamlet to achieve an optimization result that differed from the ground truth by 3% in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10⁵ particles per beamlet. Correspondingly, the computation time

  10. OPTIMAL SCHEDULING OF BOOSTER DISINFECTION IN WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    Booster disinfection is the addition of disinfectant at locations distributed throughout a water distribution system. Such a strategy can reduce the mass of disinfectant required to maintain a detectable residual at points of consumption in the distribution system, which may lea...

  11. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  12. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were compared. The results revealed that the proposed approach is practicable for optimizing soil sampling schemes. The optimal configuration captured soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining sampling configurations and mapping the spatial distribution of soil organic matter with low cost and high efficiency. PMID:26211074

  13. The dependence of optimal fractionation schemes on the spatial dose distribution

    NASA Astrophysics Data System (ADS)

    Unkelbach, Jan; Craft, David; Salari, Ehsan; Ramakrishnan, Jagdish; Bortfeld, Thomas

    2013-01-01

    We consider the fractionation problem in radiation therapy. Tumor sites in which the dose-limiting organ at risk (OAR) receives a substantially lower dose than the tumor bear potential for hypofractionation even if the α/β-ratio of the tumor is larger than the α/β-ratio of the OAR. In this work, we analyze the interdependence of the optimal fractionation scheme and the spatial dose distribution in the OAR. In particular, we derive a criterion under which a hypofractionation regimen is indicated for both a parallel and a serial OAR. The approach is based on the concept of the biologically effective dose (BED). For a hypothetical homogeneously irradiated OAR, it has been shown that hypofractionation is suggested by the BED model if the α/β-ratio of the OAR is larger than the α/β-ratio of the tumor times the sparing factor, i.e. the ratio of the dose received by the OAR to that received by the tumor. In this work, we generalize this result to inhomogeneous dose distributions in the OAR. For a parallel OAR, we determine the optimal fractionation scheme by minimizing the integral BED in the OAR for a fixed BED in the tumor. For a serial structure, we minimize the maximum BED in the OAR. This leads to analytical expressions for an effective sparing factor for the OAR, which provides a criterion for hypofractionation. The implications of the model are discussed for lung tumor treatments. It is shown that the model supports hypofractionation for small tumors treated with rotation therapy, i.e. highly conformal techniques where a large volume of lung tissue is exposed to low but nonzero dose. For larger tumors, the model suggests hyperfractionation. We further discuss several non-intuitive interdependencies between optimal fractionation and the spatial dose distribution. For instance, lowering the dose in the lung via proton therapy does not necessarily provide a biological rationale for hypofractionation.
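
    In LaTeX form, the BED model and the homogeneous-dose criterion restated from this abstract (the paper generalizes the sparing factor delta to an effective sparing factor for inhomogeneous OAR dose distributions):

```latex
\[
  \mathrm{BED} = n\,d \left( 1 + \frac{d}{\alpha/\beta} \right),
  \qquad
  \delta = \frac{d_{\mathrm{OAR}}}{d_{\mathrm{T}}} ,
\]
\[
  \text{hypofractionation is indicated if}\quad
  (\alpha/\beta)_{\mathrm{OAR}} > \delta\,(\alpha/\beta)_{\mathrm{T}} .
\]
```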

  14. A GENERALIZED STOCHASTIC COLLOCATION APPROACH TO CONSTRAINED OPTIMIZATION FOR RANDOM DATA IDENTIFICATION PROBLEMS

    SciTech Connect

    Webster, Clayton G; Gunzburger, Max D

    2013-01-01

    We present a scalable, parallel mechanism for stochastic identification/control for problems constrained by partial differential equations with random input data. Several identification objectives will be discussed that either minimize the expectation of a tracking cost functional or minimize the difference of desired statistical quantities in the appropriate $L^p$ norm, and the distributed parameters/controls can be either deterministic or stochastic. Given an objective, we prove the existence of an optimal solution, establish the validity of the Lagrange multiplier rule, and obtain a stochastic optimality system of equations. The modeling process may describe the solution in terms of high-dimensional spaces, particularly when the input data (coefficients, forcing terms, boundary conditions, geometry, etc.) are affected by a large amount of uncertainty. For higher accuracy, the computer simulation must increase the number of random variables (dimensions) and expend more effort approximating the quantity of interest in each individual dimension. Hence, we introduce a novel stochastic parameter identification algorithm that integrates an adjoint-based deterministic algorithm with the sparse grid stochastic collocation FEM approach. This allows for decoupled, moderately high dimensional, parameterized computations of the stochastic optimality system, where at each collocation point deterministic analysis and techniques can be utilized. The advantage of our approach is that it allows for the optimal identification of statistical moments (mean value, variance, covariance, etc.) or even the whole probability distribution of the input random fields, given the probability distribution of some responses of the system (quantities of physical interest). Our rigorously derived error estimates for the fully discrete problems will be described and used to compare the efficiency of the method with several other techniques. Numerical examples illustrate the theoretical

  15. Optimal Placement of Distributed Generation Units in a Distribution System with Uncertain Topologies using Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Donadel, Clainer Bravin; Fardin, Jussara Farias; Encarnação, Lucas Frizera

    2015-10-01

    In the literature, several papers propose new methodologies to determine the optimal placement/sizing of medium-size Distributed Generation units (DGs), using heuristic algorithms like the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, in all these methodologies, the optimal placement solution is strongly dependent on the network topology, so a specific solution is valid only for a particular topology. Furthermore, such methodologies do not consider the presence of small DGs, whose connection point cannot be defined by Distribution Network Operators (DNOs). This paper proposes a new methodology to determine the optimal location of medium-size DGs in a distribution system with uncertain topologies, considering the particular behavior of small DGs, using Monte Carlo Simulation.

  16. Wireless Sensing, Monitoring and Optimization for Campus-Wide Steam Distribution

    SciTech Connect

    Olama, Mohammed M; Allgood, Glenn O; Kuruganti, Phani Teja; Sukumar, Sreenivas R; Woodworth, Ken; Lake, Joe E

    2011-11-01

    The US Congress has passed legislation dictating that all government agencies establish a plan and process for improving energy efficiency at their sites. In response to this legislation, Oak Ridge National Laboratory (ORNL) has recently conducted a pilot study to explore the deployment of a wireless sensor system for real-time, measurement-based energy efficiency optimization. With particular focus on the 12-mile-long steam distribution network on our campus, we propose an integrated system-level approach to optimize energy delivery within the steam distribution system. Our approach leverages an integrated wireless sensor and real-time monitoring capability. We make real-time assessments of steam trap health and steam flow in the distribution system by mounting acoustic sensors on the steam pipes/traps/valves and processing the sensor measurements with state estimators for system health. Our assessments are based on a spectral-based energy signature scheme that interprets acoustic vibration sensor data to estimate steam flow rates and assess steam trap status. Experimental results show that the energy signature scheme has the potential to identify different steam trap states and has sufficient sensitivity to estimate flow rate. Moreover, results indicate a nearly quadratic relationship over the test region between the overall energy signature factor and the flow rate in the pipe. We are able to present the steam flow and steam trap status, sensor readings, and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. The goal is to achieve significant energy savings in steam lines by monitoring and acting on leaking steam pipes/traps/valves. We believe our demonstration serves as an instantiation of a platform whose implementation can be extended to include newer modalities to manage water flow, sewage and energy consumption.

  17. The Poisson Distribution: An Experimental Approach to Teaching Statistics

    ERIC Educational Resources Information Center

    Lafleur, Mimi S.; And Others

    1972-01-01

    Explains an experimental approach to teaching statistics to students who are essentially either non-science and non-mathematics majors or just beginning the study of science. With everyday examples, the article illustrates the method of teaching the Poisson distribution. (PS)

  18. Recent progress in the statistical approach of parton distributions

    SciTech Connect

    Soffer, Jacques

    2011-07-15

    We recall the physical features of the parton distributions in the quantum statistical approach of the nucleon. Some predictions from a next-to-leading order QCD analysis are compared to recent experimental results. We also consider their extension to include their transverse momentum dependence.

  19. Optimizing Dendritic Cell-Based Approaches for Cancer Immunotherapy

    PubMed Central

    Datta, Jashodeep; Terhune, Julia H.; Lowenfeld, Lea; Cintolo, Jessica A.; Xu, Shuwen; Roses, Robert E.; Czerniecki, Brian J.

    2014-01-01

    Dendritic cells (DC) are professional antigen-presenting cells uniquely suited for cancer immunotherapy. They induce primary immune responses, potentiate the effector functions of previously primed T-lymphocytes, and orchestrate communication between innate and adaptive immunity. The remarkable diversity of cytokine activation regimens, DC maturation states, and antigen-loading strategies employed in current DC-based vaccine design reflects an evolving, but incomplete, understanding of optimal DC immunobiology. In the clinical realm, existing DC-based cancer immunotherapy efforts have yielded encouraging but inconsistent results. Despite the recent U.S. Food and Drug Administration (FDA) approval of DC-based sipuleucel-T for metastatic castration-resistant prostate cancer, clinically effective DC immunotherapy as monotherapy for a majority of tumors remains a distant goal. Recent work has identified strategies that may allow for more potent "next-generation" DC vaccines. Additionally, multimodality approaches incorporating DC-based immunotherapy may improve clinical outcomes. PMID:25506283

  20. [OPTIMAL APPROACH TO COMBINED TREATMENT OF PATIENTS WITH UROGENITAL PAPILLOMATOSIS].

    PubMed

    Breusov, A A; Kulchavenya, E V; Brizhatyukl, E V; Filimonov, P N

    2015-01-01

    The review analyzed 59 sources of domestic and foreign literature on the use of the immunomodulator izoprinozin in treating patients infected with human papilloma virus, together with the results of the authors' own experience. The high prevalence of HPV and its role in the development of cervical cancer are shown, and the mechanisms of HPV development and of host protection against this infection are described. The authors present approaches to the treatment of HPV-infected patients, with particular attention to izoprinozin. Isoprinosine belongs to the immunomodulators with antiviral activity. It inhibits the replication of viral DNA and RNA by binding to cell ribosomes and changing their stereochemical structure. HPV infection, especially in the early stages, may be successfully treated up to complete elimination of the virus. Inosine pranobex (izoprinozin), having a dual action and the most abundant evidence base, may be recognized as the optimal treatment option. PMID:26859953

  1. An analytic approach to optimize tidal turbine fields

    NASA Astrophysics Data System (ADS)

    Pelz, P.; Metzler, M.

    2013-12-01

    Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources have been developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to gain electrical energy. Since the available power for hydrokinetic turbines is proportional to the projected cross-section area, fields of turbines are installed to scale shaft power. Each hydrokinetic turbine of a field can be considered as a disk actuator. In [1], the first author derives the optimal operation point for hydropower in an open channel. The present paper concerns a 0-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation and the momentum balance, an analytical approach is used to calculate the coefficient of performance for hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.

  2. An optimization approach and its application to compare DNA sequences

    NASA Astrophysics Data System (ADS)

    Liu, Liwei; Li, Chao; Bai, Fenglan; Zhao, Qi; Wang, Ying

    2015-02-01

    Studying the evolutionary relationships between biological sequences by comparing and analyzing gene sequences has become one of the main tasks in bioinformatics research. Many valid methods have been applied to DNA sequence alignment. In this paper, we propose a novel comparison method based on the Lempel-Ziv (LZ) complexity to compare biological sequences. Moreover, we introduce a new distance measure and make use of the corresponding similarity matrix to construct phylogenetic trees without multiple sequence alignment. Further, we construct phylogenetic trees for 24 species of Eutherian mammals and 48 Hepatitis E virus (HEV) sequences by an optimization approach. The results indicate that this new method improves the efficiency of sequence comparison and successfully constructs phylogenies.

  3. Approaches of Russian oil companies to optimal capital structure

    NASA Astrophysics Data System (ADS)

    Ishuk, T.; Ulyanova, O.; Savchitz, V.

    2015-11-01

    Oil companies play a vital role in the Russian economy. Demand for hydrocarbon products will keep increasing over the coming decades along with population growth and social needs. The shift away from the raw-material orientation of the Russian economy and the transition to an innovation-driven path of development do not exclude the development of the oil industry in the future. Moreover, society believes that this sector must bring the Russian economy onto the road of innovative development through neo-industrialization. To achieve this, government power as well as capital management by the companies are required. To achieve an optimal capital structure, it is necessary to minimize the cost of capital, reduce specific risks within existing limits, and maximize profitability. The capital structure analysis of Russian and foreign oil companies shows different approaches, reasons, and conditions and, consequently, different equity-to-debt relationships and costs of capital, which demands an effective capital management strategy.

  4. Selective optimization of side activities: the SOSA approach.

    PubMed

    Wermuth, Camille G

    2006-02-01

    Selective optimization of side activities of drug molecules (the SOSA approach) is an intelligent and potentially more efficient strategy than HTS for the generation of new biological activities. Only a limited number of highly diverse drug molecules are screened, for which bioavailability and toxicity studies have already been performed and efficacy in humans has been confirmed. Once the screening has generated a hit it will be used as the starting point for a drug discovery program. Using traditional medicinal chemistry as well as parallel synthesis, the initial 'side activity' is transformed into the 'main activity' and, conversely, the initial 'main activity' is significantly reduced or abolished. This strategy has a high probability of yielding safe, bioavailable, original and patentable analogues. PMID:16533714

  5. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.

  6. An Improved Ant Colony Optimization Approach for Optimization of Process Planning

    PubMed Central

    Wang, JinFeng; Fan, XiaoLiang; Ding, Haimin

    2014-01-01

    Computer-aided process planning (CAPP) is an important interface between computer-aided design (CAD) and computer-aided manufacturing (CAM) in computer-integrated manufacturing environments (CIMs). In this paper, process planning problem is described based on a weighted graph, and an ant colony optimization (ACO) approach is improved to deal with it effectively. The weighted graph consists of nodes, directed arcs, and undirected arcs, which denote operations, precedence constraints among operation, and the possible visited path among operations, respectively. Ant colony goes through the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production costs (TPCs). A pheromone updating strategy proposed in this paper is incorporated in the standard ACO, which includes Global Update Rule and Local Update Rule. A simple method by controlling the repeated number of the same process plans is designed to avoid the local convergence. A case has been carried out to study the influence of various parameters of ACO on the system performance. Extensive comparative experiments have been carried out to validate the feasibility and efficiency of the proposed approach. PMID:25097874

  7. On-Line Monitoring of Plan Execution: a Distributed Approach

    NASA Astrophysics Data System (ADS)

    Micalizio, Roberto; Torasso, Pietro

    The paper introduces and formalizes a distributed approach for the model-based monitoring of the execution of a plan, where concurrent actions are carried out by a team of mobile robots in a partially observable environment. Each robot is monitored on-line by an agent that has the task of tracking all the possible evolutions under both nominal and faulty behavior of the robot and of estimating the belief state at each time instant. The strategy for deriving local solutions which are globally consistent is formalized. The distributed monitoring provides on-line feedback to a system supervisor, which has to decide whether to build a new plan as a consequence of action failures. The feasibility of the approach and the gain in performance are shown by comparing experimental results of the proposed approach with a centralized one.

  8. Joint layout, pipe size and hydraulic reliability optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Tanyimboh, Tiku; Setiadi, Yohan

    2008-08-01

    A multicriteria maximum-entropy approach to the joint layout, pipe size and reliability optimization of water distribution systems is presented. The capital cost of the system is taken as the principal criterion, and so the trade-offs between cost, entropy, reliability and redundancy are examined sequentially in a large population of optimal solutions. The novelty of the method stems from the use of the maximum-entropy value as a preliminary filter, which screens out a large proportion of the candidate layouts at an early stage of the process before the designs and their reliability values are actually obtained. This technique, which is based on the notion that the entropy is potentially a robust hydraulic reliability measure, contributes greatly to the efficiency of the proposed method. The use of head-dependent modelling for simulating pipe failure conditions in the reliability calculations also complements the method in locating the Pareto-optimal front. The computational efficiency, robustness, accuracy and other advantages of the proposed method are demonstrated by application to a sample network.

  9. Optimization of floodplain monitoring sensors through an entropy approach

    NASA Astrophysics Data System (ADS)

    Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, P. D.

    2012-04-01

    To support the decision-making processes of flood risk management and long-term floodplain planning, a significant issue is the availability of data to build appropriate and reliable models. Often the data required for model building, calibration and validation are not sufficient or available. A unique opportunity is offered nowadays by globally available data, which can be freely downloaded from the internet. However, there remains the question of the real potential of those global remote sensing data, characterized by different accuracies, for global inundation monitoring, and of how to integrate them with inundation models. In order to monitor a reach of the River Dee (UK), a network of cheap wireless sensors (GridStix) was deployed both in the channel and in the floodplain. These sensors measure the water depth, supplying the input data for flood mapping. Besides their accuracy and reliability, their location represents a key issue, the purpose being to provide as much information and as little redundancy as possible. In order to update the layout, the initial network of six sensors was extended to create a redundant network over the area. Through an entropy approach, the most informative and least redundant sensors have been chosen from among all of them. First, a simple raster-based inundation model (LISFLOOD-FP) is used to generate a synthetic GridStix data set of water stages. The Digital Elevation Model (DEM) used for hydraulic model building is the globally and freely available SRTM DEM. Second, the information content of each sensor has been compared by evaluating its marginal entropy. Those with a low marginal entropy are excluded from the process because of their low capability to provide information. Then the number of sensors has been optimized considering a Multi-Objective Optimization Problem (MOOP) with two objectives, namely maximization of the joint entropy (a measure of the information content) and
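
    A sketch of the entropy calculations involved, on discretized water-stage series; greedy joint-entropy maximization is shown as a single-objective stand-in for the multi-objective (information versus redundancy) formulation of the record.

```python
import numpy as np

def entropy(samples, bins=10):
    """Shannon entropy (bits) of discretized series; samples has shape
    (n_obs, n_vars), so the same call yields marginal or joint entropy."""
    counts, _ = np.histogramdd(samples, bins=bins)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def greedy_select(stages, k):
    """Greedily add the sensor whose inclusion maximizes joint entropy,
    i.e. the most informative, least redundant subset of columns."""
    chosen = []
    for _ in range(k):
        rest = [j for j in range(stages.shape[1]) if j not in chosen]
        gains = [entropy(stages[:, chosen + [j]]) for j in rest]
        chosen.append(rest[int(np.argmax(gains))])
    return chosen
```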

  10. The determination and optimization of (rutile) pigment particle size distributions

    NASA Technical Reports Server (NTRS)

    Richards, L. W.

    1972-01-01

    A light scattering particle size test which can be used with materials having a broad particle size distribution is described. This test is useful for pigments. The relation between the particle size distribution of a rutile pigment and its optical performance in a gray tint test at low pigment concentration is calculated and compared with experimental data.

  11. Mapping the distribution of malaria: current approaches and future directions

    USGS Publications Warehouse

    Johnson, Leah R.; Lafferty, Kevin D.; McNally, Amy; Mordecai, Erin A.; Paaijmans, Krijn P.; Pawar, Samraat; Ryan, Sadie J.; Chen, Dongmei; Moulin, Bernard; Wu, Jianhong

    2015-01-01

    Mapping the distribution of malaria has received substantial attention because the disease is a major source of illness and mortality in humans, especially in developing countries. It also has a defined temporal and spatial distribution. The distribution of malaria is most influenced by its mosquito vector, which is sensitive to extrinsic environmental factors such as rainfall and temperature. Temperature also affects the development rate of the malaria parasite in the mosquito. Here, we review the range of approaches used to model the distribution of malaria, from spatially explicit to implicit, mechanistic to correlative. Although current methods have significantly improved our understanding of the factors influencing malaria transmission, significant gaps remain, particularly in incorporating nonlinear responses to temperature and temperature variability. We highlight new methods to tackle these gaps and to integrate new data with models.

  12. Optimizing algal cultivation & productivity : an innovative, multidiscipline, and multiscale approach.

    SciTech Connect

    Murton, Jaclyn K.; Hanson, David T.; Turner, Tom; Powell, Amy Jo; James, Scott Carlton; Timlin, Jerilyn Ann; Scholle, Steven; August, Andrew; Dwyer, Brian P.; Ruffing, Anne; Jones, Howland D. T.; Ricken, James Bryce; Reichardt, Thomas A.

    2010-04-01

    Progress in algal biofuels has been limited by significant knowledge gaps in algal biology, particularly as they relate to scale-up. To address this we are investigating how culture composition dynamics (light as well as biotic and abiotic stressors) describe key biochemical indicators of algal health: growth rate, photosynthetic electron transport, and lipid production. Our approach combines traditional algal physiology with genomics, bioanalytical spectroscopy, chemical imaging, remote sensing, and computational modeling to provide an improved fundamental understanding of algal cell biology across multiple culture scales. This work spans investigations from the single-cell level to ensemble measurements of algal cell cultures at the laboratory benchtop to large greenhouse scale (175 gal). We will discuss the advantages of this novel, multidisciplinary strategy and emphasize the importance of developing an integrated toolkit to provide sensitive, selective methods for detecting early fluctuations in algal health, productivity, and population diversity. Progress in several areas will be summarized, including identification of spectroscopic signatures for algal culture composition, stress level, and lipid production, enabled by non-invasive spectroscopic monitoring of the photosynthetic and photoprotective pigments at the single-cell and bulk-culture scales. Early experiments compare and contrast the well-studied green alga Chlamydomonas with two potential production strains of microalgae, Nannochloropsis and Dunaliella, under optimal and stressed conditions. This integrated approach has the potential for broad impact on algal biofuels and bioenergy, and several of these opportunities will be discussed.

  13. Approach to optimal care at end of life.

    PubMed

    Nichols, K J

    2001-10-01

    At no other time in any patient's life is the team approach to care more important than at the end of life. The demands and challenges of end-of-life care (ELC) tax all physicians at some point. There is no other profession that is charged with this ultimate responsibility. No discipline in medicine is immune to the issues of end-of-life care except perhaps, ironically, pathology. This presentation addresses the issues, options, and challenges of providing optimal care at the end of life. It looks at the principles of ELC, barriers to good ELC, and what patients and families expect from ELC. Barriers to ELC include financial restrictions, inadequate caregiver and community support, legal and legislative issues, training needs, coordination of care, hospice care, and transitions for patients and families. The legal aspects of physician-assisted suicide are presented, as well as the approach of the American Osteopathic Association to ensure better education for physicians in the principles of ELC. PMID:11681166

  14. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Because the feasible solution space is often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.
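
    For illustration, the optimizer-simulator coupling described here can be sketched in a few lines of Python. The search loop below is a generic mutation-based placeholder rather than the MBA/IMBA update rules, and pressure_deficit() stands in for a real EPANET evaluation; the unit costs and 34-pipe count merely echo the Hanoi benchmark:

        import random

        DIAMETERS = [304.8, 406.4, 508.0, 609.6, 762.0, 1016.0]  # candidate pipe sizes (mm)
        UNIT_COST = {304.8: 45.7, 406.4: 70.4, 508.0: 98.4,
                     609.6: 129.3, 762.0: 180.8, 1016.0: 278.3}  # $/m, illustrative
        N_PIPES, PENALTY = 34, 1e7

        def pressure_deficit(design):
            """Placeholder: run a hydraulic solver (e.g., EPANET) and return the
            total head shortfall below the minimum required pressure (0 if feasible)."""
            return 0.0  # stub for illustration only

        def cost(design, lengths):
            base = sum(UNIT_COST[d] * L for d, L in zip(design, lengths))
            return base + PENALTY * pressure_deficit(design)  # penalize infeasibility

        def optimize(lengths, iters=5000):
            best = [random.choice(DIAMETERS) for _ in range(N_PIPES)]
            best_cost = cost(best, lengths)
            for _ in range(iters):
                trial = best[:]
                i = random.randrange(N_PIPES)       # perturb one pipe ("shrapnel piece")
                trial[i] = random.choice(DIAMETERS)
                c = cost(trial, lengths)
                if c < best_cost:
                    best, best_cost = trial, c
            return best, best_cost

        design, total = optimize(lengths=[1000.0] * N_PIPES)  # 1 km pipes, illustrative

    In practice the penalty weight must dominate any feasible construction cost, so that designs violating the pressure constraints can never win.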

  15. Optimizing distance image quality of an aspheric multifocal intraocular lens using a comprehensive statistical design approach.

    PubMed

    Hong, Xin; Zhang, Xiaoxiao

    2008-12-01

    The AcrySof ReSTOR intraocular lens (IOL) is a multifocal lens with state-of-the-art apodized diffractive technology, and is indicated for visual correction of aphakia secondary to removal of cataractous lenses in adult patients with/without presbyopia, who desire near, intermediate, and distance vision with increased spectacle independence. The multifocal design results in some optical contrast reduction, which may be improved by reducing spherical aberration. A novel patent-pending approach was undertaken to investigate the optical performance of aspheric lens designs. Simulated eyes, generated from normal distributions of human ocular parameters, were corrected with different lens designs in a Monte Carlo simulation that allowed for variability in multiple surgical parameters (e.g., positioning error, biometric variation). Monte Carlo optimized results indicated that a lens spherical aberration of -0.10 microm provided optimal distance image quality.
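
    The optimization logic of such a study can be illustrated with a small Monte Carlo sketch. The merit function, population distributions, and surgical-variability parameters below are invented for illustration and are not the patent-pending model used by the authors:

        import numpy as np

        rng = np.random.default_rng(0)

        def image_quality(lens_sa, corneal_sa, decentration_mm):
            # Hypothetical merit: quality drops with residual spherical aberration (SA)
            # and with sensitivity to decentration, which grows as more SA is corrected.
            residual = corneal_sa + lens_sa
            return np.exp(-(residual / 0.10) ** 2) * np.exp(-(decentration_mm * lens_sa / 0.05) ** 2)

        def expected_quality(lens_sa, n=20000):
            corneal_sa = rng.normal(0.27, 0.10, n)           # assumed population SA (microns)
            decentration = np.abs(rng.normal(0.0, 0.3, n))   # assumed surgical decentration (mm)
            return image_quality(lens_sa, corneal_sa, decentration).mean()

        candidates = np.linspace(-0.30, 0.0, 31)
        scores = [expected_quality(sa) for sa in candidates]
        print("best lens SA: %.2f microns" % candidates[int(np.argmax(scores))])

    The design trade-off is visible even in this toy model: fully correcting the mean corneal aberration maximizes on-axis quality, but a partially correcting lens is more robust to positioning error.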

  16. Practical Framework for an Electron Beam Induced Current Technique Based on a Numerical Optimization Approach

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Hideshi; Soeda, Takeshi

    2015-03-01

    A practical framework for an electron beam induced current (EBIC) technique has been established for conductive materials based on a numerical optimization approach. Although the conventional EBIC technique is useful for evaluating the distributions of dopants or crystal defects in semiconductor transistors, issues related to the reproducibility and quantitative capability of measurements using this technique persist. For instance, it is difficult to acquire high-quality EBIC images throughout continuous tests due to variation in operator skill or test environment. Recently, by evaluating the performance of the EBIC equipment and numerically optimizing its settings, the consistent acquisition of high-contrast images has become possible, improving reproducibility as well as yield regardless of operator skill or test environment. The technique proposed herein is even more sensitive and quantitative than scanning probe microscopy, an imaging technique that can possibly damage the sample. The new technique is expected to benefit the electrical evaluation of fragile or soft materials along with LSI materials.

  17. A Distributed Approach to System-Level Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil

    2012-01-01

    Prognostics, which deals with predicting remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.
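
    A toy sketch of the decomposition idea: each submodel is propagated independently to its local end-of-life threshold, and the system-level remaining useful life (RUL) is the minimum over submodels. The three submodels and their degradation rates are invented placeholders, not the rover testbed models:

        def submodel_rul(state, degradation_rate, threshold, dt=1.0, horizon=100000):
            """Simulate one decomposed submodel forward until its local damage
            threshold is crossed; return the remaining useful life in steps."""
            x, t = state, 0
            while x < threshold and t < horizon:
                x += degradation_rate * dt
                t += 1
            return t

        # Independent local prognostics subproblems (could run on separate nodes);
        # states, rates, and thresholds below are illustrative only.
        submodels = [
            {"state": 0.2, "rate": 0.004, "threshold": 1.0},  # e.g., motor winding damage
            {"state": 0.5, "rate": 0.002, "threshold": 1.0},  # e.g., battery capacity fade
            {"state": 0.1, "rate": 0.010, "threshold": 1.0},  # e.g., friction wear
        ]
        local_ruls = [submodel_rul(m["state"], m["rate"], m["threshold"]) for m in submodels]
        print("system EOL (min over submodels):", min(local_ruls))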

  18. Exploring trade-offs between VMAT dose quality and delivery efficiency using a network optimization approach

    NASA Astrophysics Data System (ADS)

    Salari, Ehsan; Wala, Jeremiah; Craft, David

    2012-09-01

    To formulate and solve the fluence-map merging procedure of the recently published VMAT treatment-plan optimization method, called vmerge, as a bi-criteria optimization problem. Using an exact merging method rather than the previously used heuristic, we are able to better characterize the trade-off between the delivery efficiency and dose quality. vmerge begins with a solution of the fluence-map optimization problem with 180 equi-spaced beams that yields the ‘ideal’ dose distribution. Neighboring fluence maps are then successively merged, meaning that they are added together and delivered as a single map. The merging process improves the delivery efficiency at the expense of deviating from the initial high-quality dose distribution. We replace the original merging heuristic by considering the merging problem as a discrete bi-criteria optimization problem with the objectives of maximizing the treatment efficiency and minimizing the deviation from the ideal dose. We formulate this using a network-flow model that represents the merging problem. Since the problem is discrete and thus non-convex, we employ a customized box algorithm to characterize the Pareto frontier. The Pareto frontier is then used as a benchmark to evaluate the performance of the standard vmerge algorithm as well as two other similar heuristics. We test the exact and heuristic merging approaches on a pancreas and a prostate cancer case. For both cases, the shape of the Pareto frontier suggests that starting from a high-quality plan, we can obtain efficient VMAT plans through merging neighboring fluence maps without substantially deviating from the initial dose distribution. The trade-off curves obtained by the various heuristics are contrasted and shown to all be equally capable of initial plan simplifications, but to deviate in quality for more drastic efficiency improvements. This work presents a network optimization approach to the merging problem. Contrasting the trade-off curves of the

  19. Use of marginal distributions constrained optimization (MADCO) for accelerated 2D MRI relaxometry and diffusometry

    NASA Astrophysics Data System (ADS)

    Benjamini, Dan; Basser, Peter J.

    2016-10-01

    Measuring multidimensional (e.g., 2D) relaxation spectra in NMR and MRI clinical applications is a holy grail of the porous media and biomedical MR communities. The main bottleneck is the inversion of Fredholm integrals of the first kind, an ill-conditioned problem requiring large amounts of data to stabilize a solution. We suggest a novel experimental design and processing framework to accelerate and improve the reconstruction of such 2D spectra that uses a priori information from the 1D projections of spectra, or marginal distributions. These 1D marginal distributions provide powerful constraints when 2D spectra are reconstructed, and their estimation requires an order of magnitude less data than a conventional 2D approach. This marginal distributions constrained optimization (MADCO) methodology is demonstrated here with a polyvinylpyrrolidone-water phantom that has 3 distinct peaks in the 2D D-T1 space. The stability, sensitivity to experimental parameters, and accuracy of this new approach are compared with conventional methods by serially subsampling the full data set. While the conventional, unconstrained approach performed poorly, the new method proved to be highly accurate and robust, only requiring a fraction of the data. Additionally, synthetic T1-T2 data are presented to explore the effects of noise on the estimations, and the performance of the proposed method with a smooth and realistic 2D spectrum. The proposed framework is quite general and can also be used with a variety of 2D MRI experiments (D-T2, T1-T2, D-D, etc.), making these potentially feasible for preclinical and even clinical applications for the first time.
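
    A minimal numerical sketch of a marginal-constrained 2D inversion, assuming toy inversion-recovery and diffusion kernels and enforcing the 1D marginals as soft penalty terms; the authors' exact MADCO formulation may differ:

        import numpy as np
        from scipy.optimize import minimize

        # Toy 2D D-T1 inversion: data = K1 @ F @ K2.T, with known 1D marginals of F.
        n1, n2, m1, m2 = 12, 12, 30, 30
        T1 = np.logspace(-1, 1, n1); D = np.logspace(-1, 1, n2)
        tau = np.linspace(0.05, 20, m1); b = np.linspace(0.05, 20, m2)
        K1 = 1 - np.exp(-tau[:, None] / T1[None, :])   # inversion-recovery kernel
        K2 = np.exp(-b[:, None] * D[None, :])          # diffusion kernel

        F_true = np.zeros((n1, n2)); F_true[3, 8] = 1.0; F_true[8, 3] = 0.7
        data = K1 @ F_true @ K2.T
        mT1, mD = F_true.sum(1), F_true.sum(0)         # 1D marginals, assumed known a priori

        def objective(f, lam=10.0):
            F = f.reshape(n1, n2)
            fit = np.sum((K1 @ F @ K2.T - data) ** 2)
            marg = np.sum((F.sum(1) - mT1) ** 2) + np.sum((F.sum(0) - mD) ** 2)
            return fit + lam * marg                    # marginals act as soft constraints

        res = minimize(objective, np.ones(n1 * n2) / (n1 * n2),
                       method="L-BFGS-B", bounds=[(0, None)] * (n1 * n2))
        print("max reconstruction error:", np.abs(res.x.reshape(n1, n2) - F_true).max())

    The marginal penalty plays the stabilizing role described in the abstract: it shrinks the effective solution space of the ill-conditioned Fredholm inversion without requiring more 2D data.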

  20. Use of marginal distributions constrained optimization (MADCO) for accelerated 2D MRI relaxometry and diffusometry.

    PubMed

    Benjamini, Dan; Basser, Peter J

    2016-10-01

    Measuring multidimensional (e.g., 2D) relaxation spectra in NMR and MRI clinical applications is a holy grail of the porous media and biomedical MR communities. The main bottleneck is the inversion of Fredholm integrals of the first kind, an ill-conditioned problem requiring large amounts of data to stabilize a solution. We suggest a novel experimental design and processing framework to accelerate and improve the reconstruction of such 2D spectra that uses a priori information from the 1D projections of spectra, or marginal distributions. These 1D marginal distributions provide powerful constraints when 2D spectra are reconstructed, and their estimation requires an order of magnitude less data than a conventional 2D approach. This marginal distributions constrained optimization (MADCO) methodology is demonstrated here with a polyvinylpyrrolidone-water phantom that has 3 distinct peaks in the 2D D-T1 space. The stability, sensitivity to experimental parameters, and accuracy of this new approach are compared with conventional methods by serially subsampling the full data set. While the conventional, unconstrained approach performed poorly, the new method proved to be highly accurate and robust, only requiring a fraction of the data. Additionally, synthetic T1-T2 data are presented to explore the effects of noise on the estimations, and the performance of the proposed method with a smooth and realistic 2D spectrum. The proposed framework is quite general and can also be used with a variety of 2D MRI experiments (D-T2, T1-T2, D-D, etc.), making these potentially feasible for preclinical and even clinical applications for the first time. PMID:27543810

  1. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  2. On the practical convergence of coda-based correlations: a window optimization approach

    NASA Astrophysics Data System (ADS)

    Chaput, J.; Clerc, V.; Campillo, M.; Roux, P.; Knox, H.

    2016-02-01

    We present a novel optimization approach to improve the convergence of interstation coda correlation functions towards the medium's empirical Green's function. For two stations recording a series of impulsive events in a multiply scattering medium, we explore the impact of coda window selection through a Markov Chain Monte Carlo scheme, with the aim of generating a gather of correlation functions that is the most coherent and symmetric over events, thus recovering intuitive elements of the interstation Green's function without any nonlinear post-processing techniques. This approach is tested here for a 2-D acoustic finite difference model, where a much improved correlation function is obtained, as well as for a database of small impulsive icequakes recorded on Erebus Volcano, Antarctica, where similarly robust results are shown. The average coda solutions, as deduced from the posterior probability distributions of the optimization, are further representative of the scattering strength of the medium, with stronger scattering resulting in a slightly delayed overall coda sampling. The recovery of singly scattered arrivals in the coda of correlation functions is also shown to be possible through this approach, and surface wave reflections from outer craters on Erebus volcano were mapped in this fashion. We also note that, due to the improvement of correlation functions over subsequent events, this approach can further be used to improve the resolution of passive temporal monitoring.

  3. Biological optimization of heterogeneous dose distributions in systemic radiotherapy

    SciTech Connect

    Strigari, Lidia; D'Andrea, Marco; Maini, Carlo Ludovico; Sciuto, Rosa; Benassi, Marcello

    2006-06-15

    The standard computational method developed for internal radiation dosimetry is the MIRD (medical internal radiation dose) formalism, based on the assumption that tumor control is given by uniform dose and activity distributions. In modern systemic radiotherapy, however, the need for full 3D dose calculations that take into account the heterogeneous distribution of activity in the patient is now understood. When information on nonuniform distribution of activity becomes available from functional imaging, a more patient-specific 3D dosimetry can be performed. Application of radiobiological models can be useful to correlate the calculated heterogeneous dose distributions to the current knowledge on tumor control probability of a homogeneous dose distribution. Our contribution to this field is the introduction of a parameter, the F factor, already used by our group in studying external beam radiotherapy treatments. This parameter allows one to write a simplified expression for tumor control probability (TCP) based on the standard linear quadratic (LQ) model and Poisson statistics. The LQ model was extended to include different treatment regimes involving source decay, incorporating the repair rate μ of sublethal radiation damage, the relative biological effectiveness, and the effective 'waste' of dose delivered when repopulation occurs. The sensitivity of the F factor to the radiobiological parameters (α, β, μ) and the influence of the dose volume distribution were evaluated. Some test examples for ¹³¹I- and ⁹⁰Y-labeled pharmaceuticals are described to further explain the properties of the F factor and its potential applications. To demonstrate dosimetric feasibility and advantages of the proposed F factor formalism in systemic radiotherapy, we have performed a retrospective planning study on a selected patient case. The F factor formalism helps to assess the total activity to be administered to the patient taking into account the heterogeneity in
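
    The LQ-Poisson TCP expression the abstract builds on can be made concrete with a short worked example. The radiobiological parameters, clonogen density, and 2 Gy/fraction assumption below are illustrative, and the sketch computes the standard TCP rather than the authors' F factor itself:

        import numpy as np

        alpha, beta = 0.35, 0.035   # Gy^-1, Gy^-2; illustrative LQ parameters
        rho = 1.0e7                 # clonogens per cm^3; illustrative

        def tcp(dose_gy, volume_cc, d_frac=2.0):
            """LQ-Poisson tumor control probability for a voxelized dose map,
            assuming 2 Gy fractions: ln S_i = -(alpha + beta*d)*D_i and
            TCP = prod_i exp(-rho * v_i * S_i)."""
            surviving = rho * volume_cc * np.exp(-(alpha + beta * d_frac) * dose_gy)
            return float(np.exp(-surviving.sum()))

        v = np.ones(3)  # three 1 cm^3 tumor voxels
        print("uniform 60 Gy      :", tcp(np.array([60.0, 60.0, 60.0]), v))
        print("heterogeneous 40-80:", tcp(np.array([40.0, 60.0, 80.0]), v))

    With the same mean dose, the 40 Gy cold spot dominates the surviving-clonogen sum and drags TCP from nearly 1.0 down to about 0.6 in this toy example, which is precisely the heterogeneity effect a summary parameter such as the F factor is meant to capture.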

  4. Distributed Generation Dispatch Optimization under Various Electricity Tariffs

    SciTech Connect

    Firestone, Ryan; Marnay, Chris

    2007-05-01

    The on-site generation of electricity can offer building owners and occupiers financial benefits as well as social benefits such as reduced grid congestion, improved energy efficiency, and reduced greenhouse gas emissions. Combined heat and power (CHP), or cogeneration, systems make use of the waste heat from the generator for site heating needs. Real-time optimal dispatch of CHP systems is difficult to determine because of complicated electricity tariffs and uncertainty in CHP equipment availability, energy prices, and system loads. Typically, CHP systems use simple heuristic control strategies. This paper describes a method of determining optimal control in real-time and applies it to a light industrial site in San Diego, California, to examine: 1) the added benefit of optimal over heuristic controls, 2) the price elasticity of the system, and 3) the site-attributable greenhouse gas emissions, all under three different tariff structures. Results suggest that heuristic controls are adequate under the current tariff structure and relatively high electricity prices, capturing 97 percent of the value of the distributed generation system. Even more value could be captured by simply not running the CHP system during times of unusually high natural gas prices. Under hypothetical real-time pricing of electricity, heuristic controls would capture only 70 percent of the value of distributed generation.

  5. Communication Optimizations for a Wireless Distributed Prognostic Framework

    NASA Technical Reports Server (NTRS)

    Saha, Sankalita; Saha, Bhaskar; Goebel, Kai

    2009-01-01

    Distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics - particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation except for one significant step - resampling. We propose a new resampling scheme called parameterized resampling that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes, in the context of minimizing communication overhead, is also presented. A battery health management system is used as a target application. Our proposed resampling scheme performs significantly better than existing schemes, reducing both the communication message length and the total number of communication messages exchanged while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system, as well as full implementation of the new schemes on the Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.
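
    For context, the resampling step that forces inter-node communication can be seen in the standard systematic resampler below; this is the textbook scheme, not the paper's parameterized variant. Every node needs the global cumulative weight vector, while the random offset is a single shared draw:

        import numpy as np

        def systematic_resample(weights, rng=None):
            """Standard systematic resampling: one shared random offset, evenly
            spaced positions, and a search over the cumulative weight sum."""
            if rng is None:
                rng = np.random.default_rng()
            n = len(weights)
            positions = (rng.random() + np.arange(n)) / n
            return np.searchsorted(np.cumsum(weights), positions)

        w = np.array([0.1, 0.2, 0.05, 0.4, 0.25])
        idx = systematic_resample(w / w.sum())
        print(idx)   # indices of the particles that survive

    The cumulative sum is what makes naive distribution expensive: it couples every particle's weight to every other node, which is the overhead the paper's scheme targets.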

  6. Nonpoint source pollution: a distributed water quality modeling approach.

    PubMed

    León, L F; Soulis, E D; Kouwen, N; Farquhar, G J

    2001-03-01

    A distributed water quality model for nonpoint source pollution modeling in agricultural watersheds is described in this paper. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. With data extracted using Geographical Information Systems (GIS) technology for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. The transferability of model parameters to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse modeling. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve nonpoint source modeling at the watershed scale.

  7. Assay optimization: a statistical design of experiments approach.

    PubMed

    Altekar, Maneesha; Homon, Carol A; Kashem, Mohammed A; Mason, Steven W; Nelson, Richard M; Patnaude, Lori A; Yingling, Jeffrey; Taylor, Paul B

    2007-03-01

    With the transition from manual to robotic HTS in the last several years, assay optimization has become a significant bottleneck. Recent advances in robotic liquid handling have made it feasible to reduce assay optimization timelines with the application of statistically designed experiments. When implemented, they can efficiently optimize assays by rapidly identifying significant factors, complex interactions, and nonlinear responses. This article focuses on the use of statistically designed experiments in assay optimization.
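
    A minimal sketch of what such a designed experiment looks like in code: a two-level full factorial over three hypothetical assay factors, with main effects estimated from the coded design matrix. The factor names and the response model are invented for illustration:

        import itertools
        import numpy as np

        # 2^3 full factorial over three assumed assay factors (coded -1/+1).
        factors = ["enzyme_conc", "incubation_time", "DMSO_pct"]
        design = np.array(list(itertools.product([-1, 1], repeat=3)))

        def run_assay(row):
            # Placeholder response with a main effect, an interaction, and noise;
            # in practice this is the measured assay signal for that condition.
            e, t, d = row
            return 100 + 12 * e + 6 * t - 4 * d + 5 * e * t + np.random.normal(0, 1)

        y = np.array([run_assay(r) for r in design])
        effects = {f: 2 * np.mean(design[:, i] * y) for i, f in enumerate(factors)}
        print(effects)   # estimated main effects; large magnitudes flag significant factors

    For a balanced two-level design, each main effect is simply twice the mean of the coded column times the response, which is what makes these designs so efficient on a liquid-handling robot.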

  8. Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging.

    PubMed

    Liu, Liqiang; Dai, Yuntao; Gao, Jinyu

    2014-01-01

    Ant colony optimization for continuous domains is a major research direction for ant colony optimization algorithms. In this paper, we propose a distribution model of ant colony foraging, through analysis of the relationship between the position distribution and the food source in the process of ant colony foraging. We design a continuous domain optimization algorithm based on the model and give the form of solution for the algorithm, the distribution model of the pheromone, the update rules of ant colony position, and the processing method of the constraint condition. The algorithm's performance was tested against a set of unconstrained optimization test functions and a set of constrained optimization test functions, and the results were compared with those of other algorithms to verify the correctness and effectiveness of the proposed algorithm. PMID:24955402
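
    For illustration, an archive-based continuous-domain ACO in the spirit of ACO_R, where ants sample Gaussians centered on archived solutions that act as the position/pheromone distribution; this is a generic sketch, not the authors' specific foraging model:

        import numpy as np

        rng = np.random.default_rng(1)

        def sphere(x):                      # simple unconstrained test function
            return float(np.sum(x ** 2))

        def aco_continuous(f, dim=5, n_archive=20, n_ants=10, q=0.2, xi=0.85, iters=300):
            X = rng.uniform(-5, 5, (n_archive, dim))
            F = np.array([f(x) for x in X])
            ranks = np.arange(n_archive)
            w = np.exp(-ranks ** 2 / (2 * (q * n_archive) ** 2))
            for _ in range(iters):
                order = np.argsort(F)       # after sorting, index equals rank
                X, F = X[order], F[order]
                p = w / w.sum()             # better-ranked solutions attract more ants
                for _ in range(n_ants):
                    k = rng.choice(n_archive, p=p)
                    sigma = xi * np.abs(X - X[k]).mean(axis=0) + 1e-12
                    x_new = rng.normal(X[k], sigma)
                    f_new = f(x_new)
                    worst = int(np.argmax(F))
                    if f_new < F[worst]:    # replace the worst archive member
                        X[worst], F[worst] = x_new, f_new
            return X[int(np.argmin(F))], float(F.min())

        best_x, best_f = aco_continuous(sphere)
        print("best objective:", best_f)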

  9. Ant Colony Optimization Algorithm for Continuous Domains Based on Position Distribution Model of Ant Colony Foraging

    PubMed Central

    Liu, Liqiang; Dai, Yuntao

    2014-01-01

    Ant colony optimization for continuous domains is a major research direction for ant colony optimization algorithms. In this paper, we propose a distribution model of ant colony foraging, through analysis of the relationship between the position distribution and the food source in the process of ant colony foraging. We design a continuous domain optimization algorithm based on the model and give the form of solution for the algorithm, the distribution model of the pheromone, the update rules of ant colony position, and the processing method of the constraint condition. The algorithm's performance was tested against a set of unconstrained optimization test functions and a set of constrained optimization test functions, and the results were compared with those of other algorithms to verify the correctness and effectiveness of the proposed algorithm. PMID:24955402

  10. Stochastic approach to reconstruction of dynamical systems: optimal model selection criterion

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Mukhin, D.; Loskutov, E. M.; Feigin, A. M.

    2011-12-01

    Most known observable systems are complex and high-dimensional, which makes exact long-term forecasts of their behavior impossible. The stochastic approach to reconstructing such systems offers hope of describing the important qualitative features of their behavior in a low-dimensional way, while all other dynamics is modelled as a stochastic disturbance. This report is devoted to the application of Bayesian evidence to optimal stochastic model selection when reconstructing the evolution operator of an observed system. The idea of Bayesian evidence is to find a compromise between the model's predictiveness and the quality of its fit to the data. We represent the evolution operator of the investigated system in the form of a random dynamical system including deterministic and stochastic parts, both parameterized by an artificial neural network. We then use the Bayesian evidence criterion to estimate the optimal complexity of the model, i.e. both the number of parameters and the dimension corresponding to the most probable model given the data. We demonstrate on a number of model examples that a model with a non-uniformly distributed stochastic part (which corresponds to non-Gaussian perturbations of the evolution operator) is optimal in the general case. Further, we show that a simple stochastic model can be the most preferred for reconstruction of the evolution operator underlying complex observed dynamics, even in the case of a deterministic high-dimensional system. The workability of the suggested approach for modeling and prognosis of real measured geophysical dynamics is investigated.

  11. A Rawlsian Approach to Distribute Responsibilities in Networks

    PubMed Central

    2009-01-01

    Due to their non-hierarchical structure, socio-technical networks are prone to the occurrence of the problem of many hands. In the present paper an approach is introduced in which people’s opinions on responsibility are empirically traced. The approach is based on the Rawlsian concept of Wide Reflective Equilibrium (WRE) in which people’s considered judgments on a case are reflectively weighed against moral principles and background theories, ideally leading to a state of equilibrium. Application of the method to a hypothetical case with an artificially constructed network showed that it is possible to uncover the relevant data to assess a consensus amongst people in terms of their individual WRE. It appeared that the moral background theories people endorse are not predictive of their actual distribution of responsibilities but that they indicate ways of reasoning and justifying outcomes. Two ways of ascribing responsibilities were discerned, corresponding to two requirements of a desirable responsibility distribution: fairness and completeness. Applying the method triggered learning effects, both with regard to conceptual clarification and moral considerations, and in the sense that it led to some convergence of opinions. It is recommended to apply the method to a real engineering case in order to see whether this approach leads to an overlapping consensus on a responsibility distribution which is justifiable to all and in which no responsibilities are left unfulfilled, thereby contributing to the solution of the problem of many hands. PMID:19626463

  12. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  13. New Approaches to HSCT Multidisciplinary Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schrage, Daniel P.; Craig, James I.; Fulton, Robert E.; Mistree, Farrokh

    1999-01-01

    New approaches to MDO have been developed and demonstrated during this project on a particularly challenging aeronautics problem: HSCT Aeroelastic Wing Design. To tackle this problem required the integration of resources and collaboration from three Georgia Tech laboratories: ASDL, SDL, and PPRL, along with close coordination and participation from industry. Its success can also be attributed to the close interaction and involvement of fellows from the NASA Multidisciplinary Analysis and Optimization (MAO) program, which was running in parallel and provided additional resources to work the very complex, multidisciplinary problem, along with the methods being developed. The development of the Integrated Design Engineering Simulator (IDES) and its initial demonstration is a necessary first step in transitioning the methods and tools developed to larger industrial-sized problems of interest. It also provides a framework for the implementation and demonstration of the methodology. Attachment: Appendix A - List of publications. Appendix B - Year 1 report. Appendix C - Year 2 report. Appendix D - Year 3 report. Appendix E - accompanying CDROM.

  14. A systematic approach: optimization of healthcare operations with knowledge management.

    PubMed

    Wickramasinghe, Nilmini; Bali, Rajeev K; Gibbons, M Chris; Choi, J H James; Schaffer, Jonathan L

    2009-01-01

    Effective decision making is vital in all healthcare activities. While this decision making is typically complex and unstructured, it requires the decision maker to gather multispectral data and information in order to make an effective choice when faced with numerous options. Unstructured decision making in dynamic and complex environments is challenging, and in almost every situation the decision maker is undoubtedly faced with information inferiority. The need for germane knowledge, pertinent information and relevant data is critical, and hence the value of harnessing knowledge and embracing the tools, techniques, technologies and tactics of knowledge management is essential to ensuring efficiency and efficacy in the decision making process. The systematic approach and application of knowledge management (KM) principles and tools can provide the necessary foundation for improving the decision making processes in healthcare. A combination of Boyd's OODA Loop (Observe, Orient, Decide, Act) and the Intelligence Continuum provides an integrated, systematic and dynamic model for ensuring that the healthcare decision maker is always provided with the appropriate and necessary knowledge elements that will help to ensure that healthcare decision making process outcomes are optimized for maximal patient benefit. The example of orthopaedic operating room processes illustrates the application of the integrated model to support effective decision making in the clinical environment.

  15. Interior search algorithm (ISA): a novel approach for global optimization.

    PubMed

    Gandomi, Amir H

    2014-07-01

    This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is capable of efficiently solving optimization problems and can outperform other well-known algorithms. Further, the proposed algorithm is very simple, having only one parameter to tune.

  16. Optimizing Distributed Energy Resources and building retrofits with the strategic DER-CAM model

    DOE PAGES

    Stadler, M.; Groissböck, M.; Cardoso, G.; Marnay, C.

    2014-08-05

    The pressing need to reduce the import of fossil fuels as well as the need to dramatically reduce CO2 emissions in Europe motivated the European Commission (EC) to implement several regulations directed at building owners. Most of these regulations focus on increasing the number of energy efficient buildings, both new and retrofitted, since retrofits play an important role in energy efficiency. Overall, this initiative results from the realization that buildings will have a significant impact in fulfilling the 20/20/20-goals of reducing the greenhouse gas emissions by 20%, increasing energy efficiency by 20%, and increasing the share of renewables to 20%, all by 2020. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is an optimization tool used to support DER investment decisions, typically by minimizing total annual costs or CO2 emissions while providing energy services to a given building or microgrid site. This document shows enhancements made to DER-CAM to consider building retrofit measures along with DER investment options. Specifically, building shell improvement options have been added to DER-CAM as alternative or complementary options to investments in other DER such as PV, solar thermal, combined heat and power, or energy storage. The extension of the mathematical formulation required by the new features introduced in DER-CAM is presented and the resulting model is demonstrated at an Austrian Campus building by comparing DER-CAM results with and without building shell improvement options. Strategic investment results are presented and compared to the observed investment decision at the test site. Results obtained considering building shell improvement options suggest an optimal weighted average U value of about 0.53 W/(m2K) for the test site. This result is approximately 25% higher than what is currently observed in the building, suggesting that the retrofits made in 2002 were not optimal. Furthermore, the results obtained with

  17. Optimizing Distributed Energy Resources and building retrofits with the strategic DER-CAM model

    SciTech Connect

    Stadler, M.; Groissböck, M.; Cardoso, G.; Marnay, C.

    2014-08-05

    The pressing need to reduce the import of fossil fuels as well as the need to dramatically reduce CO2 emissions in Europe motivated the European Commission (EC) to implement several regulations directed at building owners. Most of these regulations focus on increasing the number of energy efficient buildings, both new and retrofitted, since retrofits play an important role in energy efficiency. Overall, this initiative results from the realization that buildings will have a significant impact in fulfilling the 20/20/20-goals of reducing the greenhouse gas emissions by 20%, increasing energy efficiency by 20%, and increasing the share of renewables to 20%, all by 2020. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is an optimization tool used to support DER investment decisions, typically by minimizing total annual costs or CO2 emissions while providing energy services to a given building or microgrid site. This document shows enhancements made to DER-CAM to consider building retrofit measures along with DER investment options. Specifically, building shell improvement options have been added to DER-CAM as alternative or complementary options to investments in other DER such as PV, solar thermal, combined heat and power, or energy storage. The extension of the mathematical formulation required by the new features introduced in DER-CAM is presented and the resulting model is demonstrated at an Austrian Campus building by comparing DER-CAM results with and without building shell improvement options. Strategic investment results are presented and compared to the observed investment decision at the test site. Results obtained considering building shell improvement options suggest an optimal weighted average U value of about 0.53 W/(m2K) for the test site. This result is approximately 25% higher than what is currently observed in the building, suggesting that the retrofits made in 2002 were not optimal. Furthermore

  18. Optimization of orthotropic distributed-mode loudspeaker using attached masses and multi-exciters.

    PubMed

    Lu, Guochao; Shen, Yong; Liu, Ziyun

    2012-02-01

    Based on an orthotropic model of the plate, a method to optimize the sound response of a distributed-mode loudspeaker (DML) using attached masses and multiple exciters has been investigated. The attached masses rebuild the mode distribution of the plate, and the multi-exciter arrangement then smooths the sound response based on this new mode distribution. The results indicate that the method can be used to optimize the sound response of the DML.

  19. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).

  20. Curricular Policy as a Collective Effects Problem: A Distributional Approach

    PubMed Central

    Penner, Andrew M.; Domina, Thurston; Penner, Emily K.; Conley, AnneMarie

    2015-01-01

    Current educational policies in the United States attempt to boost student achievement and promote equality by intensifying the curriculum and exposing students to more advanced coursework. This paper investigates the relationship between one such effort -- California's push to enroll all 8th grade students in Algebra -- and the distribution of student achievement. We suggest that this effort is an instance of a “collective effects” problem, where the population-level effects of a policy are different from its effects at the individual level. In such contexts, we argue that it is important to consider broader population effects as well as the difference between “treated” and “untreated” individuals. To do so, we present differences in inverse propensity score weighted distributions to investigate how this curricular policy changed the distribution of student achievement more broadly. We find that California's attempt to intensify the curriculum did not raise test scores at the bottom of the distribution, but did lower scores at the top of the distribution. These results highlight the efficacy of inverse propensity score weighting approaches for estimating collective effects, and provide a cautionary tale for curricular intensification efforts and other policies with collective effects. PMID:26004485

  1. Optimization of distribution transformer efficiency characteristics. Final report, March 1979

    SciTech Connect

    Not Available

    1980-06-01

    A method for distribution transformer loss evaluation was derived. The total levelized annual cost method was used and was extended to account properly for conditions of energy cost inflation, peak load growth, and transformer changeout during the evaluation period. The loss costs included were the no-load and load power losses, no-load and load reactive losses, and the energy cost of regulation. The demand and energy components of loss costs were treated separately to account correctly for the diversity of load losses and energy cost inflation. The complete distribution transformer loss evaluation equation is shown, with the nomenclature and definitions for the parameters provided. Tasks described are entitled: Establish Loss Evaluation Techniques; Compile System Cost Parameters; Compile Load Parameters and Loading Policies; Develop Transformer Cost/Performance Relationship; Define Characteristics of Multiple Efficiency Transformer Package; Minimize Life Cycle Cost Based on Single Efficiency Characteristic Transformer Design; Minimize Life Cycle Cost Based on Multiple Efficiency Characteristic Transformer Design; and Interpretation.
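
    In its simplest textbook form, this kind of levelized evaluation reduces to capitalizing the no-load and load losses into "A" and "B" factors. The sketch below uses invented cost parameters and ignores the inflation, load-growth, and changeout refinements the report develops:

        # Total owning cost in the classic "A/B factor" form:
        # TOC = purchase price + A * no-load loss (W) + B * load loss (W).

        def capitalization_factors(energy_cost, demand_cost, hours=8760,
                                   loss_factor=0.3, fcr=0.12):
            # fcr: fixed charge rate converting levelized annual cost to capital value;
            # loss_factor: ratio of average to peak load loss over the year.
            A = (energy_cost * hours + demand_cost) / fcr                 # $/W of no-load loss
            B = (energy_cost * hours * loss_factor + demand_cost) / fcr   # $/W of load loss
            return A, B

        A, B = capitalization_factors(energy_cost=0.00008, demand_cost=0.15)  # $/Wh, $/W-yr
        designs = {"low-loss": (1800, 120, 350), "standard": (1500, 160, 500)}  # ($, W_NL, W_L)
        for name, (price, nll, ll) in designs.items():
            print(name, "TOC = $%.0f" % (price + A * nll + B * ll))

    With these illustrative numbers the dearer low-loss design wins on total owning cost, which is the basic trade-off the report's levelized method quantifies.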

  2. Optimization of tomographic reconstruction workflows on geographically distributed resources.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T

    2016-07-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can
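
    The three-stage performance model can be caricatured in a few lines: estimated turnaround is transfer plus queue plus compute time, and the workflow is mapped to whichever resource minimizes it. All numbers below are invented for illustration:

        def turnaround(data_gb, bandwidth_gbps, queue_s, sinogram_count, s_per_task, nodes):
            transfer = data_gb * 8 / bandwidth_gbps          # (i) data transfer time
            compute = sinogram_count * s_per_task / nodes    # (iii) reconstruction tasks
            return transfer + queue_s + compute              # (ii) is the queue wait

        resources = {
            "local_cluster": dict(bandwidth_gbps=10.0, queue_s=30,   nodes=64),
            "remote_hpc":    dict(bandwidth_gbps=1.0,  queue_s=1800, nodes=2048),
        }
        for name, r in resources.items():
            t = turnaround(data_gb=500, sinogram_count=2048, s_per_task=20, **r)
            print(name, "estimated turnaround: %.0f s" % t)

    Even this toy version shows the non-obvious outcome the paper targets: the larger remote machine can lose to a modest local cluster once transfer and queue time are charged against it.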

  3. Optimization of tomographic reconstruction workflows on geographically distributed resources.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T

    2016-07-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can

  4. Optimal shrinking of the distribution chain: the facilities delocation decision

    NASA Astrophysics Data System (ADS)

    Bhaumik, Pradip K.

    2010-03-01

    Closure of facilities is quite common among both business firms and public sector institutions like hospitals and schools. Although the facilities location problem has been studied extensively in the literature, not much attention has been paid to the closure of facilities. Unlike the location problem, the existing facilities and the corresponding network impose additional constraints on the closure or elimination of facilities, and to highlight the difference between the two, we have called this the facilities delocation problem. In this article, we study a firm with an existing distribution network with known retailer and distributor locations that needs to downsize or shrink its distribution chain due to other business reasons. However, it is not a reallocation of demand nodes among the retained distributors. An important condition stipulates that all demand nodes must continue to get their supplies from their respective current distributors except when the current source itself is delocated, and only such uprooted demand nodes will be supplied by a different supplier, drawn from among the retained ones. We first describe the delocation problem and discuss its characteristics. We formulate the delocation problem as an integer linear programming problem and demonstrate its formulation and solution on a small problem. Finally, we discuss the solution and its implications for the distribution network.

  5. A Systematic Approach for Quantitative Analysis of Multidisciplinary Design Optimization Framework

    NASA Astrophysics Data System (ADS)

    Kim, Sangho; Park, Jungkeun; Lee, Jeong-Oog; Lee, Jae-Woo

    An efficient Multidisciplinary Design and Optimization (MDO) framework for an aerospace engineering system should use and integrate distributed resources such as various analysis codes, optimization codes, Computer Aided Design (CAD) tools, Data Base Management Systems (DBMS), etc. in a heterogeneous environment, and needs to provide user-friendly graphical user interfaces. In this paper, we propose a systematic approach for determining a reference MDO framework and for evaluating MDO frameworks. The proposed approach incorporates two well-known methods, Analytic Hierarchy Process (AHP) and Quality Function Deployment (QFD), in order to provide a quantitative analysis of the qualitative criteria of MDO frameworks. Identification and hierarchy of the framework requirements and the corresponding solutions for the reference MDO frameworks, the general one and the aircraft-oriented one, were carefully investigated. The reference frameworks were also quantitatively identified using AHP and QFD. An assessment of three in-house frameworks was then performed. The results produced clear and useful guidelines for improvement of the in-house MDO frameworks and showed the feasibility of the proposed approach for evaluating an MDO framework without human interference.
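
    The AHP step can be illustrated with the standard principal-eigenvector weight computation and consistency check; the three criteria and the pairwise judgments below are invented, not the paper's framework requirements:

        import numpy as np

        # AHP priority vector from a pairwise comparison matrix (toy 3-criterion
        # example: usability vs. extensibility vs. performance).
        A = np.array([[1.0,  3.0, 5.0],
                      [1/3., 1.0, 2.0],
                      [1/5., 1/2., 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = int(np.argmax(eigvals.real))
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                      # criterion weights (principal eigenvector)

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)
        cr = ci / 0.58                    # RI = 0.58 is Saaty's random index for n = 3
        print("weights:", w.round(3), "consistency ratio: %.3f" % cr)

    A consistency ratio below about 0.1 is the usual threshold for accepting the pairwise judgments; the resulting weights would then feed the QFD matrix that scores each candidate framework.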

  6. Distributed Energy Resources On-Site Optimization for Commercial Buildings with Electric and Thermal Storage Technologies

    SciTech Connect

    Lacommare, Kristina S H; Stadler, Michael; Aki, Hirohisa; Firestone, Ryan; Lai, Judy; Marnay, Chris; Siddiqui, Afzal

    2008-05-15

    The addition of storage technologies such as flow batteries, conventional batteries, and heat storage can improve the economic as well as environmental attractiveness of on-site generation (e.g., PV, fuel cells, reciprocating engines or microturbines operating with or without CHP) and contribute to enhanced demand response. In order to examine the impact of storage technologies on demand response and carbon emissions, a microgrid's distributed energy resources (DER) adoption problem is formulated as a mixed-integer linear program that has the minimization of annual energy costs as its objective function. By implementing this approach in the General Algebraic Modeling System (GAMS), the problem is solved for a given test year at representative customer sites, such as schools and nursing homes, to obtain not only the level of technology investment, but also the optimal hourly operating schedules. This paper focuses on analysis of storage technologies in DER optimization on a building level, with example applications for commercial buildings. Preliminary analysis indicates that storage technologies respond effectively to time-varying electricity prices, i.e., by charging batteries during periods of low electricity prices and discharging them during peak hours. The results also indicate that storage technologies significantly alter the residual load profile, which can contribute to lower carbon emissions depending on the test site, its load profile, and its adopted DER technologies.
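
    Dropping the integer variables, the core of such a model is a linear program over grid purchases, battery charge/discharge, and state of charge. The sketch below uses scipy rather than GAMS, and the tariff, load, and battery parameters are invented:

        import numpy as np
        from scipy.optimize import linprog

        T = 24
        price = np.array([0.08]*7 + [0.15]*4 + [0.25]*6 + [0.15]*4 + [0.08]*3)  # $/kWh TOU
        load = np.full(T, 50.0)                 # kW, flat illustrative site load
        cap, p_max, eff = 200.0, 50.0, 0.9      # kWh, kW, one-way efficiency

        # Decision vector x = [grid(T), charge(T), discharge(T), soc(T)], all >= 0.
        n = 4 * T
        cost = np.concatenate([price, np.zeros(3 * T)])   # pay only for grid energy
        A_eq, b_eq = [], []
        for t in range(T):                      # balance: grid + discharge - charge = load
            row = np.zeros(n)
            row[t], row[2*T + t], row[T + t] = 1.0, 1.0, -1.0
            A_eq.append(row); b_eq.append(load[t])
        for t in range(T):                      # soc_t = soc_{t-1} + eff*charge - discharge/eff
            row = np.zeros(n)
            row[3*T + t], row[T + t], row[2*T + t] = 1.0, -eff, 1.0/eff
            if t > 0:
                row[3*T + t - 1] = -1.0         # battery starts empty at t = 0
            A_eq.append(row); b_eq.append(0.0)

        bounds = [(0, None)]*T + [(0, p_max)]*T + [(0, p_max)]*T + [(0, cap)]*T
        res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
        print("daily energy cost: $%.2f" % res.fun)

    The optimal schedule charges in the cheap off-peak hours and discharges at the peak, which is exactly the price-responsive behavior the abstract reports.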

  7. Optimal exploitation of spatially distributed trophic resources and population stability

    USGS Publications Warehouse

    Basset, A.; Fedele, M.; DeAngelis, D.L.

    2002-01-01

    The relationships between optimal foraging of individuals and population stability are addressed by testing, with a spatially explicit model, the effect of patch departure behaviour on individual energetics and population stability. A factorial experimental design was used to analyse the relevance of the behavioural factor in relation to three factors that are known to affect individual energetics; i.e. resource growth rate (RGR), assimilation efficiency (AE), and body size of individuals. The factorial combination of these factors produced 432 cases, and 1000 replicate simulations were run for each case. Net energy intake rates of the modelled consumers increased with increasing RGR, consumer AE, and consumer body size, as expected. Moreover, through their patch departure behaviour, by selecting the resource level at which they departed from the patch, individuals managed to substantially increase their net energy intake rates. Population stability was also affected by the behavioural factors and by the other factors, but with highly non-linear responses. Whenever resources were limiting for the consumers because of low RGR, large individual body size or low AE, population density at the equilibrium was directly related to the patch departure behaviour; on the other hand, optimal patch departure behaviour, which maximised the net energy intake at the individual level, had a negative influence on population stability whenever resource availability was high for the consumers. The consumer growth rate (r) and numerical dynamics, as well as the spatial and temporal fluctuations of resource density, which were the proximate causes of population stability or instability, were affected by the behavioural factor as strongly as, or even more strongly than, the other factors considered here. Therefore, patch departure behaviour can act as a feedback control of individual energetics, allowing consumers to optimise a potential trade-off between short-term individual fitness

  8. A sensitivity equation approach to shape optimization in fluid flows

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1994-01-01

    A sensitivity equation method is applied to shape optimization problems. An algorithm is developed and tested on a problem of designing optimal forebody simulators for a 2D, inviscid supersonic flow. The algorithm uses a BFGS/Trust Region optimization scheme with sensitivities computed by numerically approximating the linear partial differential equations that determine the flow sensitivities. Numerical examples are presented to illustrate the method.

  9. A novel, optimized approach of voxel division for water vapor tomography

    NASA Astrophysics Data System (ADS)

    Yao, Yibin; Zhao, Qingzhi

    2016-03-01

    Water vapor information with high spatial and temporal resolution can be acquired using the Global Navigation Satellite System (GNSS) water vapor tomography technique. Usually, the targeted tomographic area is discretized into a number of voxels and the water vapor distribution is reconstructed using a large number of GNSS signals which penetrate the entire tomographic area. Due to the influence of the geographic distribution of receivers and the geometric location of the satellite constellation, many voxels located at the bottom and the sides of the research area are not crossed by signals, which undermines the quality of the tomographic result. To alleviate this problem, a novel, optimized approach of voxel division is here proposed which increases the number of voxels crossed by signals. On the vertical axis, a 3D water vapor profile derived from many years of radiosonde data is utilized to identify the maximum height of the tomography space. On the horizontal axis, the total number of voxels crossed by signals is increased, based on the concept of non-uniform symmetrical division of horizontal voxels. In this study, tomographic experiments are implemented using GPS data from the Hong Kong Satellite Positioning Reference Station Network, and the tomographic result is compared with water vapor derived from radiosonde data and the European Centre for Medium-Range Weather Forecasts (ECMWF). The result shows that the Integrated Water Vapour (IWV), RMS, and error distribution of the proposed approach are better than those of the traditional method.

  10. Nonadiabatic approach to entanglement distribution over long distances

    SciTech Connect

    Razavi, Mohsen; Shapiro, Jeffrey H.

    2007-03-15

    Entanglement distribution between trapped-atom quantum memories, viz. single atoms in optical cavities, is addressed. In most scenarios, the rate of entanglement distribution depends on the efficiency with which the state of traveling single photons can be transferred to trapped atoms. This loading efficiency is analytically studied for two-level, V-level, Λ-level, and double-Λ-level atomic configurations by means of a system-reservoir approach. An off-resonant nonadiabatic approach to loading Λ-level trapped-atom memories is proposed, and the ensuing trade-offs between the atom-light coupling rate and input photon bandwidth for achieving a high loading probability are identified. The nonadiabatic approach allows a broad class of optical sources to be used, and in some cases it provides a higher system throughput than what can be achieved by adiabatic loading mechanisms. The analysis is extended to the case of two double-Λ trapped-atom memories illuminated by a polarization-entangled biphoton.

  11. A Distributed Flocking Approach for Information Stream Clustering Analysis

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E

    2006-01-01

    Intelligence analysts are currently overwhelmed with the amount of information streams generated every day. There is a lack of comprehensive tools that can analyze these information streams in real time. Document clustering analysis plays an important role in improving the accuracy of information retrieval. However, most clustering technologies can only be applied to static document collections because they normally require a large amount of computational resources and a long time to produce accurate results. It is very difficult to cluster a dynamically changing text information stream on an individual computer. Our early research resulted in a dynamic reactive flock clustering algorithm which can continually refine the clustering result and quickly react to changes in document contents. This characteristic makes the algorithm suitable for clustering dynamically changing document information, such as text information streams. Because of the decentralized character of this algorithm, a distributed approach is a very natural way to increase its clustering speed. In this paper, we present a distributed multi-agent flocking approach for text information stream clustering and discuss the decentralized architectures and communication schemes for load balancing and status information synchronization in this approach.

  12. A maximum likelihood approach to jointly estimating seasonal and annual flood frequency distributions

    NASA Astrophysics Data System (ADS)

    Baratti, E.; Montanari, A.; Castellarin, A.; Salinas, J. L.; Viglione, A.; Blöschl, G.

    2012-04-01

    Flood frequency analysis is often used by practitioners to support the design of river engineering works, flood mitigation procedures and civil protection strategies. It is often carried out at the annual time scale, by fitting observations of annual maximum peak flows. However, in many cases one is also interested in inferring the flood frequency distribution for given intra-annual periods, for instance when one needs to estimate the risk of flood in different seasons. Such information is needed, for instance, when planning the schedule of river engineering works whose building area is in close proximity to the river bed for several months. A key issue in seasonal flood frequency analysis is to ensure the compatibility between intra-annual and annual flood probability distributions. We propose an approach to jointly estimate the parameters of the seasonal and annual probability distributions of floods. The approach is based on the preliminary identification of an optimal number of seasons within the year, which is carried out by analysing the timing of flood flows. Then, the parameters of the intra-annual and annual flood distributions are jointly estimated by using (a) an approximate optimisation technique and (b) a formal maximum likelihood approach. The proposed methodology is applied to case studies for which extended hydrological information is available at the annual and seasonal scales.
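
    The compatibility constraint at the heart of this approach can be made concrete: if the seasonal maxima are assumed independent, the annual-maximum CDF is the product of the seasonal CDFs, so seasonal and annual parameters can be estimated in one likelihood. The sketch below illustrates this with two Gumbel-distributed seasons on synthetic data; the distributions, sample sizes, and starting values are illustrative assumptions, not the paper's case-study settings.

      # joint_seasonal_annual_ml.py -- minimal sketch with hypothetical data
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import gumbel_r

      rng = np.random.default_rng(0)
      # Hypothetical seasonal maxima for two seasons (e.g., winter/summer peaks)
      seasons = [gumbel_r.rvs(loc=100, scale=20, size=40, random_state=rng),
                 gumbel_r.rvs(loc=60, scale=10, size=40, random_state=rng)]
      # Annual maximum of each year = max over that year's seasonal maxima
      annual = np.maximum(seasons[0], seasons[1])

      def nll(theta):
          """Joint negative log-likelihood: seasonal Gumbel samples plus the
          annual sample, whose CDF is the product of the seasonal CDFs
          (the compatibility condition under seasonal independence)."""
          m1, b1, m2, b2 = theta
          if b1 <= 0 or b2 <= 0:
              return np.inf
          ll = gumbel_r.logpdf(seasons[0], m1, b1).sum()
          ll += gumbel_r.logpdf(seasons[1], m2, b2).sum()
          # annual pdf: d/dx [F1*F2] = f1*F2 + F1*f2
          f1 = gumbel_r.pdf(annual, m1, b1); F1 = gumbel_r.cdf(annual, m1, b1)
          f2 = gumbel_r.pdf(annual, m2, b2); F2 = gumbel_r.cdf(annual, m2, b2)
          ll += np.log(f1 * F2 + F1 * f2 + 1e-300).sum()
          return -ll

      res = minimize(nll, x0=[90, 15, 50, 8], method="Nelder-Mead")
      print(res.x)  # jointly estimated (loc, scale) for each season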

  13. An Informatics Approach to Demand Response Optimization in Smart Grids

    SciTech Connect

    Simmhan, Yogesh; Aman, Saima; Cao, Baohua; Giakkoupis, Mike; Kumbhare, Alok; Zhou, Qunzhi; Paul, Donald; Fern, Carol; Sharma, Aditya; Prasanna, Viktor K

    2011-03-03

    Power utilities are increasingly rolling out “smart” grids with the ability to track consumer power usage in near real-time using smart meters that enable bidirectional communication. However, the true value of smart grids is unlocked only when the veritable explosion of data that will become available is ingested, processed, analyzed and translated into meaningful decisions. These include the ability to forecast electricity demand, respond to peak load events, and improve sustainable use of energy by consumers, and are made possible by energy informatics. Information and software system techniques for a smarter power grid include pattern mining and machine learning over complex events and integrated semantic information, distributed stream processing for low-latency response, cloud platforms for scalable operations, and privacy policies to mitigate information leakage in an information-rich environment. Such an informatics approach is being used in the DoE-sponsored Los Angeles Smart Grid Demonstration Project, and the resulting software architecture will lead to an agile and adaptive Los Angeles Smart Grid.

  14. Utility Theory for Evaluation of Optimal Process Condition of SAW: A Multi-Response Optimization Approach

    SciTech Connect

    Datta, Saurav; Biswas, Ajay; Bhaumik, Swapan; Majumdar, Gautam

    2011-01-17

    A multi-objective optimization problem has been solved in order to estimate an optimal process environment, consisting of an optimal parametric combination, to achieve the desired quality indicators (related to bead geometry) of submerged arc welds of mild steel. The quality indicators selected in the study were bead height, penetration depth, bead width and percentage dilution. The Taguchi method, followed by the utility concept, has been adopted to evaluate the optimal process condition achieving the multiple objective requirements of the desired weld quality.
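
    As a rough illustration of the utility concept used here, each response can be mapped onto a logarithmic preference scale calibrated between a just-acceptable value (utility 0) and a best value (utility 9), and the weighted sum of these utilities gives a single index to compare experimental runs. The sketch below assumes equal weights and made-up response data and bounds; it is not the paper's actual welding data.

      # utility_saw.py -- hedged sketch of the utility-concept aggregation
      import numpy as np

      # Example responses per run: [bead_height, penetration, width, dilution]
      runs = np.array([[2.1, 4.5, 9.0, 35.0],
                       [2.4, 5.0, 8.5, 30.0],
                       [1.9, 4.2, 9.8, 40.0]])
      # just-acceptable and best values per response (larger-is-better here;
      # smaller-is-better responses would be inverted first in practice)
      x_accept = np.array([1.5, 3.5, 7.5, 25.0])   # utility = 0 here
      x_best   = np.array([3.0, 6.0, 11.0, 45.0])  # utility = 9 here
      w = np.array([0.25, 0.25, 0.25, 0.25])       # equal preference weights

      # Logarithmic preference scale U = A + B*log(x), calibrated per response
      B = 9.0 / (np.log(x_best) - np.log(x_accept))
      A = -B * np.log(x_accept)
      U = A + B * np.log(runs)          # per-response utilities, shape (runs, 4)
      overall = U @ w                   # single utility index per run
      print(overall, "-> best run:", overall.argmax())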

  15. Experiments with ROPAR, an approach for probabilistic analysis of the optimal solutions' robustness

    NASA Astrophysics Data System (ADS)

    Marquez, Oscar; Solomatine, Dimitri

    2016-04-01

    Robust optimization is defined as the search for solutions and performance results which remain reasonably unchanged when exposed to uncertain conditions such as natural variability in input variables, parameter drifts during operation time, model sensitivities and others [1]. In the present study we follow the approach named ROPAR (multi-objective robust optimization allowing for explicit analysis of robustness; see the online publication [2]). Its main idea is: (a) sampling the vectors of uncertain factors; (b) solving the MOO problem for each of them, obtaining multiple Pareto sets; (c) analysing the statistical properties (distributions) of the subsets of these Pareto sets corresponding to different conditions (e.g. based on constraints formulated for the objective function values or other system variables); and (d) selecting the robust solutions. The paper presents the results of experiments with two case studies: 1) the benchmark function ZDT1 (with an uncertain factor), often used in algorithm comparisons, and 2) a problem of drainage network rehabilitation that uses the SWMM hydrodynamic model (the rainfall is assumed to be an uncertain factor). This study is partly supported by the FP7 European Project WeSenseIt Citizen Water Observatory (http://wesenseit.eu/) and by CONACYT (Mexico's National Council of Science and Technology), which supports the PhD study of the first author. References [1] H.-G. Beyer and B. Sendhoff. "Robust optimization - A comprehensive survey." Comput. Methods Appl. Mech. Engrg., 2007: 3190-3218. [2] D.P. Solomatine (2012). An approach to multi-objective robust optimization allowing for explicit analysis of robustness (ROPAR). UNESCO-IHE. Online publication. Web: https://www.unesco-ihe.org/sites/default/files/solomatine-ropar.pdf
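
    A minimal sketch of the ROPAR loop (steps a-d) on the ZDT1 benchmark is given below. It substitutes a crude non-dominated filter over random designs for a proper multi-objective optimizer, and the form of the uncertain factor, the sample sizes, and the constraint band are illustrative assumptions.

      # ropar_sketch.py -- illustrative only; a real study would run a proper
      # MOO solver (e.g., NSGA-II) for each sampled uncertain factor
      import numpy as np

      rng = np.random.default_rng(1)

      def zdt1(x, u):
          f1 = x[:, 0]
          g = 1 + (9 + u) * x[:, 1:].mean(axis=1)   # 'u' = assumed uncertain factor
          return f1, g * (1 - np.sqrt(f1 / g))

      def pareto_mask(F):
          """Boolean mask of non-dominated points (minimization)."""
          keep = np.ones(len(F), bool)
          for i in range(len(F)):
              dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
              keep[i] = not dominates_i.any()
          return keep

      fronts = []
      for _ in range(50):                            # (a) sample uncertain factor
          u = rng.normal(0.0, 1.0)
          X = rng.random((2000, 5))                  # (b) crude MOO: filter random designs
          f1, f2 = zdt1(X, u)
          F = np.column_stack([f1, f2])
          fronts.append(F[pareto_mask(F)])

      # (c) distribution of f2 across fronts at a constraint on f1 (f1 = 0.5)
      band = [np.interp(0.5, np.sort(f[:, 0]), f[np.argsort(f[:, 0]), 1])
              for f in fronts]
      print("f2 at f1=0.5: median %.3f, 90%% spread %.3f"
            % (np.median(band), np.percentile(band, 95) - np.percentile(band, 5)))
      # (d) robust choice: prefer solutions whose f2 distribution is tight and low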

  16. Progress towards a unified approach to entanglement distribution

    NASA Astrophysics Data System (ADS)

    Streltsov, Alexander; Augusiak, Remigiusz; Demianowicz, Maciej; Lewenstein, Maciej

    2015-07-01

    Entanglement distribution is key to the success of secure communication schemes based on quantum mechanics, and there is a strong need for an ultimate architecture able to overcome the limitations of recent proposals such as those based on entanglement percolation or quantum repeaters. In this work we provide a broad theoretical background for the development of such technologies. In particular, we investigate the question of whether entanglement distribution is more efficient if some amount of entanglement—or some amount of correlations in general—is available prior to the transmission stage of the protocol. We show that in the presence of noise the answer to this question strongly depends on the type of noise and on the way the entanglement is quantified. On the one hand, subadditive entanglement measures do not show an advantage of preshared correlations if entanglement is established via combinations of single-qubit Pauli channels. On the other hand, based on the superadditivity conjecture of distillable entanglement, we provide evidence that this phenomenon occurs for this measure. These results strongly suggest that sending one half of some pure entangled state down a noisy channel is the best strategy for any subadditive entanglement quantifier, thus paving the way to a unified approach for entanglement distribution which does not depend on the nature of noise. We also provide general bounds for entanglement distribution involving quantum discord and present a counterintuitive phenomenon of the advantage of arbitrarily little entangled states over maximally entangled ones, which may also occur for quantum channels relevant in experiments.

  17. Peak-Seeking Optimization of Spanwise Lift Distribution for Wings in Formation Flight

    NASA Technical Reports Server (NTRS)

    Hanson, Curtis E.; Ryan, Jack

    2012-01-01

    A method is presented for the in-flight optimization of the lift distribution across the wing for minimum drag of an aircraft in formation flight. The usual elliptical distribution that is optimal for a given wing with a given span is no longer optimal for the trailing wing in a formation due to the asymmetric nature of the encountered flow field. Control surfaces along the trailing edge of the wing can be configured to obtain a non-elliptical profile that is more optimal in terms of minimum combined induced and profile drag. Due to the difficult-to-predict nature of formation flight aerodynamics, a Newton-Raphson peak-seeking controller is used to identify in real time the best aileron and flap deployment scheme for minimum total drag. Simulation results show that the peak-seeking controller correctly identifies an optimal trim configuration that provides additional drag savings above those achieved with conventional anti-symmetric aileron trim.
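
    The peak-seeking idea can be sketched as follows: the gradient and Hessian of the measured drag are estimated from noisy probes by finite differences, and a Newton-Raphson step moves the trim toward the drag minimum. The quadratic 'drag' surrogate, probe step, and noise level below are illustrative stand-ins for the flight measurements, not the paper's simulation.

      # peak_seek.py -- hedged sketch of a Newton-Raphson peak-seeking step
      import numpy as np

      def drag(theta, rng):
          """Noisy drag measurement vs. [aileron, flap] trim (hypothetical bowl)."""
          d = theta - np.array([2.0, -1.0])          # unknown optimal trim
          return d @ np.diag([1.0, 0.5]) @ d + 0.01 * rng.normal()

      rng = np.random.default_rng(0)
      theta, h = np.array([0.0, 0.0]), 0.2           # trim state, probe step size

      for it in range(5):
          g, H = np.zeros(2), np.zeros((2, 2))
          f0 = drag(theta, rng)
          # central finite differences from repeated 'measurements'
          for i in range(2):
              e = np.eye(2)[i] * h
              fp, fm = drag(theta + e, rng), drag(theta - e, rng)
              g[i] = (fp - fm) / (2 * h)
              H[i, i] = (fp - 2 * f0 + fm) / h**2
          fpp = drag(theta + np.array([h, h]), rng)
          fmm = drag(theta + np.array([-h, -h]), rng)
          fpm = drag(theta + np.array([h, -h]), rng)
          fmp = drag(theta + np.array([-h, h]), rng)
          H[0, 1] = H[1, 0] = (fpp - fpm - fmp + fmm) / (4 * h**2)
          theta = theta - np.linalg.solve(H, g)      # Newton step toward minimum drag
          print(it, theta)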

  18. Hybrid Metaheuristic Approach for Nonlocal Optimization of Molecular Systems.

    PubMed

    Dresselhaus, Thomas; Yang, Jack; Kumbhar, Sadhana; Waller, Mark P

    2013-04-01

    Accurate modeling of molecular systems requires a good knowledge of the structure; therefore, conformation searching/optimization is a routine necessity in computational chemistry. Here we present a hybrid metaheuristic optimization (HMO) algorithm, which combines ant colony optimization (ACO) and particle swarm optimization (PSO) for the optimization of molecular systems. The HMO implementation meta-optimizes the parameters of the ACO algorithm on-the-fly by the coupled PSO algorithm. The ACO parameters were optimized on a set of small difluorinated polyenes, where the parameters exhibited small variance as the size of the molecule increased. The HMO algorithm was validated by searching for the closed form of around 100 molecular balances. Compared to the gradient-based optimized molecular balance structures, the HMO algorithm was able to find low-energy conformations with an 87% success rate. Finally, the computational effort for generating low-energy conformation(s) for the phenylalanyl-glycyl-glycine tripeptide was approximately 60 CPU hours with the ACO algorithm, in comparison to 4 CPU years required for an exhaustive brute-force calculation. PMID:26583559

  19. A Study on Machine Maintenance Scheduling Using Distributed Cooperative Approach

    NASA Astrophysics Data System (ADS)

    Tsujibe, Akihisa; Kaihara, Toshiya; Fujii, Nobutada; Nonaka, Youichi

    In this study, we propose a distributed cooperative scheduling method and apply it to a machine maintenance scheduling problem in re-entrant production systems. Among distributed cooperative scheduling methods, we focus on the Lagrangian decomposition and coordination (LDC) method, and formulate the machine maintenance scheduling problem with LDC so as to improve computational efficiency by decomposing the original scheduling problem into several sub-problems. The solutions derived by solving the decomposed dual problem are converted into feasible solutions with a heuristic procedure. The proposed approach regards maintenance as a job with starting and finishing time constraints, so that the production and maintenance schedules can realize proper maintenance operations without losing productivity. We show the effectiveness of the proposed method in several simulation experiments.

  20. Industrial Power Distribution System Reliability Assessment utilizing Markov Approach

    NASA Astrophysics Data System (ADS)

    Guzman-Rivera, Oscar R.

    A method to perform power system reliability analysis using the Markov approach, reliability block diagrams and fault tree analysis is presented. The Markov method used is a state-space model based on state diagrams generated for a one-line industrial power distribution system. The reliability block diagram (RBD) method is a graphical and calculational tool used to model the distribution power system of an industrial facility. Quantitative reliability estimates in this work are based on CARMS and BlockSim simulations as well as state-space, RBD and failure mode analyses. The power system reliability was assessed, and the main contributors to it were identified, both qualitatively and quantitatively. Methods to improve reliability are also provided, including redundancies and protection systems that might be added to the system.
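
    The core state-space calculation reduces to solving the stationary equations of a continuous-time Markov chain. The sketch below does this for a single up/down component with made-up failure and repair rates; an industrial system like the one assessed here would have many more states, but the linear-algebra recipe is the same.

      # markov_availability.py -- sketch of the state-space (Markov) calculation
      import numpy as np

      lam, mu = 0.5, 52.0      # failures/yr, repairs/yr (hypothetical rates)
      # CTMC generator for states [up, down]; each row sums to zero
      Q = np.array([[-lam,  lam],
                    [  mu,  -mu]])
      # steady state: pi @ Q = 0 with sum(pi) = 1 -> replace one balance
      # equation by the normalization condition and solve the linear system
      A = np.vstack([Q.T[:-1], np.ones(2)])
      b = np.array([0.0, 1.0])
      pi = np.linalg.solve(A, b)
      print("availability = %.5f" % pi[0])   # analytic check: mu / (lam + mu)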

  1. A distributed approach to alarm management in chronic kidney disease.

    PubMed

    Estudillo-Valderrama, Miguel A; Talaminos-Barroso, Alejandro; Roa, Laura M; Naranjo-Hernández, David; Reina-Tosina, Javier; Aresté-Fosalba, Nuria; Milán-Martín, José A

    2014-11-01

    This paper presents a feasibility study of using a distributed approach for the management of alarms from chronic kidney disease patients. First, the key issues regarding alarm definition, classification, and prioritization according to available normalization efforts are analyzed for the main scenarios addressed in hemodialysis. Then, the middleware proposed for alarm management is described, which follows the publish/subscribe pattern and supports the Object Management Group data distribution service (DDS) standard. This standard facilitates the real-time monitoring of the exchanged information, as well as the scalability and interoperability of the developed solution with respect to the different stakeholders and resources involved. Finally, the results section shows, through the proof of concept studied, the viability of DDS for the activation of emergency protocols in terms of alarm prioritization and personalization, as well as some remarks about security, privacy, and real-time communication performance.

  2. Multiphysics simulation for the optimization of optical nanoantennas working as distributed bolometers in the infrared

    NASA Astrophysics Data System (ADS)

    Cuadrado, Alexander; Alda, Javier; González, Francisco Javier

    2013-01-01

    The electric currents induced by infrared radiation incident on optical antennas and resonant structures increase their temperature through Joule heating as well as change their electric resistance through the bolometric effect. As the thermo-electric mechanism exists throughout a distributed bolometer, a multiphysics approach was adopted to analyze thermal, electrical, and electromagnetic effects in a dipole antenna functioning as a resonant distributed bolometer. The finite element method was used for electromagnetic and thermal considerations. The results showed that bolometric performance depends on the choice of materials, the geometry of the resonant structure, the thickness of an insulating layer, and the characteristics of a bias circuit. Materials with large skin depth and small thermal conductivity are desirable. The thickness of the SiO insulating layer should not exceed 1.2 μm, and a current source for the bias circuit enhances performance. An optimized device designed with the previously stated design rules provides a response increase of two orders of magnitude compared to previously reported devices using the same dipole geometry.

  3. Optimization of Comminution Circuit Throughput and Product Size Distribution by Simulation and Control

    SciTech Connect

    S. K. Kawatra; T. C. Eisele; T. Weldum; D. Larsen; R. Mariani; J. Pletka

    2005-03-31

    The goal of this project is to improve energy efficiency of industrial crushing and grinding operations (comminution). Mathematical models of the comminution process are being used to study methods for optimizing the product size distribution, so that the amount of excessively fine material produced can be minimized. The goal is to save energy by reducing the amount of material that is ground below the target size, while simultaneously reducing the quantity of materials wasted as "slimes" that are too fine to be useful. This is being accomplished by mathematical modeling of the grinding circuits to determine how to correct this problem. The approaches taken included (1) Modeling of the circuit to determine process bottlenecks that restrict flow rates in one area while forcing other parts of the circuit to overgrind the material; (2) Modeling of hydrocyclones to determine the mechanisms responsible for retaining fine, high-density particles in the circuit until they are overground, and improving existing models to accurately account for this behavior; and (3) Evaluation of advanced technologies to improve comminution efficiency and produce sharper product size distributions with less overgrinding.

  4. OPTIMIZATION OF COMMINUTION CIRCUIT THROUGHPUT AND PRODUCT SIZE DISTRIBUTION BY SIMULATION AND CONTROL

    SciTech Connect

    T.C. Eisele; S.K. Kawatra; H.J. Walqui

    2004-10-01

    The goal of this project is to improve energy efficiency of industrial crushing and grinding operations (comminution). Mathematical models of the comminution process are being used to study methods for optimizing the product size distribution, so that the amount of excessively fine material produced can be minimized. The goal is to save energy by reducing the amount of material that is ground below the target size, while simultaneously reducing the quantity of materials wasted as "slimes" that are too fine to be useful. This is being accomplished by mathematical modeling of the grinding circuits to determine how to correct this problem. The approaches taken included (1) Modeling of the circuit to determine process bottlenecks that restrict flow rates in one area while forcing other parts of the circuit to overgrind the material; (2) Modeling of hydrocyclones to determine the mechanisms responsible for retaining fine, high-density particles in the circuit until they are overground, and improving existing models to accurately account for this behavior; and (3) Evaluation of advanced technologies to improve comminution efficiency and produce sharper product size distributions with less overgrinding.

  5. A Novel Paradigm for Computer-Aided Design: TRIZ-Based Hybridization of Topologically Optimized Density Distributions

    NASA Astrophysics Data System (ADS)

    Cardillo, A.; Cascini, G.; Frillici, F. S.; Rotini, F.

    In a recent project the authors have proposed the adoption of Optimization Systems [1] as a bridging element between Computer-Aided Innovation (CAI) and PLM to identify geometrical contradictions [2], a particular case of the TRIZ physical contradiction [3]. A further development of the research [4] has revealed that the solutions obtained from several topological optimizations can be considered as elementary customized modeling features for a specific design task. The topology overcoming the arising geometrical contradiction can be obtained through a manipulation of the density distributions constituting the conflicting pair. Already two strategies of density combination have been identified as capable to solve geometrical contradictions and several others are under extended testing. The paper illustrates the most recent results of the ongoing research mainly related to the extension of the algorithms from 2D to 3D design spaces. The whole approach is clarified by means of two detailed examples, where the proposed technique is compared with classical multi-goal optimization.

  6. Determination and optimization of spatial samples for distributed measurements.

    SciTech Connect

    Huo, Xiaoming; Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.

  7. Migration efficiency ratios and the optimal distribution of population.

    PubMed

    Gallaway, L; Vedder, R

    1985-01-01

    The authors present a theoretical description of the migration process and criticize the conventional interpretation of the migration efficiency ratio, which is defined as the ratio of the net number of moves of individuals between areas to the gross number of moves that take place. "The conventional interpretation of the migration efficiency ratio is that the closer it lies to zero the less efficient the migration process....However, [the authors] feel that this is a somewhat misleading conception of the notion of efficiency in migration in that it emphasizes the physical efficiency of the migration process rather than focusing on the contribution of migration to a socially efficient allocation of population. Thus, to redirect attention, [they] have chosen to judge migration efficiency on the basis of its contribution to producing an equilibrium population distribution." The focus is on internal migration.
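
    For reference, the ratio under discussion is straightforward to compute; the sketch below uses made-up flows between two areas to show how heavy two-way churn yields a ratio near zero even when some net redistribution occurs.

      # migration_efficiency.py -- the conventional ratio, on made-up flows
      def mer(moves_ab: int, moves_ba: int) -> float:
          """Migration efficiency ratio: net moves / gross moves between two areas."""
          net = abs(moves_ab - moves_ba)
          gross = moves_ab + moves_ba
          return net / gross if gross else 0.0

      # 9,000 moves A->B and 7,000 B->A: lots of churn, modest redistribution
      print(mer(9000, 7000))   # 2000 / 16000 = 0.125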

  8. Optimal Distribution and Utilization of Donated Human Breast Milk

    PubMed Central

    Simpson, Judith H.; McKerracher, Lorna; Cooper, Andrew; Barnett, Debbie; Gentles, Emma; Cairns, Lorraine; Gerasimidis, Konstantinos

    2016-01-01

    Background: The nutritional content of donated expressed breast milk (DEBM) is variable. Using DEBM to provide for the energy requirements of neonates is challenging. Objective: The authors hypothesized that a system of DEBM energy content categorization and distribution would improve energy intake from DEBM. Methods: We compared infants’ actual cumulative energy intake with projected energy intake, had they been fed using our proposed system. Eighty-five milk samples were ranked by energy content. The bottom, middle, and top tertiles were classified as red, amber, and green energy content categories, respectively. Data on 378 feeding days from 20 babies who received this milk were analyzed. Total daily intake of DEBM was calculated in mL/kg/day and similarly ranked. Infants received red energy content milk, with DEBM intake in the bottom daily volume intake tertile; amber energy content milk, with intake in the middle daily volume intake tertile; and green energy content milk when intake reached the top daily volume intake tertile. Results: Actual median cumulative energy intake from DEBM was 1612 (range, 15-11,182) kcal. Using DEBM with the minimum energy content from the 3 DEBM energy content categories, median projected cumulative intake was 1670 (range, 13-11,077) kcal, which was not statistically significant (P = .418). Statistical significance was achieved using DEBM with the median and maximum energy content from each energy content category, giving median projected cumulative intakes of 1859 kcal (P = .0006) and 2280 kcal (P = .0001), respectively. Conclusion: Cumulative energy intake from DEBM can be improved by categorizing and distributing milk according to energy content. PMID:27364932
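
    The categorization and matching rule described above can be sketched as follows; the energy contents and daily volumes are simulated stand-ins for the study's 85 samples and 378 feeding days, not the actual data.

      # debm_tertiles.py -- sketch of the red/amber/green matching rule
      import numpy as np

      rng = np.random.default_rng(3)
      milk_kcal = rng.normal(67, 8, 85)                 # kcal/100 mL, 85 donor samples
      e_lo, e_hi = np.quantile(milk_kcal, [1/3, 2/3])   # energy tertile cut points

      daily_vol = rng.normal(120, 40, 378).clip(10)     # mL/kg/day, 378 feeding days
      v_lo, v_hi = np.quantile(daily_vol, [1/3, 2/3])   # volume tertile cut points

      def tertile(x, lo, hi):
          # 0 = red (bottom), 1 = amber (middle), 2 = green (top)
          return 0 if x < lo else (1 if x < hi else 2)

      # matching rule from the study: a feeding day in volume tertile k is
      # served milk from energy tertile k (red->red, amber->amber, green->green)
      labels = ["red", "amber", "green"]
      for vol in daily_vol[:3]:
          k = tertile(vol, v_lo, v_hi)
          print(f"{vol:6.1f} mL/kg/day -> {labels[k]} milk")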

  9. An ant colony optimization heuristic for an integrated production and distribution scheduling problem

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju

    2014-04-01

    Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.

  10. An optimized web-based approach for collaborative stereoscopic medical visualization

    PubMed Central

    Kaspar, Mathias; Parsad, Nigel M; Silverstein, Jonathan C

    2013-01-01

    Objective Medical visualization tools have traditionally been constrained to tethered imaging workstations or proprietary client viewers, typically part of hospital radiology systems. To improve accessibility to real-time, remote, interactive, stereoscopic visualization and to enable collaboration among multiple viewing locations, we developed an open source approach requiring only a standard web browser with no added client-side software. Materials and Methods Our collaborative, web-based, stereoscopic, visualization system, CoWebViz, has been used successfully for the past 2 years at the University of Chicago to teach immersive virtual anatomy classes. It is a server application that streams server-side visualization applications to client front-ends, comprised solely of a standard web browser with no added software. Results We describe optimization considerations, usability, and performance results, which make CoWebViz practical for broad clinical use. We clarify technical advances including: enhanced threaded architecture, optimized visualization distribution algorithms, a wide range of supported stereoscopic presentation technologies, and the salient theoretical and empirical network parameters that affect our web-based visualization approach. Discussion The implementations demonstrate usability and performance benefits of a simple web-based approach for complex clinical visualization scenarios. Using this approach overcomes technical challenges that require third-party web browser plug-ins, resulting in the most lightweight client. Conclusions Compared to special software and hardware deployments, unmodified web browsers enhance remote user accessibility to interactive medical visualization. Whereas local hardware and software deployments may provide better interactivity than remote applications, our implementation demonstrates that a simplified, stable, client approach using standard web browsers is sufficient for high quality three-dimensional visualization.

  11. Reusable Component Model Development Approach for Parallel and Distributed Simulation

    PubMed Central

    Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng

    2014-01-01

    Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diversiform interfaces, are tightly coupled, and are closely bound to simulation platforms. As a result, they are difficult to reuse across different simulation platforms and applications. To address this problem, this paper first proposes a reusable component model framework. Based on this framework, our reusable model development approach is then elaborated, which contains two phases: (1) domain experts create simulation computational modules, observing three principles to achieve their independence; (2) the model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that a model developed using our approach has good reusability and is easy to use in different simulation platforms and applications. PMID:24729751

  12. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

    Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have rapidly developed as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications which compare their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to multi-layer perceptron neural network training for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization, are tested against the Levenberg-Marquardt algorithm - probably the most efficient in terms of speed and success rate among gradient-based methods. The Annapolis River catchment was selected as the study area due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and the presence of river ice - conditions which make flow forecasting more troublesome. The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural network training turns out to be superior to that of the other methods tested.
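
    To make the Differential Evolution side concrete, the sketch below trains a tiny multi-layer perceptron with the classic DE/rand/1/bin scheme on synthetic data. The network size, control parameters (F, CR), and data are illustrative assumptions, not the study's rainfall-runoff setup.

      # de_train_mlp.py -- DE/rand/1/bin training a small MLP (NumPy only)
      import numpy as np

      rng = np.random.default_rng(7)
      X = rng.normal(size=(200, 3))                  # 3 synthetic inputs
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]  # synthetic target

      H = 6                                          # hidden units
      dim = 3 * H + H + H + 1                        # W1, b1, W2, b2 flattened

      def mse(w):
          W1 = w[:3*H].reshape(3, H); b1 = w[3*H:4*H]
          W2 = w[4*H:5*H]; b2 = w[5*H]
          pred = np.tanh(X @ W1 + b1) @ W2 + b2
          return np.mean((pred - y) ** 2)

      NP, F, CR = 40, 0.6, 0.9                       # population, scale, crossover
      pop = rng.normal(scale=0.5, size=(NP, dim))
      cost = np.array([mse(p) for p in pop])

      for gen in range(300):
          for i in range(NP):
              a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
              mutant = pop[a] + F * (pop[b] - pop[c])           # DE/rand/1
              cross = rng.random(dim) < CR
              cross[rng.integers(dim)] = True                   # binomial crossover
              trial = np.where(cross, mutant, pop[i])
              f = mse(trial)
              if f <= cost[i]:                                  # greedy selection
                  pop[i], cost[i] = trial, f
      print("best MSE:", cost.min())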

  13. Optimal blood sampling time windows for parameter estimation using a population approach: design of a phase II clinical trial.

    PubMed

    Chenel, Marylore; Ogungbenro, Kayode; Duval, Vincent; Laveille, Christian; Jochemsen, Roeline; Aarons, Leon

    2005-12-01

    The objective of this paper is to determine optimal blood sampling time windows for the estimation of pharmacokinetic (PK) parameters by a population approach within the clinical constraints. A population PK model was developed to describe a reference phase II PK dataset. Using this model and the parameter estimates, D-optimal sampling times were determined by optimising the determinant of the population Fisher information matrix (PFIM) using PFIM 1.2 and the modified Fedorov exchange algorithm. Optimal sampling time windows were then determined by allowing the D-optimal windows design to result in a specified level of efficiency when compared to the fixed-times D-optimal design. The best results were obtained when K(a) and the IIV on K(a) were fixed. Windows were determined using this approach assuming a 90% level of efficiency and uniform sample distribution. Four optimal sampling time windows were determined as follows: at trough, between 22 h and the new drug administration; between 2 and 4 h after dose for all patients; and, for one third of the patients only, two sampling time windows between 4 and 10 h after dose, equal to [4 h-5 h 05] and [9 h 10-10 h]. This work permitted the determination of an optimal design with suitable sampling time windows, which was then evaluated by simulations. The sampling time windows will be used to define the sampling schedule in a prospective phase II study.

  14. TH-C-BRD-10: An Evaluation of Three Robust Optimization Approaches in IMPT Treatment Planning

    SciTech Connect

    Cao, W; Randeniya, S; Mohan, R; Zaghian, M; Kardar, L; Lim, G; Liu, W

    2014-06-15

    Purpose: Various robust optimization approaches have been proposed to ensure the robustness of intensity modulated proton therapy (IMPT) in the face of uncertainty. In this study, we aim to investigate the performance of three classes of robust optimization approaches regarding plan optimality and robustness. Methods: Three robust optimization models were implemented in our in-house IMPT treatment planning system: 1) L2 optimization based on worst-case dose; 2) L2 optimization based on a minmax objective; and 3) L1 optimization with constraints on all uncertain doses. The first model was solved by an L-BFGS algorithm; the second by a gradient projection algorithm; and the third by an interior point method. One nominal scenario and eight maximum uncertainty scenarios (proton range over- and undershoot of 3.5%, and setup errors of 5 mm in the x, y, z directions) were considered in optimization. Dosimetric measurements of optimized plans from the three approaches were compared for four prostate cancer patients retrospectively selected at our institution. Results: For the nominal scenario, all three optimization approaches yielded the same coverage of the clinical treatment volume (CTV), and the L2 worst-case approach demonstrated better rectum and bladder sparing than the others. For the uncertainty scenarios, the L1 approach resulted in the most robust CTV coverage against uncertainties, while the plans from the L2 worst-case approach were less robust than the others. In addition, we observed that the number of scanning spots with positive MUs from the L2 approaches was approximately twice that from the L1 approach. This indicates that L1 optimization may lead to more efficient IMPT delivery. Conclusion: Our study indicated that the L1 approach best conserved the target coverage in the face of uncertainty, but its resulting OAR sparing was slightly inferior to that of the other two approaches.

  15. Flower pollination algorithm: A novel approach for multiobjective optimization

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She; Karamanoglu, Mehmet; He, Xingshi

    2014-09-01

    Multiobjective design optimization problems require multiobjective optimization techniques to solve, and it is often very challenging to obtain high-quality Pareto fronts accurately. In this article, the recently developed flower pollination algorithm (FPA) is extended to solve multiobjective optimization problems. The proposed method is used to solve a set of multiobjective test functions and two bi-objective design benchmarks, and a comparison of the proposed algorithm with other algorithms has been made, which shows that the FPA is efficient with a good convergence rate. Finally, the importance for further parametric studies and theoretical analysis is highlighted and discussed.
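
    A sketch of the single-objective core of the FPA is given below, with Lévy steps drawn via Mantegna's method; the multiobjective extension studied in the article is omitted, and the population size, switch probability, and test function are illustrative choices rather than the article's settings.

      # fpa_sketch.py -- single-objective flower pollination algorithm core
      import numpy as np
      from math import gamma, sin, pi

      def levy(beta, size, rng):
          # Mantegna's algorithm for Levy-stable step lengths
          sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

      def fpa(f, dim, n=25, p=0.8, iters=500, lo=-5.0, hi=5.0, seed=0):
          rng = np.random.default_rng(seed)
          X = rng.uniform(lo, hi, (n, dim))
          fit = np.array([f(x) for x in X])
          b = fit.argmin(); best, best_f = X[b].copy(), fit[b]
          for _ in range(iters):
              for i in range(n):
                  if rng.random() < p:   # global pollination: Levy flight toward best
                      cand = X[i] + levy(1.5, dim, rng) * (best - X[i])
                  else:                  # local pollination: mix two random flowers
                      j, k = rng.choice(n, 2, replace=False)
                      cand = X[i] + rng.random() * (X[j] - X[k])
                  cand = np.clip(cand, lo, hi)
                  fc = f(cand)
                  if fc < fit[i]:
                      X[i], fit[i] = cand, fc
                      if fc < best_f:
                          best, best_f = cand.copy(), fc
          return best, best_f

      print(fpa(lambda x: float(np.sum(x ** 2)), dim=5))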

  16. A uniform approach for programming distributed heterogeneous computing systems

    PubMed Central

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-01-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015

  17. Fast engineering optimization: A novel highly effective control parameterization approach for industrial dynamic processes.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao

    2015-09-01

    Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application to the engineering optimization of industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is proposed to improve the optimization efficiency for industrial dynamic processes; it employs the costate gradient formula and a fast approximate scheme to solve the differential equations in the dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustrations. The research results show that the proposed fast approach achieves fine performance: at least 90% of the computation time can be saved in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes.
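
    The underlying CVP idea, independent of the paper's acceleration scheme, is to hold the control piecewise-constant over a time grid so the dynamic optimization becomes a finite-dimensional NLP. The sketch below applies this to a toy integrator problem with a generic SciPy solver; the dynamics, cost, and segment count are illustrative assumptions, not the paper's benchmarks.

      # cvp_sketch.py -- control vector parameterization on a toy problem
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import minimize

      N, T = 10, 1.0                       # control segments, horizon
      t_grid = np.linspace(0, T, N + 1)

      def cost(u_params):
          x, J = 1.0, 0.0                  # state x(0) = 1, running-cost accumulator
          for k in range(N):
              u = u_params[k]              # control held constant on segment k
              sol = solve_ivp(lambda t, s: [u, s[0]**2 + u**2],   # [x', J']
                              (t_grid[k], t_grid[k+1]), [x, J], rtol=1e-8)
              x, J = sol.y[0, -1], sol.y[1, -1]
          return J                          # J = integral of (x^2 + u^2) over [0, T]

      res = minimize(cost, np.zeros(N), method="SLSQP")
      print(res.fun, res.x)                 # near-optimal piecewise-constant control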

  18. A Simultaneous Approach to Optimizing Treatment Assignments with Mastery Scores. Research Report 89-5.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    An approach to simultaneous optimization of assignments of subjects to treatments followed by an end-of-mastery test is presented using the framework of Bayesian decision theory. Focus is on demonstrating how rules for the simultaneous optimization of sequences of decisions can be found. The main advantages of the simultaneous approach, compared…

  19. Parameter identification of a distributed runoff model by the optimization software Colleo

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Anai, Hirokazu; Iwami, Yoichi

    2015-04-01

    The introduction of Colleo (Collection of Optimization software) is presented and case studies of parameter identification for a distributed runoff model are illustrated. In order to calculate river discharge accurately, distributed runoff models have become widely used to take into account various land usages, soil types and rainfall distributions. A feasibility study of parameter optimization should be done in two steps. The first step is to survey which optimization algorithms are suitable for the problems of interest. The second step is to investigate the performance of the specific optimization algorithm. Most previous studies seem to focus on the second step. This study focuses on the first step and complements the previous studies. Many optimization algorithms have been proposed in the computational science field, and a large amount of optimization software has been developed and opened to the public with practically applicable performance and quality. It is well known that it is important to use algorithms suitable for the problem at hand to obtain good optimization results efficiently. In order to compare algorithms readily, optimization software is needed with which the performance of many algorithms can be compared and which can be connected to various simulation software. Colleo was developed to satisfy such needs. Colleo provides a unified user interface to several optimization packages, such as pyOpt, NLopt, inspyred and R, and helps investigate the suitability of optimization algorithms. 74 different implementations of optimization algorithms (Nelder-Mead, Particle Swarm Optimization and Genetic Algorithm) are available with Colleo. The effectiveness of Colleo was demonstrated with flood events of the Gokase River basin in Japan (1,820 km2). From 2002 to 2010, there were 15 flood events in which the discharge exceeded 1000 m3/s. The discharge was calculated with the PWRI distributed hydrological model developed by ICHARM.

  20. A modal approach to modeling spatially distributed vibration energy dissipation.

    SciTech Connect

    Segalman, Daniel Joseph

    2010-08-01

    The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.

  1. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    PubMed

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, namely the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier, in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market, for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group, as compared with other groups in the DSE market, for both distributions in the time-invariant situation. PMID:22629352

  2. Stochastic Frontier Model Approach for Measuring Stock Market Efficiency with Different Distributions

    PubMed Central

    Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul

    2012-01-01

    The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, namely the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier, in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market, for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group, as compared with other groups in the DSE market, for both distributions in the time-invariant situation. PMID:22629352

  3. A functional approach to geometry optimization of complex systems

    NASA Astrophysics Data System (ADS)

    Maslen, P. E.

    A quadratically convergent procedure is presented for the geometry optimization of complex systems, such as biomolecules and molecular complexes. The costly evaluation of the exact Hessian is avoided by expanding the density functional to second order in both nuclear and electronic variables, and then searching for the minimum of the quadratic functional. The dependence of the functional on the choice of nuclear coordinate system is described, and illustrative geometry optimizations using Cartesian and internal coordinates are presented for Taxol™.

  4. An inverse dynamics approach to trajectory optimization for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    An inverse dynamics approach for trajectory optimization is proposed. This technique can be useful in many difficult trajectory optimization and control problems. The application of the approach is exemplified by ascent trajectory optimization for an aerospace plane. Both minimum-fuel and minimax types of performance indices are considered. When rocket augmentation is available for ascent, it is shown that accurate orbital insertion can be achieved through the inverse control of the rocket in the presence of disturbances.

  5. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  6. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  7. Using an architectural approach to integrate heterogeneous, distributed software components

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Purtilo, James M.

    1995-01-01

    Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.

  8. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  9. RECONSTRUCTING REDSHIFT DISTRIBUTIONS WITH CROSS-CORRELATIONS: TESTS AND AN OPTIMIZED RECIPE

    SciTech Connect

    Matthews, Daniel J.; Newman, Jeffrey A. E-mail: janewman@pitt.ed

    2010-09-20

    Many of the cosmological tests to be performed by planned dark energy experiments will require extremely well-characterized photometric redshift measurements. Current estimates for cosmic shear are that the true mean redshift of the objects in each photo-z bin must be known to better than 0.002(1 + z), and the width of the bin must be known to ~0.003(1 + z) if errors in cosmological measurements are not to be degraded significantly. A conventional approach is to calibrate these photometric redshifts with large sets of spectroscopic redshifts. However, at the depths probed by Stage III surveys (such as DES), let alone Stage IV (LSST, JDEM, and Euclid), existing large redshift samples have all been highly (25%-60%) incomplete, with a strong dependence of success rate on both redshift and galaxy properties. A powerful alternative approach is to exploit the clustering of galaxies to perform photometric redshift calibrations. Measuring the two-point angular cross-correlation between objects in some photometric redshift bin and objects with known spectroscopic redshift, as a function of the spectroscopic z, allows the true redshift distribution of a photometric sample to be reconstructed in detail, even if it includes objects too faint for spectroscopy or if spectroscopic samples are highly incomplete. We test this technique using mock DEEP2 Galaxy Redshift survey light cones constructed from the Millennium Simulation semi-analytic galaxy catalogs. From this realistic test, which incorporates the effects of galaxy bias evolution and cosmic variance, we find that the true redshift distribution of a photometric sample can, in fact, be determined accurately with cross-correlation techniques. We also compare the empirical error in the reconstruction of redshift distributions to previous analytic predictions, finding that additional components must be included in error budgets to match the simulation results. This extra error contribution is small for surveys that

  10. The adaptive approach for storage assignment by mining data of warehouse management system for distribution centres

    NASA Astrophysics Data System (ADS)

    Ming-Huang Chiang, David; Lin, Chia-Ping; Chen, Mu-Chen

    2011-05-01

    Among distribution centre operations, order picking has been reported to be the most labour-intensive activity. Sophisticated storage assignment policies adopted to reduce the travel distance of order picking have been explored in the literature. Unfortunately, previous research has been devoted to locating entire products from scratch. Instead, this study intends to propose an adaptive approach, a Data Mining-based Storage Assignment approach (DMSA), to find the optimal storage assignment for newly delivered products that need to be put away when there is vacant shelf space in a distribution centre. In the DMSA, a new association index (AIX) is developed to evaluate the fitness between the put away products and the unassigned storage locations by applying association rule mining. With AIX, the storage location assignment problem (SLAP) can be formulated and solved as a binary integer programming. To evaluate the performance of DMSA, a real-world order database of a distribution centre is obtained and used to compare the results from DMSA with a random assignment approach. It turns out that DMSA outperforms random assignment as the number of put away products and the proportion of put away products with high turnover rates increase.
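
    The two-stage idea, scoring put-away products against vacant locations with an association-style index and then solving the resulting assignment exactly, can be sketched as below. The association-index formula used here is a stand-in (the paper's AIX is derived from association rule mining), and all data are made up.

      # dmsa_sketch.py -- hedged sketch of score-then-assign storage allocation
      import numpy as np
      from scipy.optimize import linear_sum_assignment

      rng = np.random.default_rng(4)
      # co-pick support between 3 new products and the items already stored
      # near each of 4 vacant locations, mined from historical orders (made up)
      support = rng.random((3, 4))          # higher = more often picked together
      distance = rng.uniform(5, 50, 4)      # travel distance of each vacant slot

      # stand-in association index: co-pick affinity discounted by travel distance
      aix = support / distance
      rows, cols = linear_sum_assignment(-aix)   # maximize total AIX
      for p, s in zip(rows, cols):
          print(f"product {p} -> location {s} (AIX={aix[p, s]:.3f})")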

  11. Peak-Seeking Optimization of Spanwise Lift Distribution for Wings in Formation Flight

    NASA Technical Reports Server (NTRS)

    Hanson, Curtis E.; Ryan, Jack

    2012-01-01

    A method is presented for the optimization of the lift distribution across the wing of an aircraft in formation flight. The usual elliptical distribution is no longer optimal for the trailing wing in the formation due to the asymmetric nature of the encountered flow field. Control surfaces along the trailing edge of the wing can be configured to obtain a non-elliptical profile that is more optimal in terms of minimum drag. Due to the difficult-to-predict nature of formation flight aerodynamics, a Newton-Raphson peak-seeking controller is used to identify in real time the best aileron and flap deployment scheme for minimum total drag. Simulation results show that the peak-seeking controller correctly identifies an optimal trim configuration that provides additional drag savings above those achieved with conventional anti-symmetric aileron trim.

  12. Optimal investment and scheduling of distributed energy resources with uncertainty in electric vehicles driving schedules

    SciTech Connect

    Cardoso, Goncalo; Stadler, Michael; Bozchalui, Mohammed C.; Sharma, Ratnesh; Marnay, Chris; Barbosa-Povoa, Ana; Ferrao, Paulo

    2013-12-06

    The large scale penetration of electric vehicles (EVs) will introduce technical challenges to the distribution grid, but also carries the potential for vehicle-to-grid services. Namely, if available in large enough numbers, EVs can be used as a distributed energy resource (DER) and their presence can influence optimal DER investment and scheduling decisions in microgrids. In this work, a novel EV fleet aggregator model is introduced in a stochastic formulation of DER-CAM [1], an optimization tool used to address DER investment and scheduling problems. This is used to assess the impact of EV interconnections on optimal DER solutions while considering uncertainty in EV driving schedules. Optimization results indicate that EVs can have a significant impact on DER investments, particularly when short payback periods are considered. Furthermore, results suggest that uncertainty in driving schedules has little impact on total energy costs, a finding corroborated by the results obtained using the stochastic formulation of the problem.

  13. A robust approach to chance constrained optimal power flow with renewable generation

    DOE PAGES

    Lubin, Miles; Dvorkin, Yury; Backhaus, Scott N.

    2015-11-20

    Optimal Power Flow (OPF) dispatches controllable generation at minimum cost subject to operational constraints on generation and transmission assets. The uncertainty and variability of intermittent renewable generation is challenging current deterministic OPF approaches. Recent formulations of OPF use chance constraints to limit the risk from renewable generation uncertainty; however, these new approaches typically assume that the probability distributions which characterize the uncertainty and variability are known exactly. We formulate a robust chance constrained (RCC) OPF that accounts for uncertainty in the parameters of these probability distributions by allowing them to lie within an uncertainty set. The RCC OPF is solved using a cutting-plane algorithm that scales to large power systems. We demonstrate the RCC OPF on a modified model of the Bonneville Power Administration network, which includes 2209 buses and 176 controllable generators. In conclusion, deterministic, chance constrained (CC), and RCC OPF formulations are compared using several metrics including cost of generation, area control error, ramping of controllable generators, and occurrence of transmission line overloads, as well as the respective computational performance.
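
    As a rough illustration of the difference between the CC and RCC formulations, the sketch below tightens a single line-flow limit: under a Gaussian assumption the chance constraint becomes a deterministic margin, and a crude "robust" version takes the worst case over candidate distribution parameters. This is a toy stand-in for the paper's cutting-plane algorithm; all numbers are invented.

    ```python
    # Chance-constrained margin on one line flow, plus a worst-case version
    # over an uncertainty set of forecast-error parameters (illustrative only).
    from statistics import NormalDist

    z = NormalDist().inv_cdf(0.95)   # quantile for a 5% violation risk
    limit = 100.0                    # MW line limit

    def cc_margin(mu, sigma):
        """Scheduled flow must stay below this value so that
        P(flow + error <= limit) >= 0.95 under N(mu, sigma)."""
        return limit - mu - z * sigma

    # RCC flavour: (mu, sigma) known only to lie in a finite candidate set,
    # so enforce the most pessimistic margin.
    candidates = [(0.0, 5.0), (2.0, 6.0), (-1.0, 8.0)]
    print(min(cc_margin(m, s) for m, s in candidates))
    ```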

  15. Optimal cloning of qubits given by an arbitrary axisymmetric distribution on the Bloch sphere

    SciTech Connect

    Bartkiewicz, Karol; Miranowicz, Adam

    2010-10-15

    We find an optimal quantum cloning machine, which clones qubits of an arbitrary symmetrical distribution around the Bloch vector with the highest fidelity. The process is referred to as phase-independent cloning, in contrast to the standard phase-covariant cloning, for which an input qubit state is a priori better known. We assume that the information about the input state is encoded in an arbitrary axisymmetric distribution (phase function) on the Bloch sphere of the cloned qubits. We find analytical expressions describing the optimal cloning transformation and fidelity of the clones. As an illustration, we analyze the cloning of a qubit state described by the von Mises-Fisher and Brosseau distributions. Moreover, we show that the optimal phase-independent cloning machine can be implemented by modifying the mirror phase-covariant cloning machine, for which quantum circuits are known.

  16. Innovative Meta-Heuristic Approach Application for Parameter Estimation of Probability Distribution Model

    NASA Astrophysics Data System (ADS)

    Lee, T. S.; Yoon, S.; Jeong, C.

    2012-12-01

    The primary purpose of frequency analysis in hydrology is to estimate the magnitude of an event with a given frequency of occurrence. The precision of frequency analysis depends on the selection of an appropriate probability distribution model (PDM) and parameter estimation technique. A number of PDMs have been developed to describe the probability distribution of hydrological variables. For each of the developed PDMs, estimated parameters are provided based on alternative estimation techniques, such as the method of moments (MOM), probability weighted moments (PWM), linear functions of ranked observations (L-moments), and maximum likelihood (ML). Generally, the results using ML are more reliable than those of the other methods. However, the ML technique is more laborious because an iterative numerical solution, such as the Newton-Raphson method, must be used for the parameter estimation of PDMs. Meanwhile, meta-heuristic approaches have been developed to solve various engineering optimization problems (e.g., linear, stochastic, dynamic, and nonlinear). These approaches include genetic algorithms, ant colony optimization, simulated annealing, tabu search, and evolutionary computation methods. Meta-heuristic approaches use a stochastic random search instead of a gradient search, so intricate derivative information is unnecessary. They have therefore been shown to be a useful strategy for solving optimization problems in hydrology, and a number of studies have applied them to the parameter estimation of PDMs. Meta-heuristic approaches offer reliable solutions but use more computation time than derivative-based methods. Therefore, the purpose of this study is to enhance the meta-heuristic approach for the parameter estimation of PDMs by using a recently developed algorithm known as harmony search (HS). The performance of the HS is compared to the
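
    To make the HS idea concrete, here is a bare-bones harmony search fitting a two-parameter Gumbel PDM by maximum likelihood; the data, bounds, and HS parameters (harmony memory size, HMCR, PAR, bandwidth) are illustrative and not taken from the study.

    ```python
    # Harmony-search sketch for ML estimation of a Gumbel distribution
    # (location mu, scale beta); all numbers are illustrative.
    import math, random

    data = [23.1, 30.5, 27.2, 35.8, 25.0, 41.3, 29.9, 33.4]

    def nll(mu, beta):   # Gumbel negative log-likelihood
        if beta <= 0:
            return float("inf")
        z = [(x - mu) / beta for x in data]
        return len(data) * math.log(beta) + sum(v + math.exp(-v) for v in z)

    random.seed(1)
    HMS, HMCR, PAR, BW, ITERS = 10, 0.9, 0.3, 0.5, 2000
    bounds = [(10.0, 50.0), (0.1, 20.0)]
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(HMS)]

    for _ in range(ITERS):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if random.random() < HMCR:              # draw from harmony memory
                v = random.choice(memory)[j]
                if random.random() < PAR:           # occasional pitch adjustment
                    v = min(hi, max(lo, v + random.uniform(-BW, BW)))
            else:                                   # otherwise improvise
                v = random.uniform(lo, hi)
            new.append(v)
        worst = max(range(HMS), key=lambda i: nll(*memory[i]))
        if nll(*new) < nll(*memory[worst]):
            memory[worst] = new                     # replace the worst harmony

    best = min(memory, key=lambda m: nll(*m))
    print("mu = %.2f, beta = %.2f" % tuple(best))
    ```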

  17. Distributed and/or grid-oriented approach to BTeV data analysis

    SciTech Connect

    Joel N. Butler

    2002-12-23

    The BTeV collaboration will record approximately 2 petabytes of raw data per year. It plans to analyze this data using the distributed resources of the collaboration as well as dedicated resources, primarily residing in the very large BTeV trigger farm, and resources accessible through the developing world-wide data grid. The data analysis system is being designed from the very start with this approach in mind. In particular, we plan a fully disk-based data storage system with multiple copies of the data distributed across the collaboration to provide redundancy and to optimize access. We will also position ourselves to take maximum advantage of shared systems, as well as dedicated systems, at our collaborating institutions.

  18. IPIP: A new approach to inverse planning for HDR brachytherapy by directly optimizing dosimetric indices

    SciTech Connect

    Siauw, Timmy; Cunha, Adam; Atamtuerk, Alper; Hsu, I-Chow; Pouliot, Jean; Goldberg, Ken

    2011-07-15

    Purpose: Many planning methods for high dose rate (HDR) brachytherapy require an iterative approach. A set of computational parameters is hypothesized that will give a dose plan that meets dosimetric criteria. A dose plan is computed using these parameters, and if any dosimetric criteria are not met, the process is iterated until a suitable dose plan is found. In this way, the dose distribution is controlled by abstract parameters. The purpose of this study is to develop a new approach for HDR brachytherapy by directly optimizing the dose distribution based on dosimetric criteria. Methods: The authors developed inverse planning by integer program (IPIP), an optimization model for computing HDR brachytherapy dose plans, and a fast heuristic for it. They used their heuristic to compute dose plans for 20 anonymized prostate cancer image data sets from patients previously treated at their clinic. Dosimetry was evaluated and compared to dosimetric criteria. Results: Dose plans computed from IPIP satisfied all given dosimetric criteria for the target and healthy tissue after a single iteration. The average target coverage was 95%. The average computation time for IPIP was 30.1 s on an Intel Core 2 Duo 1.67 GHz processor with 3 GiB of RAM. Conclusions: IPIP is an HDR brachytherapy planning system that directly incorporates dosimetric criteria. The authors have demonstrated that IPIP has clinically acceptable performance for the prostate cases and dosimetric criteria used in this study, in both dosimetry and runtime. Further study is required to determine whether IPIP performs well for a more general group of patients and dosimetric criteria, including other cancer sites such as GYN.

  19. An inverse method for computation of structural stiffness distributions of aeroelastically optimized wings

    NASA Astrophysics Data System (ADS)

    Schuster, David M.

    1993-04-01

    An inverse method has been developed to compute the structural stiffness properties of wings given a specified wing loading and aeroelastic twist distribution. The method directly solves for the bending and torsional stiffness distributions of the wing using a modal representation of these properties. An aeroelastic design problem involving the use of a computational aerodynamics method to optimize the aeroelastic twist distribution of a fighter wing operating at maneuver flight conditions is used to demonstrate the application of the method. This exercise verifies the ability of the inverse scheme to accurately compute the structural stiffness distribution required to generate a specific aeroelastic twist under a specified aeroelastic load.

  20. A majorization-minimization approach to design of power distribution networks

    SciTech Connect

    Johnson, Jason K; Chertkov, Michael

    2010-01-01

    We consider optimization approaches to the design of cost-effective electrical networks for power distribution. This involves a trade-off between minimizing the power loss due to resistive heating of the lines and minimizing the construction cost (modeled by a linear cost in the number of lines plus a linear cost on the conductance of each line). We begin with a convex optimization method based on the paper 'Minimizing Effective Resistance of a Graph' [Ghosh, Boyd & Saberi]. However, this does not address the alternating current (AC) realm or the combinatorial aspect of adding/removing lines of the network. Hence, we consider a non-convex continuation method that imposes a concave cost on the conductance of each line, thereby favoring sparser solutions. By varying a parameter of this penalty we extrapolate from the convex problem (with non-sparse solutions) to the combinatorial problem (with sparse solutions). This is used as a heuristic to find good solutions (local minima) of the non-convex problem. To perform the necessary non-convex optimization steps, we use the majorization-minimization algorithm, which performs a sequence of convex optimizations obtained by iteratively linearizing the concave part of the objective. A number of examples are presented which suggest that the overall method is a good heuristic for network design. We also consider how to obtain sparse networks that are still robust against failures of lines and/or generators.
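
    The linearization step is easy to state in code. The sketch below minimizes a toy separable objective, a quadratic misfit plus a concave penalty lam * log(eps + g) per line, by repeatedly replacing the log term with its tangent and solving the resulting convex subproblem in closed form. The loss, targets, and constants are invented for illustration.

    ```python
    # Majorization-minimization with a concave conductance penalty:
    # minimize 0.5*(g - target)^2 + lam*log(eps + g) per line, with g >= 0.
    import numpy as np

    lam, eps = 0.1, 1e-2
    target = np.array([0.0, 0.8, 0.05, 1.2])   # toy unpenalized optima

    def mm_sparse(g0, iters=50):
        g = g0.copy()
        for _ in range(iters):
            w = lam / (eps + g)                 # tangent slope of the concave part
            g = np.maximum(target - w, 0.0)     # closed-form convex step
        return g

    print(mm_sparse(np.ones(4)))   # small conductances are driven to zero
    ```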

  1. Metamodeling and the Critic-based approach to multi-level optimization.

    PubMed

    Werbos, Ludmilla; Kozma, Robert; Silva-Lugo, Rodrigo; Pazienza, Giovanni E; Werbos, Paul J

    2012-08-01

    Large-scale networks with hundreds of thousands of variables and constraints are becoming more and more common in logistics, communications, and distribution domains. Traditionally, the utility functions defined on such networks are optimized using some variation of Linear Programming, such as Mixed Integer Programming (MIP). Despite enormous progress in both hardware (multiprocessor systems and specialized processors) and software (e.g., Gurobi), we are reaching the limits of what these tools can handle in real time. Modern logistic problems, for example, call for expanding the problem both vertically (from one day up to several days) and horizontally (combining separate solution stages into an integrated model). The complexity of such integrated models calls for alternative methods of solution, such as Approximate Dynamic Programming (ADP), which provide a further increase in the performance necessary for daily operation. In this paper, we present the theoretical basis and related experiments for solving multistage decision problems based on the results obtained for shorter periods, as building blocks for the models and the solution, via Critic-Model-Action cycles, where various types of neural networks are combined with traditional MIP models in a unified optimization system. In this system architecture, fast and simple feed-forward networks are trained to reasonably initialize more complicated recurrent networks, which serve as approximators of the value function (Critic). The combination of interrelated neural networks and optimization modules allows for multiple queries for the same system, providing flexibility and optimizing performance for large-scale real-life problems. A MATLAB implementation of our solution procedure for a realistic set of data and constraints shows promising results, compared to the iterative MIP approach. PMID:22386785

  3. Rapid Optimal SPH Particle Distributions in Spherical Geometries for Creating Astrophysical Initial Conditions

    NASA Astrophysics Data System (ADS)

    Raskin, Cody; Owen, J. Michael

    2016-04-01

    Creating spherical initial conditions in smoothed particle hydrodynamics simulations that are spherically conformal is a difficult task. Here, we describe two algorithmic methods for evenly distributing points on surfaces that, when paired, can be used to build three-dimensional spherical objects with optimal equipartition of volume between particles, commensurate with an arbitrary radial density function. We demonstrate the efficacy of our method against stretched lattice arrangements on the metrics of hydrodynamic stability, spherical conformity, and the harmonic power distribution of gravitational settling oscillations. We further demonstrate how our method is highly optimized for simulating multi-material spheres, such as planets with core-mantle boundaries.
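
    The paper's own point-placement algorithms are not reproduced in the abstract; as a hedged stand-in, the golden-spiral (Fibonacci) lattice below is a common recipe for near-uniform points on a single spherical shell, which can then be nested at radii chosen to match a radial density profile.

    ```python
    # Golden-spiral lattice: near-uniform points on the unit sphere
    # (a common recipe, not necessarily the authors' algorithm).
    import math

    def fibonacci_sphere(n):
        golden = math.pi * (3.0 - math.sqrt(5.0))   # golden angle
        pts = []
        for i in range(n):
            z = 1.0 - 2.0 * (i + 0.5) / n           # uniform spacing in z
            r = math.sqrt(1.0 - z * z)
            theta = golden * i
            pts.append((r * math.cos(theta), r * math.sin(theta), z))
        return pts

    shell = fibonacci_sphere(500)   # nest shells of varying radius to
    print(shell[0])                 # realize an arbitrary density profile
    ```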

  4. A MILP-Based Distribution Optimal Power Flow Model for Microgrid Operation

    SciTech Connect

    Liu, Guodong; Starke, Michael R; Zhang, Xiaohu; Tomsovic, Kevin

    2016-01-01

    This paper proposes a distribution optimal power flow (D-OPF) model for the operation of microgrids. The proposed model minimizes not only the operating cost, including fuel cost, purchasing cost and demand charge, but also several performance indices, including voltage deviation, network power loss and power factor. It co-optimizes the real and reactive power from distributed generators (DGs) and batteries considering their capacity and power factor limits. The D-OPF is formulated as a mixed-integer linear program (MILP). Numerical simulation results show the effectiveness of the proposed model.
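
    The full MILP is not given in the abstract; the toy linear program below captures just the fuel-cost term of such a dispatch, using scipy, with two DGs and a grid purchase serving a fixed demand. Costs, limits, and demand are invented.

    ```python
    # Toy cost-minimizing dispatch in the spirit of a D-OPF objective
    # (fuel cost only; the paper's MILP adds binaries and network indices).
    from scipy.optimize import linprog

    cost = [30.0, 45.0, 80.0]    # $/MWh: DG1, DG2, grid purchase
    demand = 5.0                 # MW to serve in this interval

    res = linprog(
        c=cost,
        A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],   # power balance
        bounds=[(0, 3.0), (0, 2.5), (0, None)],  # capacity limits
    )
    print(res.x)                 # -> [3.0, 2.0, 0.0]
    ```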

  5. Optimal probabilistic cloning of two linearly independent states with arbitrary probability distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Wen; Rui, Pinshu; Zhang, Ziyun; Liao, Yanlin

    2016-02-01

    We investigate the probabilistic quantum cloning (PQC) of two states with arbitrary probability distribution. The optimal success probabilities are worked out for 1 → 2 PQC of the two states. The results show that the upper bound on the success probabilities of PQC in Qiu (J Phys A 35:6931-6937, 2002) cannot be reached in general. With the optimal success probabilities, we design simple forms of 1 → 2 PQC and work out the unitary transformation needed in the PQC processes. The optimal success probabilities for 1 → 2 PQC are also generalized to the M → N PQC case.

  6. Economic consideration of optimal vaccination distribution for epidemic spreads in complex networks

    NASA Astrophysics Data System (ADS)

    Wang, Bing; Suzuki, Hideyuki; Aihara, Kazuyuki

    2013-02-01

    The main concern of epidemiological modeling is to implement an economical vaccine allocation for the population. Here, we investigate the optimal vaccination allocation in complex networks. We find that the optimal vaccine coverage depends not only on the relative cost of treatment to vaccination but also on the vaccine efficacy. In particular, when the cost of treatment is high, nodes with high degree are prioritized for vaccination. These results may help us understand the factors that impact the optimal vaccination distribution in the control of epidemic dynamics.
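
    As a concrete reading of the degree-priority finding, the sketch below allocates a fixed vaccination budget to nodes in decreasing order of degree on a small, invented contact network.

    ```python
    # Degree-prioritized vaccination under a budget (illustrative network).
    network = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2], 4: [0]}

    def allocate(budget, dose_cost=1.0):
        by_degree = sorted(network, key=lambda u: len(network[u]), reverse=True)
        chosen, spent = [], 0.0
        for node in by_degree:
            if spent + dose_cost > budget:
                break
            chosen.append(node)
            spent += dose_cost
        return chosen

    print(allocate(budget=2.0))   # -> [0, 2]: the two highest-degree hubs
    ```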

  7. Distributed and parallel approach for handle and perform huge datasets

    NASA Astrophysics Data System (ADS)

    Konopko, Joanna

    2015-12-01

    Big Data refers to dynamic, large and disparate volumes of data coming from many different, mutually uncorrelated sources (tools, machines, sensors, mobile devices). It requires new, innovative and scalable technology to collect, host and analytically process the vast amount of data. A proper architecture is needed for systems that process such huge data sets. In this paper, distributed and parallel system architectures are compared on the example of the MapReduce (MR) Hadoop platform and a parallel database platform (DBMS). This paper also analyzes the problem of extracting valuable information from petabytes of data. Both paradigms, MapReduce and parallel DBMS, are described and compared. A hybrid architecture is also proposed that could be used to solve the analyzed problem of storing and processing Big Data.
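
    The MapReduce pattern itself fits in a few lines. The sketch below runs the canonical word count single-process in Python; Hadoop distributes exactly these map, shuffle, and reduce phases across nodes.

    ```python
    # Minimal MapReduce-style pipeline (word count), single-process.
    from collections import defaultdict

    def map_phase(record):
        return [(word, 1) for word in record.split()]

    def shuffle(pairs):   # group values by key, as the MR framework does
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        return key, sum(values)

    records = ["big data big volume", "parallel data systems"]
    mapped = [pair for rec in records for pair in map_phase(rec)]
    counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
    print(counts)         # {'big': 2, 'data': 2, 'volume': 1, ...}
    ```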

  8. Distributed parameter approach to the dynamics of complex biological processes

    SciTech Connect

    Lee, T.T.; Wang, F.Y.; Newell, R.B.

    1999-10-01

    Modeling and simulation of a complex biological process for the removal of nutrients (nitrogen and phosphorus) from municipal wastewater are addressed. The model developed in this work employs a distributed-parameter approach to describe the behavior of components within three different bioreaction zones and the behavior of sludge in the anaerobic zone and soluble phosphate in the aerobic zone in two experiments. Good results are achieved despite the apparent plant-model mismatch, such as uncertainties with the behavior of phosphorus-accumulating organisms. Validation of the proposed secondary-settler model shows that it is superior to two state-of-the-art models in terms of the sum of the square relative errors.

  9. A Distributed Artificial Intelligence Approach To Object Identification And Classification

    NASA Astrophysics Data System (ADS)

    Sikka, Digvijay I.; Varshney, Pramod K.; Vannicola, Vincent C.

    1989-09-01

    This paper presents an application of Distributed Artificial Intelligence (DAI) tools to the data fusion and classification problem. Our approach is to use a blackboard for information management and hypothesis formulation. The blackboard is used by the knowledge sources (KSs) for sharing information and posting their hypotheses, just as experts sitting around a round table would do. The present simulation classifies an aircraft (AC), after identifying it by its features, into disjoint sets (object classes) comprising five commercial ACs: Boeing 747, Boeing 707, DC-10, Concorde, and Boeing 727. A situation database is characterized by experimental data available from the three levels of expert reasoning; the Ohio State University ElectroScience Laboratory provided this experimental data. To validate the architecture presented, we employ two KSs for modeling the sensors: the aspect-angle polarization feature and the ellipticity data. The system has been implemented on a Symbolics 3645, under Genera 7.1, in Common LISP.

  10. Using R for Global Optimization of a Fully-distributed Hydrologic Model at Continental Scale

    NASA Astrophysics Data System (ADS)

    Zambrano-Bigiarini, M.; Zajac, Z.; Salamon, P.

    2013-12-01

    Nowadays, hydrologic model simulations are widely used to better understand hydrologic processes and to predict extreme events such as floods and droughts. In particular, the spatially distributed LISFLOOD model is currently used for flood forecasting at the Pan-European scale, within the European Flood Awareness System (EFAS). Several model parameters cannot be directly measured, and they need to be estimated through calibration in order to constrain simulated discharges to their observed counterparts. In this work we describe how the free software 'R' has been used as a single environment to pre-process hydro-meteorological data, to carry out global optimization, and to post-process calibration results in Europe. Historical daily discharge records were pre-processed for 4062 stream gauges, with a different amount and distribution of data in each one. The hydroTSM, raster and sp R packages were used to select ca. 700 stations with an adequate spatio-temporal coverage. Selected stations span a wide range of hydro-climatic characteristics, from arid and ET-dominated watersheds in the Iberian Peninsula to snow-dominated watersheds in Scandinavia. Nine parameters were selected for calibration based on previous expert knowledge. Customized R scripts were used to extract observed time series for each catchment and to prepare the input files required to fully set up the calibration thereof. The hydroPSO package was then used to carry out a single-objective global optimization on each selected catchment, using the Standard Particle Swarm 2011 (SPSO-2011) algorithm. Among the many goodness-of-fit measures available in the hydroGOF package, the Nash-Sutcliffe efficiency was used to drive the optimization. User-defined functions were developed for reading model outputs and passing them to the calibration engine. The long computational time required to finish the calibration at continental scale was partially alleviated by using 4 multi-core machines (with both GNU
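
    For reference, the Nash-Sutcliffe efficiency that drives the optimization is simple to compute; the snippet below shows it in Python for consistency with the other sketches (the study itself used the R packages hydroGOF and hydroPSO).

    ```python
    # Nash-Sutcliffe efficiency: 1 = perfect fit, < 0 = worse than the mean.
    def nse(sim, obs):
        mean_obs = sum(obs) / len(obs)
        num = sum((s - o) ** 2 for s, o in zip(sim, obs))
        den = sum((o - mean_obs) ** 2 for o in obs)
        return 1.0 - num / den

    print(nse(sim=[2.1, 3.0, 4.2], obs=[2.0, 3.1, 4.0]))
    ```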

  11. Percutaneous approach to the upper thoracic spine: optimal patient positioning.

    PubMed

    Bayley, Edward; Clamp, Jonathan; Boszczyk, Bronek M

    2009-12-01

    Percutaneous access to the upper thoracic vertebrae under fluoroscopic guidance is challenging. We describe our positioning technique facilitating optimal visualisation of the high thoracic vertebrae in the prone position. This allows safe practice of kyphoplasty, vertebroplasty and biopsy throughout the upper thoracic spine.

  12. A Simulation of Optimal Foraging: The Nuts and Bolts Approach.

    ERIC Educational Resources Information Center

    Thomson, James D.

    1980-01-01

    Presents a mechanical model for an ecology laboratory that introduces the concept of optimal foraging theory. Describes the physical model which includes a board studded with protruding machine bolts that simulate prey, and blindfolded students who simulate either generalist or specialist predator types. Discusses the theoretical model and data…

  13. Electron energy distribution in a dusty plasma: analytical approach.

    PubMed

    Denysenko, I B; Kersten, H; Azarenkov, N A

    2015-09-01

    Analytical expressions describing the electron energy distribution function (EEDF) in a dusty plasma are obtained from the homogeneous Boltzmann equation for electrons. The expressions are derived neglecting electron-electron collisions, as well as transformation of high-energy electrons into low-energy electrons at inelastic electron-atom collisions. At large electron energies, the quasiclassical approach for calculation of the EEDF is applied. For the moderate energies, we account for inelastic electron-atom collisions in the dust-free case and both inelastic electron-atom and electron-dust collisions in the dusty plasma case. Using these analytical expressions and the balance equation for dust charging, the electron energy distribution function, the effective electron temperature, the dust charge, and the dust surface potential are obtained for different dust radii and densities, as well as for different electron densities and radio-frequency (rf) field amplitudes and frequencies. The dusty plasma parameters are compared with those calculated numerically by a finite-difference method taking into account electron-electron collisions and the transformation of high-energy electrons at inelastic electron-neutral collisions. It is shown that the analytical expressions can be used for calculation of the EEDF and dusty plasma parameters at typical experimental conditions, in particular, in the positive column of a direct-current glow discharge and in the case of an rf plasma maintained by an electric field with frequency f = 13.56 MHz.

  14. Current Approaches for Improving Intratumoral Accumulation and Distribution of Nanomedicines

    PubMed Central

    Durymanov, Mikhail O; Rosenkranz, Andrey A; Sobolev, Alexander S

    2015-01-01

    The ability of nanoparticles and macromolecules to passively accumulate in solid tumors and enhance therapeutic effects in comparison with conventional anticancer agents has resulted in the development of various multifunctional nanomedicines including liposomes, polymeric micelles, and magnetic nanoparticles. Further modifications of these nanoparticles have improved their characteristics in terms of tumor selectivity, circulation time in blood, enhanced uptake by cancer cells, and sensitivity to tumor microenvironment. These “smart” systems have enabled highly effective delivery of drugs, genes, shRNA, radioisotopes, and other therapeutic molecules. However, the resulting therapeutically relevant local concentrations of anticancer agents are often insufficient to cause tumor regression and complete elimination. Poor perfusion of inner regions of solid tumors as well as vascular barrier, high interstitial fluid pressure, and dense intercellular matrix are the main intratumoral barriers that impair drug delivery and impede uniform distribution of nanomedicines throughout a tumor. Here we review existing methods and approaches for improving tumoral uptake and distribution of nano-scaled therapeutic particles and macromolecules (i.e. nanomedicines). Briefly, these strategies include tuning physicochemical characteristics of nanomedicines, modulating physiological state of tumors with physical impacts or physiologically active agents, and active delivery of nanomedicines using cellular hitchhiking. PMID:26155316

  15. Optimal filters - A unified approach for SNR and PCE. [Peak-To-Correlation-Energy

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1993-01-01

    A unified approach for a general metric that encompasses both the signal-to-noise ratio (SNR) and the peak-to-correlation energy (PCE) ratio in optical correlators is described. In this approach, the connection between optimizing SNR and optimizing PCE is achieved by considering a metric in which the central correlation irradiance is divided by the total energy of the correlation plane. This peak-to-total energy (PTE) is shown to be optimized similarly to SNR and PCE. Since PTE is a function of the search values G and beta, the optimal filter is determined with only a two-dimensional search.

  16. A Distributed Trajectory-Oriented Approach to Managing Traffic Complexity

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Wing, David J.; Vivona, Robert; Garcia-Chico, Jose-Luis

    2007-01-01

    In order to handle the expected increase in air traffic volume, the next generation air transportation system is moving towards a distributed control architecture, in which ground-based service providers such as controllers and traffic managers and air-based users such as pilots share responsibility for aircraft trajectory generation and management. While its architecture becomes more distributed, the goal of the Air Traffic Management (ATM) system remains to achieve objectives such as maintaining safety and efficiency. It is, therefore, critical to design appropriate control elements to ensure that aircraft and groundbased actions result in achieving these objectives without unduly restricting user-preferred trajectories. This paper presents a trajectory-oriented approach containing two such elements. One is a trajectory flexibility preservation function, by which aircraft plan their trajectories to preserve flexibility to accommodate unforeseen events. And the other is a trajectory constraint minimization function by which ground-based agents, in collaboration with air-based agents, impose just-enough restrictions on trajectories to achieve ATM objectives, such as separation assurance and flow management. The underlying hypothesis is that preserving trajectory flexibility of each individual aircraft naturally achieves the aggregate objective of avoiding excessive traffic complexity, and that trajectory flexibility is increased by minimizing constraints without jeopardizing the intended ATM objectives. The paper presents conceptually how the two functions operate in a distributed control architecture that includes self separation. The paper illustrates the concept through hypothetical scenarios involving conflict resolution and flow management. It presents a functional analysis of the interaction and information flow between the functions. It also presents an analytical framework for defining metrics and developing methods to preserve trajectory flexibility and

  17. Optimizing technology investments: a broad mission model approach

    NASA Technical Reports Server (NTRS)

    Shishko, R.

    2003-01-01

    A long-standing problem in NASA is how to allocate scarce technology development resources across advanced technologies in order to best support a large set of future potential missions. Within NASA, two orthogonal paradigms have received attention in recent years: the real-options approach and the broad mission model approach. This paper focuses on the latter.

  18. A genetic optimization approach for isolating translational efficiency bias.

    PubMed

    Raiford, Douglas W; Krane, Dan E; Doom, Travis E W; Raymer, Michael L

    2011-01-01

    The study of codon usage bias is an important research area that contributes to our understanding of molecular evolution, phylogenetic relationships, respiratory lifestyle, and other characteristics. Translational efficiency bias is perhaps the most well-studied codon usage bias, as it is frequently utilized to predict relative protein expression levels. We present a novel approach to isolating translational efficiency bias in microbial genomes. Several methods exist for isolating translational efficiency bias, but previous approaches are susceptible to the confounding influences of other, potentially dominant biases. Additionally, existing approaches to identifying translational efficiency bias generally require both genomic sequence information and prior knowledge of a set of highly expressed genes. The novel approach provides more accurate results from sequence information alone by resisting the confounding effects of other biases. We validate this increase in accuracy on 10 microbial genomes, five of which have proven particularly difficult for existing approaches due to the presence of strong confounding biases.

  19. Optimal Distribution of Biofuel Feedstocks within Marginal Land in the USA

    NASA Astrophysics Data System (ADS)

    Jaiswal, D.

    2015-12-01

    The United States has an estimated 43 to 123 Mha of marginal land on which to grow second-generation biofuel feedstocks. A physiological and biophysical model (BioCro) was run using 30 yr of climate data (NARR) and SSURGO soil data for the conterminous United States to simulate the growth of miscanthus, switchgrass, sugarcane, and short-rotation coppice. Overlay analyses of the regional maps of predicted yields and marginal land suggest a maximum availability of 0.33, 1.15, 1.13, and 1.89 Pg year-1 of biomass from sugarcane, willow, switchgrass, and miscanthus, respectively. Optimal distribution of these four biofuel feedstocks within the marginal land in the USA can provide up to 2 Pg year-1 of biomass for the production of second-generation biofuel without competing for crop land used for food production. This approach can potentially meet a significant fraction of liquid fuel demand in the USA and reduce greenhouse gas emissions while ensuring that current crop land under food production is not used for growing biofuel feedstocks.

  20. An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax types of performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics even in the presence of disturbances.

  1. Optimization of Comminution Circuit Throughput and Product Size Distribution by Simulation and Control

    SciTech Connect

    S.K. Kawatra; T.C. Eisele; T. Weldum; D. Larsen; R. Mariani; J. Pletka

    2005-07-01

    The goal of this project was to improve the energy efficiency of industrial crushing and grinding operations (comminution). Mathematical models of the comminution process were used to study methods for optimizing the product size distribution, so that the amount of excessively fine material produced could be minimized. The goal was to save energy by reducing the amount of material that was ground below the target size, while simultaneously reducing the quantity of materials wasted as 'slimes' that were too fine to be useful. Extensive plant sampling and mathematical modeling of the grinding circuits was carried out to determine how to correct this problem. The approaches taken included (1) modeling of the circuit to determine process bottlenecks that restrict flowrates in one area while forcing other parts of the circuit to overgrind the material; (2) modeling of hydrocyclones to determine the mechanisms responsible for retaining fine, high-density particles in the circuit until they are overground, and improving existing models to accurately account for this behavior; and (3) evaluation of the potential of advanced technologies to improve comminution efficiency and produce sharper product size distributions with less overgrinding. The mathematical models were used to simulate novel circuits for minimizing overgrinding and increasing throughput, and it is estimated that a single plant grinding 15 million tons of ore per year would save up to 82.5 million kWh/year, or 8.6 x 10^11 BTU/year. Implementation of this technology in the midwestern iron ore industry, which grinds an estimated 150 million tons of ore annually to produce over 50 million tons of iron ore concentrate, would save an estimated 1 x 10^13 BTU/year.

  2. A genetic algorithm approach in interface and surface structure optimization

    SciTech Connect

    Zhang, Jian

    2010-01-01

    The thesis is divided into two parts. In the first part, a global optimization method is developed for interface and surface structure optimization. Two prototype systems are chosen for study: one is Si[001] symmetric tilt grain boundaries and the other is the Ag/Au-induced Si(111) surface. It is found that the genetic algorithm is very efficient in finding the lowest-energy structures in both cases. Not only can existing structures from experiments be reproduced, but many new structures can be predicted using the genetic algorithm. It is thus shown that the genetic algorithm is an extremely powerful tool for materials structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seemed astounding and confusing, yet the theoretical models in the paper revealed the physical insight behind the phenomena and reproduced the experimental results well.

  3. A thermodynamic approach to the affinity optimization of drug candidates.

    PubMed

    Freire, Ernesto

    2009-11-01

    High throughput screening and other techniques commonly used to identify lead candidates for drug development usually yield compounds with binding affinities to their intended targets in the mid-micromolar range. The affinity of these molecules needs to be improved by several orders of magnitude before they become viable drug candidates. Traditionally, this task has been accomplished by establishing structure activity relationships to guide chemical modifications and improve the binding affinity of the compounds. As the binding affinity is a function of two quantities, the binding enthalpy and the binding entropy, it is evident that a more efficient optimization would be accomplished if both quantities were considered and improved simultaneously. Here, an optimization algorithm based upon enthalpic and entropic information generated by Isothermal Titration Calorimetry is presented.

  4. A free boundary approach to shape optimization problems.

    PubMed

    Bucur, D; Velichkov, B

    2015-09-13

    The analysis of shape optimization problems involving the spectrum of the Laplace operator, such as isoperimetric inequalities, has known in recent years a series of interesting developments essentially as a consequence of the infusion of free boundary techniques. The main focus of this paper is to show how the analysis of a general shape optimization problem of spectral type can be reduced to the analysis of particular free boundary problems. In this survey article, we give an overview of some very recent technical tools, the so-called shape sub- and supersolutions, and show how to use them for the minimization of spectral functionals involving the eigenvalues of the Dirichlet Laplacian, under a volume constraint. PMID:26261362

  6. Estimation of design sea ice thickness with maximum entropy distribution by particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Tao, Shanshan; Dong, Sheng; Wang, Zhifeng; Jiang, Wensheng

    2016-06-01

    The maximum entropy distribution, which subsumes various recognized theoretical distributions, is a better curve for estimating the design thickness of sea ice. The method of moments and the empirical curve fitting method are commonly used parameter estimation methods for the maximum entropy distribution. In this study, we propose the particle swarm optimization method as a new parameter estimation method for the maximum entropy distribution, which has the advantage of avoiding the deviations introduced by the simplifications made in the other methods. We conducted a case study fitting the hindcast thickness of sea ice in the Liaodong Bay of the Bohai Sea using these three parameter estimation methods for the maximum entropy distribution. All methods implemented in this study pass the K-S test at the 0.05 significance level. In terms of the average sum of squared deviations, the empirical curve fitting method provides the best fit to the original data, while the method of moments provides the worst. Among the three methods, the particle swarm optimization method predicts the largest sea ice thickness for the same return period. As a result, we recommend using the particle swarm optimization method for the maximum entropy distribution for offshore structures mainly influenced by sea ice in winter, but using the empirical curve fitting method to reduce costs in the design of temporary and economic buildings.
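
    A bare-bones particle swarm optimizer for this kind of parameter fitting is sketched below. Because the abstract does not give the maximum entropy density itself, the objective uses a Weibull CDF as an illustrative stand-in, fitted to plotting positions by least squares; the swarm coefficients are generic defaults, not the study's settings.

    ```python
    # PSO sketch for distribution-parameter fitting (illustrative stand-in
    # objective: squared CDF deviations for a two-parameter Weibull).
    import math, random

    obs = sorted([0.4, 0.7, 0.9, 1.1, 1.3, 1.6, 2.0, 2.4])
    pp = [(i + 1) / (len(obs) + 1) for i in range(len(obs))]  # plotting positions

    def sse(params):
        k, lam = params
        if k <= 0 or lam <= 0:
            return float("inf")
        try:
            return sum((1 - math.exp(-((x / lam) ** k)) - p) ** 2
                       for x, p in zip(obs, pp))
        except OverflowError:
            return float("inf")

    random.seed(0)
    N, W, C1, C2 = 20, 0.7, 1.5, 1.5
    pos = [[random.uniform(0.1, 5.0) for _ in range(2)] for _ in range(N)]
    vel = [[0.0, 0.0] for _ in range(N)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sse)

    for _ in range(200):
        for i in range(N):
            for d in range(2):
                vel[i][d] = (W * vel[i][d]
                             + C1 * random.random() * (pbest[i][d] - pos[i][d])
                             + C2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sse(pos[i]) < sse(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=sse)

    print("shape k = %.2f, scale lambda = %.2f" % tuple(gbest))
    ```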

  7. Assessment of grid-friendly collective optimization framework for distributed energy resources

    SciTech Connect

    Pensini, Alessandro; Robinson, Matthew; Heine, Nicholas; Stadler, Michael; Mammoli, Andrea

    2015-11-04

    Distributed energy resources have the potential to provide services to facilities and buildings at lower cost and environmental impact in comparison to traditional electric-grid-only services. The reduced cost could result from a combination of higher system efficiency and exploitation of electricity tariff structures. Traditionally, electricity tariffs are designed to encourage the use of 'off-peak' power and discourage the use of 'on-peak' power, although recent developments in renewable energy resources and distributed generation systems (such as their increasing levels of penetration and their increased controllability) are resulting in pressure to adopt tariffs of increasing complexity. Independently of the tariff structure, methods of varying sophistication exist that allow distributed energy resources to take advantage of such tariffs, ranging from simple pre-planned schedules to Software-as-a-Service schedule optimization tools. However, as the penetration of distributed energy resources increases, there is an increasing chance of a 'tragedy of the commons' mechanism taking place, where taking advantage of tariffs for local benefit can ultimately result in degradation of service and higher energy costs for all. In this work, we use a scheduling optimization tool, in combination with a power distribution system simulator, to investigate techniques that could mitigate the deleterious effects of 'selfish' optimization, so that the high-penetration use of distributed energy resources to reduce operating costs remains advantageous while the quality of service and overall energy cost to the community are not affected.

  8. Optimization of a point-focusing, distributed receiver solar thermal electric system

    NASA Technical Reports Server (NTRS)

    Pons, R. L.

    1979-01-01

    This paper presents an approach to optimization of a solar concept which employs solar-to-electric power conversion at the focus of parabolic dish concentrators. The optimization procedure is presented through a series of trade studies, which include the results of optical/thermal analyses and individual subsystem trades. Alternate closed-cycle and open-cycle Brayton engines and organic Rankine engines are considered to show the influence of the optimization process, and various storage techniques are evaluated, including batteries, flywheels, and hybrid-engine operation.

  9. Aircraft optimization by a system approach: Achievements and trends

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1992-01-01

    Recently emerging methodology for optimal design of aircraft treated as a system of interacting physical phenomena and parts is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems all dependent on sensitivity analysis. A separate category of methods has also evolved independent of sensitivity analysis, hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as enabling technology for practical implementation of the methodology.

  10. Optimal perfusion during cardiopulmonary bypass: an evidence-based approach.

    PubMed

    Murphy, Glenn S; Hessel, Eugene A; Groom, Robert C

    2009-05-01

    In this review, we summarize the best available evidence to guide the conduct of adult cardiopulmonary bypass (CPB) to achieve "optimal" perfusion. At the present time, there is considerable controversy relating to appropriate management of physiologic variables during CPB. Low-risk patients tolerate mean arterial blood pressures of 50-60 mm Hg without apparent complications, although limited data suggest that higher-risk patients may benefit from mean arterial blood pressures >70 mm Hg. The optimal hematocrit on CPB has not been defined, with large data-based investigations demonstrating that both severe hemodilution and transfusion of packed red blood cells increase the risk of adverse postoperative outcomes. Oxygen delivery is determined by the pump flow rate and the arterial oxygen content and organ injury may be prevented during more severe hemodilutional anemia by increasing pump flow rates. Furthermore, the optimal temperature during CPB likely varies with physiologic goals, and recent data suggest that aggressive rewarming practices may contribute to neurologic injury. The design of components of the CPB circuit may also influence tissue perfusion and outcomes. Although there are theoretical advantages to centrifugal blood pumps over roller pumps, it has been difficult to demonstrate that the use of centrifugal pumps improves clinical outcomes. Heparin coating of the CPB circuit may attenuate inflammatory and coagulation pathways, but has not been clearly demonstrated to reduce major morbidity and mortality. Similarly, no distinct clinical benefits have been observed when open venous reservoirs have been compared to closed systems. In conclusion, there are currently limited data upon which to confidently make strong recommendations regarding how to conduct optimal CPB. There is a critical need for randomized trials assessing clinically significant outcomes, particularly in high-risk patients. PMID:19372313

  11. Lifetime optimization of wireless sensor network by a better nodes positioning and energy distribution

    NASA Astrophysics Data System (ADS)

    Lebreton, J. M.; Murad, N. M.

    2014-10-01

    The purpose of this paper is to propose a method of energy distribution for a Wireless Sensor Network (WSN). Nodes are randomly positioned and the sink is placed at the centre of the surface. Simulations show that relay nodes around the sink are requested to convey data far more often than other nodes, which substantially reduces their lifetime. Several algorithmic solutions are therefore presented to optimize the energy distribution across the nodes, compared with the classical uniform energy distribution. Their performance is discussed in terms of the failure rate of data transmission and the network lifetime. The total energy distributed over all nodes before deployment is held constant while several non-uniform energy distributions are created. Finally, simulations show that all of the proposed energy distributions greatly improve the WSN lifetime and decrease the failure rate of data transmission.

  12. Improving Discrete-Sensitivity-Based Approach for Practical Design Optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Cordero, Yvette; Pandya, Mohagna J.

    1997-01-01

    In developing automated methodologies for simulation-based optimal shape design, their accuracy, efficiency and practicality are the defining factors of their success. To that end, four recent improvements to the building blocks of such a methodology, intended for more practical design optimization, have been reported. First, in addition to a polynomial-based parameterization, a partial differential equation (PDE) based parameterization was shown to be a practical tool for a number of reasons. Second, an alternative has been incorporated into one of the tedious phases of developing such a methodology, namely, the automatic differentiation of the computer code for the flow analysis in order to generate the sensitivities. Third, by extending the methodology to thin-layer Navier-Stokes (TLNS) based flow simulations, more accurate flow physics was made available. However, the computer storage requirement for a shape optimization of a practical configuration with the higher-fidelity simulations (TLNS and dense-grid based simulations) required substantial computational resources. Therefore, the final improvement reported herein responds to this point by including the alternating-direction-implicit (ADI) based system solver as an alternative to the preconditioned biconjugate gradient (PbCG) and other direct solvers.

  13. Optimal preview game theory approach to vehicle stability controller design

    NASA Astrophysics Data System (ADS)

    Tamaddoni, Seyed Hossein; Taheri, Saied; Ahmadian, Mehdi

    2011-12-01

    Dynamic game theory brings together different features that are keys to many situations in control design: optimisation behaviour, the presence of multiple agents/players, enduring consequences of decisions and robustness with respect to variability in the environment, etc. In the presented methodology, vehicle stability is represented by a cooperative dynamic/difference game such that its two agents (players), namely the driver and the direct yaw controller (DYC), are working together to provide more stability to the vehicle system. While the driver provides the steering wheel control, the DYC control algorithm is obtained by the Nash game theory to ensure optimal performance as well as robustness to disturbances. The common two-degrees-of-freedom vehicle-handling performance model is put into discrete form to develop the game equations of motion. To evaluate the developed control algorithm, CarSim with its built-in nonlinear vehicle model along with the Pacejka tire model is used. The control algorithm is evaluated for a lane change manoeuvre, and the optimal set of steering angle and corrective yaw moment is calculated and fed to the test vehicle. Simulation results show that the optimal preview control algorithm can significantly reduce lateral velocity, yaw rate, and roll angle, which all contribute to enhancing vehicle stability.

  14. RF cavity design exploiting a new derivative-free trust region optimization approach.

    PubMed

    Hassan, Abdel-Karim S O; Abdel-Malek, Hany L; Mohamed, Ahmed S A; Abuelfadl, Tamer M; Elqenawy, Ahmed E

    2015-11-01

    In this article, a novel derivative-free (DF) surrogate-based trust region optimization approach is proposed. In the proposed approach, quadratic surrogate models are constructed and successively updated. The generated surrogate model is then optimized, instead of the underlying objective function, over trust regions. Truncated conjugate gradients are employed to find the optimal point within each trust region. The approach constructs the initial quadratic surrogate model using few data points, of order O(n), where n is the number of design variables. The proposed approach adopts weighted least-squares fitting for updating the surrogate model, instead of the interpolation commonly used in DF optimization. This makes the approach more suitable for stochastic optimization and for functions subject to numerical error. The weights are assigned to give more emphasis to points close to the current center point. The accuracy and efficiency of the proposed approach are demonstrated by applying it to a set of classical benchmark test problems. It is also employed to find the optimal design of an RF cavity linear accelerator, with a comparison analysis against a recent optimization technique. PMID:26644929
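
    The surrogate-update step can be sketched compactly: fit a full quadratic in the design variables to sampled objective values by weighted least squares, with weights decaying away from the current trust-region centre. The two-variable dimension, the exponential weight law, and the test objective below are illustrative assumptions, not the paper's exact choices.

    ```python
    # Weighted least-squares fit of a quadratic surrogate (n = 2 variables).
    import numpy as np

    def fit_quadratic_surrogate(X, f, centre, decay=1.0):
        """Return coefficients [c, g1, g2, q11, q12, q22] of
        c + g1*x1 + g2*x2 + q11*x1^2 + q12*x1*x2 + q22*x2^2."""
        x1, x2 = X[:, 0], X[:, 1]
        A = np.column_stack([np.ones_like(x1), x1, x2,
                             x1 ** 2, x1 * x2, x2 ** 2])
        w = np.exp(-decay * np.linalg.norm(X - centre, axis=1))  # proximity weights
        coef, *_ = np.linalg.lstsq(A * w[:, None], f * w, rcond=None)
        return coef

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(12, 2))
    f = 1 + 2 * X[:, 0] + 3 * X[:, 1] ** 2        # noiseless test objective
    print(fit_quadratic_surrogate(X, f, centre=np.zeros(2)).round(3))
    # -> approximately [1, 2, 0, 0, 0, 3]
    ```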

  15. Optimal operation management of fuel cell/wind/photovoltaic power sources connected to distribution networks

    NASA Astrophysics Data System (ADS)

    Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid

    2011-10-01

    In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaics, fuel cells and wind energy) connected to the distribution network. The objective functions to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called the repository. Since the objective functions are of different natures, a fuzzy clustering algorithm is utilized to keep the size of the repository within the specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromise solution among the non-dominated optimal solutions of the multiobjective optimization problem. In order to demonstrate the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.

  16. A work stealing based approach for enabling scalable optimal sequence homology detection

    SciTech Connect

    Daily, Jeffrey A.; Kalyanaraman, Anantharaman; Krishnamoorthy, Sriram; Vishnu, Abhinav

    2015-05-01

    Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or “homologous”) on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment at large scale is currently not feasible; instead, heuristic methods are used at the expense of quality. Here, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed-memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing and dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for 2.56M sequences on up to 8K cores show parallel efficiencies of ~75-100%, a time-to-solution of 33 s, and a rate of ~2.0M alignments per second.
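
    The load-balancing core of such an approach can be sketched with per-worker deques: each owner pops tasks LIFO from its own queue, and idle workers steal FIFO from a victim. The Python sketch below is a toy stand-in (trivial tasks, naive termination); the paper's implementation adds dynamic task counters for correct distributed termination detection on real supercomputers.

```python
import threading
from collections import deque

def worker(wid, queues, results):
    """Pop from own deque; when empty, steal from a victim's opposite end.
    Exit when every queue looks empty (naive termination: fine here
    because tasks never spawn new tasks)."""
    while True:
        try:
            task = queues[wid].pop()                 # LIFO from own queue
        except IndexError:
            task = None
            for victim in range(len(queues)):
                if victim != wid:
                    try:
                        task = queues[victim].popleft()  # FIFO steal
                        break
                    except IndexError:
                        continue
            if task is None:
                return
        results.append(task())

# Toy run: all pairwise "alignment" tasks seeded onto worker 0 only.
pairs = [(i, j) for i in range(8) for j in range(i + 1, 8)]
queues = [deque() for _ in range(4)]
for i, j in pairs:
    queues[0].append(lambda i=i, j=j: (i, j, abs(i - j)))  # stand-in task
results = []
threads = [threading.Thread(target=worker, args=(w, queues, results))
           for w in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(len(results), "of", len(pairs), "tasks done")
```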

  17. Sensitivity-based optimal capacitor placement on a radial distribution feeder

    SciTech Connect

    Bala, J.L. Jr.; Taylor, R.M.

    1995-12-31

    Optimal capacitor placement determines the size, type, and location of capacitors to be installed on a radial distribution feeder so as to reduce peak power and energy losses while minimizing the costs of investing in and installing the capacitor banks. This paper describes a sensitivity-based optimal placement of capacitors that employs a new load characterization scheme using a voltage-current-angle logger. The proposed method allows modeling of loads of different power factors for different portions of the distribution feeder. The optimal solution is obtained by testing various combinations of capacitor banks (based on the smallest bank size specified by the user) and candidate nodes along the distribution feeder, and calculating the resultant savings. To reduce solution time, the candidate nodes are ranked according to their sensitivity factors, and the highest-ranking nodes are considered first in the optimization process. At the node where the placement of a capacitor yields the greatest savings, a fixed capacitor bank is assigned. The procedure terminates when the maximum allowable number of capacitor banks has been placed or when no further savings improvement can be found. Based on the results of the placement of fixed capacitor banks at different loading levels, the switched capacitor banks can be determined. The proposed method has been applied to a typical distribution feeder of the local utility. Computer simulation results are very promising and indicate that the proposed method yields large annual savings.
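
    The greedy, sensitivity-ranked loop described above might look like the following Python sketch. Here `savings_fn` is a hypothetical black box standing in for the paper's loss-and-cost calculation on the measured feeder load model; the function names and parameters are ours, for illustration only.

```python
def place_capacitors(nodes, sensitivity, savings_fn, bank_kvar, max_banks):
    """Greedy placement: consider nodes in order of sensitivity, add the
    bank that most improves savings, stop at the bank limit or when no
    placement yields a further improvement."""
    ranked = sorted(nodes, key=lambda n: sensitivity[n], reverse=True)
    placements = []
    best = savings_fn(placements)            # savings with no new banks
    while len(placements) < max_banks:
        gains = [(savings_fn(placements + [(n, bank_kvar)]) - best, n)
                 for n in ranked]
        gain, node = max(gains)
        if gain <= 0:
            break                            # no further savings improvement
        placements.append((node, bank_kvar))
        best += gain
    return placements
```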

  18. Optimization of pressure gauge locations for water distribution systems using entropy theory.

    PubMed

    Yoo, Do Guen; Chang, Dong Eil; Jun, Hwandon; Kim, Joong Hoon

    2012-12-01

    It is essential to select the optimal pressure gauge locations for effective management and maintenance of water distribution systems. This study proposes an objective, quantified standard for selecting the optimal pressure gauge location by defining, via entropy theory, the pressure change at other nodes that results from a demand change at a specific node. Two demand-change cases are considered: one in which demand at all nodes is at peak load (applied through a peak factor), and one in which demand varies according to a normal distribution whose mean is the base demand. The actual pressure change pattern is determined using the emitter function of EPANET to reflect the pressure that changes in practice at each node. The optimal pressure gauge location is determined by prioritizing the node that processes the largest amount of information, both given to (giving entropy) and received from (receiving entropy) the whole system, according to the entropy standard. The suggested model is applied to one virtual and one real pipe network, and the optimal combination of pressure gauge locations is calculated through a sensitivity analysis based on the study results. These results support two conclusions. First, the installation priority of pressure gauges in water distribution networks can be determined by a more objective standard through entropy theory. Second, the model can serve as an efficient decision-making guide for gauge installation in water distribution systems.
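
    A minimal sketch of the entropy ranking follows, assuming a matrix of absolute pressure changes obtained from EPANET emitter runs. The giving/receiving split mirrors the abstract; the row/column normalization is an illustrative assumption, and the matrix is assumed strictly positive.

```python
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def rank_gauge_locations(dP):
    """Rank candidate nodes by total (giving + receiving) entropy.
    dP[i, j] = |pressure change at node j caused by a demand change at
    node i|, e.g. from EPANET emitter runs."""
    give = dP / dP.sum(axis=1, keepdims=True)   # info node i gives the system
    recv = dP / dP.sum(axis=0, keepdims=True)   # info node i receives from it
    total = np.array([shannon_entropy(give[i]) + shannon_entropy(recv[:, i])
                      for i in range(dP.shape[0])])
    return np.argsort(-total)                   # highest-entropy nodes first
```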

  19. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    PubMed Central

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues facing public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and in how the selection process is performed. Therefore, a prior understanding of the problem is needed to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply the available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352

  20. An iterative approach for the optimization of pavement maintenance management at the network level.

    PubMed

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues facing public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and in how the selection process is performed. Therefore, a prior understanding of the problem is needed to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply the available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  1. Improving flood forecasting capability of physically based distributed hydrological models by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2016-01-01

    Physically based distributed hydrological models (hereafter PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. Early PBDHMs were assumed to derive their model parameters directly from terrain properties, so that no parameter calibration was needed. Unfortunately, the uncertainties associated with this derivation are very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: first, to propose a parameter optimization method for PBDHMs in catchment flood forecasting using a particle swarm optimization (PSO) algorithm, and to test and improve its performance; second, to explore the possibility of improving PBDHM capability in catchment flood forecasting through parameter optimization. Based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model (a physically based distributed hydrological model proposed for catchment flood forecasting) as the study model, an improved PSO algorithm is developed for parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of a linearly decreasing inertia-weight strategy and an arccosine-function strategy for adjusting the acceleration coefficients. The method has been tested in two catchments of different sizes in southern China, and the results show
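
    The PSO variant described can be sketched as follows. The linearly decreasing inertia weight follows the abstract; the arccosine schedule for the acceleration coefficients is paper-specific and is replaced here by constant coefficients for brevity, so this is a generic illustration rather than the Liuxihe-model implementation.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """Plain PSO with a linearly decreasing inertia weight."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters       # linear decrease
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# e.g. a toy objective standing in for the flood-forecast error surface:
best, err = pso_minimize(lambda p: ((p - 0.3) ** 2).sum(), [(-1, 1)] * 4)
```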

  2. A Residuals Approach to Filtering, Smoothing and Identification for Static Distributed Systems

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1985-01-01

    An approach for state estimation and identification of spatially distributed parameters embedded in static distributed (elliptic) system models is advanced. The method of maximum likelihood is used to find parameter values that maximize a likelihood functional for the system model, or equivalently, that minimize the negative logarithm of this functional. To find the minimum, a Newton-Raphson search is conducted that from an initial estimate generates a convergent sequence of parameter estimates. For simplicity, a Gauss-Markov approach is used to approximate the Hessian in terms of products of first derivatives. The gradient and approximate Hessian are computed by first arranging the negative log likelihood functional into a form based on the square root factorization of the predicted covariance of the measurement process. The resulting data processing approach, referred to here by the new term of predicted data covariance square root filtering, makes the gradient and approximate Hessian calculations very simple. A closely related set of state estimates is also produced by the maximum likelihood method: smoothed estimates that are optimal in a conditional mean sense and filtered estimates that emerge from the predicted data covariance square root filter.

  3. A Wolf Pack Algorithm for Active and Reactive Power Coordinated Optimization in Active Distribution Network

    NASA Astrophysics Data System (ADS)

    Zhuang, H. M.; Jiang, X. J.

    2016-08-01

    This paper presents an active and reactive power dynamic optimization model for an active distribution network (ADN), whose control variables include the output of distributed generations (DGs), the charge or discharge power of the energy storage system (ESS), and the reactive power from capacitor banks. To solve this high-dimensional nonlinear optimization model, a new heuristic swarm intelligence method, namely the wolf pack algorithm (WPA), which offers better global convergence and computational robustness, is adopted so that network loss minimization can be achieved. Numerical tests on the modified IEEE 33-bus system show that WPA-based active and reactive multi-period optimization of an ADN is accurate and effective compared with other techniques.

  4. An optimized encoding method for secure key distribution by swapping quantum entanglement and its extension

    NASA Astrophysics Data System (ADS)

    Gao, Gan

    2015-08-01

    Song [Song D 2004 Phys. Rev. A 69 034301] first proposed two key distribution schemes with a symmetry feature. We find that, in these schemes, the private channels through which Alice and Bob publicly announce the initial Bell state or the measurement result are not needed for discovering the keys, and that Song’s encoding methods are not optimal. Here, an optimized encoding method is given that improves the efficiencies of Song’s schemes by a factor of 7/3. Interestingly, this optimized encoding method can be extended to a key distribution scheme composed of generalized Bell states. Project supported by the National Natural Science Foundation of China (Grant No. 11205115), the Program for Academic Leader Reserve Candidates in Tongling University (Grant No. 2014tlxyxs30), and the 2014-year Program for Excellent Youth Talents in University of Anhui Province, China.

  5. The functional response predicts the effect of resource distribution on the optimal movement rate of consumers.

    PubMed

    Calcagno, Vincent; Grognard, Frédéric; Hamelin, Frédéric M; Wajnberg, Éric; Mailleret, Ludovic

    2014-12-01

    Understanding how often individuals should move when foraging over patchy habitats is a central question in ecology. By combining optimality and functional response theories, we show analytically how the optimal movement rate varies with the average resource level (enrichment) and resource distribution (patch heterogeneity). We find that the type of functional response predicts the effect of enrichment in homogeneous habitats: enrichment should decrease movement for decelerating functional responses, but increase movement for accelerating responses. An intermediate resource level thus maximises movement for type-III responses. Counterintuitively, greater movement costs favour an increase in movement. In heterogeneous habitats predictions further depend on how enrichment alters the variance of resource distribution. Greater patch variance always increases the optimal rate of movement, except for type-IV functional responses. While the functional response is well established as a fundamental determinant of consumer-resource dynamics, our results indicate its importance extends to the understanding of individual movement strategies.
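
    The decelerating/accelerating distinction can be made concrete with the standard Holling forms (our notation, not necessarily the paper's): the type II response decelerates everywhere, while the type III response accelerates at low resource density and decelerates at high density, which is why an intermediate resource level maximizes movement in that case.

```latex
% a: attack rate, h: handling time, R: resource density
f_{\mathrm{II}}(R)  = \frac{aR}{1 + ahR}, \qquad
f_{\mathrm{III}}(R) = \frac{aR^{2}}{1 + ahR^{2}}
```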

  6. A robust hybrid fuzzy-simulated annealing-intelligent water drops approach for tuning a distribution static compensator nonlinear controller in a distribution system

    NASA Astrophysics Data System (ADS)

    Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza

    2016-06-01

    This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with the distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and intelligent water drops (IWD) algorithms to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that using the tuning method based on the fuzzy-SA-IWD can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct current voltage for the capacitor within the DSTATCOM. Furthermore, in the event of a fault the proposed controller tuned by the fuzzy-SA-IWD method showed better performance than the conventional controller or the PFL controller without optimization by the fuzzy-SA-IWD method with regard to both fault duration and clearing times.

  7. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    SciTech Connect

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H.

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining the optimal spacing of applied doses and the number of spectra that should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of the dose response is also assumed. The optimum spacing and number of points necessary for minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal spacing is shown to be a concentration of points near the zero-dose value, with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.
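
    The advantage of clustering measurements near zero dose can be illustrated with the textbook standard error of a fitted line's intercept, used here as a proxy for the extrapolated dose estimate; the paper's full error model is richer, so treat this as a sketch under that assumption.

```python
import numpy as np

def intercept_se(doses, sigma=1.0):
    """SE of the fitted intercept for a straight-line fit with i.i.d.
    normal errors: sigma * sqrt(1/n + xbar^2 / Sxx)."""
    x = np.asarray(doses, float)
    n, xbar = x.size, x.mean()
    sxx = ((x - xbar) ** 2).sum()
    return sigma * np.sqrt(1.0 / n + xbar ** 2 / sxx)

even      = np.linspace(0.0, 100.0, 10)      # doses spread evenly
clustered = np.array([0.0] * 9 + [100.0])    # cluster at zero + one high dose
print(intercept_se(even), intercept_se(clustered))  # ~0.59 vs ~0.33 (sigma=1)
```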

  8. A stochastic optimization method to estimate the spatial distribution of a pathogen from a sample.

    PubMed

    Parnell, S; Gottwald, T R; Irey, M S; Luo, W; van den Bosch, F

    2011-10-01

    Information on the spatial distribution of plant disease can be utilized to implement efficient and spatially targeted disease management interventions. We present a pathogen-generic method to estimate the spatial distribution of a plant pathogen using a stochastic optimization process which is epidemiologically motivated. Based on an initial sample, the method simulates the individual spread processes of a pathogen between patches of host to generate optimized spatial distribution maps. The method was tested on data sets of Huanglongbing of citrus and was compared with a kriging method from the field of geostatistics using the well-established kappa statistic to quantify map accuracy. Our method produced accurate maps of disease distribution with kappa values as high as 0.46 and was able to outperform the kriging method across a range of sample sizes based on the kappa statistic. As expected, map accuracy improved with sample size but there was a high amount of variation between different random sample placements (i.e., the spatial distribution of samples). This highlights the importance of sample placement on the ability to estimate the spatial distribution of a plant pathogen and we thus conclude that further research into sampling design and its effect on the ability to estimate disease distribution is necessary. PMID:21916625
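
    For reference, the map-accuracy metric used here is Cohen's kappa. A minimal computation over flattened presence/absence maps, with chance agreement from the marginal frequencies:

```python
import numpy as np

def kappa(map_a, map_b):
    """Cohen's kappa between two binary (0/1) maps:
    kappa = (p_o - p_e) / (1 - p_e)."""
    a = np.asarray(map_a).ravel()
    b = np.asarray(map_b).ravel()
    po = (a == b).mean()                                  # observed agreement
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance
    return (po - pe) / (1 - pe)
```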

  9. High direct drive illumination uniformity achieved by multi-parameter optimization approach: a case study of Shenguang III laser facility.

    PubMed

    Tian, Chao; Chen, Jia; Zhang, Bo; Shan, Lianqiang; Zhou, Weimin; Liu, Dongxiao; Bi, Bi; Zhang, Feng; Wang, Weiwu; Zhang, Baohan; Gu, Yuqiu

    2015-05-01

    The uniformity of the compression driver is of fundamental importance for inertial confinement fusion (ICF). In this paper, the illumination uniformity on a spherical capsule during the initial imprinting phase, directly driven by laser beams, is considered. We aim to explore methods to achieve high direct-drive illumination uniformity on laser facilities designed for indirect-drive ICF. Many parameters affect the irradiation uniformity, such as the polar-direct-drive displacement quantity, capsule radius, laser spot size, and intensity distribution within a laser beam. A novel approach to reducing the root-mean-square illumination non-uniformity, based on a multi-parameter optimization approach (particle swarm optimization), is proposed, which enables us to obtain a set of optimal parameters over a large parameter space. Finally, this method is applied to improve the direct-drive illumination uniformity provided by the Shenguang III laser facility, and the illumination non-uniformity is reduced from 5.62% to 0.23% for perfectly balanced beams. Moreover, beam errors (power imbalance and pointing error) are taken into account to provide a more practical solution, and the results show that this multi-parameter optimization approach is effective.

  10. Discovery and Optimization of Materials Using Evolutionary Approaches.

    PubMed

    Le, Tu C; Winkler, David A

    2016-05-25

    Materials science is undergoing a revolution, generating valuable new materials such as flexible solar panels, biomaterials and printable tissues, new catalysts, polymers, and porous materials with unprecedented properties. However, the number of potentially accessible materials is immense. Artificial evolutionary methods such as genetic algorithms, which explore large, complex search spaces very efficiently, can be applied to the identification and optimization of novel materials more rapidly than by physical experiments alone. Machine learning models can augment experimental measurements of materials fitness to accelerate identification of useful and novel materials in vast materials composition or property spaces. This review discusses the problems of large materials spaces, the types of evolutionary algorithms employed to identify or optimize materials, and how materials can be represented mathematically as genomes, describes fitness landscapes and mutation operators commonly employed in materials evolution, and provides a comprehensive summary of published research on the use of evolutionary methods to generate new catalysts, phosphors, and a range of other materials. The review identifies the potential for evolutionary methods to revolutionize a wide range of manufacturing, medical, and materials based industries.

  11. New Approaches to HSCT Multidisciplinary Design and Optimization

    NASA Technical Reports Server (NTRS)

    Schrage, D. P.; Craig, J. I.; Fulton, R. E.; Mistree, F.

    1996-01-01

    The successful development of a capable and economically viable high speed civil transport (HSCT) is perhaps one of the most challenging tasks in aeronautics for the next two decades. At its heart it is fundamentally the design of a complex engineered system that has significant societal, environmental and political impacts. As such it presents a formidable challenge to all areas of aeronautics, and it is therefore a particularly appropriate subject for research in multidisciplinary design and optimization (MDO). In fact, it is starkly clear that without the availability of powerful and versatile multidisciplinary design, analysis and optimization methods, the design, construction and operation of an HSCT simply cannot be achieved. The present research project is focused on the development and evaluation of MDO methods that, while broader and more general in scope, are particularly appropriate to the HSCT design problem. The research aims not only to develop the basic methods but also to apply them to relevant examples from the NASA HSCT R&D effort. The research involves a three-year effort aimed first at describing the HSCT MDO problem, next at formulating the problem, and finally at solving a significant portion of it.

  12. Discovery and Optimization of Materials Using Evolutionary Approaches.

    PubMed

    Le, Tu C; Winkler, David A

    2016-05-25

    Materials science is undergoing a revolution, generating valuable new materials such as flexible solar panels, biomaterials and printable tissues, new catalysts, polymers, and porous materials with unprecedented properties. However, the number of potentially accessible materials is immense. Artificial evolutionary methods such as genetic algorithms, which explore large, complex search spaces very efficiently, can be applied to the identification and optimization of novel materials more rapidly than by physical experiments alone. Machine learning models can augment experimental measurements of materials fitness to accelerate identification of useful and novel materials in vast materials composition or property spaces. This review discusses the problems of large materials spaces, the types of evolutionary algorithms employed to identify or optimize materials, and how materials can be represented mathematically as genomes, describes fitness landscapes and mutation operators commonly employed in materials evolution, and provides a comprehensive summary of published research on the use of evolutionary methods to generate new catalysts, phosphors, and a range of other materials. The review identifies the potential for evolutionary methods to revolutionize a wide range of manufacturing, medical, and materials based industries. PMID:27171499

  13. A stochastic optimization approach for integrated urban water resource planning.

    PubMed

    Huang, Y; Chen, J; Zeng, S; Sun, F; Dong, X

    2013-01-01

    Urban water is facing the challenges of both scarcity and deteriorating water quality. Consideration of nonconventional water resources has increasingly become essential in urban water resource planning over the last decade. In addition, rapid urbanization and economic development have led to increasingly uncertain water demand and fragile water infrastructure. Planning of urban water resources thus needs not only an integrated consideration of both conventional and nonconventional urban water resources, including reclaimed wastewater and harvested rainwater, but also the ability to design under large future uncertainties for better reliability. This paper develops an integrated nonlinear stochastic optimization model for urban water resource evaluation and planning in order to optimize urban water flows. It accounts for not only water quantity but also water quality from different sources and for different uses with different costs. The model was successfully applied to a case study in Beijing, which is facing a significant water shortage. The results reveal how various urban water resources could be cost-effectively allocated under different planning alternatives and how their reliabilities would change.

  14. Characterizing and Optimizing Photocathode Laser Distributions for Ultra-low Emittance Electron Beam Operations

    SciTech Connect

    Zhou, F.; Bohler, D.; Ding, Y.; Gilevich, S.; Huang, Z.; Loos, H.; Ratner, D.; Vetter, S.

    2015-12-07

    The photocathode RF gun has been widely used to generate high-brightness electron beams for many different applications. We found that the drive laser distributions in such RF guns play an important role in minimizing the electron beam emittance. Characterizing the laser distributions with measurable parameters, and optimizing beam emittance against those parameters in both the spatial and temporal directions, is highly desirable for high-brightness electron beam operation. In this paper, we report systematic measurements and simulations of the dependence of emittance on the measurable parameters representing the spatial and temporal laser distributions at the photocathode RF gun systems of the Linac Coherent Light Source. The tolerable parameter ranges for photocathode drive laser distributions in both directions are presented for ultra-low-emittance beam operations.

  15. Piece-wise mixed integer programming for optimal sizing of surge control devices in water distribution systems

    NASA Astrophysics Data System (ADS)

    Skulovich, Olya; Bent, Russell; Judi, David; Perelman, Lina Sela; Ostfeld, Avi

    2015-06-01

    Despite their potentially catastrophic impact, transients are often ignored or treated ad hoc when designing water distribution systems. To address this problem, we introduce a new piecewise function-fitting model that is integrated with mixed integer programming to optimally place and size surge tanks for transient control. The key features of the algorithm are a model-driven discretization of the search space, a linear approximation of the nonsmooth system response surface to transients, and a mixed integer linear programming optimization. Results indicate that high-quality solutions can be obtained within a reasonable number of function evaluations and demonstrate the computational effectiveness of the approach through two case studies. The work investigates one type of surge control device (the closed surge tank) for a specified set of transient events. The performance of the algorithm relies on the assumption that there is a smooth relationship between the objective function and tank size. Results indicate the potential of the approach for optimal surge control design in water systems.

  16. A simple approach to metal hydride alloy optimization

    NASA Technical Reports Server (NTRS)

    Lawson, D. D.; Miller, C.; Landel, R. F.

    1976-01-01

    Certain metals and related alloys can combine with hydrogen in a reversible fashion, so that on being heated they release a portion of the gas. Such materials may find application in the large-scale storage of hydrogen. Metals and alloys that show high dissociation pressure at low temperatures and low endothermic heat of dissociation, and are therefore desirable for hydrogen storage, give values of the Hildebrand-Scott solubility parameter between 100 and 118 hildebrands (Ref. 1), close to that of dissociated hydrogen. All of the less practical storage systems give much lower values of the solubility parameter. By using the Hildebrand solubility parameter as a criterion, and applying the mixing rule to combinations of known alloys and solid solutions, correlations are made to optimize alloy compositions and maximize hydrogen storage capacity.
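
    A minimal sketch of the screening logic: apply a linear volume-fraction mixing rule to a candidate composition and check the result against the 100-118 hildebrand window. The linear rule is an assumption consistent with the simple mixing rule the abstract describes, and the constituent delta values below are illustrative, not measured data.

```python
def mixed_solubility_parameter(fractions, deltas):
    """Volume-fraction mixing rule: delta_mix = sum(phi_i * delta_i)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(phi * d for phi, d in zip(fractions, deltas))

# e.g. screen a 60/40 combination of two constituents (illustrative values)
delta_mix = mixed_solubility_parameter([0.6, 0.4], [95.0, 130.0])  # -> 109.0
print(delta_mix, 100.0 <= delta_mix <= 118.0)   # inside the target window
```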

  17. Particle Swarm Optimization Approach in a Consignment Inventory System

    NASA Astrophysics Data System (ADS)

    Sharifyazdi, Mehdi; Jafari, Azizollah; Molamohamadi, Zohreh; Rezaeiahari, Mandana; Arshizadeh, Rahman

    2009-09-01

    Consignment inventory (CI) is inventory that is in the possession of the customer but is still owned by the supplier. This creates a condition of shared risk, whereby the supplier risks the capital investment associated with the inventory while the customer risks dedicating retail space to the product. This paper considers both the vendor's and the retailers' costs in an integrated model. The vendor here is a warehouse that stores one type of product and supplies it at the same wholesale price to multiple retailers, who then sell the product in independent markets at retail prices. Our main aim is to design a CI system that generates minimum costs for the two parties. A particle swarm optimization (PSO) algorithm is developed to calculate the values of the decision variables. Finally, a sensitivity analysis is performed to examine the effect of each parameter on the decision variables, and PSO performance is compared with a genetic algorithm.

  18. The Goodwyn Field - an integrated approach to optimal field development

    SciTech Connect

    Newman, S.H.; Taylor, N.C.

    1996-12-31

    The Goodwyn gas field is located some 130 km offshore of Western Australia in a water depth of 130 m and is currently under development. First production commenced in February 1995. The rich gas (CGR of 90 bbl/MMscf) is trapped within fluvio-deltaic reservoirs of the Triassic Mungaroo Formation in a large rotated fault block on the northwestern edge of the Dampier Sub-Basin. The reservoir units, ranging in thickness between 30 and 80 meters, dip gently below the overlying Cretaceous shales, which provide the updip seal. The target production levels and ultimate recovery are based on the optimization of gas recycling along strike in the individual reservoir units. The success of the development plan depends on an accurate model of the reservoir architecture. Prior to development drilling, only four wells had penetrated the primary reservoir units. Successful development planning required the recognition and management of key subsurface uncertainties. Integration between seismic interpretation, stochastic reservoir modelling and reservoir engineering proved essential to achieving the development objectives. A detailed evaluation of the reservoir stratigraphy, sedimentology, high-resolution seismic and high-resolution palynology provided the framework for the 3D stochastic reservoir modelling. The modelling converted this information into a number of geological realizations, which were then used to generate a family of dynamic reservoir models. The locations of the various development wells were thus optimized on a risked basis. Seven development wells have now been drilled, and although these wells have shown that there is more variability than originally envisaged, the broad framework of the reservoir model remains robust.

  19. The Goodwyn Field - an integrated approach to optimal field development

    SciTech Connect

    Newman, S.H.; Taylor, N.C.

    1996-01-01

    The Goodwyn gas field is located some 130 km offshore of Western Australia in a water depth of 130 m and is currently under development. First production commenced in February 1995. The rich gas (CGR of 90 bbl/MMscf) is trapped within fluvio-deltaic reservoirs of the Triassic Mungaroo Formation in a large rotated fault block on the northwestern edge of the Dampier Sub-Basin. The reservoir units, ranging in thickness between 30 and 80 meters, dip gently below the overlying Cretaceous shales, which provide the updip seal. The target production levels and ultimate recovery are based on the optimization of gas recycling along strike in the individual reservoir units. The success of the development plan depends on an accurate model of the reservoir architecture. Prior to development drilling, only four wells had penetrated the primary reservoir units. Successful development planning required the recognition and management of key subsurface uncertainties. Integration between seismic interpretation, stochastic reservoir modelling and reservoir engineering proved essential to achieving the development objectives. A detailed evaluation of the reservoir stratigraphy, sedimentology, high-resolution seismic and high-resolution palynology provided the framework for the 3D stochastic reservoir modelling. The modelling converted this information into a number of geological realizations, which were then used to generate a family of dynamic reservoir models. The locations of the various development wells were thus optimized on a risked basis. Seven development wells have now been drilled, and although these wells have shown that there is more variability than originally envisaged, the broad framework of the reservoir model remains robust.

  20. Optimizing Geographic Allotment of Photovoltaic Capacity in a Distributed Generation Setting: Preprint

    SciTech Connect

    Urquhart, B.; Sengupta, M.; Keller, J.

    2012-09-01

    A multi-objective optimization was performed to allocate 2 MW of PV among four candidate sites on the island of Lanai such that energy was maximized and variability, in the form of ramp rates, was minimized. This produced an optimal solution set providing a range of geographic allotment alternatives for the fixed PV capacity. Within the optimal set, a tradeoff between energy produced and variability experienced was found, whereby a decrease in variability always necessitates a simultaneous decrease in energy. A design point within the optimal set was selected for study that decreased extreme ramp rates by over 50% while decreasing annual energy generation by only 3% relative to the maximum-generation allocation. To quantify the selected allotment mix, a metric was developed, called the ramp ratio, which compares the ramping magnitude when all capacity is allotted to a single location with the aggregate ramping magnitude in a distributed scenario. The ramp ratio simultaneously quantifies how much smoothing a distributed scenario would experience over single-site allotment and how much a single site is being under-utilized for its ability to reduce aggregate variability. This paper creates a framework for use by cities and municipal utilities to reduce variability impacts while planning for high penetration of PV on the distribution grid.
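
    One plausible reading of the ramp ratio, in Python; the paper's exact normalization may differ, so this is an illustrative sketch of the metric's intent rather than its published definition.

```python
import numpy as np

def ramp_ratio(single_site_power, distributed_power):
    """Ramping magnitude with all capacity at one site divided by the
    aggregate ramping magnitude of the distributed allotment.
    Values > 1 indicate smoothing from geographic dispersion."""
    ramp = lambda p: np.abs(np.diff(p)).sum()       # total ramping magnitude
    return ramp(single_site_power) / ramp(np.sum(distributed_power, axis=0))

# toy example: two sites whose fluctuations partially cancel
t = np.linspace(0, 10, 200)
site1 = 1.0 + 0.3 * np.sin(5 * t)
site2 = 1.0 + 0.3 * np.sin(5 * t + 2.5)             # partially out of phase
print(ramp_ratio(2 * site1, np.vstack([site1, site2])))  # > 1: smoothing
```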

  1. Optimal Diagnostic Approaches for Patients with Suspected Small Bowel Disease

    PubMed Central

    Kim, Jae Hyun; Moon, Won

    2016-01-01

    While the domain of gastrointestinal endoscopy has made great strides over the last several decades, endoscopic assessment of the small bowel continues to be challenging. Recently, with the development of new technology including video capsule endoscopy, device-assisted enteroscopy, and computed tomography/magnetic resonance enterography, a more thorough investigation of the small bowel is possible. In this article, we review the systematic approach for patients with suspected small bowel disease based on these advanced endoscopic and imaging systems. PMID:27334413

  2. Observation of the Field, Current and Force Distributions in an Optimized Superconducting Levitation with Translational Symmetry

    NASA Astrophysics Data System (ADS)

    Ye, Chang-Qing; Ma, Guang-Tong; Liu, Kun; Wang, Jia-Su

    2016-08-01

    The superconducting levitation realized by immersing the high-temperature superconductors (HTSs) into nonuniform magnetic field is deemed promising in a wide range of industrial applications such as maglev transportation and kinetic energy storage. Using a well-established electromagnetic model to mathematically describe the HTS, we have developed an efficient scheme that is capable of intelligently and globally optimizing the permanent magnet guideway (PMG) with single or multiple HTSs levitated above for the maglev transportation applications. With maximizing the levitation force as the principal objective, we optimized the dimensions of a Halbach-derived PMG to observe how the field, current and force distribute inside the HTSs when the optimized situation is achieved. Using a pristine PMG as a reference, we have analyzed the critical issues for enhancing the levitation force through comparing the field, current and force distributions between the optimized and pristine PMGs. It was also found that the optimized dimensions of the PMG are highly dependent upon the levitated HTS. Moreover, the guidance force is not always contradictory to the levitation force and may also be enhanced when the levitation force is prescribed to be the principle objective, depending on the configuration of levitation system and lateral displacement.

  3. Optimal groundwater remediation design of pump and treat systems via a simulation-optimization approach and firefly algorithm

    NASA Astrophysics Data System (ADS)

    Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard

    2015-01-01

    In the present study, an optimization approach based on the firefly algorithm (FA) is combined with a finite element simulation method (FEM) to determine the optimum design of pump and treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA) and the FA is found to have a better convergence rate than the GA.
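
    The firefly update itself is standard (Yang's formulation): each firefly moves toward every brighter one with attractiveness beta0*exp(-gamma*r^2), plus a small random walk. The sketch below replaces the coupled FEM transport simulation with a generic objective `f`, so it illustrates the optimizer only, not the paper's simulation-optimization loop.

```python
import numpy as np

def firefly_minimize(f, bounds, n=20, iters=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Canonical firefly algorithm for minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n, lo.size))
    fit = np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:                       # j is "brighter"
                    r2 = ((x[i] - x[j]) ** 2).sum()
                    beta = beta0 * np.exp(-gamma * r2)    # attractiveness
                    x[i] += beta * (x[j] - x[i]) \
                            + alpha * (rng.random(lo.size) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    fit[i] = f(x[i])
    return x[fit.argmin()], fit.min()

# e.g. a stand-in for total pumping cost over two well rates:
best, cost = firefly_minimize(lambda q: (q ** 2).sum(), [(0, 5), (0, 5)])
```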

  4. Reviving oscillation with optimal spatial period of frequency distribution in coupled oscillators

    NASA Astrophysics Data System (ADS)

    Deng, Tongfa; Liu, Weiqing; Zhu, Yun; Xiao, Jinghua; Kurths, Jürgen

    2016-09-01

    The spatial distribution of a system's frequencies has a significant influence on the critical coupling strengths for amplitude death (AD) in coupled oscillators. We find that the left and right critical coupling strengths for AD have quite different relations to the increasing spatial period m of the frequency distribution. The left one has a negative linear relationship with m on log-log axes for small initial frequency mismatches, but remains constant for large initial frequency mismatches. The right one is a quadratic function of the spatial period m of the frequency distribution on log-log axes. There is an optimal spatial period m0 of the frequency distribution at which the coupled system has a minimal critical strength to transition from an AD regime to revived oscillation. Moreover, the optimal spatial period m0 of the frequency distribution is found to be related to the system size √N. Numerical examples are explored to reveal the inner regimes of the effects of the spatial frequency distribution on AD.

  5. A new approach to the Pontryagin maximum principle for nonlinear fractional optimal control problems

    NASA Astrophysics Data System (ADS)

    Ali, Hegagi M.; Pereira, Fernando Lobo; Gama, Sílvio M. A.

    2016-09-01

    In this paper, we discuss a new general formulation of fractional optimal control problems whose performance index is in the fractional integral form and whose dynamics are given by a set of fractional differential equations in the Caputo sense. We use a new approach to prove necessary conditions of optimality, in the form of a Pontryagin maximum principle, for fractional nonlinear optimal control problems. Moreover, a new method based on a generalization of the Mittag-Leffler function is used to solve this class of fractional optimal control problems. A simple example is provided to illustrate the effectiveness of our main result.
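
    For reference, the Caputo derivative in which the dynamics are posed (the standard definition; the notation is ours):

```latex
% Caputo fractional derivative of order \alpha, with n-1 < \alpha < n:
{}^{C}\!D^{\alpha}_{t} f(t)
  = \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t}
      \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}} \, d\tau
```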

  6. High-throughput screening for lead optimization: a rational approach.

    PubMed

    Bajpai, M; Adkison, K K

    2000-01-01

    Genetics, combinatorial chemistry and automation have greatly increased the number of therapeutic programs and compounds in the pharmaceutical industry pipeline. The increase in the number of new molecular entities (NMEs) has led to changes in the process by which compounds are evaluated during drug discovery and selected for clinical development. There is a need for the earlier determination of the absorption, distribution and elimination characteristics of NMEs, and drug metabolism scientists are working to develop higher-throughput in vitro screens for absorption, distribution and metabolism of compounds. These screens rely on advancements in analytical technology and molecular biology, and frequently use human or 'humanized' tissues. Throughput to determine in vivo pharmacokinetics has also progressed with the use of mixture dosing and sample pooling methods. The continued refinement of in vitro and in vivo ADME methods will allow the industry to evaluate the absorption and disposition characteristics of larger numbers of molecules and will ultimately allow the prediction of human pharmacokinetics at early stages of the development process.

  7. Access Control for Agent-based Computing: A Distributed Approach.

    ERIC Educational Resources Information Center

    Antonopoulos, Nick; Koukoumpetsos, Kyriakos; Shafarenko, Alex

    2001-01-01

    Discusses the mobile software agent paradigm that provides a foundation for the development of high performance distributed applications and presents a simple, distributed access control architecture based on the concept of distributed, active authorization entities (lock cells), any combination of which can be referenced by an agent to provide…

  8. Fractional System Identification: An Approach Using Continuous Order-Distributions

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Lorenzo, Carl F.

    1999-01-01

    This paper discusses the identification of fractional- and integer-order systems using the concept of continuous order-distribution. Based on the ability to define systems using continuous order-distributions, it is shown that frequency domain system identification can be performed using least squares techniques after discretizing the order-distribution.

  9. A comparison between gradient descent and stochastic approaches for parameter optimization of a sea ice model

    NASA Astrophysics Data System (ADS)

    Sumata, H.; Kauker, F.; Gerdes, R.; Köberle, C.; Karcher, M.

    2013-07-01

    Two types of optimization methods were applied to a parameter optimization problem in a coupled ocean-sea ice model of the Arctic, and applicability and efficiency of the respective methods were examined. One optimization utilizes a finite difference (FD) method based on a traditional gradient descent approach, while the other adopts a micro-genetic algorithm (μGA) as an example of a stochastic approach. The optimizations were performed by minimizing a cost function composed of model-data misfit of ice concentration, ice drift velocity and ice thickness. A series of optimizations were conducted that differ in the model formulation ("smoothed code" versus standard code) with respect to the FD method and in the population size and number of possibilities with respect to the μGA method. The FD method fails to estimate optimal parameters due to the ill-shaped nature of the cost function caused by the strong non-linearity of the system, whereas the genetic algorithms can effectively estimate near optimal parameters. The results of the study indicate that the sophisticated stochastic approach (μGA) is of practical use for parameter optimization of a coupled ocean-sea ice model with a medium-sized horizontal resolution of 50 km × 50 km as used in this study.

  10. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    PubMed

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way. PMID:26497359

  11. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    PubMed

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.

  12. Approaching the optimal transurethral resection of a bladder tumor.

    PubMed

    Jurewicz, Michael; Soloway, Mark S

    2014-06-01

    A complete transurethral resection of a bladder tumor (TURBT) is essential for adequately diagnosing, staging, and treating bladder cancer. A TURBT is deceptively difficult and is a highly underappreciated procedure. An incomplete resection is the major reason for the high incidence of recurrence following initial transurethral resection and thus contributes to the suboptimal care of our patients. Our objective was to review the preoperative, intraoperative, and postoperative considerations for performing an optimal TURBT. The European Association of Urology, the Society of International Urology, and the American Urological Association guidelines emphasize a complete resection of all visible tumor during a TURBT. This review emphasizes the various techniques and treatments, including photodynamic cystoscopy, intravesical chemotherapy, and a perioperative checklist, that can be used to help enable a complete resection and reduce the recurrence rate. A Medline/PubMed search was completed for original and review articles related to transurethral resection and the treatment of non-muscle-invasive bladder cancer. The major findings were analyzed and are presented from large prospective, retrospective, and review studies.

  13. Approaching the optimal transurethral resection of a bladder tumor

    PubMed Central

    Jurewicz, Michael; Soloway, Mark S.

    2014-01-01

    A complete transurethral resection of a bladder tumor (TURBT) is essential for adequately diagnosing, staging, and treating bladder cancer. A TURBT is deceptively difficult and is a highly underappreciated procedure. An incomplete resection is the major reason for the high incidence of recurrence following initial transurethral resection and thus contributes to the suboptimal care of our patients. Our objective was to review the preoperative, intraoperative, and postoperative considerations for performing an optimal TURBT. The European Association of Urology, the Society of International Urology, and the American Urological Association guidelines emphasize a complete resection of all visible tumor during a TURBT. This review emphasizes the various techniques and treatments, including photodynamic cystoscopy, intravesical chemotherapy, and a perioperative checklist, that can be used to help enable a complete resection and reduce the recurrence rate. A Medline/PubMed search was completed for original and review articles related to transurethral resection and the treatment of non-muscle-invasive bladder cancer. The major findings were analyzed and are presented from large prospective, retrospective, and review studies. PMID:26328154

  14. Dynamic Range Size Analysis of Territorial Animals: An Optimality Approach.

    PubMed

    Tao, Yun; Börger, Luca; Hastings, Alan

    2016-10-01

    Home range sizes of territorial animals are often observed to vary periodically in response to seasonal changes in foraging opportunities. Here we develop the first mechanistic model focused on the temporal dynamics of home range expansion and contraction in territorial animals. We demonstrate how simple movement principles can lead to a rich suite of range size dynamics, by balancing foraging activity with defensive requirements and incorporating optimal behavioral rules into mechanistic home range analysis. Our heuristic model predicts three general temporal patterns that have been observed in empirical studies across multiple taxa. First, a positive correlation between age and territory quality promotes shrinking home ranges over an individual's lifetime, with maximal range size variability shortly before the adult stage. Second, poor sensory information, low population density, and large resource heterogeneity may all independently facilitate range size instability. Finally, aggregation behavior toward forage-rich areas helps produce divergent home range responses between individuals from different age classes. The model has broad applications for addressing important unknowns in animal space use, with potential uses in conservation and health management strategies. PMID:27622879

  15. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in the time and frequency domains, are used to determine whether model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on the admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short-period dynamics of the F-16 is used for illustration.
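
    The estimation step admits a compact max-min statement (our notation, with g_j the margin by which validation requirement j is satisfied):

```latex
\hat{\theta} \;=\; \arg\max_{\theta} \; \min_{j} \; g_{j}(\theta)
```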

  16. The 15-meter antenna performance optimization using an interdisciplinary approach

    NASA Astrophysics Data System (ADS)

    Grantham, William L.; Schroeder, Lyle C.; Bailey, Marion C.; Campbell, Thomas G.

    1988-05-01

    A 15-meter diameter deployable antenna has been built and is being used as an experimental test system with which to develop interdisciplinary controls, structures, and electromagnetics technology for large space antennas. The program objective is to study interdisciplinary issues important in optimizing large space antenna performance for a variety of potential users. The 15-meter antenna utilizes a hoop column structural concept with a gold-plated molybdenum mesh reflector. One feature of the design is the use of adjustable control cables to improve the paraboloid reflector shape. Manual adjustment of the cords after initial deployment improved surface smoothness relative to the build accuracy from 0.140 in. RMS to 0.070 in. Preliminary structural dynamics tests and near-field electromagnetic tests were made. The antenna is now being modified for further testing. Modifications include addition of a precise motorized control cord adjustment system to make the reflector surface smoother and an adaptive feed for electronic compensation of reflector surface distortions. Although the previous test results show good agreement between calculated and measured values, additional work is needed to study modelling limits for each discipline, evaluate the potential of adaptive feed compensation, and study closed-loop control performance in a dynamic environment.

  17. Evaluation of analytical and numerical approaches for the estimation of groundwater travel time distribution

    NASA Astrophysics Data System (ADS)

    Basu, Nandita B.; Jindal, Priyanka; Schilling, Keith E.; Wolter, Calvin F.; Takle, Eugene S.

    2012-12-01

    It is critical that stakeholders are aware of the lag time necessary for conservation practices to demonstrate a positive impact on surface water quality. For solutes like nitrate that are transported primarily by the groundwater pathway, the lag time is a function of the groundwater travel time distribution (TTD). We used three models of varying levels of complexity to estimate the steady-state TTD of a shallow, unconfined aquifer in a small Iowa watershed: (a) an analytic model, (b) a GIS approach, and (c) a MODFLOW model. The analytic model was the least input-intensive, whereas the GIS and MODFLOW approaches required detailed data for model development. The resulting TTDs displayed an exponential distribution with good agreement among all three methods (mean travel times of 16.2 years for the analytic model, 19.6 years for the GIS model and 20.5 years for the MODFLOW model). The greater deviation in the analytic model was attributed to the difficulty of estimating a representative saturated thickness in an unconfined aquifer. The correspondence between the spatial travel time distributions generated by GIS and MODFLOW was a function of landscape position, with greater correspondence in uplands than in floodplains. In the floodplains the land surface slope is a poor approximation of the water table gradient, which is captured by the MODFLOW model but not by the GIS approach, which uses the land surface as a surrogate for the water table. Study results indicate that, except for cases where there are marked differences between the water table and the land surface, simpler approaches (analytic and GIS) can be used to estimate the TTDs required for the design and optimal placement of conservation practices and for communicating lag-time issues to the public.
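
    As a small illustration of what the exponential TTD found above implies for lag times, the fragment below uses the three reported mean travel times; the exponential form is taken from the abstract, while the 10-year horizon is an arbitrary choice.

        # Fraction of groundwater discharge younger than time t under an
        # exponential TTD with mean T: F(t) = 1 - exp(-t / T).
        import numpy as np

        means = {"analytic": 16.2, "GIS": 19.6, "MODFLOW": 20.5}  # years
        t = 10.0  # years since a conservation practice was adopted

        for name, T in means.items():
            frac = 1.0 - np.exp(-t / T)
            print(f"{name:8s}: {100 * frac:.0f}% of discharge post-dates the practice")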

  18. A comparison of two closely-related approaches to aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.
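
    The distinction can be made concrete on a toy discretized problem (not the paper's duct-flow model): below, the gradient of J(a) = ||u(a) - u*||^2 subject to A(a)u = b is computed once by black-box central differences and once by an adjoint solve on the discretized equations, in the spirit of the implicit gradient approach. The tridiagonal A(a) is an invented stand-in for a flow discretization.

        import numpy as np

        n = 20
        b = np.ones(n)
        u_star = np.linspace(0.0, 1.0, n)
        D = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # "diffusion" stencil

        def J(a):
            u = np.linalg.solve(a * D + np.eye(n), b)
            return float(np.sum((u - u_star) ** 2))

        def grad_adjoint(a):
            A = a * D + np.eye(n)
            u = np.linalg.solve(A, b)
            lam = np.linalg.solve(A.T, 2 * (u - u_star))  # adjoint solve
            return float(-lam @ (D @ u))                  # dA/da = D

        a0, h = 1.5, 1e-6
        fd = (J(a0 + h) - J(a0 - h)) / (2 * h)            # black-box gradient
        print("finite difference:", fd, " adjoint:", grad_adjoint(a0))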

  19. Multi-stage optimal design for groundwater remediation: a hybrid bi-level programming approach.

    PubMed

    Zou, Yun; Huang, Guo H; He, Li; Li, Hengliang

    2009-08-11

    This paper presents the development of a hybrid bi-level programming approach for supporting multi-stage groundwater remediation design. To investigate remediation performances, a subsurface model was employed to simulate contaminant transport. A mixed-integer nonlinear optimization model was formulated in order to evaluate different remediation strategies. Multivariate relationships based on a filtered stepwise clustering analysis were developed to facilitate the incorporation of a simulation model within a nonlinear optimization framework. By using the developed statistical relationships, predictions needed for calculating the objective function value can be quickly obtained during the search process. The main advantage of the developed approach is that the remediation strategy can be adjusted from stage to stage, which makes the optimization more realistic. The proposed approach was examined through its application to a real-world aquifer remediation case in western Canada. The optimization results based on this application can help the decision makers to comprehensively evaluate remediation performance.

  20. The multidisciplinary design optimization of a distributed propulsion blended-wing-body aircraft

    NASA Astrophysics Data System (ADS)

    Ko, Yan-Yee Andy

    The purpose of this study is to examine the multidisciplinary design optimization (MDO) of a distributed propulsion blended-wing-body (BWB) aircraft. The BWB is a hybrid shape resembling a flying wing, placing the payload in the inboard sections of the wing. The distributed propulsion concept involves replacing a small number of large engines with many smaller engines. The distributed propulsion concept considered here ducts part of the engine exhaust to exit out along the trailing edge of the wing, and it affects almost every aspect of the BWB design. Methods to model these effects and integrate them into an MDO framework were developed. The most important effect modeled is the impact on propulsive efficiency. There has been conjecture that blowing out of the trailing edge of a wing increases propulsive efficiency. A mathematical formulation was derived to explain this: it showed that the jet 'fills in' the wake behind the body, improving the overall aerodynamic/propulsion system and resulting in an increased propulsive efficiency. The distributed propulsion concept also replaces the conventional elevons with a vectored thrust system for longitudinal control. An extension of Spence's jet flap theory was developed to estimate the effects of this vectored thrust system on longitudinal control, and it was found to provide a reasonable estimate of the control capability of the aircraft. An MDO framework was developed, integrating all the distributed propulsion effects modeled. Using a gradient-based optimization algorithm, the distributed propulsion BWB aircraft was optimized and compared with a similarly optimized conventional BWB design. Both designs are for an 800-passenger, Mach 0.85, 7000 nmi mission. The MDO results showed differences of 4% in takeoff gross weight and 2% in fuel weight between the distributed propulsion and conventional designs. Both designs have similar planform shapes.

  1. Physiological approach to optimal stereographic game programming: a technical guide

    NASA Astrophysics Data System (ADS)

    Martens, William L.; McRuer, Robert; Childs, C. Timothy; Viirree, Erik

    1996-04-01

    With the advent of mass distribution of consumer VR games comes an imperative to set health and safety standards for the hardware and software used to deliver stereographic content. This is particularly important for game developers who intend to present this stereographic content via head-mounted display (HMD). The visual discomfort commonly reported by users of HMD-based VR games could presumably be kept to a minimum if game developers were provided with standards for the display of stereographic imagery. In this paper, we draw upon both results of research in binocular vision and practical methods from clinical optometry to develop technical guidelines for programming stereographic games that have the end user's comfort and safety in mind. The paper provides general strategies for user-centered implementation of 3D virtual worlds, as well as pictorial examples demonstrating a natural means for rendering stereographic imagery more comfortable to view in games employing a first-person perspective.

  2. Optimal indolence: a normative microscopic approach to work and leisure

    PubMed Central

    Niyogi, Ritwik K.; Breton, Yannick-Andre; Solomon, Rebecca B.; Conover, Kent; Shizgal, Peter; Dayan, Peter

    2014-01-01

    Dividing limited time between work and leisure when both have their attractions is a common everyday decision. We provide a normative control-theoretic treatment of this decision that bridges economic and psychological accounts. We show how our framework applies to free-operant behavioural experiments in which subjects are required to work (depressing a lever) for sufficient total time (called the price) to receive a reward. When the microscopic benefit-of-leisure increases nonlinearly with duration, the model generates behaviour that qualitatively matches various microfeatures of subjects’ choices, including the distribution of leisure bout durations as a function of the pay-off. We relate our model to traditional accounts by deriving macroscopic, molar, quantities from microscopic choices. PMID:24284898

  3. Investigation of Cost and Energy Optimization of Drinking Water Distribution Systems.

    PubMed

    Cherchi, Carla; Badruzzaman, Mohammad; Gordon, Matthew; Bunn, Simon; Jacangelo, Joseph G

    2015-11-17

    Holistic management of water and energy resources through energy and water quality management systems (EWQMSs) has traditionally aimed at energy cost reduction with limited or no emphasis on energy efficiency or greenhouse gas minimization. This study expanded the existing EWQMS framework and determined the impact of different management strategies for energy cost and energy consumption (e.g., carbon footprint) reduction on system performance at two drinking water utilities in California (United States). The results showed that optimizing for cost led to cost reductions of 4% (Utility B, summer) to 48% (Utility A, winter). The energy optimization strategy successfully found the lowest-energy operation and achieved energy usage reductions of 3% (Utility B, summer) to 10% (Utility A, winter). The findings revealed that there may be a trade-off between cost optimization (dollars) and energy use (kilowatt-hours), particularly in the summer: optimizing the system to minimize energy use incurred cost increases of 64% and 184% compared with the cost-optimization scenario. Water age simulations through hydraulic modeling did not reveal any adverse effects on water quality in the distribution system or in tanks from pump schedule optimization targeting either cost or energy minimization. PMID:26461069

  5. Integrated Data-Archive and Distributed Hydrological Modelling System for Optimized Dam Operation

    NASA Astrophysics Data System (ADS)

    Shibuo, Yoshihiro; Jaranilla-Sanchez, Patricia Ann; Koike, Toshio

    2013-04-01

    In 2012, typhoon Bopha, which passed through the southern part of the Philippines, devastated the nation, leaving hundreds dead and causing significant destruction across the country. Deadly cyclone-related events occur almost every year in the region, and such extremes are expected to increase in both frequency and magnitude around Southeast Asia during the course of global climate change. Our ability to confront such hazardous events is limited by the available engineering infrastructure and the performance of weather prediction. One countermeasure strategy is, for instance, early release of reservoir water (lowering the dam water level) during the flood season to protect the downstream region from impending floods. However, over-release of reservoir water adversely affects the regional economy through the loss of water resources that still have value for power generation and for agricultural and industrial use. Furthermore, accurate precipitation forecasting is itself a difficult task, because the chaotic nature of the atmosphere introduces uncertainty into model predictions over time. Under these circumstances we present a novel approach to optimizing the contradicting objectives of preventing flood damage via a priori dam release while sustaining sufficient water supply during predicted storm events. By evaluating the forecast performance of the Meso-Scale Model Grid Point Value (GPV) against observed rainfall, uncertainty in model prediction is probabilistically taken into account and applied to the next GPV issuance to generate ensemble rainfalls. The ensemble rainfalls drive the coupled land-surface and distributed-hydrological model to derive the ensemble flood forecast. With dam status information taken into account, our integrated system estimates the most desirable a priori dam release through the shuffled complex evolution algorithm. The strength of the optimization system is further magnified by the online link to the Data Integration and
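
    The core trade-off admits a very compact sketch: choose a pre-storm release volume that minimizes expected cost over an ensemble of inflow forecasts, where over-release wastes stored water and under-release risks spill. All numbers, cost weights, and the grid search below are illustrative stand-ins; the system described above uses a full hydrological model and the shuffled complex evolution algorithm instead.

        import numpy as np

        rng = np.random.default_rng(1)
        inflow_ens = rng.gamma(shape=4.0, scale=5.0, size=100)   # 10^6 m3
        capacity, storage0 = 80.0, 60.0                          # 10^6 m3

        def expected_cost(release):
            s = storage0 - release + inflow_ens
            spill = np.maximum(s - capacity, 0.0)                # flood proxy
            shortage = np.maximum(0.5 * capacity - np.minimum(s, capacity), 0.0)
            return np.mean(10.0 * spill + shortage + 0.2 * release)

        candidates = np.linspace(0.0, 40.0, 81)
        best = min(candidates, key=expected_cost)
        print(f"a priori release: {best:.1f} x 10^6 m3, "
              f"expected cost: {expected_cost(best):.2f}")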

  6. Precision and the approach to optimality in quantum annealing processors

    NASA Astrophysics Data System (ADS)

    Johnson, Mark W.

    The last few years have seen both significant technological advances towards the practical application of quantum annealing (QA) algorithms and growing scientific interest in their underlying behaviour. A series of commercially available QA processors, most recently the D-Wave 2X(TM) 1000-qubit processor, have provided a valuable platform for empirical study of QA at a non-trivial scale. From this it has become clear that misspecification of Hamiltonian parameters is an important performance consideration, both for the goal of studying the underlying physics of QA and for that of building a practical and useful QA processor. The empirical study of the physics of QA requires a way to look beyond Hamiltonian misspecification. Recently, a solver metric called 'time-to-target' was proposed as a way to compare quantum annealing processors to classical heuristic algorithms. This approach puts emphasis on analyzing a solver's short-time approach to the ground state. In this presentation I will review the processor technology, based on superconducting flux qubits, and some of the known sources of error in Hamiltonian specification. I will then discuss recent advances in reducing Hamiltonian specification error, as well as review the time-to-target metric and empirical results analyzed in this way.

  7. Optimization Approaches for Designing Quantum Reversible Arithmetic Logic Unit

    NASA Astrophysics Data System (ADS)

    Haghparast, Majid; Bolhassani, Ali

    2016-03-01

    Reversible logic has emerged in recent years as a promising alternative for applications in low-power design and quantum computation, due to its ability to reduce power dissipation, an important research area in low-power VLSI and ULSI design. Many important contributions have been made in the literature towards reversible implementations of arithmetic and logical structures; however, there have not been many efforts directed towards efficient approaches for designing a reversible arithmetic logic unit (ALU). In this study, three efficient approaches are presented and their implementations in the design of reversible ALUs are demonstrated. Three new designs of a reversible one-digit arithmetic logic unit for quantum arithmetic are presented in this article. The paper provides explicit constructions of reversible ALUs effecting basic arithmetic operations with respect to the minimization of cost metrics. The architectures of the designs are proposed with each block realized using elementary quantum logic gates. Reversible implementations of the proposed designs are then analyzed and evaluated. The results demonstrate that the proposed designs are cost-effective compared with existing counterparts. All the scales are in the nanometric range.

  8. Academic Departmental Management: An Application of an Interactive Multicriterion Optimization Approach.

    ERIC Educational Resources Information Center

    Geoffrion, A. M.; And Others

    This paper presents the conceptual development and application of a new interactive approach for multicriterion optimization to the aggregate operating problem of an academic department. This approach provides a mechanism for assisting an administrator in determining resource allocation decisions and only requires local trade-off and preference…

  9. A simple reliability-based topology optimization approach for continuum structures using a topology description function

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wen, Guilin; Zhi Zuo, Hao; Qing, Qixiang

    2016-07-01

    The structural configuration obtained by deterministic topology optimization may represent a low reliability level and lead to a high failure rate. Therefore, it is necessary to take reliability into account for topology optimization. By integrating reliability analysis into topology optimization problems, a simple reliability-based topology optimization (RBTO) methodology for continuum structures is investigated in this article. The two-layer nesting involved in RBTO, which is time consuming, is decoupled by the use of a particular optimization procedure. A topology description function approach (TOTDF) and a first order reliability method are employed for topology optimization and reliability calculation, respectively. The problem of the non-smoothness inherent in TOTDF is dealt with using two different smoothed Heaviside functions and the corresponding topologies are compared. Numerical examples demonstrate the validity and efficiency of the proposed improved method. In-depth discussions are also presented on the influence of different structural reliability indices on the final layout.
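
    For reference, two commonly used smoothed Heaviside regularizations of a level-set-type topology description function are sketched below with bandwidth eps; these are generic textbook forms, not necessarily the exact pair compared in the article.

        import numpy as np

        def heaviside_poly(phi, eps):
            # Cubic polynomial smoothing on [-eps, eps], exact 0/1 outside.
            h = 0.75 * (phi / eps - phi ** 3 / (3 * eps ** 3)) + 0.5
            return np.where(phi < -eps, 0.0, np.where(phi > eps, 1.0, h))

        def heaviside_tanh(phi, eps):
            # Globally smooth tanh approximation.
            return 0.5 * (1.0 + np.tanh(phi / eps))

        phi = np.linspace(-1.0, 1.0, 9)
        print(np.round(heaviside_poly(phi, 0.5), 3))
        print(np.round(heaviside_tanh(phi, 0.5), 3))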

  10. [Optimization of organizational approaches to management of patients with atherosclerosis].

    PubMed

    Barbarash, L S; Barbarash, O L; Artamonova, G V; Sumin, A N

    2014-01-01

    Despite the undoubted achievements of modern cardiology in the prevention and treatment of atherosclerosis, cardiologists, neurologists, and vascular surgeons still face severe stenotic atherosclerotic lesions in different vascular regions, both symptomatic and asymptomatic. As a rule, hemodynamically significant stenoses at different locations are found only after acute vascular events have developed. In this regard, active detection of arterial stenoses in different vascular regions at the patient's first contact with care providers, when presenting with symptoms of ischemia, appears crucial, as does further monitoring of these stenoses. The article is dedicated to innovative organizational approaches to providing healthcare to patients with circulatory system diseases, approaches that have contributed to improving the demographic situation in Kuzbass.

  11. Optimal Capacity and Location Assessment of Natural Gas Fired Distributed Generation in Residential Areas

    NASA Astrophysics Data System (ADS)

    Khalil, Sarah My

    With the ever increasing use of natural gas to generate electricity, natural gas fired microturbines are being installed in residential areas to generate electricity locally. This research work discusses a generalized methodology for assessing the optimal capacity and locations for installing natural gas fired microturbines in a residential distribution network. The overall objective is to place microturbines so as to minimize the power loss occurring in the electrical distribution network, in such a way that the electric feeder does not need any upgrading. The IEEE 123 Node Test Feeder is selected as the test bed for validating the developed methodology. Three-phase unbalanced electric power flow is run in OpenDSS through a COM server, and the gas distribution network is analyzed using GASWorkS. A continual sensitivity analysis methodology is developed to select multiple DG locations, and an annual simulation is run to minimize annual average losses. The proposed placement of microturbines must be feasible in the gas distribution network and must not require gas pipeline reinforcement. The corresponding gas distribution network is developed in the GASWorkS software, and nodal pressures of the gas system are checked for various cases to investigate whether the existing gas distribution network can accommodate the penetration of the selected microturbines. The results indicate the optimal locations suitable for placing microturbines and the capacity that can be accommodated by the system, based on the consideration of overall minimum annual average losses as well as the nodal pressures guaranteed by the gas distribution network. The proposed method is generalized and can be used for any IEEE test feeder or an actual residential distribution network.
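
    A stripped-down version of the siting idea reads as follows: scan candidate buses on a radial feeder, evaluate feeder I^2R losses with the microturbine injected at each bus in turn, and keep the best bus. The toy feeder, constant-current load model, and DG size are assumptions for illustration, not the IEEE 123-node system or the OpenDSS/GASWorkS workflow described above.

        import numpy as np

        n_bus = 10
        load = np.full(n_bus, 10.0)     # A per bus (constant-current loads)
        r_branch = np.full(n_bus, 0.1)  # ohm; branch i feeds bus i
        dg_current = 40.0               # A injected by the microturbine

        def losses(dg_bus=None):
            inj = load.copy()
            if dg_bus is not None:
                inj[dg_bus] -= dg_current
            # Branch i carries the net injections of bus i and all downstream buses.
            branch_amps = np.cumsum(inj[::-1])[::-1]
            return float(np.sum(r_branch * branch_amps ** 2))

        best = min(range(n_bus), key=losses)
        print(f"base losses {losses():.0f} W; best DG bus {best}, "
              f"losses {losses(best):.0f} W")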

  12. Factorization and reduction methods for optimal control of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Powers, R. K.

    1985-01-01

    A Chandrasekhar-type factorization method is applied to the linear-quadratic optimal control problem for distributed parameter systems. An aeroelastic control problem is used as a model example to demonstrate that if computationally efficient algorithms, such as those of Chandrasekhar-type, are combined with the special structure often available to a particular problem, then an abstract approximation theory developed for distributed parameter control theory becomes a viable method of solution. A numerical scheme based on averaging approximations is applied to hereditary control problems. Numerical examples are given.

  13. Design of 3-dimensional complex airplane configurations with specified pressure distribution via optimization

    NASA Technical Reports Server (NTRS)

    Kubrynski, Krzysztof

    1991-01-01

    A subcritical panel method applied to flow analysis and aerodynamic design of complex aircraft configurations is presented. The analysis method is based on linearized, compressible, subsonic flow equations and indirect Dirichlet boundary conditions. Quadratic dipole and linear source distributions on flat panels are applied. In the case of aerodynamic design, the geometry which minimizes the differences between the design and actual pressure distributions is found iteratively, using a numerical optimization technique. Geometry modifications are modeled by the surface transpiration concept. Constraints with respect to the resulting geometry can be specified. A number of complex 3-dimensional design examples are presented. The software has been adapted to personal computers, with an unexpectedly low computational cost as a result.

  14. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty in a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach over LHS: (1) it is more effective and efficient; for example, the simulation time required to generate 1000 behavioral parameter sets is nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, meaning better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in appropriate ranges rather than being uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), the average relative band-width (RB) and the average deviation amplitude (D). Flood forecasting uncertainty is also substantially reduced with ɛ-NSGAII based sampling. This study provides a new sampling approach for improving multiple-metrics uncertainty analysis under the GLUE framework, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.

  15. Optimizations on supply and distribution of dissolved oxygen in constructed wetlands: A review.

    PubMed

    Liu, Huaqing; Hu, Zhen; Zhang, Jian; Ngo, Huu Hao; Guo, Wenshan; Liang, Shuang; Fan, Jinlin; Lu, Shaoyong; Wu, Haiming

    2016-08-01

    Dissolved oxygen (DO) is one of the most important factors influencing pollutant removal in constructed wetlands (CWs). However, problems of insufficient oxygen supply and inappropriate oxygen distribution commonly exist in traditional CWs. Detailed analyses of DO supply and distribution characteristics in different types of CWs are presented. It can be concluded that atmospheric reaeration (AR) is the most promising route to oxygen intensification. The paper summarizes possible optimizations of DO in CWs to improve their decontamination performance. Process optimizations (tidal flow, drop aeration, artificial aeration, hybrid systems) and parameter optimizations (plant, substrate and operating) are discussed in detail. Since economic and technical shortcomings are still reported in current studies, the review concludes with future prospects for oxygen research in CWs.

  16. Optimizing the bandwidth and noise performance of distributed multi-pump Raman amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xueming; Li, Yanhe

    2004-02-01

    Based on a hybrid genetic algorithm (HGA), the signal bandwidth of distributed multi-pump Raman amplifiers is optimized and the corresponding noise figure is obtained. The results show that: (1) the optimal signal bandwidth Δλ decreases with increasing span length L, e.g., Δλ is 79.6 nm for L=50 km and 41.5 nm for L=100 km under the simulated conditions; (2) the relationship between Δλ and L is approximately linear; (3) the equivalent noise figure can be negative and increases as L is extended; (4) there can be one or several global maxima of the signal bandwidth under given conditions; (5) to realize a fixed Δλ, several candidate designs can be obtained by means of the HGA, which has important applications in the practical design of distributed multi-pump Raman amplifiers.

  17. Optimal Allocation of Distributed Generation Minimizing Loss and Voltage Sag Problem-Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Biswas, S.; Goswami, S. K.

    2010-10-01

    In the present paper an attempt has been made to place distributed generation at an optimal location so as to improve technical as well as economic performance. Among the technical issues, voltage sag performance and losses are considered. A genetic algorithm is used as the optimization technique. For the sag analysis, the impact of three-phase symmetrical short-circuit faults is considered, with the total load disturbed during the faults taken as an indicator of sag performance. The solution algorithm is demonstrated on a 34-bus radial distribution system with some lateral branches. For simplicity, only one DG of predefined capacity is considered. MATLAB has been used as the programming environment.

  18. Optimizing spherical light-emitting diode array for highly uniform illumination distribution by employing genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Yanxia; Ji, Zhicheng; Su, Zhouping

    2013-01-01

    A numerical optimization method (genetic algorithm) is employed to design the spherical light-emitting diode (LED) array for highly uniform illumination distribution. An evaluation function related to the nonuniformity is constructed for the numerical optimization. With the minimum of evaluation function, the LED array produces the best uniformity. The genetic algorithm is used to seek the minimum of evaluation function. By this method, we design two LED arrays. In one case, LEDs are positioned symmetrically on the sphere and the illuminated target surface is a plane. However, in the other case, LEDs are positioned nonsymmetrically with a spherical target surface. Both the symmetrical and nonsymmetrical spherical LED arrays generate good uniform illumination distribution with calculated nonuniformities of 6 and 8%, respectively.
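
    The heart of such a design loop is the evaluation function. The sketch below computes one simple choice: the nonuniformity (standard deviation over mean) of the irradiance a few Lambertian-model LEDs produce on a planar target grid; a genetic algorithm would then perturb the LED coordinates to minimize this value. The geometry and the Lambertian exponent m are invented for illustration.

        import numpy as np

        m = 30.0  # Lambertian exponent of each LED
        leds = np.array([[x, y, 0.5] for x in (-0.2, 0.2) for y in (-0.2, 0.2)])

        gx, gy = np.meshgrid(np.linspace(-0.3, 0.3, 50), np.linspace(-0.3, 0.3, 50))
        target = np.stack([gx, gy, np.zeros_like(gx)], axis=-1)  # plane z = 0

        def nonuniformity(led_positions):
            E = np.zeros(gx.shape)
            for p in led_positions:
                d = target - p                    # LED-to-point vectors
                r = np.linalg.norm(d, axis=-1)
                cos_t = np.abs(d[..., 2]) / r     # emission = incidence angle here
                E += cos_t ** m * cos_t / r ** 2  # inverse-square Lambertian
            return E.std() / E.mean()

        print(f"nonuniformity: {100 * nonuniformity(leds):.1f}%")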

  19. Model Predictive Optimal Control of a Time-Delay Distributed-Parameter System

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2006-01-01

    This paper presents an optimal control method for a class of distributed-parameter systems governed by first order, quasilinear hyperbolic partial differential equations that arise in many physical systems. Such systems are characterized by time delays since information is transported from one state to another by wave propagation. A general closed-loop hyperbolic transport model is controlled by a boundary control embedded in a periodic boundary condition. The boundary control is subject to a nonlinear differential equation constraint that models actuator dynamics of the system. The hyperbolic equation is thus coupled with the ordinary differential equation via the boundary condition. Optimality of this coupled system is investigated using variational principles to seek an adjoint formulation of the optimal control problem. The results are then applied to implement a model predictive control design for a wind tunnel to eliminate a transport delay effect that causes a poor Mach number regulation.

  20. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    SciTech Connect

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming; Elizondo, Marcelo A.

    2013-04-03

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability caused in the system.
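
    A minimal look-ahead dispatch of this kind can be posed as a linear program: minimize diesel fuel cost over the horizon subject to load balance and storage energy limits, then apply only the first step and re-solve as forecasts update (the receding-horizon element of MPC). All data below are invented, 1-hour steps are assumed so MW and MWh interchange freely, and the equipment-life terms mentioned above are omitted.

        import numpy as np
        from scipy.optimize import linprog

        T = 6
        load = np.array([8.0, 9.0, 10.0, 12.0, 11.0, 9.0])  # MW
        wind = np.array([5.0, 7.0, 3.0, 2.0, 6.0, 8.0])     # MW forecast
        E0, Emax, Pmax = 5.0, 10.0, 4.0                     # MWh, MWh, MW

        # x = [g_0..g_{T-1}, s_0..s_{T-1}]; s > 0 means storage discharge.
        c = np.concatenate([np.ones(T), np.zeros(T)])       # fuel cost on g only
        A_eq = np.hstack([np.eye(T), np.eye(T)])            # g_t + s_t = load - wind
        L = np.tril(np.ones((T, T)))                        # cumulative discharge
        A_ub = np.vstack([np.hstack([np.zeros((T, T)), L]),    # energy >= 0
                          np.hstack([np.zeros((T, T)), -L])])  # energy <= Emax
        b_ub = np.concatenate([np.full(T, E0), np.full(T, Emax - E0)])
        bounds = [(0, None)] * T + [(-Pmax, Pmax)] * T

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=load - wind,
                      bounds=bounds)
        print("diesel:", np.round(res.x[:T], 2))
        print("storage:", np.round(res.x[T:], 2))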

  2. On the preventive management of sediment-related sewer blockages: a combined maintenance and routing optimization approach.

    PubMed

    Fontecha, John E; Akhavan-Tabatabaei, Raha; Duque, Daniel; Medaglia, Andrés L; Torres, María N; Rodríguez, Juan Pablo

    2016-01-01

    In this work we tackle the problem of planning and scheduling preventive maintenance (PM) of sediment-related sewer blockages in a set of geographically distributed sites that are subject to non-deterministic failures. To solve the problem, we extend a combined maintenance and routing (CMR) optimization approach which is a procedure based on two components: (a) first a maintenance model is used to determine the optimal time to perform PM operations for each site and second (b) a mixed integer program-based split procedure is proposed to route a set of crews (e.g., sewer cleaners, vehicles equipped with winches or rods and dump trucks) in order to perform PM operations at a near-optimal minimum expected cost. We applied the proposed CMR optimization approach to two (out of five) operative zones in the city of Bogotá (Colombia), where more than 100 maintenance operations per zone must be scheduled on a weekly basis. Comparing the CMR against the current maintenance plan, we obtained more than 50% of cost savings in 90% of the sites. PMID:27438233

  4. Identifying the optimal spatially and temporally invariant root distribution for a semiarid environment

    NASA Astrophysics Data System (ADS)

    Sivandran, Gajan; Bras, Rafael L.

    2012-12-01

    In semiarid regions, the rooting strategies employed by vegetation can be critical to its survival. Arid regions are characterized by high variability in the arrival of rainfall, and species found in these areas have adapted mechanisms to ensure the capture of this scarce resource. Vegetation roots exert strong control over the partitioning of soil moisture and, assuming a static root profile, predetermine the manner in which this partitioning is undertaken. A coupled, dynamic vegetation and hydrologic model, tRIBS + VEGGIE, was used to explore the role of vertical root distribution in hydrologic fluxes. Point-scale simulations were carried out using two spatially and temporally invariant rooting schemes: uniform (a one-parameter model) and logistic (a two-parameter model). The simulations were forced with a stochastic climate generator calibrated to weather stations and rain gauges in the semiarid Walnut Gulch Experimental Watershed (WGEW) in Arizona. A series of simulations explored the parameter space of both rooting schemes, and the optimal root distribution, defined as the root distribution with the maximum mean transpiration over a 100-yr period, was identified. This optimal root profile was determined for five generic soil textures and two plant-functional types (PFTs) to illustrate the role of soil texture in the partitioning of moisture at the land surface. The simulation results illustrate the strong control soil texture has on the partitioning of rainfall and consequently on the depth of the optimal rooting profile. High-conductivity soils resulted in the deepest optimal rooting profile, with land surface moisture fluxes dominated by transpiration. Moving toward the lower-conductivity end of the soil spectrum, a shallowing of the optimal rooting profile is observed and evaporation gradually becomes the dominant flux from the land surface. This study offers a methodology through which local plant, soil, and climate can be
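
    For concreteness, the two time-invariant rooting schemes can be written as cumulative root-fraction profiles in depth; the parameter values in the sketch below are illustrative, not the values explored for Walnut Gulch.

        import numpy as np

        z = np.linspace(0.0, 2.0, 100)  # depth, m

        def uniform_cdf(z, zmax=1.0):
            # One parameter: roots spread evenly down to zmax.
            return np.clip(z / zmax, 0.0, 1.0)

        def logistic_cdf(z, z50=0.3, k=10.0):
            # Two parameters: half-root depth z50 and steepness k.
            return 1.0 / (1.0 + np.exp(-k * (z - z50)))

        for depth in (0.25, 0.5, 1.0):
            i = np.searchsorted(z, depth)
            print(f"z = {depth:.2f} m: uniform {uniform_cdf(z)[i]:.2f}, "
                  f"logistic {logistic_cdf(z)[i]:.2f}")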

  5. Design and optimization of a material property distribution in a composite flywheel

    NASA Astrophysics Data System (ADS)

    Thielman, Scott Craig

    The material properties of a fiber reinforced plastic laminate can be tailored for a given structure and loading by continuously varying the direction of the fiber throughout the plies. Here, it is shown that adding such a material property distribution to a thick-radius composite flywheel can improve performance. A flywheel made from alternating plies of purely circumferential and purely radial reinforcement is designed as the performance benchmark. A second flywheel, substituting plies with a continuous fiber angle variation for the purely radial plies, is investigated. It is shown that the design of the fiber angle distribution can be formulated as an optimal control problem incorporating Classical Lamination Theory to describe the constitutive behavior and the Tsai-Wu criterion to predict failure of the flywheel laminate. The effects of the matrix properties on performance are also investigated. Numerical simulation indicates a 13% increase in energy density for the optimized flywheel over the benchmark flywheel. To demonstrate the feasibility of manufacture, automated ply layup machines are developed that are capable of producing the necessary carbon fiber plies. Experimentally determined material properties are used to re-run the optimization routine; prototype benchmark and optimized flywheels are then constructed. Tangential strain measurements confirm that the two flywheels have different material properties, suggestive of those found in the analysis.

  6. Collimator angle influence on dose distribution optimization for vertebral metastases using volumetric modulated arc therapy

    SciTech Connect

    Mancosu, Pietro; Cozzi, Luca; Fogliata, Antonella; Lattuada, Paola; Reggiori, Giacomo; Cantone, Marie Claire; Navarria, Pierina; Scorsetti, Marta

    2010-08-15

    Purpose: The cylindrical symmetry of vertebrae favors the use of volumetric modulated arc therapy in generating a dose 'hole' at the center of the vertebra, limiting the dose to the spinal cord. The authors evaluated whether the collimator angle is a significant parameter for dose distribution optimization in vertebral metastases. Methods: Three patients with one to three vertebrae involved were considered. Twenty-one differently optimized plans (nine single-arc and 12 double-arc plans) were produced, testing various collimator angle positions. The clinical target volume (CTV) was defined as the whole vertebra, excluding the spinal cord canal. The planning target volume (PTV) was defined as CTV+5 mm. The dose prescription was 5x4 Gy with normalization to the PTV mean dose. The dose to 1 cm^3 of spinal cord was limited to 11.5 Gy. Results: The best plans in terms of target coverage and spinal cord sparing were achieved with two arcs and collimator angles of 80 deg. for Arc1 and 280 deg. for Arc2 in all cases considered (i.e., leaf travel parallel to the primary orientation of the spinal cord). If a single arc is used, only the 80 deg. collimator angle met the objectives. Conclusions: This study demonstrated the role of collimator rotation in vertebral metastasis irradiation, with leaf travel parallel to the primary orientation of the spinal cord performing better than other solutions. An optimal choice of collimator angle thus increases the freedom of the optimization to shape the desired dose distribution.

  7. Calculation of a double reactive azeotrope using stochastic optimization approaches

    NASA Astrophysics Data System (ADS)

    Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus

    2013-02-01

    A homogeneous reactive azeotrope is a thermodynamic coexistence condition of two phases under chemical and phase equilibrium, where the compositions of both phases (in the Ung-Doherty sense) are equal. This kind of nonlinear phenomenon arises in real-world situations and has applications in the chemical and petrochemical industries. The modeling of reactive azeotrope calculation is represented by a nonlinear algebraic system with phase equilibrium, chemical equilibrium and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. The robust calculation of reactive azeotropes can be conducted by several approaches, such as interval-Newton/generalized bisection algorithms and hybrid stochastic-deterministic frameworks. In this paper, we investigate the numerical aspects of the calculation of reactive azeotropes using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Moreover, we present results for a system of industrial interest with more than one azeotrope: isobutene/methanol/methyl-tert-butyl-ether (MTBE). We present convergence patterns for both algorithms, illustrating - in a bidimensional subdomain - the identification of reactive azeotropes. A strategy for the calculation of multiple roots of nonlinear systems is also applied. The results indicate that both algorithms are suitable and robust when applied to reactive azeotrope calculations for this "challenging" nonlinear system.
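
    A minimal Luus-Jaakola adaptive random search is easy to state: sample candidates in a box around the incumbent, keep any improvement, and contract the box. The sketch below applies it to an invented two-equation system with two distinct roots, standing in for the double-azeotrope structure; it is not the MTBE thermodynamic model.

        import numpy as np

        def F(x):
            # Hypothetical system with two roots (not chemical equilibrium).
            return np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0])

        def luus_jaakola(x0, radius, iters=200, samples=50, contract=0.95, seed=0):
            rng = np.random.default_rng(seed)
            best = np.asarray(x0, dtype=float)
            fbest = np.sum(F(best) ** 2)        # roots minimize ||F||^2 to zero
            r = np.asarray(radius, dtype=float).copy()
            for _ in range(iters):
                trials = best + rng.uniform(-1, 1, (samples, 2)) * r
                vals = np.array([np.sum(F(t) ** 2) for t in trials])
                if vals.min() < fbest:
                    best, fbest = trials[vals.argmin()], vals.min()
                r *= contract                   # shrink the search region
            return best, fbest

        for start in ([2.0, 0.0], [0.0, 2.0]):  # different starts, different roots
            x, f = luus_jaakola(start, [2.0, 2.0])
            print(np.round(x, 4), f"residual {f:.2e}")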

  8. Fixed structure compensator design using a constrained hybrid evolutionary optimization approach.

    PubMed

    Ghosh, Subhojit; Samanta, Susovon

    2014-07-01

    This paper presents an efficient technique for designing a fixed order compensator for compensating current mode control architecture of DC-DC converters. The compensator design is formulated as an optimization problem, which seeks to attain a set of frequency domain specifications. The highly nonlinear nature of the optimization problem demands the use of an initial parameterization independent global search technique. In this regard, the optimization problem is solved using a hybrid evolutionary optimization approach, because of its simple structure, faster execution time and greater probability in achieving the global solution. The proposed algorithm involves the combination of a population search based optimization approach i.e. Particle Swarm Optimization (PSO) and local search based method. The op-amp dynamics have been incorporated during the design process. Considering the limitations of fixed structure compensator in achieving loop bandwidth higher than a certain threshold, the proposed approach also determines the op-amp bandwidth, which would be able to achieve the same. The effectiveness of the proposed approach in meeting the desired frequency domain specifications is experimentally tested on a peak current mode control dc-dc buck converter. PMID:24768082

  10. Optimal management of stationary lithium-ion battery system in electricity distribution grids

    NASA Astrophysics Data System (ADS)

    Purvins, Arturs; Sumner, Mark

    2013-11-01

    The present article proposes an optimal battery system management model in distribution grids for stationary applications. The main purpose of the management model is to maximise the utilisation of distributed renewable energy resources in distribution grids, preventing situations of reverse power flow in the distribution transformer. Secondly, battery management ensures efficient battery utilisation: charging at off-peak prices and discharging at peak prices when possible. This gives the battery system a shorter payback time. Management of the system requires predictions of residual distribution grid demand (i.e. demand minus renewable energy generation) and electricity price curves (e.g. for 24 h in advance). Results of a hypothetical study in Great Britain in 2020 show that the battery can contribute significantly to storing renewable energy surplus in distribution grids while being highly utilised. In a distribution grid with 25 households and an installed 8.9 kW wind turbine, a battery system with rated power of 8.9 kW and battery capacity of 100 kWh can store 7 MWh of 8 MWh wind energy surplus annually. Annual battery utilisation reaches 235 cycles in per unit values, where one unit is a full charge-depleting cycle depth of a new battery (80% of 100 kWh).

  11. A methodological integrated approach to optimize a hydrogeological engineering work

    NASA Astrophysics Data System (ADS)

    Loperte, A.; Satriani, A.; Bavusi, M.; Cerverizzo, G.

    2012-04-01

    The geoelectrical survey applied to hydraulic engineering is well known in the literature. However, despite a large number of successful applications, the use of geophysics is still often not considered, for several reasons: poor knowledge of its potential performance, difficulties in practical implementation, and cost limitations. In this work, an integrated study of non-invasive (geoelectrical) and direct surveys is described, aimed at identifying a subsoil foundation on which it is possible to set up a watertight concrete structure able to protect the purifier of Senise, a small town in the Basilicata Region (Southern Italy). The purifier, used by several villages, is located in a particularly dangerous hydrogeological position, as it is very close to the Sinni river, which has been impounded for many years by the Monte Cotugno dam. During the rainiest periods, the river could flood the purifier, causing the drainage of waste waters into the Monte Cotugno artificial lake. The purifier is located in Pliocene-Calabrian clay and clay-marly formations covered by a roughly 10 m layer of alluvial gravelly-sandy materials carried by the Sinni river. The electrical resistivity tomography acquired with the Wenner-Schlumberger array proved effective for identifying the depth of the impermeable clays with high accuracy. In particular, the geoelectrical acquisition, oriented along the long side of the purifier, was carried out using a multielectrode system with 48 electrodes spaced 2 m apart, giving an achievable investigation depth of about 15 m. The subsequent direct surveys confirmed this depth, so that it was possible to position the concrete foundation structure precisely to protect the purifier. It is worth noting that this methodological approach allowed a remarkable economic saving, as it made it possible to correct the wrong information regarding the depth of the impermeable clays previously

  12. Distributed Leadership of School Curriculum Change: An Integrative Approach

    ERIC Educational Resources Information Center

    Fasso, Wendy; Knight, Bruce Allen; Purnell, Ken

    2016-01-01

    Since its inception in 1999, the distributed leadership framework of Spillane, Halverson, and Diamond [2004. "Towards a Theory of Leadership Practice: A Distributed Perspective." "Journal of Curriculum Studies" 36 (1): 3-34. doi:10.1080/0022027032000106726] has supported research into leadership and change in schools. Whilst…

  13. Analysis and optimization of a solar thermal power generation and desalination system using a novel approach

    NASA Astrophysics Data System (ADS)

    Torres, Leovigildo

    Using a novel approach for a photovoltaic-thermal (PV-T) panel system, analytical and optimization analyses were performed for electricity generation as well as desalinated water production. The PV-T panel was designed with a channel under it in which seawater would be housed at a constant pressure of 2.89 psia and an ambient temperature of 520°R. The surface of the PV panel was modeled as a high-absorption black chrome surface. The irradiation flux on the surface and the heat addition to the saltwater were calculated hourly between 9:00 am and 6:00 pm. At steady-state conditions, the saturation temperature was limited to 600°R at the PV tank-channel outlet and the evaporation rate was calculated to be 2.53 lbm/hr-ft2. The desorbed vapor then passed through a turbine, where it generated electrical power at 0.84 Btu/hr, condensing into desalinated water at the outlet. Optimization was performed for maximum capacity yield based on an available temperature distribution of 600°R to 1050°R at the PV tank-channel outlet. This gave an energy generation range for the turbine of 0.84 Btu/hr to 3.84 Btu/hr, while the desalinated water production ranged from 2.53 lbm/hr-ft2 to 10.65 lbm/hr-ft2. System efficiency was found to be between 7.5% and 24.3%, and water production efficiency between 40% and 43%.

  14. Optimizing radioimmunotherapy by matching dose distribution with tumor structure using 3D reconstructions of serial images.

    PubMed

    Flynn, A A; Pedley, R B; Green, A J; Boxer, G M; Boden, R; Begent, R H

    2001-10-01

    The biological effect of radioimmunotherapy (RIT) is most commonly assessed in terms of the absorbed radiation dose. In tumor, conventional dosimetry methods assume a uniform radionuclide distribution and calculate a mean dose throughout the tumor. However, the vasculature of solid tumors tends to be highly irregular, and the systemic delivery of antibodies is therefore heterogeneous. Tumor-specific antibodies preferentially localize in the viable, radiosensitive parts of the tumor, whereas non-specific antibodies can penetrate into the necrosis, where the dose is wasted. As a result, the observed biological effect can be very different from the effect predicted by conventional dose estimates. The purpose of this study is to assess the potential for optimizing the biological effect of RIT by matching the dose distribution with tumor structure through the selection of appropriate antibodies and radionuclides. Storage phosphor plate technology was used to acquire images of the antibody distribution in serial tumor sections. Images of the distributions of a trivalent (TFM), bivalent (A5B7-IgG), monovalent (MFE-23) and a non-specific antibody (MOPC) were obtained. These images were registered with corresponding images showing tumor morphology. Serial images were reconstructed to form 3D maps of the antibody distribution and tumor structure. Convolution of the image of antibody distribution with beta dose point kernels generated dose-rate distributions for 14C, 131I and 90Y. These were statistically compared with the tumor structure. The highest correlation was obtained for the multivalent antibodies combined with 131I, owing to specific retention in viable areas of tumor coupled with the fact that much of the dose was deposited locally. With decreasing avidity the correlation also decreased, and with the non-specific antibody this correlation was negative, indicating higher concentrations in the necrotic regions. In conclusion, the dose distribution can be optimized in tumor by selecting
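
    In schematic form, the dosimetry step above reduces to convolving the measured activity map with a dose point kernel and correlating the result with a viability map. The 2D toy below uses Gaussian kernels of different widths as stand-ins for real beta point kernels of short- and long-range emitters.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(2)
        viable = np.zeros((64, 64), dtype=bool)
        viable[16:48, 16:48] = True                  # toy "viable tumor" mask
        activity = np.where(viable, 1.0, 0.2)        # specific-antibody uptake
        activity += 0.05 * rng.standard_normal(activity.shape)

        # Emitter range modeled only by kernel width (pixels).
        for name, sigma in [("short-range (131I-like)", 1.5),
                            ("long-range (90Y-like)", 6.0)]:
            dose = gaussian_filter(activity, sigma)
            corr = np.corrcoef(dose.ravel(), viable.ravel().astype(float))[0, 1]
            print(f"{name}: dose/viability correlation {corr:.2f}")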

  15. The distribution of all French communes: A composite parametric approach

    NASA Astrophysics Data System (ADS)

    Calderín-Ojeda, Enrique

    2016-05-01

    The distribution of the size of all French settlements (communes) from 1962 to 2012 is examined by means of a three-parameter composite Lognormal-Pareto distribution. This model is based on a Lognormal density up to an unknown threshold value and a Pareto density thereafter. Recent findings have shown that the untruncated settlement size data is in excellent agreement with the Lognormal distribution in the lower and central parts of the empirical distribution, but it follows a power law in the upper tail. For that reason, this probabilistic family, which nests both models, seems appropriate to describe urban agglomeration in France. The outcomes of this paper reveal that for the early periods (1962-1975) the upper quartile of the commune size data adheres closely to a power law distribution, whereas for later periods (2006-2012) most of the city size dynamics is explained by a Lognormal model.
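
    The composite density is straightforward to write down: a lognormal body truncated at a threshold theta, a Pareto tail beyond it, and a mixing weight fixed by requiring continuity at theta. The parameter values below are illustrative, not the values fitted to the commune data, and continuity-only gluing is one common convention.

        import numpy as np
        from scipy.stats import lognorm

        mu, sigma, theta, alpha = 6.0, 1.1, 5000.0, 1.5
        body = lognorm(s=sigma, scale=np.exp(mu))

        # Continuity at theta fixes the weight w placed on the lognormal body.
        b = body.pdf(theta) / body.cdf(theta)   # truncated-body density at theta
        t = alpha / theta                       # Pareto density at theta
        w = t / (b + t)

        def pdf(x):
            x = np.asarray(x, dtype=float)
            below = w * body.pdf(x) / body.cdf(theta)
            above = (1 - w) * alpha * theta ** alpha / x ** (alpha + 1)
            return np.where(x <= theta, below, above)

        print("density:", pdf(np.array([1000.0, theta, 20000.0])))
        print("weight on lognormal body:", round(w, 3))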

  16. A Graph-Based Ant Colony Optimization Approach for Process Planning

    PubMed Central

    Wang, JinFeng; Fan, XiaoLiang; Wan, Shuting

    2014-01-01

    The complex process planning problem is modeled as a combinatorial optimization problem with constraints in this paper. An ant colony optimization (ACO) approach has been developed to deal with the process planning problem by simultaneously considering activities such as sequencing operations, selecting manufacturing resources, and determining setup plans to achieve the optimal process plan. A weighted directed graph is constructed to describe the operations, the precedence constraints between operations, and the possible paths between operation nodes. A representation of the process plan is described based on the weighted directed graph. The ant colony traverses the necessary nodes on the graph to achieve the optimal solution with the objective of minimizing total production costs (TPC). Two case studies have been carried out to study the influence of various ACO parameters on system performance. Extensive comparative experiments have been conducted to demonstrate the feasibility and efficiency of the proposed approach. PMID:24995355

  17. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    NASA Astrophysics Data System (ADS)

    Castellini, P.; Cecchini, S.; Stroppa, L.; Paone, N.

    2015-02-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, thus compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity, designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm. An image quality estimator is used to close the feedback loop and to stop iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivity and ambient conditions. The final objective of the proposed technique is the improvement of the matching score in the recognition of parts through matching algorithms, hence of the diagnosis of machine vision-based quality inspections. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant problem of quality control for the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically for the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes.

  18. Quantification of submarine groundwater discharge and optimal radium sampling distribution in the Lesina Lagoon, Italy

    NASA Astrophysics Data System (ADS)

    Rapaglia, John; Koukoulas, Sotirios; Zaggia, Luca; Lichter, Michal; Manfé, Giorgia; Vafeidis, Athanasios T.

    2012-03-01

    Performing a mass balance of radium isotopes is a commonly employed method for quantifying the flux of groundwater into the sea. However, the spatial variability of 224Ra can compromise the results of mass balances in environmental studies. We address this uncertainty by optimizing the distribution of Ra samples within a surface survey of 224Ra activity in the Lesina Lagoon, Italy. After checking for spatial dependence, location-allocation modeling (LAM) was utilized to determine the optimal distribution of samples for thinning the sampling design. Trend surface analysis (TSA) was employed to interpolate the Ra activity throughout the lagoon. No significant change was found when using all 41 samples or only 25 randomly distributed samples. Results from the TSA showed a linear trend and bi-modal distribution in surface 224Ra. This information was utilized to perform mass balances in two separate basins (east and west). SGD was found to be significantly higher in the western basin (4.8 vs. 0.7 cm d-1). Additionally, mass balances were performed using the average 224Ra activity from the trend surface analysis calculated with 41 and 25 samples respectively, and total lagoon SGD was found to be 10.4-10.5 m3 s-1. Results show that SGD is significant in the Lesina Lagoon.

  19. An approach to distributed execution of Ada programs

    NASA Technical Reports Server (NTRS)

    Volz, R. A.; Krishnan, P.; Theriault, R.

    1987-01-01

    Intelligent control of the Space Station will require the coordinated execution of computer programs across a substantial number of computing elements. It will be important to develop large subsets of these programs in the form of a single program which executes in a distributed fashion across a number of processors. A translation strategy for distributed execution of Ada programs in which library packages and subprograms may be distributed is described. A preliminary version of the translator is operational. Simple data objects (no records or arrays as yet), subprograms, and static tasks may be referenced remotely.

  20. Minimization of Blast furnace Fuel Rate by Optimizing Burden and Gas Distribution

    SciTech Connect

    Dr. Chenn Zhou

    2012-08-15

    The goal of the research is to improve the competitive edge of steel mills by using advanced CFD technology to optimize the gas and burden distributions inside a blast furnace to achieve the best gas utilization. A state-of-the-art 3-D CFD model has been developed for simulating the gas distribution inside a blast furnace at given burden conditions, burden distribution and blast parameters. The comprehensive 3-D CFD model has been validated by plant measurement data from an actual blast furnace, and validation of the sub-models has also been achieved. A user-friendly software package named the Blast Furnace Shaft Simulator (BFSS) has been developed to simulate the blast furnace shaft process. The research has significant benefits to the steel industry: higher productivity, lower energy consumption, and an improved environment.

  1. An Efficient Approach to Obtain Optimal Load Factors for Structural Design

    PubMed Central

    Bojórquez, Juan

    2014-01-01

    An efficient optimization approach is described for calibrating the load factors used in structural design. The load factors are calibrated so that the structural reliability index is as close as possible to a target reliability value. The optimization procedure is applied to find optimal load factors for the design of structures in accordance with the new version of the Mexico City Building Code (RCDF). For this aim, the combination of factors corresponding to dead load plus live load is considered. The optimal combination is based on a parametric numerical analysis of several reinforced concrete elements, which are designed using different load factor values. The Monte Carlo simulation technique is used. The formulation is applied to different failure modes: flexure, shear, torsion, and compression plus bending of short and slender reinforced concrete elements. Finally, the structural reliability corresponding to the optimal load combination proposed here is compared with that corresponding to the load combination recommended by the current Mexico City Building Code. PMID:25133232
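
    The inner loop of such a calibration pairs a candidate load-factor combination with a Monte Carlo estimate of the reliability index, which is then compared with the target. The sketch below assumes a toy limit state g = R - D - L and made-up distributions for resistance, dead load, and live load; only the overall pattern, not the numbers, reflects the paper.

```python
# Toy Monte Carlo step: estimate the reliability index beta for a design
# produced with given load factors, then compare it with a target value.
# Limit state and distributions are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def reliability_index(gamma_d, gamma_l, n=200_000):
    # Design resistance calibrated to the factored loads (toy rule).
    R_design = gamma_d * 1.0 + gamma_l * 0.5
    R = rng.normal(1.1 * R_design, 0.10 * R_design, n)   # resistance
    D = rng.normal(1.0, 0.10, n)                          # dead load
    L = rng.gumbel(0.45, 0.12, n)                         # live load (extreme value)
    pf = np.mean(R - D - L < 0.0)                         # failure probability
    return -norm.ppf(pf)

beta_target = 3.5
beta = reliability_index(1.3, 1.5)
print(f"beta = {beta:.2f}, |beta - target| = {abs(beta - beta_target):.2f}")
```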

  2. Optimizing water supply and hydropower reservoir operation rule curves: An imperialist competitive algorithm approach

    NASA Astrophysics Data System (ADS)

    Afshar, Abbas; Emami Skardi, Mohammad J.; Masoumi, Fariborz

    2015-09-01

    Efficient reservoir management requires the implementation of generalized optimal operating policies that manage storage volumes and releases while optimizing a single objective or multiple objectives. Reservoir operating rules stipulate the actions that should be taken under the current state of the system. This study develops a set of piecewise linear operating rule curves for water supply and hydropower reservoirs, employing an imperialist competitive algorithm in a parameterization-simulation-optimization approach. The adaptive penalty method is used for constraint handling and proved to work efficiently in the proposed scheme. The performance of the approach is tested by deriving an operating rule for the Dez reservoir in Iran. The proposed modelling scheme converged efficiently to near-optimal solutions in the case examples. It was shown that the proposed optimal piecewise linear rule may perform quite well in reservoir operation optimization as the operating period extends from very short to fairly long periods.

  3. Electric power scheduling - A distributed problem-solving approach

    NASA Technical Reports Server (NTRS)

    Mellor, Pamela A.; Dolce, James L.; Krupp, Joseph C.

    1990-01-01

    Space Station Freedom's power system, along with the spacecraft's other subsystems, needs to carefully conserve its resources and yet strive to maximize overall Station productivity. Due to Freedom's distributed design, each subsystem must work cooperatively within the Station community. There is a need for a scheduling tool which will preserve this distributed structure, allow each subsystem the latitude to satisfy its own constraints, and preserve individual value systems while maintaining Station-wide integrity.

  4. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improve mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operator and planner's experience and is used for generalized optimization packages. The KB optimization software package is capable of improving the overall planning process which includes correction of given violations. The method was demonstrated on a large scale power system discussed in the paper.

  5. An optimization approach for design of RC beams subjected to flexural and shear effects

    NASA Astrophysics Data System (ADS)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail

    2013-10-01

    A random search technique (RST) is proposed for the optimum design of reinforced concrete (RC) beams with minimum material cost. Cross-sectional dimensions and reinforcement bars are optimized for different flexural moments and shear forces. The optimization of the reinforcement bars covers the number and diameter of the longitudinal bars resisting flexural moments, and stirrup reinforcement is designed for the shear forces. The optimization is performed according to the design procedure given in ACI 318 (Building Code Requirements for Structural Concrete). The approach is effective for the detailed design of RC beams, ensuring both safety and practical application conditions.

  6. Improving flash flood forecasting with distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Yangbo

    2016-04-01

    In China, a flash flood is usually regarded as a flood occurring in a small or medium-sized watershed with a drainage area of less than 200 km2, mainly induced by heavy rain and typically occurring in areas where hydrological observations are lacking. Flash floods are widely observed in China and are the floods causing the most casualties nowadays. Due to hydrological data scarcity, lumped hydrological models are difficult to employ for flash flood forecasting, as they require extensive observed hydrological data to calibrate model parameters. Physically based distributed hydrological models discretize the terrain of the whole watershed into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and derive model parameters from the terrain properties, thus having the potential to be used in flash flood forecasting and to improve flash flood prediction capability. In this study, the Liuxihe Model, a physically based distributed hydrological model proposed mainly for watershed flood forecasting, is employed to simulate flash floods in the Ganzhou area in southeast China, and models have been set up in 5 watersheds. Model parameters have been derived from the terrain properties, including the DEM, soil type and land use type, but the results show that the flood simulation uncertainty is high, which may be caused by parameter uncertainty; some kind of uncertainty control is needed before the model can be used in real-time flash flood forecasting. Considering that many small and medium-sized Chinese watersheds have now set up hydrological observation networks and a few flood events can be collected, these data may be used for model parameter optimization. For this reason, an automatic model parameter optimization algorithm using Particle Swarm Optimization (PSO) is developed to optimize the model parameters, and it has been found that model parameters optimized with even only one observed flood event could largely reduce the flood simulation uncertainty.
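
    A bare-bones PSO loop of the kind used for such parameter calibration is sketched below; the sphere function stands in for the real objective, which would be the error between simulated and observed flood hydrographs, and the inertia and acceleration coefficients are common textbook defaults rather than the study's settings.

```python
# Minimal PSO sketch for model-parameter calibration. The objective is a
# stand-in (sphere function), not the Liuxihe Model's simulation error.
import numpy as np

rng = np.random.default_rng(1)

def objective(theta):
    return float(np.sum(theta**2))   # replace with model-vs-observation error

dim, n_particles, iters = 5, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration (textbook values)
x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best parameters:", gbest)
```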

  7. A scalar optimization approach for averaged Hausdorff approximations of the Pareto front

    NASA Astrophysics Data System (ADS)

    Schütze, Oliver; Domínguez-Medina, Christian; Cruz-Cortés, Nareli; Gerardo de la Fraga, Luis; Sun, Jian-Qiao; Toscano, Gregorio; Landa, Ricardo

    2016-09-01

    This article presents a novel method to compute averaged Hausdorff (Δp) approximations of the Pareto fronts of multi-objective optimization problems. The underlying idea is to utilize directly the scalar optimization problem that is induced by the Δp performance indicator. This method can be viewed as a certain set-based scalarization approach and can be addressed both by mathematical programming techniques and evolutionary algorithms (EAs). In this work, the focus is on the latter, where a first single-objective EA for such Δp approximations is proposed. Finally, the strength of the novel approach is demonstrated on some bi-objective benchmark problems with different shapes of the Pareto front.
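
    For reference, the indicator itself is straightforward to compute: Δp(A, R) = max(GDp(A, R), IGDp(A, R)), where GDp averages (to the power p) the distances from each point of the approximation set A to the reference set R, and IGDp is the same with the roles swapped. The sketch below uses small made-up 2-D objective vectors.

```python
# Worked sketch of the averaged Hausdorff indicator Delta_p between an
# approximation set A and a reference set R (arrays of objective vectors).
import numpy as np

def gd_p(A, R, p=2.0):
    """Generational distance: mean (to power p) of distances from A to R."""
    d = np.min(np.linalg.norm(A[:, None, :] - R[None, :, :], axis=2), axis=1)
    return (np.mean(d**p)) ** (1.0 / p)

def delta_p(A, R, p=2.0):
    return max(gd_p(A, R, p), gd_p(R, A, p))   # IGD_p(A, R) = GD_p(R, A)

A = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])                  # candidate front
R = np.array([[0.0, 1.0], [0.25, 0.75], [0.75, 0.25], [1.0, 0.0]])  # reference front
print(f"Delta_p = {delta_p(A, R):.4f}")
```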

  8. H.264/SVC parameter optimization based on quantization parameter, MGS fragmentation, and user bandwidth distribution

    NASA Astrophysics Data System (ADS)

    Chen, Xu; Zhang, Ji-Hong; Liu, Wei; Liang, Yong-Sheng; Feng, Ji-Qiang

    2013-12-01

    In the situation of limited bandwidth, improving the performance of scalable video coding plays an important role in video coding. Previously proposed scalable video coding optimization schemes concentrate on reducing coding computation or trying to achieve consistent video quality; however, the connections between the coding scheme, the transmission environment, and users' access patterns were not jointly considered. This article proposes an H.264/SVC (scalable video coding) parameter optimization scheme that attempts to make full use of limited bandwidth and achieve a better peak signal-to-noise ratio, based on the joint measure of user bandwidth range and probability density distribution. The algorithm constructs a relationship map consisting of the bandwidth ranges of multiple users and the quantified quality-increment measure, QPe, in order to make effective use of the video coding bit-stream. A medium grain scalability fragmentation optimization algorithm is also presented with respect to user bandwidth probability density distribution, encoding bit rate, and scalability. Experiments on a public dataset show that this method provides significant average quality improvement for streaming video applications.

  9. An endmember optimization approach for linear spectral unmixing of fine-scale urban imagery

    NASA Astrophysics Data System (ADS)

    Yang, Jian; He, Yuhong; Oguchi, Takashi

    2014-04-01

    Spectral unmixing of high spatial resolution imagery has attracted growing interest for interpreting urban surface material characteristics. This study proposes an endmember optimization method based on endmember spatial distribution (i.e. solid angle and tetrahedron volume) to select the optimal endmember combination for urban spectral unmixing. Specifically, a linear spectral unmixing model (SESMA) is implemented in a suitable 3-D spectral space structured by the green, red and near-infrared bands of the imagery, and endmember spatial distribution is measured with solid angle and tetrahedron volume. Both the solid angle and tetrahedron volume are found to have a strong linear or logarithmic relationship with valid and correct unmixed proportions, whereas the latter measure also takes the photometric shade into account as an endmember. The spectral unmixing results based on the proposed endmember optimization method are compared with those from a common multiple endmember spectral mixture analysis (MESMA) model. For different classes, each model has its own advantages over the other.
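
    The tetrahedron-volume measure is simple to reproduce: four endmember spectra in the 3-D (green, red, near-infrared) space span a tetrahedron whose volume is one sixth of the absolute determinant of the edge matrix. The reflectance triples below are invented for illustration.

```python
# Sketch of the tetrahedron-volume measure used to rank candidate endmember
# combinations; the four example spectra are made-up reflectance triples.
import numpy as np

def tetra_volume(e0, e1, e2, e3):
    """Volume of the tetrahedron spanned by four endmember spectra in R^3."""
    m = np.stack([e1 - e0, e2 - e0, e3 - e0])
    return abs(np.linalg.det(m)) / 6.0

veg   = np.array([0.08, 0.05, 0.50])   # (green, red, NIR) reflectances
soil  = np.array([0.15, 0.20, 0.30])
imper = np.array([0.25, 0.25, 0.28])
shade = np.array([0.02, 0.02, 0.03])
print(f"volume = {tetra_volume(veg, soil, imper, shade):.5f}")
# A larger volume indicates better-separated endmembers; the paper reports a
# strong relationship between this measure and valid unmixed proportions.
```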

  10. A Direct Approach for Minimum Fuel Maneuvers of Distributed Spacecraft in Multiple Flight Regimes

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P; Cooley, D. S.; Guzman, Jose J.

    2004-01-01

    In this work we present a method to solve the impulsive minimum fuel maneuver problem for a distributed set of spacecraft. We develop the method assuming a fully non-linear dynamics model and parameterize the problem to allow the method to be applicable to any flight regime. Furthermore, the approach is not limited by the inter-spacecraft separation distances and is applicable to both small formations and constellations. We assume that the desired relative motion is driven by mission requirements and has been determined a priori. The goal of this work is to develop a technique to achieve the desired relative motion in a minimum-fuel manner. To permit applicability to multiple flight regimes, we have chosen to parameterize the cost function in terms of the maneuver times expressed in a useful time system and the maneuver locations expressed in their Cartesian vector representations. We also include the initial reference orbit as an independent variable, to solve for the optimal injection orbit that minimizes and equalizes the fuel expenditure of distributed sets of spacecraft with large inter-spacecraft separations. In this work we derive the derivatives of the cost and constraints with respect to all of the independent variables.

  11. A fractal approach to dynamic inference and distribution analysis

    PubMed Central

    van Rooij, Marieke M. J. W.; Nash, Bertha A.; Rajaraman, Srinivasan; Holden, John G.

    2013-01-01

    Event-distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective, and quantified in terms of fractal geometry. The key premise is that a distribution's shape reveals information about the governing dynamics of the system that gave rise to the distribution. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples is discussed. These mixtures are described by a so-called cocktail model of response times derived from human cognitive performances. The overarching goals of this article are twofold: First, to offer readers an introduction to this theoretical perspective and second, to offer an overview of the related statistical methods. PMID:23372552

  12. A Complex Network Approach to Distributional Semantic Models

    PubMed Central

    Utsumi, Akira

    2015-01-01

    A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models. PMID:26295940

  13. Electric power scheduling: A distributed problem-solving approach

    NASA Technical Reports Server (NTRS)

    Mellor, Pamela A.; Dolce, James L.; Krupp, Joseph C.

    1990-01-01

    Space Station Freedom's power system, along with the spacecraft's other subsystems, needs to carefully conserve its resources and yet strive to maximize overall Station productivity. Due to Freedom's distributed design, each subsystem must work cooperatively within the Station community. There is a need for a scheduling tool which will preserve this distributed structure, allow each subsystem the latitude to satisfy its own constraints, and preserve individual value systems while maintaining Station-wide integrity. The value-driven free-market economic model is such a tool.

  14. Supervisor Localization: A Top-Down Approach to Distributed Control of Discrete-Event Systems

    SciTech Connect

    Cai, K.; Wonham, W. M.

    2009-03-05

    A purely distributed control paradigm is proposed for discrete-event systems (DES). In contrast to control by one or more external supervisors, distributed control aims to design built-in strategies for individual agents. First a distributed optimal nonblocking control problem is formulated. To solve it, a top-down localization procedure is developed which systematically decomposes an external supervisor into local controllers while preserving optimality and nonblockingness. An efficient localization algorithm is provided to carry out the computation, and an automated guided vehicles (AGV) example presented for illustration. Finally, the 'easiest' and 'hardest' boundary cases of localization are discussed.

  15. Supervisor Localization: A Top-Down Approach to Distributed Control of Discrete-Event Systems

    NASA Astrophysics Data System (ADS)

    Cai, K.; Wonham, W. M.

    2009-03-01

    A purely distributed control paradigm is proposed for discrete-event systems (DES). In contrast to control by one or more external supervisors, distributed control aims to design built-in strategies for individual agents. First a distributed optimal nonblocking control problem is formulated. To solve it, a top-down localization procedure is developed which systematically decomposes an external supervisor into local controllers while preserving optimality and nonblockingness. An efficient localization algorithm is provided to carry out the computation, and an automated guided vehicles (AGV) example presented for illustration. Finally, the 'easiest' and 'hardest' boundary cases of localization are discussed.

  16. Study and optimization of gas flow and temperature distribution in a Czochralski configuration

    NASA Astrophysics Data System (ADS)

    Fang, H. S.; Jin, Z. L.; Huang, X. M.

    2012-12-01

    The Czochralski (Cz) method has virtually dominated the entire production of bulk single crystals with high productivity. Since Cz-grown crystals are cylindrical, an axisymmetric hot zone arrangement is required for ideally high-quality crystal growth. However, due to three-dimensional effects the flow pattern and temperature field are inevitably non-axisymmetric. The grown crystal suffers from many defects, among which macro-cracks and micro-dislocations are mainly related to inhomogeneous temperature distribution during the growth and cooling processes. The task of this paper is to investigate the gas flow partition and temperature distribution in a Cz configuration, and to optimize the furnace design to reduce the three-dimensional effects. The general design is found to be unfavorable for obtaining the desired temperature conditions. Several different furnace designs, modified at the top part of the side insulation, are proposed for a comparative analysis. The optimized design is chosen for further study, and the results show the excellence of the proposed design in suppressing three-dimensional effects to achieve a relatively axisymmetric flow pattern and temperature distribution, for the possible minimization of thermal-stress-related crystal defects.

  17. Parallel multi-join query optimization algorithm for distributed sensor network in the internet of things

    NASA Astrophysics Data System (ADS)

    Zheng, Yan

    2015-03-01

    The internet of things (IoT), which focuses on providing users with information exchange and intelligent control, has attracted a lot of attention from researchers all over the world since the beginning of this century. The IoT consists of a large number of sensor nodes and data processing units, and its most important features are energy confinement, efficient communication and high redundancy. As the number of sensor nodes increases, communication efficiency and the available communication bandwidth become bottlenecks. Much existing research is based on cases in which the number of joins is small, which is not adequate for the growing number of multi-join queries across the whole internet of things. To improve the communication efficiency between parallel units in a distributed sensor network, this paper proposes a parallel query optimization algorithm based on a distribution-attribute cost graph. The storage information relations and the network communication cost are considered in this algorithm, and an optimized information-exchange rule is established. The experimental results show that the algorithm performs well and effectively uses the resources of each node in the distributed sensor network. The execution efficiency of multi-join queries between different nodes can therefore be improved.

  18. Geometry Design Optimization of Functionally Graded Scaffolds for Bone Tissue Engineering: A Mechanobiological Approach

    PubMed Central

    Boccaccio, Antonio; Uva, Antonio Emmanuele; Fiorentino, Michele; Mori, Giorgio; Monno, Giuseppe

    2016-01-01

    Functionally Graded Scaffolds (FGSs) are porous biomaterials where porosity changes in space with a specific gradient. In spite of their wide use in bone tissue engineering, models that relate the scaffold gradient to the mechanical and biological requirements for the regeneration of bony tissue are currently missing. In this study we attempt to bridge the gap by developing a mechanobiology-based optimization algorithm aimed at determining the optimal graded porosity distribution in FGSs. The algorithm combines a parametric finite element model of a FGS, a computational mechano-regulation model and a numerical optimization routine. For assigned boundary and loading conditions, the algorithm iteratively builds different scaffold geometry configurations with different porosity distributions until the best microstructure geometry is reached, i.e. the geometry that maximizes the amount of bone formation. We tested different porosity distribution laws, loading conditions and scaffold Young's modulus values. For each combination of these variables, the explicit equation of the porosity distribution law, i.e. the law that describes the pore dimensions as a function of the spatial coordinates, was determined that allows the highest amounts of bone to be generated. The results show that the loading conditions significantly affect the optimal porosity distribution. For pure compression loading, the pore dimensions were found to be almost constant throughout the entire scaffold, and using a FGS allows the formation of amounts of bone only slightly larger than those obtainable with a homogeneous porosity scaffold. For pure shear loading, instead, FGSs allow bone formation to be increased significantly compared to homogeneous porosity scaffolds. Although experimental data are still necessary to properly relate the mechanical/biological environment to the scaffold microstructure, this model represents an important step towards optimizing the geometry of functionally graded scaffolds.

  19. A Generalized Hopfield Network for Nonsmooth Constrained Convex Optimization: Lie Derivative Approach.

    PubMed

    Li, Chaojie; Yu, Xinghuo; Huang, Tingwen; Chen, Guo; He, Xing

    2016-02-01

    This paper proposes a generalized Hopfield network for solving general constrained convex optimization problems. First, the existence and the uniqueness of solutions to the generalized Hopfield network in the Filippov sense are proved. Then, the Lie derivative is introduced to analyze the stability of the network using a differential inclusion. The optimality of the solution to the nonsmooth constrained optimization problems is shown to be guaranteed by the enhanced Fritz John conditions. The convergence rate of the generalized Hopfield network can be estimated by the second-order derivative of the energy function. The effectiveness of the proposed network is evaluated on several typical nonsmooth optimization problems and used to solve the hierarchical and distributed model predictive control four-tank benchmark.

  20. Optimization of a Distributed Genetic Algorithm on a Cluster of Workstations for the Detection of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Bevilacqua, A.; Campanini, R.; Lanconelli, N.

    We have developed a method for the detection of clusters of microcalcifications in digital mammograms. Here, we present a genetic algorithm used to optimize the choice of the parameters in the detection scheme. The optimization has allowed the improvement of the performance, the detailed study of the influence of the various parameters on the performance and an accurate investigation of the behavior of the detection method on unknown cases. We reach a sensitivity of 96.2% with 0.7 false positive clusters per image on the Nijmegen database; we are also able to identify the most significant parameters. In addition, we have examined the feasibility of a distributed genetic algorithm implemented on a non-dedicated Cluster Of Workstations. We get very good results both in terms of quality and efficiency.

  1. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  2. Comparison of joint space versus task force load distribution optimization for a multiarm manipulator system

    NASA Technical Reports Server (NTRS)

    Soloway, Donald I.; Alberts, Thomas E.

    1989-01-01

    It is often proposed that the redundancy in choosing a force distribution for multiple arms grasping a single object should be handled by minimizing a quadratic performance index. The performance index may be formulated in terms of joint torques or in terms of the Cartesian space force/torque applied to the body by the grippers. The former seeks to minimize power consumption while the latter minimizes body stresses. Because the cost functions are related to each other by a joint angle dependent transformation on the weight matrix, it might be argued that either method tends to reduce power consumption, but clearly the joint space minimization is optimal. A comparison of these two options is presented with consideration given to computational cost and power consumption. Simulation results using a two arm robot system are presented to show the savings realized by employing the joint space optimization. These savings are offset by additional complexity, computation time and in some cases processor power consumption.
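
    A toy planar version of the comparison makes the difference concrete: both optima solve min F^T W F subject to G F = w, with W = I for the task-space cost and W built from the arm Jacobians (since tau = J^T F implies ||tau||^2 = F^T J J^T F) for the joint-space cost. The Jacobians below are arbitrary assumptions.

```python
# Toy sketch: distribute a net planar force between two grippers by minimizing
# a task-space cost (||F||^2) versus a joint-space cost (||tau||^2, tau = J^T F).
import numpy as np

G = np.hstack([np.eye(2), np.eye(2)])        # grasp map: F1 + F2 = w
w = np.array([10.0, 0.0])                    # required net force on the object

# Task-space optimum: minimum-norm force split via the pseudoinverse (W = I).
F_task = np.linalg.pinv(G) @ w

# Joint-space optimum: weight each arm's force by J J^T (arbitrary Jacobians).
J1 = np.array([[0.8, 0.2], [0.1, 0.9]])
J2 = np.array([[1.2, -0.3], [0.4, 0.7]])
W = np.block([[J1 @ J1.T, np.zeros((2, 2))],
              [np.zeros((2, 2)), J2 @ J2.T]])
Winv = np.linalg.inv(W)
F_joint = Winv @ G.T @ np.linalg.solve(G @ Winv @ G.T, w)

print("task-space split :", F_task)
print("joint-space split:", F_joint)
```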

  3. A complex-valued neural dynamical optimization approach and its stability analysis.

    PubMed

    Zhang, Songchuan; Xia, Youshen; Zheng, Weixing

    2015-01-01

    In this paper, we propose a complex-valued neural dynamical method for solving a complex-valued nonlinear convex programming problem. Theoretically, we prove that the proposed complex-valued neural dynamical approach is globally stable and convergent to the optimal solution. The proposed neural dynamical approach significantly generalizes the real-valued nonlinear Lagrange network completely in the complex domain. Compared with existing real-valued neural networks and numerical optimization methods for solving complex-valued quadratic convex programming problems, the proposed complex-valued neural dynamical approach can avoid redundant computation in a double real-valued space and thus has a low model complexity and storage capacity. Numerical simulations are presented to show the effectiveness of the proposed complex-valued neural dynamical approach.

  4. A method to optimize sampling locations for measuring indoor air distributions

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Shen, Xiong; Li, Jianmin; Li, Bingye; Duan, Ran; Lin, Chao-Hsin; Liu, Junjie; Chen, Qingyan

    2015-02-01

    Indoor air distributions, such as the distributions of air temperature, air velocity, and contaminant concentrations, are very important to occupants' health and comfort in enclosed spaces. When point data is collected for interpolation to form field distributions, the sampling locations (the locations of the point sensors) have a significant effect on the time invested, labor costs, and accuracy of the field interpolation. This investigation compared two different methods for determining sampling locations: the grid method and the gradient-based method. The two methods were applied to obtain point air parameter data in an office room and in a section of an economy-class aircraft cabin. The point data obtained was then interpolated to form field distributions by the ordinary Kriging method. Our error analysis shows that the gradient-based sampling method has a 32.6% smaller interpolation error than the grid sampling method. We derived the function relating the interpolation error to the sampling size (the number of sampling points). According to this function, the sampling size has an optimal value, and the maximum sampling size can be determined from the sensor and system errors. This study recommends the gradient-based sampling method for measuring indoor air distributions.

  5. Optimal reconstruction of historical water supply to a distribution system: A. Methodology.

    PubMed

    Aral, M M; Guan, J; Maslia, M L; Sautner, J B; Gillig, R E; Reyes, J J; Williams, R C

    2004-09-01

    The New Jersey Department of Health and Senior Services (NJDHSS), with support from the Agency for Toxic Substances and Disease Registry (ATSDR), conducted an epidemiological study of childhood leukaemia and nervous system cancers that occurred in the period 1979 through 1996 in Dover Township, Ocean County, New Jersey. The epidemiological study explored a wide variety of possible risk factors, including environmental exposures. ATSDR and NJDHSS determined that completed human exposure pathways to groundwater contaminants occurred in the past through private and community water supplies (i.e. the water distribution system serving the area). To investigate this exposure, a model of the water distribution system was developed and calibrated through an extensive field investigation. The components of this water distribution system, such as the number of pipes, tanks, and supply wells in the network, changed significantly over the 35-year period (1962-1996) established for the epidemiological study. Data on the historical management of this system were limited. Thus, it was necessary to investigate alternative ways to reconstruct the operation of the system and test the sensitivity of the system to various alternative operations. Manual reconstruction of the historical water supply to the system in order to provide this sensitivity analysis was time-consuming and labour intensive, given the complexity of the system and the time constraints imposed on the study. To address these issues, the problem was formulated as an optimization problem, where it was assumed that the water distribution system was operated in an optimum manner at all times to satisfy the constraints in the system. The solution to the optimization problem provided the historical water supply strategy in a consistent manner for each month of the study period. The non-uniqueness of the selected historical water supply strategy was addressed by the formulation of a second optimization problem.

  6. An approach to distribution short-term load forecasting

    SciTech Connect

    Stratton, R.C.; Gaustad, K.L.

    1995-03-01

    This paper reports on the developments and findings of the Distribution Short-Term Load Forecaster (DSTLF) research activity. The objective of this research is to develop a distribution short-term load forecasting technology consisting of a forecasting method, a development methodology, the theories necessary to support the required technical components, and the hardware and software tools required to perform the forecast. The DSTLF consists of four major components: the monitored endpoint load forecaster (MELF), the nonmonitored endpoint load forecaster (NELF), the topological integration forecaster (TIF), and a dynamic tuner. These components interact to provide short-term forecasts at various points in the distribution system, e.g., feeder, line section, and endpoint. This paper discusses the DSTLF methodology and the MELF component. MELF, based on artificial neural network technology, predicts distribution endpoint loads an hour, a day, and a week in advance. Predictions are developed using time, calendar, historical load, and weather data. The overall DSTLF architecture and a prototype MELF module for retail endpoints have been developed. Future work will focus on refining and extending MELF and developing NELF and TIF capabilities.

  7. Detection of cancerous masses in mammograms by template matching: optimization of template brightness distribution by means of evolutionary algorithm.

    PubMed

    Bator, Marcin; Nieniewski, Mariusz

    2012-02-01

    Optimization of brightness distribution in the template used for detection of cancerous masses in mammograms by means of the correlation coefficient is presented. This optimization is performed by an evolutionary algorithm using an auxiliary mass classifier. Brightness along the radius of the circularly symmetric template is coded indirectly by its second derivative. The fitness function is defined as the area under curve (AUC) of the receiver operating characteristic (ROC) for the mass classifier. The ROC and AUC are obtained for a teaching set of regions of interest (ROIs), for which it is known whether a ROI is true-positive (TP) or false-positive (FP). The teaching set is obtained by running the mass detector using a template with a predetermined brightness. Subsequently, the evolutionary algorithm optimizes the template by classifying masses in the teaching set. The optimal template (OT) can be used for detection of masses in mammograms with unknown ROIs. The approach was tested on the training and testing sets of the Digital Database for Screening Mammography (DDSM). The free-response receiver operating characteristic (FROC) obtained with the new mass detector seems superior to the FROC for the hemispherical template (HT). Exemplary results are as follows: in the case of the training set in the DDSM, the true-positive fraction (TPF) = 0.82 for the OT and 0.79 for the HT; in the case of the testing set, TPF = 0.79 for the OT and 0.72 for the HT. These values were obtained for disease cases, and the false-positive per image (FPI) = 2.
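
    The detection core, a sliding normalized cross-correlation of a circularly symmetric brightness template over the image, can be sketched as follows; the hemispherical radial profile used here is the baseline the paper improves on, not the evolutionary-optimized profile, and the random image is a stand-in for a mammogram ROI.

```python
# Sketch of template matching by normalized cross-correlation (NCC) with a
# circularly symmetric template; hemispherical profile and random image are
# illustrative stand-ins.
import numpy as np

def make_template(radius):
    ax = np.arange(-radius, radius + 1)
    x, y = np.meshgrid(ax, ax, indexing="ij")
    r = np.sqrt(x**2 + y**2)
    return np.sqrt(np.clip(radius**2 - r**2, 0.0, None))  # hemispherical brightness

def ncc(patch, template):
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p**2).sum() * (t**2).sum())
    return 0.0 if denom == 0 else float((p * t).sum() / denom)

img = np.random.rand(128, 128)
T = make_template(10)
h = T.shape[0]
scores = np.array([[ncc(img[i:i + h, j:j + h], T)
                    for j in range(img.shape[1] - h + 1)]
                   for i in range(img.shape[0] - h + 1)])
print("max correlation:", scores.max())   # peaks mark candidate mass locations
```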

  8. Assessing Impact of Large-Scale Distributed Residential HVAC Control Optimization on Electricity Grid Operation and Renewable Energy Integration

    NASA Astrophysics Data System (ADS)

    Corbin, Charles D.

    Demand management is an important component of the emerging Smart Grid, and a potential solution to the supply-demand imbalance occurring increasingly as intermittent renewable electricity is added to the generation mix. Model predictive control (MPC) has shown great promise for controlling HVAC demand in commercial buildings, making it an ideal solution to this problem. MPC is believed to hold similar promise for residential applications, yet very few examples exist in the literature despite a growing interest in residential demand management. This work explores the potential for residential buildings to shape electric demand at the distribution feeder level in order to reduce peak demand, reduce system ramping, and increase load factor using detailed sub-hourly simulations of thousands of buildings coupled to distribution power flow software. More generally, this work develops a methodology for the directed optimization of residential HVAC operation using a distributed but directed MPC scheme that can be applied to today's programmable thermostat technologies to address the increasing variability in electric supply and demand. Case studies incorporating varying levels of renewable energy generation demonstrate the approach and highlight important considerations for large-scale residential model predictive control.

  9. Efficient use of hybrid Genetic Algorithms in the gain optimization of distributed Raman amplifiers.

    PubMed

    Neto, B; Teixeira, A L J; Wada, N; André, P S

    2007-12-24

    In this paper, we propose an efficient and accurate method that combines the Genetic Algorithm (GA) with the Nelder-Mead method in order to obtain the gain optimization of distributed Raman amplifiers. By using these two methods together, the advantages of both are combined: the convergence of the GA and the high accuracy of the Nelder-Mead. To enhance the convergence of the GA, several features were examined and correlated with fitting errors. It is also shown that when the right moment to switch between methods is chosen, the computation time can be reduced by a factor of two.
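
    A minimal sketch of the hybrid strategy is given below: a crude evolutionary global phase (truncation selection plus Gaussian mutation standing in for a full GA) followed by a Nelder-Mead polish via scipy. The three-variable "pump power" objective is a placeholder for the actual Raman gain-flatness error.

```python
# Hybrid global-then-local optimization sketch: GA-like search followed by
# Nelder-Mead refinement. Objective and variable ranges are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def flatness_error(pumps):
    # Placeholder: distance from an arbitrary "ideal" pump configuration.
    return float(np.sum((pumps - np.array([0.3, 0.7, 1.1]))**2))

# Crude GA-like global phase: random population, truncation selection, mutation.
pop = rng.uniform(0.0, 2.0, (40, 3))
for _ in range(30):
    pop = pop[np.argsort([flatness_error(p) for p in pop])][:10]    # select best
    children = pop[rng.integers(0, 10, 30)] + rng.normal(0, 0.05, (30, 3))
    pop = np.vstack([pop, children])                                 # mutate

best = pop[np.argmin([flatness_error(p) for p in pop])]
# Local refinement with Nelder-Mead, as in the hybrid described above.
res = minimize(flatness_error, best, method="Nelder-Mead", options={"xatol": 1e-8})
print("optimized pump powers:", res.x)
```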

  10. OPTIMAL SHRINKAGE ESTIMATION OF MEAN PARAMETERS IN FAMILY OF DISTRIBUTIONS WITH QUADRATIC VARIANCE

    PubMed Central

    Xie, Xianchao; Kou, S. C.; Brown, Lawrence

    2015-01-01

    This paper discusses the simultaneous inference of mean parameters in a family of distributions with quadratic variance function. We first introduce a class of semi-parametric/parametric shrinkage estimators and establish their asymptotic optimality properties. Two specific cases, the location-scale family and the natural exponential family with quadratic variance function, are then studied in detail. We conduct a comprehensive simulation study to compare the performance of the proposed methods with existing shrinkage estimators. We also apply the method to real data and obtain encouraging results. PMID:27041778
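
    As a concrete instance of the idea, the positive-part James-Stein estimator below shrinks a set of noisy sample means toward their grand mean; the paper's semi-parametric estimators generalize this far beyond the normal-means setting, so the sketch only illustrates the flavor of the method.

```python
# Toy positive-part James-Stein shrinkage of sample means toward the grand mean.
import numpy as np

rng = np.random.default_rng(3)
true_means = rng.normal(0.0, 1.0, 20)
xbar = true_means + rng.normal(0.0, 0.5, 20)            # noisy observed means
sigma2 = 0.25                                            # known sampling variance

grand = xbar.mean()
s2 = np.sum((xbar - grand) ** 2)
shrink = max(0.0, 1.0 - (len(xbar) - 3) * sigma2 / s2)   # positive-part factor
est = grand + shrink * (xbar - grand)

mse = lambda e: float(np.mean((e - true_means) ** 2))
print(f"MSE raw: {mse(xbar):.4f}  MSE shrunk: {mse(est):.4f}")
```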

  11. Optimal Surface Segmentation in Volumetric Images—A Graph-Theoretic Approach

    PubMed Central

    Li, Kang; Wu, Xiaodong; Chen, Danny Z.; Sonka, Milan

    2008-01-01

    Efficient segmentation of globally optimal surfaces representing object boundaries in volumetric data sets is important and challenging in many medical image analysis applications. We have developed an optimal surface detection method capable of simultaneously detecting multiple interacting surfaces, in which the optimality is controlled by the cost functions designed for individual surfaces and by several geometric constraints defining the surface smoothness and interrelations. The method solves the surface segmentation problem by transforming it into computing a minimum s-t cut in a derived arc-weighted directed graph. The proposed algorithm has a low-order polynomial time complexity and is computationally efficient. It has been extensively validated on more than 300 computer-synthetic volumetric images, 72 CT-scanned data sets of different-sized plexiglas tubes, and tens of medical images spanning various imaging modalities. In all cases, the approach yielded highly accurate results. Our approach can be readily extended to higher-dimensional image segmentation. PMID:16402624

  12. A Distributed Approach Toward Discriminative Distance Metric Learning.

    PubMed

    Li, Jun; Lin, Xun; Rui, Xiaoguang; Rui, Yong; Tao, Dacheng

    2015-09-01

    Distance metric learning (DML) is successful in discovering intrinsic relations in data. However, most algorithms are computationally demanding when the problem size becomes large. In this paper, we propose a discriminative metric learning algorithm, develop a distributed scheme learning metrics on moderate-sized subsets of data, and aggregate the results into a global solution. The technique leverages the power of parallel computation. The algorithm of the aggregated DML (ADML) scales well with the data size and can be controlled by the partition. We theoretically analyze and provide bounds for the error induced by the distributed treatment. We have conducted experimental evaluation of the ADML, both on specially designed tests and on practical image annotation tasks. Those tests have shown that the ADML achieves the state-of-the-art performance at only a fraction of the cost incurred by most existing methods.

  13. A Step-Wise Approach to Elicit Triangular Distributions

    NASA Technical Reports Server (NTRS)

    Greenberg, Marc W.

    2013-01-01

    Adapt/combine known methods to demonstrate an expert judgment elicitation process that: 1. models the expert's inputs as a triangular distribution, 2. incorporates techniques to account for expert bias, and 3. is structured in a way that helps justify the expert's inputs. This paper will show one way of "extracting" expert opinion for estimating purposes. Nevertheless, as with most subjective methods, there are many ways to do this.
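
    A minimal version of the elicitation step is sketched below: the expert's low, most-likely, and high values define a triangular distribution, and a symmetric widening of the range crudely accounts for expert overconfidence. The 10% widening factor is an assumption for illustration, not the paper's bias adjustment.

```python
# Sketch: turn an expert's (low, mode, high) estimates into a triangular
# distribution, widening the range to hedge against overconfidence.
import numpy as np

def elicit_triangular(low, mode, high, bias_widen=0.10, n=100_000):
    spread = high - low
    low_adj = low - bias_widen * spread       # widen both tails symmetrically
    high_adj = high + bias_widen * spread
    rng = np.random.default_rng(4)
    return rng.triangular(low_adj, mode, high_adj, size=n)

cost = elicit_triangular(80.0, 100.0, 150.0)  # e.g. a cost estimate in $K
print(f"mean {cost.mean():.1f}, P80 {np.percentile(cost, 80):.1f}")
```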

  14. Optimal epidemic spreading on complex networks with heterogeneous waiting time distribution

    NASA Astrophysics Data System (ADS)

    Yang, Guan-Ling; Yang, Xinsong

    2016-04-01

    In this paper, the effects of heterogeneous waiting times on spreading dynamics are studied based on network-dependent information. A new non-Markovian susceptible-infected-susceptible (SIS) model is first proposed, in which a node's waiting time depends on its degree and may differ from node to node. Every node tries to transmit the epidemic to its neighbors after its waiting time. Moreover, by using mean-field theory and numerical simulations, it is discovered that the epidemic threshold is correlated with the network topology and the distribution of the waiting times. Furthermore, our results reveal that an optimal distribution of the heterogeneous waiting times can suppress the epidemic spreading.

  15. Optimal design of water distribution networks by a discrete state transition algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaojun; Gao, David Y.; Simpson, Angus R.

    2016-04-01

    In this study it is demonstrated that, with respect to model formulation, the number of linear and nonlinear equations involved in water distribution networks can be reduced to the number of closed simple loops. Regarding the optimization technique, a discrete state transition algorithm (STA) is introduced to solve several cases of water distribution networks. Firstly, the focus is on a parametric study of the 'restoration probability and risk probability' in the dynamic STA. To deal effectively with head pressure constraints, the influence is then investigated of the penalty coefficient and search enforcement on the performance of the algorithm. Based on the experience gained from training the Two-Loop network problem, a discrete STA has successfully achieved the best known solutions for the Hanoi, triple Hanoi and New York network problems.

  16. Localization of WSN using Distributed Particle Swarm Optimization algorithm with precise references

    NASA Astrophysics Data System (ADS)

    Janapati, Ravi Chander; Balaswamy, Ch.; Soundararajan, K.

    2016-08-01

    Localization is a key research area in wireless sensor networks: finding the exact position of a node is known as localization. Different algorithms have been proposed. Here we consider a cooperative localization algorithm with censoring schemes using the Cramér-Rao bound (CRB). This censoring scheme can improve the positioning accuracy and reduce computation complexity, traffic and latency. Particle swarm optimization (PSO) is a population-based search algorithm inspired by swarm intelligence, such as the social behavior of birds, bees or a school of fish. To improve the algorithm's efficiency and localization precision, this paper presents an objective function based on the normal distribution of the ranging error and a method of obtaining the search space of the particles. In this paper a distributed localization algorithm, PSO with CRB, is proposed. The proposed method shows better results in terms of position accuracy, latency and complexity.

  17. MonALISA: An agent based, dynamic service system to monitor, control and optimize distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Cirstoiu, C.; Grigoras, C.; Dobre, C.; Muraru, A.; Costan, A.; Dediu, M.; Stratan, C.

    2009-12-01

    The MonALISA (Monitoring Agents in a Large Integrated Services Architecture) framework provides a set of distributed services for monitoring, control, management and global optimization for large scale distributed systems. It is based on an ensemble of autonomous, multi-threaded, agent-based subsystems which are registered as dynamic services. They can be automatically discovered and used by other services or clients. The distributed agents can collaborate and cooperate in performing a wide range of management, control and global optimization tasks using real time monitoring information.

    Program summary. Program title: MonALISA. Catalogue identifier: AEEZ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEZ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Caltech License, free for all non-commercial activities. No. of lines in distributed program, including test data, etc.: 147 802. No. of bytes in distributed program, including test data, etc.: 25 913 689. Distribution format: tar.gz. Programming language: Java; additional APIs available in Java, C, C++, Perl and Python. Computer: computing clusters, network devices, storage systems, large scale data intensive applications. Operating system: the MonALISA service is mainly used on Linux; the MonALISA client runs on all major platforms (Windows, Linux, Solaris, MacOS). Has the code been vectorized or parallelized?: It is a multithreaded application and will efficiently use all available processors. RAM: for the MonALISA service the minimum required memory is 64 MB; if the JVM is started with a larger allocation, the extra memory is used for internal caching. The MonALISA client typically requires 256-512 MB of memory. Classification: 6.5. External routines: requires Java (JRE or JDK) to run; these external packages are used and are included in the distribution: JINI, JFreeChart, PostgreSQL (optional). Nature of problem: to monitor and control large scale distributed systems.

  18. Electrical defibrillation optimization: An automated, iterative parallel finite-element approach

    SciTech Connect

    Hutchinson, S.A.; Shadid, J.N.; Ng, K.T.; Nadeem, A.

    1997-04-01

    To date, optimization of electrode systems for electrical defibrillation has been limited to hand-selected electrode configurations. In this paper we present an automated approach which combines detailed, three-dimensional (3-D) finite element torso models with optimization techniques to provide a flexible analysis and design tool for electrical defibrillation optimization. Specifically, a parallel direct search (PDS) optimization technique is used with a representative objective function to find an electrode configuration which corresponds to the satisfaction of a postulated defibrillation criterion with a minimum amount of power and a low possibility of myocardium damage. For adequate representation of the thoracic inhomogeneities, 3-D finite-element torso models are used in the objective function computations. The CPU-intensive finite-element calculations required for the objective function evaluation have been implemented on a message-passing parallel computer in order to complete the optimization calculations in a timely manner. To illustrate the optimization procedure, it has been applied to a representative electrode configuration for transmyocardial defibrillation, namely the subcutaneous patch-right ventricular catheter (SP-RVC) system. Sensitivity of the optimal solutions to various tissue conductivities has been studied. 39 refs., 9 figs., 2 tabs.

  19. Multistage and multiobjective formulations of globally optimal upgradable expansions for electric power distribution systems

    NASA Astrophysics Data System (ADS)

    Vaziri Yazdi Pin, Mohammad

    Electric power distribution systems are the last high-voltage link in the chain of production, transport, and delivery of electric energy, the fundamental goals of which are to supply the users' demand safely, reliably, and economically. The number of circuit miles traversed by distribution feeders, in the form of visible overhead or embedded underground lines, far exceeds that of all other bulk transport circuitry in the transmission system. Development and expansion of distribution systems, as with other systems, is directly proportional to the growth in demand and requires careful planning. While the growth of electric demand has recently slowed through efforts in the area of energy management, the need for continued expansion seems inevitable for the near future. Distribution system expansion is also independent of current issues facing both the suppliers and the consumers of electrical energy. For example, deregulation, an attempt to promote competition by giving more choices to the consumers, will impact the suppliers' planning strategies but cannot limit demand growth or system expansion in the global sense. Curiously, despite technological advancements and a 40-year history of contributions in the area, many major utilities still rely on experience and resort to rudimentary techniques when planning expansions. A comprehensive literature review of the contributions, and careful analyses of the proposed algorithms for distribution expansion, confirmed that the problem is a complex, multistage and multiobjective problem for which a practical solution remains to be developed. In this research, based on the 15-year experience of a utility engineer, the practical expansion problem has been clearly defined and the existing deficiencies in previous work identified and analyzed. The expansion problem has been formulated as a multistage planning problem in line with a natural course of development and industry practice.

  20. Quantification of Emphysema: A Bullae Distribution Based Approach

    NASA Astrophysics Data System (ADS)

    Tan, Kok Liang; Tanaka, Toshiyuki; Nakamura, Hidetoshi; Shirahata, Toru; Sugiura, Hiroaki

    Computed tomography (CT)-based quantifications of emphysema encompass, but are not limited to, the ratio of low-attenuation area, the bullae size, and the distribution of bullae in the lung. The standard CT-based emphysema-describing indices include the mean lung density, the percentage of area of low attenuation [the pixel index (PI)] and the bullae index (BI). These standard indices are not expressive for describing the distribution of bullae in the lung. Consequently, the goal of this paper is to present a new emphysema-describing index, the bullae congregation index (BCI), that describes whether bullae gather in a specific area of the lung and form a nearly single mass, and if so, how dense the mass of bullae in the lung is. BCI ranges from zero to ten, corresponding to sparsely distributed bullae through densely distributed bullae. BCI is calculated from the relative distance between every pair of bullae in the lung. The bullae pair distances are sorted into 200 distance classes, where a smaller distance class corresponds to closer proximity between bullae. BCI is derived by calculating the percentage of the area of bullae in the lung that are separated by a certain distance class. Four bullae congregation classes are defined based on BCI. We evaluate BCI using 114 CT images hand-annotated by a radiologist into four bullae congregation classes. The average four-class classification accuracy of BCI is 88.21%. BCI correlates better than PI, BI and other standard statistical-dispersion-based methods with the radiologically consensus-classified bullae congregation class. While BCI is not a specific index for indicating emphysema severity, it complements the existing set of emphysema-describing indices to facilitate a more thorough understanding of the emphysematous condition of the lung. BCI is especially useful for comparing the distribution of bullae in cases with approximately the same PI, BI, or PI and BI, and it is easy to compute.